OpenCV  4.10.0-dev
Open Source Computer Vision
Deep Neural Network module

Topics

 Partial List of Implemented Layers
 
 Utilities for New Layers Registration
 

Detailed Description

This module contains:

- API for new layers creation, layers are building bricks of neural networks;
- set of built-in most-useful Layers;
- API to construct and modify comprehensive neural networks from layers;
- functionality for loading serialized networks models from different frameworks.

Functionality of this module is designed only for forward pass computations (i.e. network testing). Network training is in principle not supported.

Classes

class  cv::dnn::BackendNode
 Derivatives of this class encapsulate functions of certain backends. More...
 
class  cv::dnn::BackendWrapper
 Derivatives of this class wrap cv::Mat for different backends and targets. More...
 
class  cv::dnn::ClassificationModel
 This class represents high-level API for classification models. More...
 
class  cv::dnn::DetectionModel
 This class represents high-level API for object detection networks. More...
 
class  cv::dnn::Dict
 This class implements a name-value dictionary; values are instances of DictValue. More...
 
struct  cv::dnn::DictValue
 This struct stores a scalar value (or array) of one of the following types: double, cv::String or int64. More...
 
struct  cv::dnn::Image2BlobParams
 Parameters for converting an image to a blob. More...
 
class  cv::dnn::KeypointsModel
 This class represents high-level API for keypoints models. More...
 
class  cv::dnn::Layer
 This interface class allows building new layers, which are the building blocks of networks. More...
 
class  cv::dnn::LayerParams
 This class provides all data needed to initialize a layer. More...
 
class  cv::dnn::Model
 This class represents a high-level API for neural networks. More...
 
class  cv::dnn::Net
 This class allows creating and manipulating comprehensive artificial neural networks. More...
 
class  cv::dnn::SegmentationModel
 This class represents high-level API for segmentation models. More...
 
class  cv::dnn::TextDetectionModel
 Base class for text detection networks. More...
 
class  cv::dnn::TextDetectionModel_DB
 This class represents high-level API for text detection DL networks compatible with DB model. More...
 
class  cv::dnn::TextDetectionModel_EAST
 This class represents high-level API for text detection DL networks compatible with EAST model. More...
 
class  cv::dnn::TextRecognitionModel
 This class represents high-level API for text recognition networks. More...
 

Typedefs

typedef std::map< std::string, std::vector< LayerFactory::Constructor > > cv::dnn::LayerFactory_Impl
 
typedef std::vector< int > cv::dnn::MatShape
 

Enumerations

enum  cv::dnn::Backend {
  cv::dnn::DNN_BACKEND_DEFAULT = 0 ,
  cv::dnn::DNN_BACKEND_HALIDE ,
  cv::dnn::DNN_BACKEND_INFERENCE_ENGINE ,
  cv::dnn::DNN_BACKEND_OPENCV ,
  cv::dnn::DNN_BACKEND_VKCOM ,
  cv::dnn::DNN_BACKEND_CUDA ,
  cv::dnn::DNN_BACKEND_WEBNN ,
  cv::dnn::DNN_BACKEND_TIMVX ,
  cv::dnn::DNN_BACKEND_CANN
}
 Enum of computation backends supported by layers. More...
 
enum  cv::dnn::DataLayout {
  cv::dnn::DNN_LAYOUT_UNKNOWN = 0 ,
  cv::dnn::DNN_LAYOUT_ND = 1 ,
  cv::dnn::DNN_LAYOUT_NCHW = 2 ,
  cv::dnn::DNN_LAYOUT_NCDHW = 3 ,
  cv::dnn::DNN_LAYOUT_NHWC = 4 ,
  cv::dnn::DNN_LAYOUT_NDHWC = 5 ,
  cv::dnn::DNN_LAYOUT_PLANAR = 6
}
 Enum of data layout for model inference. More...
 
enum  cv::dnn::ImagePaddingMode {
  cv::dnn::DNN_PMODE_NULL = 0 ,
  cv::dnn::DNN_PMODE_CROP_CENTER = 1 ,
  cv::dnn::DNN_PMODE_LETTERBOX = 2
}
 Enum of image preprocessing modes, provided to support model-specific preprocessing requirements; for example, letterbox resizing is often used by the YOLO series of models. More...
 
enum class  cv::dnn::SoftNMSMethod {
  cv::dnn::SoftNMSMethod::SOFTNMS_LINEAR = 1 ,
  cv::dnn::SoftNMSMethod::SOFTNMS_GAUSSIAN = 2
}
 Enum of Soft NMS methods. More...
 
enum  cv::dnn::Target {
  cv::dnn::DNN_TARGET_CPU = 0 ,
  cv::dnn::DNN_TARGET_OPENCL ,
  cv::dnn::DNN_TARGET_OPENCL_FP16 ,
  cv::dnn::DNN_TARGET_MYRIAD ,
  cv::dnn::DNN_TARGET_VULKAN ,
  cv::dnn::DNN_TARGET_FPGA ,
  cv::dnn::DNN_TARGET_CUDA ,
  cv::dnn::DNN_TARGET_CUDA_FP16 ,
  cv::dnn::DNN_TARGET_HDDL ,
  cv::dnn::DNN_TARGET_NPU ,
  cv::dnn::DNN_TARGET_CPU_FP16
}
 Enum of target devices for computations. More...
 

Functions

Mat cv::dnn::blobFromImage (InputArray image, double scalefactor=1.0, const Size &size=Size(), const Scalar &mean=Scalar(), bool swapRB=false, bool crop=false, int ddepth=CV_32F)
 Creates 4-dimensional blob from image. Optionally resizes and crops the image from center, subtracts mean values, scales values by scalefactor, and swaps Blue and Red channels.
 
void cv::dnn::blobFromImage (InputArray image, OutputArray blob, double scalefactor=1.0, const Size &size=Size(), const Scalar &mean=Scalar(), bool swapRB=false, bool crop=false, int ddepth=CV_32F)
 Creates 4-dimensional blob from image.
 
Mat cv::dnn::blobFromImages (InputArrayOfArrays images, double scalefactor=1.0, Size size=Size(), const Scalar &mean=Scalar(), bool swapRB=false, bool crop=false, int ddepth=CV_32F)
 Creates 4-dimensional blob from series of images. Optionally resizes and crops the images from center, subtracts mean values, scales values by scalefactor, and swaps Blue and Red channels.
 
void cv::dnn::blobFromImages (InputArrayOfArrays images, OutputArray blob, double scalefactor=1.0, Size size=Size(), const Scalar &mean=Scalar(), bool swapRB=false, bool crop=false, int ddepth=CV_32F)
 Creates 4-dimensional blob from series of images.
 
Mat cv::dnn::blobFromImagesWithParams (InputArrayOfArrays images, const Image2BlobParams &param=Image2BlobParams())
 Creates 4-dimensional blob from series of images with given params.
 
void cv::dnn::blobFromImagesWithParams (InputArrayOfArrays images, OutputArray blob, const Image2BlobParams &param=Image2BlobParams())
 
Mat cv::dnn::blobFromImageWithParams (InputArray image, const Image2BlobParams &param=Image2BlobParams())
 Creates 4-dimensional blob from image with given params.
 
void cv::dnn::blobFromImageWithParams (InputArray image, OutputArray blob, const Image2BlobParams &param=Image2BlobParams())
 
void cv::dnn::enableModelDiagnostics (bool isDiagnosticsMode)
 Enables detailed logging of DNN model loading with the CV DNN API.
 
std::vector< std::pair< Backend, Target > > cv::dnn::getAvailableBackends ()
 
std::vector< Target > cv::dnn::getAvailableTargets (dnn::Backend be)
 
LayerFactory_Impl & cv::dnn::getLayerFactoryImpl ()
 
Mutex & cv::dnn::getLayerFactoryMutex ()
 Get the mutex guarding LayerFactory_Impl, see getLayerFactoryImpl() function.
 
void cv::dnn::imagesFromBlob (const cv::Mat &blob_, OutputArrayOfArrays images_)
 Parse a 4D blob and output the images it contains as 2D arrays through a simpler data structure (std::vector<cv::Mat>).
 
void cv::dnn::NMSBoxes (const std::vector< Rect > &bboxes, const std::vector< float > &scores, const float score_threshold, const float nms_threshold, std::vector< int > &indices, const float eta=1.f, const int top_k=0)
 Performs non maximum suppression given boxes and corresponding scores.
 
void cv::dnn::NMSBoxes (const std::vector< Rect2d > &bboxes, const std::vector< float > &scores, const float score_threshold, const float nms_threshold, std::vector< int > &indices, const float eta=1.f, const int top_k=0)
 
void cv::dnn::NMSBoxes (const std::vector< RotatedRect > &bboxes, const std::vector< float > &scores, const float score_threshold, const float nms_threshold, std::vector< int > &indices, const float eta=1.f, const int top_k=0)
 
void cv::dnn::NMSBoxesBatched (const std::vector< Rect > &bboxes, const std::vector< float > &scores, const std::vector< int > &class_ids, const float score_threshold, const float nms_threshold, std::vector< int > &indices, const float eta=1.f, const int top_k=0)
 Performs batched non maximum suppression on given boxes and corresponding scores across different classes.
 
void cv::dnn::NMSBoxesBatched (const std::vector< Rect2d > &bboxes, const std::vector< float > &scores, const std::vector< int > &class_ids, const float score_threshold, const float nms_threshold, std::vector< int > &indices, const float eta=1.f, const int top_k=0)
 
Net cv::dnn::readNet (const String &framework, const std::vector< uchar > &bufferModel, const std::vector< uchar > &bufferConfig=std::vector< uchar >())
 Read deep learning network represented in one of the supported formats.
 
Net cv::dnn::readNet (CV_WRAP_FILE_PATH const String &model, CV_WRAP_FILE_PATH const String &config="", const String &framework="")
 Read deep learning network represented in one of the supported formats.
 
Net cv::dnn::readNetFromCaffe (const char *bufferProto, size_t lenProto, const char *bufferModel=NULL, size_t lenModel=0)
 Reads a network model stored in Caffe format from memory.
 
Net cv::dnn::readNetFromCaffe (const std::vector< uchar > &bufferProto, const std::vector< uchar > &bufferModel=std::vector< uchar >())
 Reads a network model stored in Caffe format from memory.
 
Net cv::dnn::readNetFromCaffe (CV_WRAP_FILE_PATH const String &prototxt, CV_WRAP_FILE_PATH const String &caffeModel=String())
 Reads a network model stored in Caffe framework's format.
 
Net cv::dnn::readNetFromDarknet (const char *bufferCfg, size_t lenCfg, const char *bufferModel=NULL, size_t lenModel=0)
 Reads a network model stored in Darknet model files.
 
Net cv::dnn::readNetFromDarknet (const std::vector< uchar > &bufferCfg, const std::vector< uchar > &bufferModel=std::vector< uchar >())
 Reads a network model stored in Darknet model files.
 
Net cv::dnn::readNetFromDarknet (CV_WRAP_FILE_PATH const String &cfgFile, CV_WRAP_FILE_PATH const String &darknetModel=String())
 Reads a network model stored in Darknet model files.
 
Net cv::dnn::readNetFromModelOptimizer (const std::vector< uchar > &bufferModelConfig, const std::vector< uchar > &bufferWeights)
 Load a network from Intel's Model Optimizer intermediate representation.
 
Net cv::dnn::readNetFromModelOptimizer (const uchar *bufferModelConfigPtr, size_t bufferModelConfigSize, const uchar *bufferWeightsPtr, size_t bufferWeightsSize)
 Load a network from Intel's Model Optimizer intermediate representation.
 
Net cv::dnn::readNetFromModelOptimizer (CV_WRAP_FILE_PATH const String &xml, CV_WRAP_FILE_PATH const String &bin="")
 Load a network from Intel's Model Optimizer intermediate representation.
 
Net cv::dnn::readNetFromONNX (const char *buffer, size_t sizeBuffer)
 Reads a network model from ONNX in-memory buffer.
 
Net cv::dnn::readNetFromONNX (const std::vector< uchar > &buffer)
 Reads a network model from ONNX in-memory buffer.
 
Net cv::dnn::readNetFromONNX (CV_WRAP_FILE_PATH const String &onnxFile)
 Reads a network model from an ONNX file.
 
Net cv::dnn::readNetFromTensorflow (const char *bufferModel, size_t lenModel, const char *bufferConfig=NULL, size_t lenConfig=0)
 Reads a network model stored in TensorFlow framework's format.
 
Net cv::dnn::readNetFromTensorflow (const std::vector< uchar > &bufferModel, const std::vector< uchar > &bufferConfig=std::vector< uchar >())
 Reads a network model stored in TensorFlow framework's format.
 
Net cv::dnn::readNetFromTensorflow (CV_WRAP_FILE_PATH const String &model, CV_WRAP_FILE_PATH const String &config=String())
 Reads a network model stored in TensorFlow framework's format.
 
Net cv::dnn::readNetFromTFLite (const char *bufferModel, size_t lenModel)
 Reads a network model stored in TFLite framework's format.
 
Net cv::dnn::readNetFromTFLite (const std::vector< uchar > &bufferModel)
 Reads a network model stored in TFLite framework's format.
 
Net cv::dnn::readNetFromTFLite (CV_WRAP_FILE_PATH const String &model)
 Reads a network model stored in TFLite framework's format.
 
Net cv::dnn::readNetFromTorch (CV_WRAP_FILE_PATH const String &model, bool isBinary=true, bool evaluate=true)
 Reads a network model stored in Torch7 framework's format.
 
Mat cv::dnn::readTensorFromONNX (CV_WRAP_FILE_PATH const String &path)
 Creates blob from .pb file.
 
Mat cv::dnn::readTorchBlob (const String &filename, bool isBinary=true)
 Loads blob which was serialized as torch.Tensor object of Torch7 framework.
 
void cv::dnn::shrinkCaffeModel (CV_WRAP_FILE_PATH const String &src, CV_WRAP_FILE_PATH const String &dst, const std::vector< String > &layersTypes=std::vector< String >())
 Convert all weights of Caffe network to half precision floating point.
 
void cv::dnn::softNMSBoxes (const std::vector< Rect > &bboxes, const std::vector< float > &scores, std::vector< float > &updated_scores, const float score_threshold, const float nms_threshold, std::vector< int > &indices, size_t top_k=0, const float sigma=0.5, SoftNMSMethod method=SoftNMSMethod::SOFTNMS_GAUSSIAN)
 Performs soft non maximum suppression given boxes and corresponding scores. Reference: https://arxiv.org/abs/1704.04503.
 
void cv::dnn::writeTextGraph (CV_WRAP_FILE_PATH const String &model, CV_WRAP_FILE_PATH const String &output)
 Create a text representation for a binary network stored in protocol buffer format.
 

Typedef Documentation

◆ LayerFactory_Impl

typedef std::map<std::string, std::vector<LayerFactory::Constructor> > cv::dnn::LayerFactory_Impl

◆ MatShape

typedef std::vector<int> cv::dnn::MatShape

#include <opencv2/dnn/dnn.hpp>

Enumeration Type Documentation

◆ Backend

#include <opencv2/dnn/dnn.hpp>

Enum of computation backends supported by layers.

See also
Net::setPreferableBackend
Enumerator
DNN_BACKEND_DEFAULT 
Python: cv.dnn.DNN_BACKEND_DEFAULT

DNN_BACKEND_DEFAULT equals OPENCV_DNN_BACKEND_DEFAULT, which can be defined using CMake or a configuration parameter.

DNN_BACKEND_HALIDE 
Python: cv.dnn.DNN_BACKEND_HALIDE
DNN_BACKEND_INFERENCE_ENGINE 
Python: cv.dnn.DNN_BACKEND_INFERENCE_ENGINE

Intel OpenVINO computational backend

Note
Tutorial on building OpenCV with OpenVINO: OpenCV usage with OpenVINO
DNN_BACKEND_OPENCV 
Python: cv.dnn.DNN_BACKEND_OPENCV
DNN_BACKEND_VKCOM 
Python: cv.dnn.DNN_BACKEND_VKCOM
DNN_BACKEND_CUDA 
Python: cv.dnn.DNN_BACKEND_CUDA
DNN_BACKEND_WEBNN 
Python: cv.dnn.DNN_BACKEND_WEBNN
DNN_BACKEND_TIMVX 
Python: cv.dnn.DNN_BACKEND_TIMVX
DNN_BACKEND_CANN 
Python: cv.dnn.DNN_BACKEND_CANN

◆ DataLayout

#include <opencv2/dnn/dnn.hpp>

Enum of data layout for model inference.

See also
Image2BlobParams
Enumerator
DNN_LAYOUT_UNKNOWN 
Python: cv.dnn.DNN_LAYOUT_UNKNOWN
DNN_LAYOUT_ND 
Python: cv.dnn.DNN_LAYOUT_ND

OpenCV data layout for 2D data.

DNN_LAYOUT_NCHW 
Python: cv.dnn.DNN_LAYOUT_NCHW

OpenCV data layout for 4D data.

DNN_LAYOUT_NCDHW 
Python: cv.dnn.DNN_LAYOUT_NCDHW

OpenCV data layout for 5D data.

DNN_LAYOUT_NHWC 
Python: cv.dnn.DNN_LAYOUT_NHWC

Tensorflow-like data layout for 4D data.

DNN_LAYOUT_NDHWC 
Python: cv.dnn.DNN_LAYOUT_NDHWC

Tensorflow-like data layout for 5D data.

DNN_LAYOUT_PLANAR 
Python: cv.dnn.DNN_LAYOUT_PLANAR

TensorFlow-like data layout; it should only be used when parsing TF or TFLite models.
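To make the layout tags concrete, the difference between channels-first (NCHW) and channels-last (NHWC) is a pure axis transpose. The following pure-Python sketch is an independent illustration using nested lists; the helper name is an assumption, not an OpenCV API:

```python
# Convert an N x C x H x W blob (NCHW, channels-first) to
# N x H x W x C (NHWC, channels-last) using plain nested lists.
def nchw_to_nhwc(blob):
    n_, c_ = len(blob), len(blob[0])
    h_, w_ = len(blob[0][0]), len(blob[0][0][0])
    return [[[[blob[n][c][h][w] for c in range(c_)]
              for w in range(w_)]
             for h in range(h_)]
            for n in range(n_)]

nchw = [[[[1, 2]], [[3, 4]]]]  # N=1, C=2, H=1, W=2
print(nchw_to_nhwc(nchw))      # [[[[1, 3], [2, 4]]]]
```

The same transpose, applied in reverse, takes a TensorFlow-style NHWC tensor back to the NCHW order that blobFromImage produces.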

◆ ImagePaddingMode

#include <opencv2/dnn/dnn.hpp>

Enum of image preprocessing modes, provided to support model-specific preprocessing requirements; for example, letterbox resizing is often used by the YOLO series of models.

See also
Image2BlobParams
Enumerator
DNN_PMODE_NULL 
Python: cv.dnn.DNN_PMODE_NULL
DNN_PMODE_CROP_CENTER 
Python: cv.dnn.DNN_PMODE_CROP_CENTER
DNN_PMODE_LETTERBOX 
Python: cv.dnn.DNN_PMODE_LETTERBOX
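The geometry behind DNN_PMODE_LETTERBOX can be sketched in a few lines: the image is scaled to fit the target size while preserving its aspect ratio, and the remainder is padded. This pure-Python sketch is an illustration of the idea, not OpenCV's implementation; the helper name and rounding choices are assumptions:

```python
# Compute letterbox geometry: resized dimensions plus the top-left padding
# needed to center the image inside the target canvas.
def letterbox_params(src_w, src_h, dst_w, dst_h):
    scale = min(dst_w / src_w, dst_h / src_h)   # fit without distortion
    new_w = int(round(src_w * scale))
    new_h = int(round(src_h * scale))
    pad_x = (dst_w - new_w) // 2                # horizontal padding (left)
    pad_y = (dst_h - new_h) // 2                # vertical padding (top)
    return new_w, new_h, pad_x, pad_y

# A 1280x720 frame letterboxed into a 640x640 input:
print(letterbox_params(1280, 720, 640, 640))   # (640, 360, 0, 140)
```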

◆ SoftNMSMethod

enum class cv::dnn::SoftNMSMethod

#include <opencv2/dnn/dnn.hpp>

Enum of Soft NMS methods.

See also
softNMSBoxes
Enumerator
SOFTNMS_LINEAR 
SOFTNMS_GAUSSIAN 
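The two enumerators correspond to the rescoring rules in the Soft-NMS paper referenced by softNMSBoxes: instead of discarding a box that overlaps a higher-scoring one, its score is multiplied by a decay weight. A pure-Python sketch of the two weights (the helper name and default values are illustrative assumptions, not OpenCV API):

```python
import math

def soft_nms_weight(iou_val, nms_threshold=0.3, sigma=0.5, method="gaussian"):
    if method == "linear":
        # SOFTNMS_LINEAR: decay by (1 - IoU), but only once the overlap
        # exceeds the NMS threshold; smaller overlaps are left untouched.
        return 1.0 - iou_val if iou_val > nms_threshold else 1.0
    # SOFTNMS_GAUSSIAN: smooth exponential decay applied to every overlap.
    return math.exp(-(iou_val ** 2) / sigma)

print(soft_nms_weight(0.2, method="linear"))  # 1.0 (below threshold)
print(soft_nms_weight(0.0))                   # 1.0 (no overlap, no decay)
```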

◆ Target

#include <opencv2/dnn/dnn.hpp>

Enum of target devices for computations.

See also
Net::setPreferableTarget
Enumerator
DNN_TARGET_CPU 
Python: cv.dnn.DNN_TARGET_CPU
DNN_TARGET_OPENCL 
Python: cv.dnn.DNN_TARGET_OPENCL
DNN_TARGET_OPENCL_FP16 
Python: cv.dnn.DNN_TARGET_OPENCL_FP16
DNN_TARGET_MYRIAD 
Python: cv.dnn.DNN_TARGET_MYRIAD
DNN_TARGET_VULKAN 
Python: cv.dnn.DNN_TARGET_VULKAN
DNN_TARGET_FPGA 
Python: cv.dnn.DNN_TARGET_FPGA

FPGA device with CPU fallbacks using Inference Engine's Heterogeneous plugin.

DNN_TARGET_CUDA 
Python: cv.dnn.DNN_TARGET_CUDA
DNN_TARGET_CUDA_FP16 
Python: cv.dnn.DNN_TARGET_CUDA_FP16
DNN_TARGET_HDDL 
Python: cv.dnn.DNN_TARGET_HDDL
DNN_TARGET_NPU 
Python: cv.dnn.DNN_TARGET_NPU
DNN_TARGET_CPU_FP16 
Python: cv.dnn.DNN_TARGET_CPU_FP16

Function Documentation

◆ blobFromImage() [1/2]

Mat cv::dnn::blobFromImage ( InputArray image,
double scalefactor = 1.0,
const Size & size = Size(),
const Scalar & mean = Scalar(),
bool swapRB = false,
bool crop = false,
int ddepth = CV_32F )
Python:
cv.dnn.blobFromImage(image[, scalefactor[, size[, mean[, swapRB[, crop[, ddepth]]]]]]) -> retval

#include <opencv2/dnn/dnn.hpp>

Creates 4-dimensional blob from image. Optionally resizes and crops the image from center, subtracts mean values, scales values by scalefactor, and swaps Blue and Red channels.

Parameters
image: input image (with 1-, 3- or 4-channels).
scalefactor: multiplier for image values.
size: spatial size for the output image.
mean: scalar with mean values which are subtracted from channels. Values are intended to be in (mean-R, mean-G, mean-B) order if image has BGR ordering and swapRB is true.
swapRB: flag which indicates that the first and last channels of a 3-channel image should be swapped.
crop: flag which indicates whether the image will be cropped after resize or not.
ddepth: depth of the output blob. Choose CV_32F or CV_8U.

If crop is true, the input image is resized so that one side after resize equals the corresponding dimension in size and the other is equal or larger; then a crop from the center is performed. If crop is false, a direct resize without cropping (and without preserving the aspect ratio) is performed.

Returns
4-dimensional Mat with NCHW dimensions order.
Note
The order and usage of scalefactor and mean are (input - mean) * scalefactor.
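The per-pixel arithmetic in the note above, together with the optional channel swap and the NCHW output layout, can be sketched in pure Python. This is an independent illustration, not OpenCV's implementation; the helper name, the nested-list image representation, and the convention that mean is given in output-channel order are assumptions:

```python
# Sketch of blobFromImage's arithmetic: out = (input - mean) * scalefactor,
# with an optional B<->R swap, emitted as a 1 x C x H x W (NCHW) blob.
def blob_from_image(image, scalefactor=1.0, mean=(0.0, 0.0, 0.0), swap_rb=False):
    # image: H x W x C nested lists, BGR channel order (like cv::Mat)
    h, w, c = len(image), len(image[0]), len(image[0][0])
    order = [2, 1, 0] if (swap_rb and c == 3) else list(range(c))
    return [[[[(image[y][x][order[ch]] - mean[ch]) * scalefactor
               for x in range(w)]
              for y in range(h)]
             for ch in range(c)]]

img = [[[10.0, 20.0, 30.0]]]                       # one BGR pixel
print(blob_from_image(img, 0.5, (10.0, 20.0, 30.0)))  # [[[[0.0]], [[0.0]], [[0.0]]]]
```

Note how mean is subtracted before scalefactor is applied, matching the (input - mean) * scalefactor order stated above.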

◆ blobFromImage() [2/2]

void cv::dnn::blobFromImage ( InputArray image,
OutputArray blob,
double scalefactor = 1.0,
const Size & size = Size(),
const Scalar & mean = Scalar(),
bool swapRB = false,
bool crop = false,
int ddepth = CV_32F )
Python:
cv.dnn.blobFromImage(image[, scalefactor[, size[, mean[, swapRB[, crop[, ddepth]]]]]]) -> retval

#include <opencv2/dnn/dnn.hpp>

Creates 4-dimensional blob from image.

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

◆ blobFromImages() [1/2]

Mat cv::dnn::blobFromImages ( InputArrayOfArrays images,
double scalefactor = 1.0,
Size size = Size(),
const Scalar & mean = Scalar(),
bool swapRB = false,
bool crop = false,
int ddepth = CV_32F )
Python:
cv.dnn.blobFromImages(images[, scalefactor[, size[, mean[, swapRB[, crop[, ddepth]]]]]]) -> retval

#include <opencv2/dnn/dnn.hpp>

Creates 4-dimensional blob from series of images. Optionally resizes and crops the images from center, subtracts mean values, scales values by scalefactor, and swaps Blue and Red channels.

Parameters
images: input images (all with 1-, 3- or 4-channels).
size: spatial size for the output images.
mean: scalar with mean values which are subtracted from channels. Values are intended to be in (mean-R, mean-G, mean-B) order if image has BGR ordering and swapRB is true.
scalefactor: multiplier for image values.
swapRB: flag which indicates that the first and last channels of a 3-channel image should be swapped.
crop: flag which indicates whether the images will be cropped after resize or not.
ddepth: depth of the output blob. Choose CV_32F or CV_8U.

If crop is true, the input image is resized so that one side after resize equals the corresponding dimension in size and the other is equal or larger; then a crop from the center is performed. If crop is false, a direct resize without cropping (and without preserving the aspect ratio) is performed.

Returns
4-dimensional Mat with NCHW dimensions order.
Note
The order and usage of scalefactor and mean are (input - mean) * scalefactor.

◆ blobFromImages() [2/2]

void cv::dnn::blobFromImages ( InputArrayOfArrays images,
OutputArray blob,
double scalefactor = 1.0,
Size size = Size(),
const Scalar & mean = Scalar(),
bool swapRB = false,
bool crop = false,
int ddepth = CV_32F )
Python:
cv.dnn.blobFromImages(images[, scalefactor[, size[, mean[, swapRB[, crop[, ddepth]]]]]]) -> retval

#include <opencv2/dnn/dnn.hpp>

Creates 4-dimensional blob from series of images.

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

◆ blobFromImagesWithParams() [1/2]

Mat cv::dnn::blobFromImagesWithParams ( InputArrayOfArrays images,
const Image2BlobParams & param = Image2BlobParams() )
Python:
cv.dnn.blobFromImagesWithParams(images[, param]) -> retval
cv.dnn.blobFromImagesWithParams(images[, blob[, param]]) -> blob

#include <opencv2/dnn/dnn.hpp>

Creates 4-dimensional blob from series of images with given params.

This function is an extension of blobFromImages to meet more image preprocessing needs. Given input images and preprocessing parameters, the function outputs the blob.

Parameters
images: input images (all with 1-, 3- or 4-channels).
param: an Image2BlobParams struct containing all parameters needed to convert the images to a blob.
Returns
4-dimensional Mat.

◆ blobFromImagesWithParams() [2/2]

void cv::dnn::blobFromImagesWithParams ( InputArrayOfArrays images,
OutputArray blob,
const Image2BlobParams & param = Image2BlobParams() )
Python:
cv.dnn.blobFromImagesWithParams(images[, param]) -> retval
cv.dnn.blobFromImagesWithParams(images[, blob[, param]]) -> blob

#include <opencv2/dnn/dnn.hpp>

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

◆ blobFromImageWithParams() [1/2]

Mat cv::dnn::blobFromImageWithParams ( InputArray image,
const Image2BlobParams & param = Image2BlobParams() )
Python:
cv.dnn.blobFromImageWithParams(image[, param]) -> retval
cv.dnn.blobFromImageWithParams(image[, blob[, param]]) -> blob

#include <opencv2/dnn/dnn.hpp>

Creates 4-dimensional blob from image with given params.

This function is an extension of blobFromImage to meet more image preprocessing needs. Given an input image and preprocessing parameters, the function outputs the blob.

Parameters
image: input image (with 1-, 3- or 4-channels).
param: an Image2BlobParams struct containing all parameters needed to convert the image to a blob.
Returns
4-dimensional Mat.

◆ blobFromImageWithParams() [2/2]

void cv::dnn::blobFromImageWithParams ( InputArray image,
OutputArray blob,
const Image2BlobParams & param = Image2BlobParams() )
Python:
cv.dnn.blobFromImageWithParams(image[, param]) -> retval
cv.dnn.blobFromImageWithParams(image[, blob[, param]]) -> blob

#include <opencv2/dnn/dnn.hpp>

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

◆ enableModelDiagnostics()

void cv::dnn::enableModelDiagnostics ( bool isDiagnosticsMode)

#include <opencv2/dnn/dnn.hpp>

Enables detailed logging of DNN model loading with the CV DNN API.

Parameters
[in] isDiagnosticsMode: indicates whether diagnostic mode should be set.

Diagnostic mode provides detailed logging of the model loading stage to explore potential problems (e.g., a layer type that is not implemented).

Note
In diagnostic mode a series of assertions is skipped, which can lead to application crashes.

◆ getAvailableBackends()

std::vector< std::pair< Backend, Target > > cv::dnn::getAvailableBackends ( )

#include <opencv2/dnn/dnn.hpp>

◆ getAvailableTargets()

std::vector< Target > cv::dnn::getAvailableTargets ( dnn::Backend be)
Python:
cv.dnn.getAvailableTargets(be) -> retval

#include <opencv2/dnn/dnn.hpp>

◆ getLayerFactoryImpl()

LayerFactory_Impl & cv::dnn::getLayerFactoryImpl ( )

#include <opencv2/dnn/layer_reg.private.hpp>

Register layer types of DNN model.

Note
In order to thread-safely access the factory, see getLayerFactoryMutex() function.

◆ getLayerFactoryMutex()

Mutex & cv::dnn::getLayerFactoryMutex ( )

#include <opencv2/dnn/layer_reg.private.hpp>

Get the mutex guarding LayerFactory_Impl, see getLayerFactoryImpl() function.

◆ imagesFromBlob()

void cv::dnn::imagesFromBlob ( const cv::Mat & blob_,
OutputArrayOfArrays images_ )
Python:
cv.dnn.imagesFromBlob(blob_[, images_]) -> images_

#include <opencv2/dnn/dnn.hpp>

Parse a 4D blob and output the images it contains as 2D arrays through a simpler data structure (std::vector<cv::Mat>).

Parameters
[in] blob_: 4-dimensional array (images, channels, height, width) in floating point precision (CV_32F) from which you would like to extract the images.
[out] images_: array of 2D Mat containing the images extracted from the blob in floating point precision (CV_32F). They are not normalized, nor is the mean added. The number of returned images equals the first dimension of the blob (batch size). Every image has a number of channels equal to the second dimension of the blob (depth).

◆ NMSBoxes() [1/3]

void cv::dnn::NMSBoxes ( const std::vector< Rect > & bboxes,
const std::vector< float > & scores,
const float score_threshold,
const float nms_threshold,
std::vector< int > & indices,
const float eta = 1.f,
const int top_k = 0 )
Python:
cv.dnn.NMSBoxes(bboxes, scores, score_threshold, nms_threshold[, eta[, top_k]]) -> indices
cv.dnn.NMSBoxesRotated(bboxes, scores, score_threshold, nms_threshold[, eta[, top_k]]) -> indices

#include <opencv2/dnn/dnn.hpp>

Performs non maximum suppression given boxes and corresponding scores.

Parameters
bboxes: a set of bounding boxes to apply NMS.
scores: a set of corresponding confidences.
score_threshold: a threshold used to filter boxes by score.
nms_threshold: a threshold used in non maximum suppression.
indices: the kept indices of bboxes after NMS.
eta: a coefficient in the adaptive threshold formula: \(nms\_threshold_{i+1}=eta\cdot nms\_threshold_i\).
top_k: if >0, keep at most top_k picked indices.
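The greedy procedure behind these parameters can be sketched in pure Python: sort by score, then keep each box that does not overlap an already-kept box beyond nms_threshold. This is an independent illustration of the algorithm, not OpenCV's implementation; boxes here are (x, y, w, h) tuples and the helper names are assumptions:

```python
# Intersection-over-union of two axis-aligned (x, y, w, h) boxes.
def iou(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

# Greedy NMS: highest score first, suppress boxes overlapping a kept box.
def nms_boxes(bboxes, scores, score_threshold, nms_threshold, top_k=0):
    order = sorted((i for i, s in enumerate(scores) if s >= score_threshold),
                   key=lambda i: scores[i], reverse=True)
    kept = []
    for i in order:
        if all(iou(bboxes[i], bboxes[j]) <= nms_threshold for j in kept):
            kept.append(i)
            if top_k > 0 and len(kept) == top_k:
                break
    return kept

boxes = [(0, 0, 10, 10), (1, 1, 10, 10), (20, 20, 5, 5)]
print(nms_boxes(boxes, [0.9, 0.8, 0.7], 0.5, 0.4))  # [0, 2]
```

Box 1 is suppressed because its IoU with the higher-scoring box 0 (about 0.68) exceeds nms_threshold, while the distant box 2 survives.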

◆ NMSBoxes() [2/3]

void cv::dnn::NMSBoxes ( const std::vector< Rect2d > & bboxes,
const std::vector< float > & scores,
const float score_threshold,
const float nms_threshold,
std::vector< int > & indices,
const float eta = 1.f,
const int top_k = 0 )
Python:
cv.dnn.NMSBoxes(bboxes, scores, score_threshold, nms_threshold[, eta[, top_k]]) -> indices
cv.dnn.NMSBoxesRotated(bboxes, scores, score_threshold, nms_threshold[, eta[, top_k]]) -> indices

#include <opencv2/dnn/dnn.hpp>

◆ NMSBoxes() [3/3]

void cv::dnn::NMSBoxes ( const std::vector< RotatedRect > & bboxes,
const std::vector< float > & scores,
const float score_threshold,
const float nms_threshold,
std::vector< int > & indices,
const float eta = 1.f,
const int top_k = 0 )
Python:
cv.dnn.NMSBoxes(bboxes, scores, score_threshold, nms_threshold[, eta[, top_k]]) -> indices
cv.dnn.NMSBoxesRotated(bboxes, scores, score_threshold, nms_threshold[, eta[, top_k]]) -> indices

#include <opencv2/dnn/dnn.hpp>

◆ NMSBoxesBatched() [1/2]

void cv::dnn::NMSBoxesBatched ( const std::vector< Rect > & bboxes,
const std::vector< float > & scores,
const std::vector< int > & class_ids,
const float score_threshold,
const float nms_threshold,
std::vector< int > & indices,
const float eta = 1.f,
const int top_k = 0 )
Python:
cv.dnn.NMSBoxesBatched(bboxes, scores, class_ids, score_threshold, nms_threshold[, eta[, top_k]]) -> indices

#include <opencv2/dnn/dnn.hpp>

Performs batched non maximum suppression on given boxes and corresponding scores across different classes.

Parameters
bboxes: a set of bounding boxes to apply NMS.
scores: a set of corresponding confidences.
class_ids: a set of corresponding class IDs. IDs are integers and usually start from 0.
score_threshold: a threshold used to filter boxes by score.
nms_threshold: a threshold used in non maximum suppression.
indices: the kept indices of bboxes after NMS.
eta: a coefficient in the adaptive threshold formula: \(nms\_threshold_{i+1}=eta\cdot nms\_threshold_i\).
top_k: if >0, keep at most top_k picked indices.
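Class-aware ("batched") NMS means boxes only suppress other boxes of the same class. One common way to express it, sketched below in pure Python as an illustration of the idea (not OpenCV's implementation; helper names are assumptions), is to shift each box by a large per-class offset so boxes of different classes can never overlap, then run plain NMS:

```python
# IoU of two axis-aligned (x, y, w, h) boxes.
def _iou(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

# Plain greedy NMS, highest score first.
def _nms(bboxes, scores, score_thr, nms_thr):
    order = sorted((i for i, s in enumerate(scores) if s >= score_thr),
                   key=lambda i: scores[i], reverse=True)
    kept = []
    for i in order:
        if all(_iou(bboxes[i], bboxes[j]) <= nms_thr for j in kept):
            kept.append(i)
    return kept

def nms_boxes_batched(bboxes, scores, class_ids, score_thr, nms_thr):
    # Offset larger than any coordinate, so distinct classes never overlap.
    offset = 1.0 + max(max(x + w, y + h) for x, y, w, h in bboxes)
    shifted = [(x + cid * offset, y + cid * offset, w, h)
               for (x, y, w, h), cid in zip(bboxes, class_ids)]
    return _nms(shifted, scores, score_thr, nms_thr)

boxes = [(0, 0, 10, 10), (0, 0, 10, 10)]
print(nms_boxes_batched(boxes, [0.9, 0.8], [0, 1], 0.5, 0.4))  # [0, 1]
```

With identical boxes but different class IDs both survive; give them the same class ID and only the higher-scoring one is kept.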

◆ NMSBoxesBatched() [2/2]

void cv::dnn::NMSBoxesBatched ( const std::vector< Rect2d > & bboxes,
const std::vector< float > & scores,
const std::vector< int > & class_ids,
const float score_threshold,
const float nms_threshold,
std::vector< int > & indices,
const float eta = 1.f,
const int top_k = 0 )
Python:
cv.dnn.NMSBoxesBatched(bboxes, scores, class_ids, score_threshold, nms_threshold[, eta[, top_k]]) -> indices

#include <opencv2/dnn/dnn.hpp>

◆ readNet() [1/2]

Net cv::dnn::readNet ( const String & framework,
const std::vector< uchar > & bufferModel,
const std::vector< uchar > & bufferConfig = std::vector< uchar >() )
Python:
cv.dnn.readNet(model[, config[, framework]]) -> retval
cv.dnn.readNet(framework, bufferModel[, bufferConfig]) -> retval

#include <opencv2/dnn/dnn.hpp>

Read deep learning network represented in one of the supported formats.

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

Parameters
[in] framework: name of the origin framework.
[in] bufferModel: a buffer with the content of a binary file with weights.
[in] bufferConfig: a buffer with the content of a text file containing the network configuration.
Returns
Net object.

◆ readNet() [2/2]

Net cv::dnn::readNet ( CV_WRAP_FILE_PATH const String & model,
CV_WRAP_FILE_PATH const String & config = "",
const String & framework = "" )
Python:
cv.dnn.readNet(model[, config[, framework]]) -> retval
cv.dnn.readNet(framework, bufferModel[, bufferConfig]) -> retval

#include <opencv2/dnn/dnn.hpp>

Read deep learning network represented in one of the supported formats.

Parameters
[in] model: binary file containing trained weights. The following file extensions are expected for models from different frameworks:
[in] config: text file containing network configuration. It could be a file with the following extensions:
[in] framework: explicit framework name tag to determine a format.
Returns
Net object.

This function automatically detects the origin framework of the trained model and calls an appropriate function such as readNetFromCaffe, readNetFromTensorflow, readNetFromTorch or readNetFromDarknet. The order of model and config arguments does not matter.
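The kind of dispatch described above can be sketched as a mapping from file extensions to the reader functions listed in this module. The mapping and helper below are illustrative assumptions in pure Python, not OpenCV's actual detection logic:

```python
# Hypothetical extension-to-framework table, mirroring the readNetFrom* readers
# documented in this module (Caffe, TensorFlow, Darknet, ONNX, TFLite, Torch,
# Model Optimizer). Illustration only.
EXT_TO_FRAMEWORK = {
    ".caffemodel": "caffe", ".prototxt": "caffe",
    ".pb": "tensorflow", ".pbtxt": "tensorflow",
    ".weights": "darknet", ".cfg": "darknet",
    ".onnx": "onnx", ".tflite": "tflite",
    ".t7": "torch",
    ".xml": "dldt", ".bin": "dldt",
}

def guess_framework(model_path, config_path=""):
    # Either argument may identify the framework, so check both.
    for path in (model_path, config_path):
        for ext, fw in EXT_TO_FRAMEWORK.items():
            if path.endswith(ext):
                return fw
    raise ValueError("cannot determine framework for: " + model_path)

print(guess_framework("yolov4.weights", "yolov4.cfg"))  # darknet
```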

◆ readNetFromCaffe() [1/3]

Net cv::dnn::readNetFromCaffe ( const char * bufferProto,
size_t lenProto,
const char * bufferModel = NULL,
size_t lenModel = 0 )
Python:
cv.dnn.readNetFromCaffe(prototxt[, caffeModel]) -> retval
cv.dnn.readNetFromCaffe(bufferProto[, bufferModel]) -> retval

#include <opencv2/dnn/dnn.hpp>

Reads a network model stored in Caffe format from memory.

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

Parameters
bufferProto: buffer containing the content of the .prototxt file.
lenProto: length of bufferProto.
bufferModel: buffer containing the content of the .caffemodel file.
lenModel: length of bufferModel.
Returns
Net object.

◆ readNetFromCaffe() [2/3]

Net cv::dnn::readNetFromCaffe ( const std::vector< uchar > & bufferProto,
const std::vector< uchar > & bufferModel = std::vector< uchar >() )
Python:
cv.dnn.readNetFromCaffe(prototxt[, caffeModel]) -> retval
cv.dnn.readNetFromCaffe(bufferProto[, bufferModel]) -> retval

#include <opencv2/dnn/dnn.hpp>

Reads a network model stored in Caffe format from memory.

Parameters
bufferProto: buffer containing the content of the .prototxt file.
bufferModel: buffer containing the content of the .caffemodel file.
Returns
Net object.

◆ readNetFromCaffe() [3/3]

Net cv::dnn::readNetFromCaffe ( CV_WRAP_FILE_PATH const String & prototxt,
CV_WRAP_FILE_PATH const String & caffeModel = String() )
Python:
cv.dnn.readNetFromCaffe(prototxt[, caffeModel]) -> retval
cv.dnn.readNetFromCaffe(bufferProto[, bufferModel]) -> retval

#include <opencv2/dnn/dnn.hpp>

Reads a network model stored in Caffe framework's format.

Parameters
prototxt: path to the .prototxt file with text description of the network architecture.
caffeModel: path to the .caffemodel file with learned network.
Returns
Net object.

◆ readNetFromDarknet() [1/3]

Net cv::dnn::readNetFromDarknet ( const char * bufferCfg,
size_t lenCfg,
const char * bufferModel = NULL,
size_t lenModel = 0 )
Python:
cv.dnn.readNetFromDarknet(cfgFile[, darknetModel]) -> retval
cv.dnn.readNetFromDarknet(bufferCfg[, bufferModel]) -> retval

#include <opencv2/dnn/dnn.hpp>

Reads a network model stored in Darknet model files.

Parameters
bufferCfg: buffer containing the content of the .cfg file with text description of the network architecture.
lenCfg: number of bytes to read from bufferCfg
bufferModel: buffer containing the content of the .weights file with learned network.
lenModel: number of bytes to read from bufferModel
Returns
Net object.

◆ readNetFromDarknet() [2/3]

Net cv::dnn::readNetFromDarknet ( const std::vector< uchar > & bufferCfg,
const std::vector< uchar > & bufferModel = std::vector< uchar >() )
Python:
cv.dnn.readNetFromDarknet(cfgFile[, darknetModel]) -> retval
cv.dnn.readNetFromDarknet(bufferCfg[, bufferModel]) -> retval

#include <opencv2/dnn/dnn.hpp>

Reads a network model stored in Darknet model files.

Parameters
bufferCfg: buffer containing the content of the .cfg file with text description of the network architecture.
bufferModel: buffer containing the content of the .weights file with learned network.
Returns
Net object.

◆ readNetFromDarknet() [3/3]

Net cv::dnn::readNetFromDarknet ( CV_WRAP_FILE_PATH const String & cfgFile,
CV_WRAP_FILE_PATH const String & darknetModel = String() )
Python:
cv.dnn.readNetFromDarknet(cfgFile[, darknetModel]) -> retval
cv.dnn.readNetFromDarknet(bufferCfg[, bufferModel]) -> retval

#include <opencv2/dnn/dnn.hpp>

Reads a network model stored in Darknet model files.

Parameters
cfgFile: path to the .cfg file with text description of the network architecture.
darknetModel: path to the .weights file with learned network.
Returns
Net object that is ready to run a forward pass; throws an exception in failure cases.

◆ readNetFromModelOptimizer() [1/3]

Net cv::dnn::readNetFromModelOptimizer ( const std::vector< uchar > & bufferModelConfig,
const std::vector< uchar > & bufferWeights )
Python:
cv.dnn.readNetFromModelOptimizer(xml[, bin]) -> retval
cv.dnn.readNetFromModelOptimizer(bufferModelConfig, bufferWeights) -> retval

#include <opencv2/dnn/dnn.hpp>

Load a network from Intel's Model Optimizer intermediate representation.

Parameters
[in] bufferModelConfig: buffer containing the XML configuration with the network's topology.
[in] bufferWeights: buffer containing binary data with trained weights.
Returns
Net object. Networks imported from Intel's Model Optimizer are launched in Intel's Inference Engine backend.

◆ readNetFromModelOptimizer() [2/3]

Net cv::dnn::readNetFromModelOptimizer ( const uchar * bufferModelConfigPtr,
size_t bufferModelConfigSize,
const uchar * bufferWeightsPtr,
size_t bufferWeightsSize )
Python:
cv.dnn.readNetFromModelOptimizer(xml[, bin]) -> retval
cv.dnn.readNetFromModelOptimizer(bufferModelConfig, bufferWeights) -> retval

#include <opencv2/dnn/dnn.hpp>

Load a network from Intel's Model Optimizer intermediate representation.

Parameters
[in] bufferModelConfigPtr: pointer to the buffer containing the XML configuration with the network's topology.
[in] bufferModelConfigSize: size in bytes of the XML configuration data.
[in] bufferWeightsPtr: pointer to the buffer containing binary data with trained weights.
[in] bufferWeightsSize: size in bytes of the trained weights data.
Returns
Net object. Networks imported from Intel's Model Optimizer are launched in Intel's Inference Engine backend.

◆ readNetFromModelOptimizer() [3/3]

Net cv::dnn::readNetFromModelOptimizer ( CV_WRAP_FILE_PATH const String & xml,
CV_WRAP_FILE_PATH const String & bin = "" )
Python:
cv.dnn.readNetFromModelOptimizer(xml[, bin]) -> retval
cv.dnn.readNetFromModelOptimizer(bufferModelConfig, bufferWeights) -> retval

#include <opencv2/dnn/dnn.hpp>

Load a network from Intel's Model Optimizer intermediate representation.

Parameters
[in] xml: XML configuration file with the network's topology.
[in] bin: binary file with trained weights.
Returns
Net object. Networks imported from Intel's Model Optimizer are launched in Intel's Inference Engine backend.

◆ readNetFromONNX() [1/3]

Net cv::dnn::readNetFromONNX ( const char * buffer,
size_t sizeBuffer )
Python:
cv.dnn.readNetFromONNX(onnxFile) -> retval
cv.dnn.readNetFromONNX(buffer) -> retval

#include <opencv2/dnn/dnn.hpp>

Reads a network model from an in-memory ONNX buffer.

Parameters
buffer: memory address of the first byte of the buffer.
sizeBuffer: size of the buffer.
Returns
Net object that is ready to run a forward pass; throws an exception in failure cases.

◆ readNetFromONNX() [2/3]

Net cv::dnn::readNetFromONNX ( const std::vector< uchar > & buffer)
Python:
cv.dnn.readNetFromONNX(onnxFile) -> retval
cv.dnn.readNetFromONNX(buffer) -> retval

#include <opencv2/dnn/dnn.hpp>

Reads a network model from an in-memory ONNX buffer.

Parameters
buffer: in-memory buffer that stores the ONNX model bytes.
Returns
Net object that is ready to run a forward pass; throws an exception in failure cases.

◆ readNetFromONNX() [3/3]

Net cv::dnn::readNetFromONNX ( CV_WRAP_FILE_PATH const String & onnxFile)
Python:
cv.dnn.readNetFromONNX(onnxFile) -> retval
cv.dnn.readNetFromONNX(buffer) -> retval

#include <opencv2/dnn/dnn.hpp>

Reads a network model from an ONNX file.

Parameters
onnxFile: path to the .onnx file with binary protobuf description of the network architecture.
Returns
Net object that is ready to run a forward pass; throws an exception in failure cases.

◆ readNetFromTensorflow() [1/3]

Net cv::dnn::readNetFromTensorflow ( const char * bufferModel,
size_t lenModel,
const char * bufferConfig = NULL,
size_t lenConfig = 0 )
Python:
cv.dnn.readNetFromTensorflow(model[, config]) -> retval
cv.dnn.readNetFromTensorflow(bufferModel[, bufferConfig]) -> retval

#include <opencv2/dnn/dnn.hpp>

Reads a network model stored in TensorFlow framework's format.

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

Parameters
bufferModel: buffer containing the content of the .pb file
lenModel: length of bufferModel
bufferConfig: buffer containing the content of the .pbtxt file
lenConfig: length of bufferConfig

◆ readNetFromTensorflow() [2/3]

Net cv::dnn::readNetFromTensorflow ( const std::vector< uchar > & bufferModel,
const std::vector< uchar > & bufferConfig = std::vector< uchar >() )
Python:
cv.dnn.readNetFromTensorflow(model[, config]) -> retval
cv.dnn.readNetFromTensorflow(bufferModel[, bufferConfig]) -> retval

#include <opencv2/dnn/dnn.hpp>

Reads a network model stored in TensorFlow framework's format.

Parameters
bufferModel: buffer containing the content of the .pb file
bufferConfig: buffer containing the content of the .pbtxt file
Returns
Net object.

◆ readNetFromTensorflow() [3/3]

Net cv::dnn::readNetFromTensorflow ( CV_WRAP_FILE_PATH const String & model,
CV_WRAP_FILE_PATH const String & config = String() )
Python:
cv.dnn.readNetFromTensorflow(model[, config]) -> retval
cv.dnn.readNetFromTensorflow(bufferModel[, bufferConfig]) -> retval

#include <opencv2/dnn/dnn.hpp>

Reads a network model stored in TensorFlow framework's format.

Parameters
model: path to the .pb file with binary protobuf description of the network architecture
config: path to the .pbtxt file that contains text graph definition in protobuf format. The resulting Net object is built from the text graph, using weights from the binary protobuf, which makes the import more flexible.
Returns
Net object.

◆ readNetFromTFLite() [1/3]

Net cv::dnn::readNetFromTFLite ( const char * bufferModel,
size_t lenModel )
Python:
cv.dnn.readNetFromTFLite(model) -> retval
cv.dnn.readNetFromTFLite(bufferModel) -> retval

#include <opencv2/dnn/dnn.hpp>

Reads a network model stored in TFLite framework's format.

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

Parameters
bufferModel: buffer containing the content of the .tflite file
lenModel: length of bufferModel

◆ readNetFromTFLite() [2/3]

Net cv::dnn::readNetFromTFLite ( const std::vector< uchar > & bufferModel)
Python:
cv.dnn.readNetFromTFLite(model) -> retval
cv.dnn.readNetFromTFLite(bufferModel) -> retval

#include <opencv2/dnn/dnn.hpp>

Reads a network model stored in TFLite framework's format.

Parameters
bufferModel: buffer containing the content of the .tflite file
Returns
Net object.

◆ readNetFromTFLite() [3/3]

Net cv::dnn::readNetFromTFLite ( CV_WRAP_FILE_PATH const String & model)
Python:
cv.dnn.readNetFromTFLite(model) -> retval
cv.dnn.readNetFromTFLite(bufferModel) -> retval

#include <opencv2/dnn/dnn.hpp>

Reads a network model stored in TFLite framework's format.

Parameters
model: path to the .tflite file with binary flatbuffers description of the network architecture
Returns
Net object.

◆ readNetFromTorch()

Net cv::dnn::readNetFromTorch ( CV_WRAP_FILE_PATH const String & model,
bool isBinary = true,
bool evaluate = true )
Python:
cv.dnn.readNetFromTorch(model[, isBinary[, evaluate]]) -> retval

#include <opencv2/dnn/dnn.hpp>

Reads a network model stored in Torch7 framework's format.

Parameters
model: path to the file, dumped from Torch by using the torch.save() function.
isBinary: specifies whether the network was serialized in ASCII or binary mode.
evaluate: specifies the testing phase of the network. If true, it's similar to the evaluate() method in Torch.
Returns
Net object.
Note
ASCII mode of the Torch serializer is preferable, because binary mode extensively uses the C long type, whose bit length varies across systems.

The file being loaded must contain a serialized nn.Module object with the network to import. Try to eliminate custom objects from the serialized data to avoid import errors.

List of supported layers (i.e. object instances derived from Torch nn.Module class):

  • nn.Sequential
  • nn.Parallel
  • nn.Concat
  • nn.Linear
  • nn.SpatialConvolution
  • nn.SpatialMaxPooling, nn.SpatialAveragePooling
  • nn.ReLU, nn.TanH, nn.Sigmoid
  • nn.Reshape
  • nn.SoftMax, nn.LogSoftMax

Equivalents of these classes from cunn, cudnn, and fbcunn may also be imported successfully.

◆ readTensorFromONNX()

Mat cv::dnn::readTensorFromONNX ( CV_WRAP_FILE_PATH const String & path)
Python:
cv.dnn.readTensorFromONNX(path) -> retval

#include <opencv2/dnn/dnn.hpp>

Creates a blob from a .pb file with an input tensor.

Parameters
path: path to the .pb file with the input tensor.
Returns
Mat.

◆ readTorchBlob()

Mat cv::dnn::readTorchBlob ( const String & filename,
bool isBinary = true )
Python:
cv.dnn.readTorchBlob(filename[, isBinary]) -> retval

#include <opencv2/dnn/dnn.hpp>

Loads a blob that was serialized as a torch.Tensor object of the Torch7 framework.

Warning
This function has the same limitations as readNetFromTorch().

◆ shrinkCaffeModel()

void cv::dnn::shrinkCaffeModel ( CV_WRAP_FILE_PATH const String & src,
CV_WRAP_FILE_PATH const String & dst,
const std::vector< String > & layersTypes = std::vector< String >() )
Python:
cv.dnn.shrinkCaffeModel(src, dst[, layersTypes]) -> None

#include <opencv2/dnn/dnn.hpp>

Converts all weights of a Caffe network to half precision floating point.

Parameters
src: path to the original model from the Caffe framework, containing single precision floating point weights (usually with .caffemodel extension).
dst: path to the destination model with updated weights.
layersTypes: set of layer types whose parameters will be converted. By default, only the weights of Convolutional and Fully-Connected layers are converted.
Note
The shrunk model has no original float32 weights, so it can't be used in the original Caffe framework anymore. However, the data structure is taken from NVIDIA's Caffe fork: https://github.com/NVIDIA/caffe, so the resulting model may be used there.

◆ softNMSBoxes()

void cv::dnn::softNMSBoxes ( const std::vector< Rect > & bboxes,
const std::vector< float > & scores,
std::vector< float > & updated_scores,
const float score_threshold,
const float nms_threshold,
std::vector< int > & indices,
size_t top_k = 0,
const float sigma = 0.5,
SoftNMSMethod method = SoftNMSMethod::SOFTNMS_GAUSSIAN )
Python:
cv.dnn.softNMSBoxes(bboxes, scores, score_threshold, nms_threshold[, top_k[, sigma[, method]]]) -> updated_scores, indices

#include <opencv2/dnn/dnn.hpp>

Performs soft non-maximum suppression given boxes and corresponding scores. Reference: https://arxiv.org/abs/1704.04503.

Parameters
bboxes: a set of bounding boxes to apply Soft NMS.
scores: a set of corresponding confidences.
updated_scores: a set of corresponding updated confidences.
score_threshold: a threshold used to filter boxes by score.
nms_threshold: a threshold used in non-maximum suppression.
indices: the kept indices of bboxes after NMS.
top_k: keep at most top_k picked indices.
sigma: parameter of the Gaussian weighting.
method: Gaussian or linear.
See also
SoftNMSMethod

◆ writeTextGraph()

void cv::dnn::writeTextGraph ( CV_WRAP_FILE_PATH const String & model,
CV_WRAP_FILE_PATH const String & output )
Python:
cv.dnn.writeTextGraph(model, output) -> None

#include <opencv2/dnn/dnn.hpp>

Creates a text representation for a binary network stored in protocol buffer format.

Parameters
[in] model: a path to the binary network.
[in] output: a path to the output text file to be created.
Note
To reduce output file size, trained weights are not included.