OpenCV 5.0.0-pre
Open Source Computer Vision
Deep Neural Network module

Topics

 Partial List of Implemented Layers
 
 Utilities for New Layers Registration
 

Detailed Description

Functionality of this module is designed only for forward-pass computations (i.e. network inference); network training is, in principle, not supported.

Classes

struct  cv::dnn::Arg
 
struct  cv::dnn::ArgData
 
class  cv::dnn::BackendNode
 Derivatives of this class encapsulate functions of certain backends. More...
 
class  cv::dnn::BackendWrapper
 Derivatives of this class wrap cv::Mat for different backends and targets. More...
 
class  cv::dnn::ClassificationModel
 This class represents a high-level API for classification models. More...
 
class  cv::dnn::DetectionModel
 This class represents a high-level API for object detection networks. More...
 
class  cv::dnn::Dict
 This class implements a name-value dictionary; values are instances of DictValue. More...
 
struct  cv::dnn::DictValue
 This struct stores a scalar value (or array) of one of the following types: double, cv::String or int64. More...
 
class  cv::dnn::Graph
 Represents a graph or subgraph of a model. The graph (in mathematical terms rather a multigraph) is represented as a topologically sorted linear sequence of operations. Each operation is a smart pointer to a Layer (an instance of some derived class), which includes a list of inputs and outputs, as well as an optional list of subgraphs (e.g. 'If' contains 2 subgraphs). More...
 
struct  cv::dnn::Image2BlobParams
 Parameters for converting an image to a blob. More...
 
class  cv::dnn::KeypointsModel
 This class represents a high-level API for keypoints models. More...
 
class  cv::dnn::Layer
 This interface class allows building new Layers, the building blocks of networks. More...
 
class  cv::dnn::LayerParams
 This class provides all data needed to initialize a layer. More...
 
class  cv::dnn::Model
 This class presents a high-level API for neural networks. More...
 
class  cv::dnn::Net
 This class allows creating and manipulating comprehensive artificial neural networks. More...
 
class  cv::dnn::SegmentationModel
 This class represents a high-level API for segmentation models. More...
 
class  cv::dnn::TextDetectionModel
 Base class for text detection networks. More...
 
class  cv::dnn::TextDetectionModel_DB
 This class represents a high-level API for text detection DL networks compatible with the DB model. More...
 
class  cv::dnn::TextDetectionModel_EAST
 This class represents a high-level API for text detection DL networks compatible with the EAST model. More...
 
class  cv::dnn::TextRecognitionModel
 This class represents a high-level API for text recognition networks. More...
 

Typedefs

typedef std::map< std::string, std::vector< LayerFactory::Constructor > > cv::dnn::LayerFactory_Impl
 
typedef int cv::dnn::MatType
 

Enumerations

enum  cv::dnn::ArgKind {
  cv::dnn::DNN_ARG_EMPTY =0 ,
  cv::dnn::DNN_ARG_CONST =1 ,
  cv::dnn::DNN_ARG_INPUT =2 ,
  cv::dnn::DNN_ARG_OUTPUT =3 ,
  cv::dnn::DNN_ARG_TEMP =4 ,
  cv::dnn::DNN_ARG_PATTERN =5
}
 
enum  cv::dnn::Backend {
  cv::dnn::DNN_BACKEND_DEFAULT = 0 ,
  cv::dnn::DNN_BACKEND_INFERENCE_ENGINE = 2 ,
  cv::dnn::DNN_BACKEND_OPENCV ,
  cv::dnn::DNN_BACKEND_VKCOM ,
  cv::dnn::DNN_BACKEND_CUDA ,
  cv::dnn::DNN_BACKEND_WEBNN ,
  cv::dnn::DNN_BACKEND_TIMVX ,
  cv::dnn::DNN_BACKEND_CANN
}
 Enum of computation backends supported by layers. More...
 
enum  cv::dnn::EngineType {
  cv::dnn::ENGINE_CLASSIC =1 ,
  cv::dnn::ENGINE_NEW =2 ,
  cv::dnn::ENGINE_AUTO =3
}
 
enum  cv::dnn::ImagePaddingMode {
  cv::dnn::DNN_PMODE_NULL = 0 ,
  cv::dnn::DNN_PMODE_CROP_CENTER = 1 ,
  cv::dnn::DNN_PMODE_LETTERBOX = 2
}
 Enum of image padding modes, used to express model-specific pre-processing requirements; for example, the letterbox padding often used by the YOLO family of models. More...
 
enum  cv::dnn::ModelFormat {
  cv::dnn::DNN_MODEL_GENERIC = 0 ,
  cv::dnn::DNN_MODEL_ONNX = 1 ,
  cv::dnn::DNN_MODEL_TF = 2 ,
  cv::dnn::DNN_MODEL_TFLITE = 3 ,
  cv::dnn::DNN_MODEL_CAFFE = 4
}
 
enum  cv::dnn::ProfilingMode {
  cv::dnn::DNN_PROFILE_NONE = 0 ,
  cv::dnn::DNN_PROFILE_SUMMARY = 1 ,
  cv::dnn::DNN_PROFILE_DETAILED = 2
}
 
enum class  cv::dnn::SoftNMSMethod {
  cv::dnn::SoftNMSMethod::SOFTNMS_LINEAR = 1 ,
  cv::dnn::SoftNMSMethod::SOFTNMS_GAUSSIAN = 2
}
 Enum of Soft NMS methods. More...
 
enum  cv::dnn::Target {
  cv::dnn::DNN_TARGET_CPU = 0 ,
  cv::dnn::DNN_TARGET_OPENCL ,
  cv::dnn::DNN_TARGET_OPENCL_FP16 ,
  cv::dnn::DNN_TARGET_MYRIAD ,
  cv::dnn::DNN_TARGET_VULKAN ,
  cv::dnn::DNN_TARGET_FPGA ,
  cv::dnn::DNN_TARGET_CUDA ,
  cv::dnn::DNN_TARGET_CUDA_FP16 ,
  cv::dnn::DNN_TARGET_HDDL ,
  cv::dnn::DNN_TARGET_NPU ,
  cv::dnn::DNN_TARGET_CPU_FP16
}
 Enum of target devices for computations. More...
 
enum  cv::dnn::TracingMode {
  cv::dnn::DNN_TRACE_NONE = 0 ,
  cv::dnn::DNN_TRACE_ALL = 1 ,
  cv::dnn::DNN_TRACE_OP = 2
}
 

Functions

std::string cv::dnn::argKindToString (ArgKind kind)
 
Mat cv::dnn::blobFromImage (InputArray image, double scalefactor=1.0, const Size &size=Size(), const Scalar &mean=Scalar(), bool swapRB=false, bool crop=false, int ddepth=CV_32F)
 Creates a 4-dimensional blob from an image. Optionally resizes and crops the image from the center, subtracts mean values, scales values by scalefactor, and swaps the Blue and Red channels.
 
void cv::dnn::blobFromImage (InputArray image, OutputArray blob, double scalefactor=1.0, const Size &size=Size(), const Scalar &mean=Scalar(), bool swapRB=false, bool crop=false, int ddepth=CV_32F)
 Creates 4-dimensional blob from image.
 
Mat cv::dnn::blobFromImages (InputArrayOfArrays images, double scalefactor=1.0, Size size=Size(), const Scalar &mean=Scalar(), bool swapRB=false, bool crop=false, int ddepth=CV_32F)
 Creates a 4-dimensional blob from a series of images. Optionally resizes and crops the images from the center, subtracts mean values, scales values by scalefactor, and swaps the Blue and Red channels.
 
void cv::dnn::blobFromImages (InputArrayOfArrays images, OutputArray blob, double scalefactor=1.0, Size size=Size(), const Scalar &mean=Scalar(), bool swapRB=false, bool crop=false, int ddepth=CV_32F)
 Creates 4-dimensional blob from series of images.
 
Mat cv::dnn::blobFromImagesWithParams (InputArrayOfArrays images, const Image2BlobParams &param=Image2BlobParams())
 Creates 4-dimensional blob from series of images with given params.
 
void cv::dnn::blobFromImagesWithParams (InputArrayOfArrays images, OutputArray blob, const Image2BlobParams &param=Image2BlobParams())
 
Mat cv::dnn::blobFromImageWithParams (InputArray image, const Image2BlobParams &param=Image2BlobParams())
 Creates 4-dimensional blob from image with given params.
 
void cv::dnn::blobFromImageWithParams (InputArray image, OutputArray blob, const Image2BlobParams &param=Image2BlobParams())
 
void cv::dnn::enableModelDiagnostics (bool isDiagnosticsMode)
 Enables detailed logging of the DNN model loading with CV DNN API.
 
std::vector< std::pair< Backend, Target > > cv::dnn::getAvailableBackends ()
 
std::vector< Target > cv::dnn::getAvailableTargets (dnn::Backend be)
 
LayerFactory_Impl & cv::dnn::getLayerFactoryImpl ()
 
Mutex & cv::dnn::getLayerFactoryMutex ()
 Get the mutex guarding LayerFactory_Impl, see getLayerFactoryImpl() function.
 
void cv::dnn::imagesFromBlob (const cv::Mat &blob_, OutputArrayOfArrays images_)
 Parse a 4D blob and output the images it contains as 2D arrays through a simpler data structure (std::vector<cv::Mat>).
 
std::string cv::dnn::modelFormatToString (ModelFormat modelFormat)
 
void cv::dnn::NMSBoxes (const std::vector< Rect > &bboxes, const std::vector< float > &scores, const float score_threshold, const float nms_threshold, std::vector< int > &indices, const float eta=1.f, const int top_k=0)
 Performs non maximum suppression given boxes and corresponding scores.
 
void cv::dnn::NMSBoxes (const std::vector< Rect2d > &bboxes, const std::vector< float > &scores, const float score_threshold, const float nms_threshold, std::vector< int > &indices, const float eta=1.f, const int top_k=0)
 
void cv::dnn::NMSBoxes (const std::vector< RotatedRect > &bboxes, const std::vector< float > &scores, const float score_threshold, const float nms_threshold, std::vector< int > &indices, const float eta=1.f, const int top_k=0)
 
void cv::dnn::NMSBoxesBatched (const std::vector< Rect > &bboxes, const std::vector< float > &scores, const std::vector< int > &class_ids, const float score_threshold, const float nms_threshold, std::vector< int > &indices, const float eta=1.f, const int top_k=0)
 Performs batched non maximum suppression on given boxes and corresponding scores across different classes.
 
void cv::dnn::NMSBoxesBatched (const std::vector< Rect2d > &bboxes, const std::vector< float > &scores, const std::vector< int > &class_ids, const float score_threshold, const float nms_threshold, std::vector< int > &indices, const float eta=1.f, const int top_k=0)
 
Net cv::dnn::readNet (const String &framework, const std::vector< uchar > &bufferModel, const std::vector< uchar > &bufferConfig=std::vector< uchar >(), int engine=ENGINE_AUTO)
 Read deep learning network represented in one of the supported formats.
 
Net cv::dnn::readNet (CV_WRAP_FILE_PATH const String &model, CV_WRAP_FILE_PATH const String &config="", const String &framework="", int engine=ENGINE_AUTO)
 Read deep learning network represented in one of the supported formats.
 
Net cv::dnn::readNetFromCaffe (const char *bufferProto, size_t lenProto, const char *bufferModel=NULL, size_t lenModel=0, int engine=ENGINE_AUTO)
 Reads a network model stored in Caffe format from a memory buffer.
 
Net cv::dnn::readNetFromCaffe (const std::vector< uchar > &bufferProto, const std::vector< uchar > &bufferModel=std::vector< uchar >(), int engine=ENGINE_AUTO)
 Reads a network model stored in Caffe format from a memory buffer.
 
Net cv::dnn::readNetFromCaffe (CV_WRAP_FILE_PATH const String &prototxt, CV_WRAP_FILE_PATH const String &caffeModel=String(), int engine=ENGINE_AUTO)
 Reads a network model stored in Caffe framework's format.
 
Net cv::dnn::readNetFromDarknet (const char *bufferCfg, size_t lenCfg, const char *bufferModel=NULL, size_t lenModel=0)
 Reads a network model stored in Darknet model files.
 
Net cv::dnn::readNetFromDarknet (const std::vector< uchar > &bufferCfg, const std::vector< uchar > &bufferModel=std::vector< uchar >())
 Reads a network model stored in Darknet model files.
 
Net cv::dnn::readNetFromDarknet (CV_WRAP_FILE_PATH const String &cfgFile, CV_WRAP_FILE_PATH const String &darknetModel=String())
 Reads a network model stored in Darknet model files.
 
Net cv::dnn::readNetFromModelOptimizer (const std::vector< uchar > &bufferModelConfig, const std::vector< uchar > &bufferWeights)
 Load a network from Intel's Model Optimizer intermediate representation.
 
Net cv::dnn::readNetFromModelOptimizer (const uchar *bufferModelConfigPtr, size_t bufferModelConfigSize, const uchar *bufferWeightsPtr, size_t bufferWeightsSize)
 Load a network from Intel's Model Optimizer intermediate representation.
 
Net cv::dnn::readNetFromModelOptimizer (CV_WRAP_FILE_PATH const String &xml, CV_WRAP_FILE_PATH const String &bin="")
 Load a network from Intel's Model Optimizer intermediate representation.
 
Net cv::dnn::readNetFromONNX (const char *buffer, size_t sizeBuffer, int engine=ENGINE_AUTO)
 Reads a network model from ONNX in-memory buffer.
 
Net cv::dnn::readNetFromONNX (const std::vector< uchar > &buffer, int engine=ENGINE_AUTO)
 Reads a network model from ONNX in-memory buffer.
 
Net cv::dnn::readNetFromONNX (CV_WRAP_FILE_PATH const String &onnxFile, int engine=ENGINE_AUTO)
 Reads a network model from an ONNX file.
 
Net cv::dnn::readNetFromTensorflow (const char *bufferModel, size_t lenModel, const char *bufferConfig=NULL, size_t lenConfig=0, int engine=ENGINE_AUTO, const std::vector< String > &extraOutputs=std::vector< String >())
 Reads a network model stored in TensorFlow framework's format.
 
Net cv::dnn::readNetFromTensorflow (const std::vector< uchar > &bufferModel, const std::vector< uchar > &bufferConfig=std::vector< uchar >(), int engine=ENGINE_AUTO, const std::vector< String > &extraOutputs=std::vector< String >())
 Reads a network model stored in TensorFlow framework's format.
 
Net cv::dnn::readNetFromTensorflow (CV_WRAP_FILE_PATH const String &model, CV_WRAP_FILE_PATH const String &config=String(), int engine=ENGINE_AUTO, const std::vector< String > &extraOutputs=std::vector< String >())
 Reads a network model stored in TensorFlow framework's format.
 
Net cv::dnn::readNetFromTFLite (const char *bufferModel, size_t lenModel, int engine=ENGINE_AUTO)
 Reads a network model stored in TFLite framework's format.
 
Net cv::dnn::readNetFromTFLite (const std::vector< uchar > &bufferModel, int engine=ENGINE_AUTO)
 Reads a network model stored in TFLite framework's format.
 
Net cv::dnn::readNetFromTFLite (CV_WRAP_FILE_PATH const String &model, int engine=ENGINE_AUTO)
 Reads a network model stored in TFLite framework's format.
 
Mat cv::dnn::readTensorFromONNX (CV_WRAP_FILE_PATH const String &path)
 Creates blob from .pb file.
 
void cv::dnn::shrinkCaffeModel (CV_WRAP_FILE_PATH const String &src, CV_WRAP_FILE_PATH const String &dst, const std::vector< String > &layersTypes=std::vector< String >())
 Convert all weights of Caffe network to half precision floating point.
 
void cv::dnn::softNMSBoxes (const std::vector< Rect > &bboxes, const std::vector< float > &scores, std::vector< float > &updated_scores, const float score_threshold, const float nms_threshold, std::vector< int > &indices, size_t top_k=0, const float sigma=0.5, SoftNMSMethod method=SoftNMSMethod::SOFTNMS_GAUSSIAN)
 Performs soft non maximum suppression given boxes and corresponding scores. Reference: https://arxiv.org/abs/1704.04503.
 
void cv::dnn::writeTextGraph (CV_WRAP_FILE_PATH const String &model, CV_WRAP_FILE_PATH const String &output)
 Create a text representation for a binary network stored in protocol buffer format.
 

Typedef Documentation

◆ LayerFactory_Impl

typedef std::map<std::string, std::vector<LayerFactory::Constructor> > cv::dnn::LayerFactory_Impl

◆ MatType

typedef int cv::dnn::MatType

#include <opencv2/dnn/dnn.hpp>

Enumeration Type Documentation

◆ ArgKind

#include <opencv2/dnn/dnn.hpp>

Enumerator
DNN_ARG_EMPTY 
Python: cv.dnn.DNN_ARG_EMPTY

valid only for Arg.idx==0. It's "no-arg"

DNN_ARG_CONST 
Python: cv.dnn.DNN_ARG_CONST

a constant argument.

DNN_ARG_INPUT 
Python: cv.dnn.DNN_ARG_INPUT

input of the whole model. All inputs must be set before or within Net::forward()

DNN_ARG_OUTPUT 
Python: cv.dnn.DNN_ARG_OUTPUT

output of the model.

DNN_ARG_TEMP 
Python: cv.dnn.DNN_ARG_TEMP

intermediate result, a result of some operation and input to some other operation(s).

DNN_ARG_PATTERN 
Python: cv.dnn.DNN_ARG_PATTERN

not used for now

◆ Backend

#include <opencv2/dnn/dnn.hpp>

Enum of computation backends supported by layers.

See also
Net::setPreferableBackend
Enumerator
DNN_BACKEND_DEFAULT 
Python: cv.dnn.DNN_BACKEND_DEFAULT

DNN_BACKEND_DEFAULT equals to OPENCV_DNN_BACKEND_DEFAULT, which can be defined using CMake or a configuration parameter.

DNN_BACKEND_INFERENCE_ENGINE 
Python: cv.dnn.DNN_BACKEND_INFERENCE_ENGINE

Intel OpenVINO computational backend

Note
Tutorial how to build OpenCV with OpenVINO: OpenCV usage with OpenVINO
DNN_BACKEND_OPENCV 
Python: cv.dnn.DNN_BACKEND_OPENCV
DNN_BACKEND_VKCOM 
Python: cv.dnn.DNN_BACKEND_VKCOM
DNN_BACKEND_CUDA 
Python: cv.dnn.DNN_BACKEND_CUDA
DNN_BACKEND_WEBNN 
Python: cv.dnn.DNN_BACKEND_WEBNN
DNN_BACKEND_TIMVX 
Python: cv.dnn.DNN_BACKEND_TIMVX
DNN_BACKEND_CANN 
Python: cv.dnn.DNN_BACKEND_CANN

◆ EngineType

#include <opencv2/dnn/dnn.hpp>

Enumerator
ENGINE_CLASSIC 
Python: cv.dnn.ENGINE_CLASSIC

Force using the old dnn engine, similar to the 4.x branch.

ENGINE_NEW 
Python: cv.dnn.ENGINE_NEW

Force using the new dnn engine. The engine does not support non-CPU back-ends for now.

ENGINE_AUTO 
Python: cv.dnn.ENGINE_AUTO

Try to use the new engine and then fall back to the classic version.

◆ ImagePaddingMode

#include <opencv2/dnn/dnn.hpp>

Enum of image padding modes, used to express model-specific pre-processing requirements; for example, the letterbox padding often used by the YOLO family of models.

See also
Image2BlobParams
Enumerator
DNN_PMODE_NULL 
Python: cv.dnn.DNN_PMODE_NULL
DNN_PMODE_CROP_CENTER 
Python: cv.dnn.DNN_PMODE_CROP_CENTER
DNN_PMODE_LETTERBOX 
Python: cv.dnn.DNN_PMODE_LETTERBOX

◆ ModelFormat

#include <opencv2/dnn/dnn.hpp>

Enumerator
DNN_MODEL_GENERIC 
Python: cv.dnn.DNN_MODEL_GENERIC

Some generic model format.

DNN_MODEL_ONNX 
Python: cv.dnn.DNN_MODEL_ONNX

ONNX model.

DNN_MODEL_TF 
Python: cv.dnn.DNN_MODEL_TF

TF model.

DNN_MODEL_TFLITE 
Python: cv.dnn.DNN_MODEL_TFLITE

TFLite model.

DNN_MODEL_CAFFE 
Python: cv.dnn.DNN_MODEL_CAFFE

Caffe model.

◆ ProfilingMode

#include <opencv2/dnn/dnn.hpp>

Enumerator
DNN_PROFILE_NONE 
Python: cv.dnn.DNN_PROFILE_NONE

Don't do any profiling.

DNN_PROFILE_SUMMARY 
Python: cv.dnn.DNN_PROFILE_SUMMARY

Collect the summary statistics by layer type (e.g. all "Conv2D" or all "Add") and print it in the end, sorted by the execution time (most expensive layers first). Note that it may introduce some overhead and cause slowdown, especially in the case of non-CPU backends.

DNN_PROFILE_DETAILED 
Python: cv.dnn.DNN_PROFILE_DETAILED

Print execution time of each single layer. Note that it may introduce some overhead and cause slowdown, especially in the case of non-CPU backends.

◆ SoftNMSMethod

enum class cv::dnn::SoftNMSMethod
strong

#include <opencv2/dnn/dnn.hpp>

Enum of Soft NMS methods.

See also
softNMSBoxes
Enumerator
SOFTNMS_LINEAR 
SOFTNMS_GAUSSIAN 
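The two methods differ only in how the score of an overlapping box is decayed. A minimal pure-Python sketch of the two rescoring rules from the Soft-NMS paper (the function name and default values here are illustrative, not part of the OpenCV API):

```python
import math

def soft_nms_rescore(score, iou, method="gaussian", nms_threshold=0.3, sigma=0.5):
    """Decay a candidate score given its IoU with an already-kept box."""
    if method == "linear":
        # SOFTNMS_LINEAR: decay only boxes whose overlap exceeds the threshold.
        return score * (1.0 - iou) if iou > nms_threshold else score
    # SOFTNMS_GAUSSIAN: smooth decay for any amount of overlap.
    return score * math.exp(-(iou * iou) / sigma)
```

Unlike hard NMS, overlapping boxes are never discarded outright; their scores are only reduced, and a final score threshold decides which boxes survive.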

◆ Target

#include <opencv2/dnn/dnn.hpp>

Enum of target devices for computations.

See also
Net::setPreferableTarget
Enumerator
DNN_TARGET_CPU 
Python: cv.dnn.DNN_TARGET_CPU
DNN_TARGET_OPENCL 
Python: cv.dnn.DNN_TARGET_OPENCL
DNN_TARGET_OPENCL_FP16 
Python: cv.dnn.DNN_TARGET_OPENCL_FP16
DNN_TARGET_MYRIAD 
Python: cv.dnn.DNN_TARGET_MYRIAD
DNN_TARGET_VULKAN 
Python: cv.dnn.DNN_TARGET_VULKAN
DNN_TARGET_FPGA 
Python: cv.dnn.DNN_TARGET_FPGA

FPGA device with CPU fallbacks using Inference Engine's Heterogeneous plugin.

DNN_TARGET_CUDA 
Python: cv.dnn.DNN_TARGET_CUDA
DNN_TARGET_CUDA_FP16 
Python: cv.dnn.DNN_TARGET_CUDA_FP16
DNN_TARGET_HDDL 
Python: cv.dnn.DNN_TARGET_HDDL
DNN_TARGET_NPU 
Python: cv.dnn.DNN_TARGET_NPU
DNN_TARGET_CPU_FP16 
Python: cv.dnn.DNN_TARGET_CPU_FP16

◆ TracingMode

#include <opencv2/dnn/dnn.hpp>

Enumerator
DNN_TRACE_NONE 
Python: cv.dnn.DNN_TRACE_NONE

Don't trace anything.

DNN_TRACE_ALL 
Python: cv.dnn.DNN_TRACE_ALL

Print all executed operations along with the output tensors, more or less compatible with ONNX Runtime.

DNN_TRACE_OP 
Python: cv.dnn.DNN_TRACE_OP

Print all executed operations. Types and shapes of all inputs and outputs are printed, but the content is not.

Function Documentation

◆ argKindToString()

std::string cv::dnn::argKindToString ( ArgKind kind)

#include <opencv2/dnn/dnn.hpp>

◆ blobFromImage() [1/2]

Mat cv::dnn::blobFromImage ( InputArray image,
double scalefactor = 1.0,
const Size & size = Size(),
const Scalar & mean = Scalar(),
bool swapRB = false,
bool crop = false,
int ddepth = CV_32F )
Python:
cv.dnn.blobFromImage(image[, scalefactor[, size[, mean[, swapRB[, crop[, ddepth]]]]]]) -> retval

#include <opencv2/dnn/dnn.hpp>

Creates a 4-dimensional blob from an image. Optionally resizes and crops the image from the center, subtracts mean values, scales values by scalefactor, and swaps the Blue and Red channels.

Parameters
image: input image (with 1, 3 or 4 channels).
scalefactor: multiplier for image values.
size: spatial size for the output image.
mean: scalar with mean values which are subtracted from channels. Values are intended to be in (mean-R, mean-G, mean-B) order if the image has BGR ordering and swapRB is true.
swapRB: flag which indicates that the first and last channels in a 3-channel image should be swapped.
crop: flag which indicates whether the image will be cropped after resize or not.
ddepth: depth of the output blob. Choose CV_32F or CV_8U.

If crop is true, the input image is resized so that one side after resize equals the corresponding dimension in size and the other side is equal or larger; then a crop from the center is performed. If crop is false, a direct resize is performed, without cropping and without preserving the aspect ratio.

Returns
4-dimensional Mat with NCHW dimensions order.
Note
scalefactor and mean are applied as (input - mean) * scalefactor.
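The arithmetic and layout transform described above can be emulated in NumPy to sanity-check expectations. This sketch skips resizing and cropping, and the function name is illustrative rather than part of the OpenCV API:

```python
import numpy as np

def blob_from_image_sketch(image, scalefactor=1.0, mean=(0.0, 0.0, 0.0), swapRB=False):
    """Emulate blobFromImage on an HxWxC image, without resize or crop."""
    img = image.astype(np.float32)
    if swapRB:
        img = img[:, :, ::-1]  # swap the first and last channels (B <-> R)
    img = (img - np.asarray(mean, dtype=np.float32)) * scalefactor
    # HWC -> CHW, then prepend the batch dimension: NCHW
    return img.transpose(2, 0, 1)[np.newaxis, ...]
```

For a 2x2 BGR image this returns a Mat-like array of shape (1, 3, 2, 2), matching the NCHW return described above.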

◆ blobFromImage() [2/2]

void cv::dnn::blobFromImage ( InputArray image,
OutputArray blob,
double scalefactor = 1.0,
const Size & size = Size(),
const Scalar & mean = Scalar(),
bool swapRB = false,
bool crop = false,
int ddepth = CV_32F )
Python:
cv.dnn.blobFromImage(image[, scalefactor[, size[, mean[, swapRB[, crop[, ddepth]]]]]]) -> retval

#include <opencv2/dnn/dnn.hpp>

Creates 4-dimensional blob from image.

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

◆ blobFromImages() [1/2]

Mat cv::dnn::blobFromImages ( InputArrayOfArrays images,
double scalefactor = 1.0,
Size size = Size(),
const Scalar & mean = Scalar(),
bool swapRB = false,
bool crop = false,
int ddepth = CV_32F )
Python:
cv.dnn.blobFromImages(images[, scalefactor[, size[, mean[, swapRB[, crop[, ddepth]]]]]]) -> retval

#include <opencv2/dnn/dnn.hpp>

Creates a 4-dimensional blob from a series of images. Optionally resizes and crops the images from the center, subtracts mean values, scales values by scalefactor, and swaps the Blue and Red channels.

Parameters
images: input images (all with 1, 3 or 4 channels).
size: spatial size for the output image.
mean: scalar with mean values which are subtracted from channels. Values are intended to be in (mean-R, mean-G, mean-B) order if the image has BGR ordering and swapRB is true.
scalefactor: multiplier for image values.
swapRB: flag which indicates that the first and last channels in a 3-channel image should be swapped.
crop: flag which indicates whether the image will be cropped after resize or not.
ddepth: depth of the output blob. Choose CV_32F or CV_8U.

If crop is true, the input image is resized so that one side after resize equals the corresponding dimension in size and the other side is equal or larger; then a crop from the center is performed. If crop is false, a direct resize is performed, without cropping and without preserving the aspect ratio.

Returns
4-dimensional Mat with NCHW dimensions order.
Note
scalefactor and mean are applied as (input - mean) * scalefactor.

◆ blobFromImages() [2/2]

void cv::dnn::blobFromImages ( InputArrayOfArrays images,
OutputArray blob,
double scalefactor = 1.0,
Size size = Size(),
const Scalar & mean = Scalar(),
bool swapRB = false,
bool crop = false,
int ddepth = CV_32F )
Python:
cv.dnn.blobFromImages(images[, scalefactor[, size[, mean[, swapRB[, crop[, ddepth]]]]]]) -> retval

#include <opencv2/dnn/dnn.hpp>

Creates 4-dimensional blob from series of images.

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

◆ blobFromImagesWithParams() [1/2]

Mat cv::dnn::blobFromImagesWithParams ( InputArrayOfArrays images,
const Image2BlobParams & param = Image2BlobParams() )
Python:
cv.dnn.blobFromImagesWithParams(images[, param]) -> retval
cv.dnn.blobFromImagesWithParams(images[, blob[, param]]) -> blob

#include <opencv2/dnn/dnn.hpp>

Creates 4-dimensional blob from series of images with given params.

This function is an extension of blobFromImages that meets more image preprocessing needs. Given input images and preprocessing parameters, it outputs the blob.

Parameters
images: input images (all with 1, 3 or 4 channels).
param: an Image2BlobParams struct that contains all parameters needed for the image-to-blob conversion.
Returns
4-dimensional Mat.

◆ blobFromImagesWithParams() [2/2]

void cv::dnn::blobFromImagesWithParams ( InputArrayOfArrays images,
OutputArray blob,
const Image2BlobParams & param = Image2BlobParams() )
Python:
cv.dnn.blobFromImagesWithParams(images[, param]) -> retval
cv.dnn.blobFromImagesWithParams(images[, blob[, param]]) -> blob

#include <opencv2/dnn/dnn.hpp>

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

◆ blobFromImageWithParams() [1/2]

Mat cv::dnn::blobFromImageWithParams ( InputArray image,
const Image2BlobParams & param = Image2BlobParams() )
Python:
cv.dnn.blobFromImageWithParams(image[, param]) -> retval
cv.dnn.blobFromImageWithParams(image[, blob[, param]]) -> blob

#include <opencv2/dnn/dnn.hpp>

Creates 4-dimensional blob from image with given params.

This function is an extension of blobFromImage that meets more image preprocessing needs. Given an input image and preprocessing parameters, it outputs the blob.

Parameters
image: input image (with 1, 3 or 4 channels).
param: an Image2BlobParams struct that contains all parameters needed for the image-to-blob conversion.
Returns
4-dimensional Mat.

◆ blobFromImageWithParams() [2/2]

void cv::dnn::blobFromImageWithParams ( InputArray image,
OutputArray blob,
const Image2BlobParams & param = Image2BlobParams() )
Python:
cv.dnn.blobFromImageWithParams(image[, param]) -> retval
cv.dnn.blobFromImageWithParams(image[, blob[, param]]) -> blob

#include <opencv2/dnn/dnn.hpp>

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

◆ enableModelDiagnostics()

void cv::dnn::enableModelDiagnostics ( bool isDiagnosticsMode)

#include <opencv2/dnn/dnn.hpp>

Enables detailed logging of the DNN model loading with CV DNN API.

Parameters
[in] isDiagnosticsMode: Indicates whether diagnostic mode should be set.

Diagnostic mode provides detailed logging of the model loading stage to explore potential problems (e.g. a layer type that is not implemented).

Note
In diagnostic mode a series of assertions is skipped, which can lead to an expected application crash.

◆ getAvailableBackends()

std::vector< std::pair< Backend, Target > > cv::dnn::getAvailableBackends ( )

#include <opencv2/dnn/dnn.hpp>

◆ getAvailableTargets()

std::vector< Target > cv::dnn::getAvailableTargets ( dnn::Backend be)
Python:
cv.dnn.getAvailableTargets(be) -> retval

#include <opencv2/dnn/dnn.hpp>

◆ getLayerFactoryImpl()

LayerFactory_Impl & cv::dnn::getLayerFactoryImpl ( )

#include <opencv2/dnn/layer_reg.private.hpp>

Register layer types of DNN model.

Note
In order to thread-safely access the factory, see getLayerFactoryMutex() function.

◆ getLayerFactoryMutex()

Mutex & cv::dnn::getLayerFactoryMutex ( )

#include <opencv2/dnn/layer_reg.private.hpp>

Get the mutex guarding LayerFactory_Impl, see getLayerFactoryImpl() function.

◆ imagesFromBlob()

void cv::dnn::imagesFromBlob ( const cv::Mat & blob_,
OutputArrayOfArrays images_ )
Python:
cv.dnn.imagesFromBlob(blob_[, images_]) -> images_

#include <opencv2/dnn/dnn.hpp>

Parse a 4D blob and output the images it contains as 2D arrays through a simpler data structure (std::vector<cv::Mat>).

Parameters
[in] blob_: 4-dimensional array (images, channels, height, width) in floating-point precision (CV_32F) from which you would like to extract the images.
[out] images_: array of 2D Mat containing the images extracted from the blob in floating-point precision (CV_32F). They are neither normalized nor is the mean added back. The number of returned images equals the first dimension of the blob (batch size). Every image has a number of channels equal to the second dimension of the blob (depth).
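The inverse layout transform can be sketched in NumPy (the function name is illustrative; as described above, values are returned as-is, with no un-normalization):

```python
import numpy as np

def images_from_blob_sketch(blob):
    """Split a 4D NCHW blob into a list of HxWxC images."""
    assert blob.ndim == 4, "expected a 4D blob (images, channels, height, width)"
    # CHW -> HWC for every image in the batch; pixel values are unchanged.
    return [img.transpose(1, 2, 0) for img in blob]
```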

◆ modelFormatToString()

std::string cv::dnn::modelFormatToString ( ModelFormat modelFormat)

#include <opencv2/dnn/dnn.hpp>

◆ NMSBoxes() [1/3]

void cv::dnn::NMSBoxes ( const std::vector< Rect > & bboxes,
const std::vector< float > & scores,
const float score_threshold,
const float nms_threshold,
std::vector< int > & indices,
const float eta = 1.f,
const int top_k = 0 )
Python:
cv.dnn.NMSBoxes(bboxes, scores, score_threshold, nms_threshold[, eta[, top_k]]) -> indices
cv.dnn.NMSBoxesRotated(bboxes, scores, score_threshold, nms_threshold[, eta[, top_k]]) -> indices

#include <opencv2/dnn/dnn.hpp>

Performs non maximum suppression given boxes and corresponding scores.

Parameters
bboxes: a set of bounding boxes to apply NMS to.
scores: a set of corresponding confidences.
score_threshold: a threshold used to filter boxes by score.
nms_threshold: a threshold used in non maximum suppression.
indices: the kept indices of bboxes after NMS.
eta: a coefficient in the adaptive threshold formula: \(nms\_threshold_{i+1}=eta\cdot nms\_threshold_i\).
top_k: if >0, keep at most top_k picked indices.
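The greedy procedure these parameters control can be sketched in pure Python (boxes as (x, y, w, h) tuples; the function name is illustrative and the adaptive-threshold update is simplified relative to OpenCV's internal implementation):

```python
def nms_sketch(bboxes, scores, score_threshold, nms_threshold, eta=1.0, top_k=0):
    """Greedy non maximum suppression over axis-aligned (x, y, w, h) boxes."""
    def iou(a, b):
        ax, ay, aw, ah = a
        bx, by, bw, bh = b
        ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
        iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))
        inter = ix * iy
        union = aw * ah + bw * bh - inter
        return inter / union if union > 0 else 0.0

    # Keep candidates above the score threshold, highest score first.
    order = sorted((i for i, s in enumerate(scores) if s > score_threshold),
                   key=lambda i: scores[i], reverse=True)
    kept, thr = [], nms_threshold
    for i in order:
        if all(iou(bboxes[i], bboxes[j]) <= thr for j in kept):
            kept.append(i)
            thr *= eta  # adaptive threshold: thr_{i+1} = eta * thr_i
            if top_k > 0 and len(kept) == top_k:
                break
    return kept
```

With eta = 1 the threshold stays fixed, which is the common case; eta < 1 tightens the overlap criterion as more boxes are kept.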

◆ NMSBoxes() [2/3]

void cv::dnn::NMSBoxes ( const std::vector< Rect2d > & bboxes,
const std::vector< float > & scores,
const float score_threshold,
const float nms_threshold,
std::vector< int > & indices,
const float eta = 1.f,
const int top_k = 0 )
Python:
cv.dnn.NMSBoxes(bboxes, scores, score_threshold, nms_threshold[, eta[, top_k]]) -> indices
cv.dnn.NMSBoxesRotated(bboxes, scores, score_threshold, nms_threshold[, eta[, top_k]]) -> indices

#include <opencv2/dnn/dnn.hpp>

◆ NMSBoxes() [3/3]

void cv::dnn::NMSBoxes ( const std::vector< RotatedRect > & bboxes,
const std::vector< float > & scores,
const float score_threshold,
const float nms_threshold,
std::vector< int > & indices,
const float eta = 1.f,
const int top_k = 0 )
Python:
cv.dnn.NMSBoxes(bboxes, scores, score_threshold, nms_threshold[, eta[, top_k]]) -> indices
cv.dnn.NMSBoxesRotated(bboxes, scores, score_threshold, nms_threshold[, eta[, top_k]]) -> indices

#include <opencv2/dnn/dnn.hpp>

◆ NMSBoxesBatched() [1/2]

void cv::dnn::NMSBoxesBatched ( const std::vector< Rect > & bboxes,
const std::vector< float > & scores,
const std::vector< int > & class_ids,
const float score_threshold,
const float nms_threshold,
std::vector< int > & indices,
const float eta = 1.f,
const int top_k = 0 )
Python:
cv.dnn.NMSBoxesBatched(bboxes, scores, class_ids, score_threshold, nms_threshold[, eta[, top_k]]) -> indices

#include <opencv2/dnn/dnn.hpp>

Performs batched non maximum suppression on given boxes and corresponding scores across different classes.

Parameters
bboxes: a set of bounding boxes to apply NMS to.
scores: a set of corresponding confidences.
class_ids: a set of corresponding class ids. Ids are integer and usually start from 0.
score_threshold: a threshold used to filter boxes by score.
nms_threshold: a threshold used in non maximum suppression.
indices: the kept indices of bboxes after NMS.
eta: a coefficient in the adaptive threshold formula: \(nms\_threshold_{i+1}=eta\cdot nms\_threshold_i\).
top_k: if >0, keep at most top_k picked indices.

◆ NMSBoxesBatched() [2/2]

void cv::dnn::NMSBoxesBatched ( const std::vector< Rect2d > & bboxes,
const std::vector< float > & scores,
const std::vector< int > & class_ids,
const float score_threshold,
const float nms_threshold,
std::vector< int > & indices,
const float eta = 1.f,
const int top_k = 0 )
Python:
cv.dnn.NMSBoxesBatched(bboxes, scores, class_ids, score_threshold, nms_threshold[, eta[, top_k]]) -> indices

#include <opencv2/dnn/dnn.hpp>

◆ readNet() [1/2]

Net cv::dnn::readNet ( const String & framework,
const std::vector< uchar > & bufferModel,
const std::vector< uchar > & bufferConfig = std::vector< uchar >(),
int engine = ENGINE_AUTO )
Python:
cv.dnn.readNet(model[, config[, framework[, engine]]]) -> retval
cv.dnn.readNet(framework, bufferModel[, bufferConfig[, engine]]) -> retval

#include <opencv2/dnn/dnn.hpp>

Read deep learning network represented in one of the supported formats.

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

Parameters
[in] framework: Name of the origin framework.
[in] bufferModel: A buffer with the content of the binary file with weights.
[in] bufferConfig: A buffer with the content of the text file containing the network configuration.
[in] engine: selects the DNN engine to be used. With auto selection the new engine is tried first, falling back to the classic one. Note that the new DNN engine does not support non-CPU back-ends for now; use ENGINE_CLASSIC if you want to use other back-ends.
Returns
Net object.

◆ readNet() [2/2]

Net cv::dnn::readNet ( CV_WRAP_FILE_PATH const String & model,
CV_WRAP_FILE_PATH const String & config = "",
const String & framework = "",
int engine = ENGINE_AUTO )
Python:
cv.dnn.readNet(model[, config[, framework[, engine]]]) -> retval
cv.dnn.readNet(framework, bufferModel[, bufferConfig[, engine]]) -> retval

#include <opencv2/dnn/dnn.hpp>

Read deep learning network represented in one of the supported formats.

Parameters
[in] model: path to the binary file containing trained weights. The following file extensions are expected for models from different frameworks:
[in] config: path to the text file containing the network configuration. It could be a file with one of the following extensions:
[in] framework: explicit framework name tag to determine the format.
[in] engine: selects the DNN engine to be used. With auto selection the new engine is tried first, falling back to the classic one. Note that the new engine does not support non-CPU backends for now; use ENGINE_CLASSIC if you want to use other backends.
Returns
Net object.

This function automatically detects the origin framework of the trained model and calls an appropriate function, such as readNetFromCaffe, readNetFromTensorflow or readNetFromDarknet. The order of the model and config arguments does not matter.

◆ readNetFromCaffe() [1/3]

Net cv::dnn::readNetFromCaffe ( const char * bufferProto,
size_t lenProto,
const char * bufferModel = NULL,
size_t lenModel = 0,
int engine = ENGINE_AUTO )
Python:
cv.dnn.readNetFromCaffe(prototxt[, caffeModel[, engine]]) -> retval
cv.dnn.readNetFromCaffe(bufferProto[, bufferModel[, engine]]) -> retval

#include <opencv2/dnn/dnn.hpp>

Reads a network model stored in Caffe format from an in-memory buffer.

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

Parameters
bufferProto: buffer containing the content of the .prototxt file.
lenProto: length of bufferProto.
bufferModel: buffer containing the content of the .caffemodel file.
lenModel: length of bufferModel.
engine: selects the DNN engine to be used. With auto selection the new engine is used. Note that the new engine does not support non-CPU backends for now.
Returns
Net object.

◆ readNetFromCaffe() [2/3]

Net cv::dnn::readNetFromCaffe ( const std::vector< uchar > & bufferProto,
const std::vector< uchar > & bufferModel = std::vector< uchar >(),
int engine = ENGINE_AUTO )
Python:
cv.dnn.readNetFromCaffe(prototxt[, caffeModel[, engine]]) -> retval
cv.dnn.readNetFromCaffe(bufferProto[, bufferModel[, engine]]) -> retval

#include <opencv2/dnn/dnn.hpp>

Reads a network model stored in Caffe format from an in-memory buffer.

Parameters
bufferProto: buffer containing the content of the .prototxt file.
bufferModel: buffer containing the content of the .caffemodel file.
engine: selects the DNN engine to be used. With auto selection the new engine is used. Note that the new engine does not support non-CPU backends for now.
Returns
Net object.

◆ readNetFromCaffe() [3/3]

Net cv::dnn::readNetFromCaffe ( CV_WRAP_FILE_PATH const String & prototxt,
CV_WRAP_FILE_PATH const String & caffeModel = String(),
int engine = ENGINE_AUTO )
Python:
cv.dnn.readNetFromCaffe(prototxt[, caffeModel[, engine]]) -> retval
cv.dnn.readNetFromCaffe(bufferProto[, bufferModel[, engine]]) -> retval

#include <opencv2/dnn/dnn.hpp>

Reads a network model stored in Caffe framework's format.

Parameters
prototxt: path to the .prototxt file with the text description of the network architecture.
caffeModel: path to the .caffemodel file with the learned network.
engine: selects the DNN engine to be used. With auto selection the new engine is used. Note that the new engine does not support non-CPU backends for now.
Returns
Net object.

◆ readNetFromDarknet() [1/3]

Net cv::dnn::readNetFromDarknet ( const char * bufferCfg,
size_t lenCfg,
const char * bufferModel = NULL,
size_t lenModel = 0 )
Python:
cv.dnn.readNetFromDarknet(cfgFile[, darknetModel]) -> retval
cv.dnn.readNetFromDarknet(bufferCfg[, bufferModel]) -> retval

#include <opencv2/dnn/dnn.hpp>

Reads a network model stored in Darknet model files.

Parameters
bufferCfg: a buffer containing the content of the .cfg file with the text description of the network architecture.
lenCfg: number of bytes to read from bufferCfg.
bufferModel: a buffer containing the content of the .weights file with the learned network.
lenModel: number of bytes to read from bufferModel.
Returns
Net object.

◆ readNetFromDarknet() [2/3]

Net cv::dnn::readNetFromDarknet ( const std::vector< uchar > & bufferCfg,
const std::vector< uchar > & bufferModel = std::vector< uchar >() )
Python:
cv.dnn.readNetFromDarknet(cfgFile[, darknetModel]) -> retval
cv.dnn.readNetFromDarknet(bufferCfg[, bufferModel]) -> retval

#include <opencv2/dnn/dnn.hpp>

Reads a network model stored in Darknet model files.

Parameters
bufferCfg: a buffer containing the content of the .cfg file with the text description of the network architecture.
bufferModel: a buffer containing the content of the .weights file with the learned network.
Returns
Net object.

◆ readNetFromDarknet() [3/3]

Net cv::dnn::readNetFromDarknet ( CV_WRAP_FILE_PATH const String & cfgFile,
CV_WRAP_FILE_PATH const String & darknetModel = String() )
Python:
cv.dnn.readNetFromDarknet(cfgFile[, darknetModel]) -> retval
cv.dnn.readNetFromDarknet(bufferCfg[, bufferModel]) -> retval

#include <opencv2/dnn/dnn.hpp>

Reads a network model stored in Darknet model files.

Parameters
cfgFile: path to the .cfg file with the text description of the network architecture.
darknetModel: path to the .weights file with the learned network.
Returns
Net object ready to do a forward pass; an exception is thrown in failure cases.

◆ readNetFromModelOptimizer() [1/3]

Net cv::dnn::readNetFromModelOptimizer ( const std::vector< uchar > & bufferModelConfig,
const std::vector< uchar > & bufferWeights )
Python:
cv.dnn.readNetFromModelOptimizer(xml[, bin]) -> retval
cv.dnn.readNetFromModelOptimizer(bufferModelConfig, bufferWeights) -> retval

#include <opencv2/dnn/dnn.hpp>

Load a network from Intel's Model Optimizer intermediate representation.

Parameters
[in] bufferModelConfig: buffer containing the XML configuration with the network's topology.
[in] bufferWeights: buffer containing the binary data with trained weights.
Returns
Net object. Networks imported from Intel's Model Optimizer are launched in Intel's Inference Engine backend.

◆ readNetFromModelOptimizer() [2/3]

Net cv::dnn::readNetFromModelOptimizer ( const uchar * bufferModelConfigPtr,
size_t bufferModelConfigSize,
const uchar * bufferWeightsPtr,
size_t bufferWeightsSize )
Python:
cv.dnn.readNetFromModelOptimizer(xml[, bin]) -> retval
cv.dnn.readNetFromModelOptimizer(bufferModelConfig, bufferWeights) -> retval

#include <opencv2/dnn/dnn.hpp>

Load a network from Intel's Model Optimizer intermediate representation.

Parameters
[in] bufferModelConfigPtr: pointer to the buffer containing the XML configuration with the network's topology.
[in] bufferModelConfigSize: binary size of the XML configuration data.
[in] bufferWeightsPtr: pointer to the buffer containing the binary data with trained weights.
[in] bufferWeightsSize: binary size of the trained weights data.
Returns
Net object. Networks imported from Intel's Model Optimizer are launched in Intel's Inference Engine backend.

◆ readNetFromModelOptimizer() [3/3]

Net cv::dnn::readNetFromModelOptimizer ( CV_WRAP_FILE_PATH const String & xml,
CV_WRAP_FILE_PATH const String & bin = "" )
Python:
cv.dnn.readNetFromModelOptimizer(xml[, bin]) -> retval
cv.dnn.readNetFromModelOptimizer(bufferModelConfig, bufferWeights) -> retval

#include <opencv2/dnn/dnn.hpp>

Load a network from Intel's Model Optimizer intermediate representation.

Parameters
[in] xml: XML configuration file with the network's topology.
[in] bin: binary file with trained weights.
Returns
Net object. Networks imported from Intel's Model Optimizer are launched in Intel's Inference Engine backend.

◆ readNetFromONNX() [1/3]

Net cv::dnn::readNetFromONNX ( const char * buffer,
size_t sizeBuffer,
int engine = ENGINE_AUTO )
Python:
cv.dnn.readNetFromONNX(onnxFile[, engine]) -> retval
cv.dnn.readNetFromONNX(buffer[, engine]) -> retval

#include <opencv2/dnn/dnn.hpp>

Reads a network model from an in-memory ONNX buffer.

Parameters
buffer: memory address of the first byte of the buffer.
sizeBuffer: size of the buffer.
engine: selects the DNN engine to be used. With auto selection the new engine is tried first, falling back to the classic one.
Returns
Net object ready to do a forward pass; an exception is thrown in failure cases.

◆ readNetFromONNX() [2/3]

Net cv::dnn::readNetFromONNX ( const std::vector< uchar > & buffer,
int engine = ENGINE_AUTO )
Python:
cv.dnn.readNetFromONNX(onnxFile[, engine]) -> retval
cv.dnn.readNetFromONNX(buffer[, engine]) -> retval

#include <opencv2/dnn/dnn.hpp>

Reads a network model from an in-memory ONNX buffer.

Parameters
buffer: in-memory buffer that stores the ONNX model bytes.
engine: selects the DNN engine to be used. With auto selection the new engine is tried first, falling back to the classic one. Note that the new engine does not support non-CPU backends for now.
Returns
Net object ready to do a forward pass; an exception is thrown in failure cases.

◆ readNetFromONNX() [3/3]

Net cv::dnn::readNetFromONNX ( CV_WRAP_FILE_PATH const String & onnxFile,
int engine = ENGINE_AUTO )
Python:
cv.dnn.readNetFromONNX(onnxFile[, engine]) -> retval
cv.dnn.readNetFromONNX(buffer[, engine]) -> retval

#include <opencv2/dnn/dnn.hpp>

Reads a network model from an ONNX file.

Parameters
onnxFile: path to the .onnx file with the description of the network architecture.
engine: selects the DNN engine to be used. With auto selection the new engine is tried first, falling back to the classic one. Note that the new engine does not support non-CPU backends for now.
Returns
Net object ready to do a forward pass; an exception is thrown in failure cases.

◆ readNetFromTensorflow() [1/3]

Net cv::dnn::readNetFromTensorflow ( const char * bufferModel,
size_t lenModel,
const char * bufferConfig = NULL,
size_t lenConfig = 0,
int engine = ENGINE_AUTO,
const std::vector< String > & extraOutputs = std::vector< String >() )
Python:
cv.dnn.readNetFromTensorflow(model[, config[, engine[, extraOutputs]]]) -> retval
cv.dnn.readNetFromTensorflow(bufferModel[, bufferConfig[, engine[, extraOutputs]]]) -> retval

#include <opencv2/dnn/dnn.hpp>

Reads a network model stored in TensorFlow framework's format.

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

Parameters
bufferModel: buffer containing the content of the .pb file.
lenModel: length of bufferModel.
bufferConfig: buffer containing the content of the .pbtxt file.
lenConfig: length of bufferConfig.
engine: selects the DNN engine to be used. With auto selection the new engine is used. Note that the new engine does not support non-CPU backends for now.
extraOutputs: specify model outputs explicitly, in addition to the outputs the graph analyzer finds.

◆ readNetFromTensorflow() [2/3]

Net cv::dnn::readNetFromTensorflow ( const std::vector< uchar > & bufferModel,
const std::vector< uchar > & bufferConfig = std::vector< uchar >(),
int engine = ENGINE_AUTO,
const std::vector< String > & extraOutputs = std::vector< String >() )
Python:
cv.dnn.readNetFromTensorflow(model[, config[, engine[, extraOutputs]]]) -> retval
cv.dnn.readNetFromTensorflow(bufferModel[, bufferConfig[, engine[, extraOutputs]]]) -> retval

#include <opencv2/dnn/dnn.hpp>

Reads a network model stored in TensorFlow framework's format.

Parameters
bufferModel: buffer containing the content of the .pb file.
bufferConfig: buffer containing the content of the .pbtxt file.
engine: selects the DNN engine to be used. With auto selection the new engine is used. Note that the new engine does not support non-CPU backends for now.
extraOutputs: specify model outputs explicitly, in addition to the outputs the graph analyzer finds.
Returns
Net object.

◆ readNetFromTensorflow() [3/3]

Net cv::dnn::readNetFromTensorflow ( CV_WRAP_FILE_PATH const String & model,
CV_WRAP_FILE_PATH const String & config = String(),
int engine = ENGINE_AUTO,
const std::vector< String > & extraOutputs = std::vector< String >() )
Python:
cv.dnn.readNetFromTensorflow(model[, config[, engine[, extraOutputs]]]) -> retval
cv.dnn.readNetFromTensorflow(bufferModel[, bufferConfig[, engine[, extraOutputs]]]) -> retval

#include <opencv2/dnn/dnn.hpp>

Reads a network model stored in TensorFlow framework's format.

Parameters
model: path to the .pb file with the binary protobuf description of the network architecture.
config: path to the .pbtxt file that contains the text graph definition in protobuf format. The resulting Net object is built from the text graph, using weights from the binary file, which makes the format more flexible.
engine: selects the DNN engine to be used. With auto selection the new engine is used. Note that the new engine does not support non-CPU backends for now.
extraOutputs: specify model outputs explicitly, in addition to the outputs the graph analyzer finds.
Returns
Net object.

◆ readNetFromTFLite() [1/3]

Net cv::dnn::readNetFromTFLite ( const char * bufferModel,
size_t lenModel,
int engine = ENGINE_AUTO )
Python:
cv.dnn.readNetFromTFLite(model[, engine]) -> retval
cv.dnn.readNetFromTFLite(bufferModel[, engine]) -> retval

#include <opencv2/dnn/dnn.hpp>

Reads a network model stored in TFLite framework's format.

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

Parameters
bufferModel: buffer containing the content of the .tflite file.
lenModel: length of bufferModel.
engine: selects the DNN engine to be used. With auto selection the new engine is tried first, falling back to the classic one. Note that the new engine does not support non-CPU backends for now.

◆ readNetFromTFLite() [2/3]

Net cv::dnn::readNetFromTFLite ( const std::vector< uchar > & bufferModel,
int engine = ENGINE_AUTO )
Python:
cv.dnn.readNetFromTFLite(model[, engine]) -> retval
cv.dnn.readNetFromTFLite(bufferModel[, engine]) -> retval

#include <opencv2/dnn/dnn.hpp>

Reads a network model stored in TFLite framework's format.

Parameters
bufferModel: buffer containing the content of the .tflite file.
engine: selects the DNN engine to be used. With auto selection the new engine is tried first, falling back to the classic one. Note that the new engine does not support non-CPU backends for now.
Returns
Net object.

◆ readNetFromTFLite() [3/3]

Net cv::dnn::readNetFromTFLite ( CV_WRAP_FILE_PATH const String & model,
int engine = ENGINE_AUTO )
Python:
cv.dnn.readNetFromTFLite(model[, engine]) -> retval
cv.dnn.readNetFromTFLite(bufferModel[, engine]) -> retval

#include <opencv2/dnn/dnn.hpp>

Reads a network model stored in TFLite framework's format.

Parameters
model: path to the .tflite file with the binary FlatBuffers description of the network architecture.
engine: selects the DNN engine to be used. With auto selection the new engine is tried first, falling back to the classic one. Note that the new engine does not support non-CPU backends for now.
Returns
Net object.

◆ readTensorFromONNX()

Mat cv::dnn::readTensorFromONNX ( CV_WRAP_FILE_PATH const String & path)
Python:
cv.dnn.readTensorFromONNX(path) -> retval

#include <opencv2/dnn/dnn.hpp>

Creates a blob from a .pb file.

Parameters
path: path to the .pb file with the input tensor.
Returns
Mat object.

◆ shrinkCaffeModel()

void cv::dnn::shrinkCaffeModel ( CV_WRAP_FILE_PATH const String & src,
CV_WRAP_FILE_PATH const String & dst,
const std::vector< String > & layersTypes = std::vector< String >() )
Python:
cv.dnn.shrinkCaffeModel(src, dst[, layersTypes]) -> None

#include <opencv2/dnn/dnn.hpp>

Converts all weights of a Caffe network to half-precision floating point.

Parameters
src: path to the original model from the Caffe framework, containing single-precision floating point weights (usually with the .caffemodel extension).
dst: path to the destination model with updated weights.
layersTypes: set of layer types whose parameters will be converted. By default, only the weights of Convolutional and Fully-Connected layers are converted.
Note
The shrunk model has no original float32 weights, so it can no longer be used with the original Caffe framework. However, the data structure is taken from NVIDIA's Caffe fork (https://github.com/NVIDIA/caffe), so the resulting model may be used there.

◆ softNMSBoxes()

void cv::dnn::softNMSBoxes ( const std::vector< Rect > & bboxes,
const std::vector< float > & scores,
std::vector< float > & updated_scores,
const float score_threshold,
const float nms_threshold,
std::vector< int > & indices,
size_t top_k = 0,
const float sigma = 0.5,
SoftNMSMethod method = SoftNMSMethod::SOFTNMS_GAUSSIAN )
Python:
cv.dnn.softNMSBoxes(bboxes, scores, score_threshold, nms_threshold[, top_k[, sigma[, method]]]) -> updated_scores, indices

#include <opencv2/dnn/dnn.hpp>

Performs soft non-maximum suppression given boxes and corresponding scores. Reference: https://arxiv.org/abs/1704.04503.

Parameters
bboxes: a set of bounding boxes to apply Soft NMS to.
scores: a set of corresponding confidences.
updated_scores: a set of corresponding updated confidences.
score_threshold: a threshold used to filter boxes by score.
nms_threshold: a threshold used in non-maximum suppression.
indices: the kept indices of bboxes after NMS.
top_k: keep at most top_k picked indices.
sigma: parameter of the Gaussian weighting.
method: Gaussian or linear.
See also
SoftNMSMethod

◆ writeTextGraph()

void cv::dnn::writeTextGraph ( CV_WRAP_FILE_PATH const String & model,
CV_WRAP_FILE_PATH const String & output )
Python:
cv.dnn.writeTextGraph(model, output) -> None

#include <opencv2/dnn/dnn.hpp>

Create a text representation for a binary network stored in protocol buffer format.

Parameters
[in] model: path to the binary network.
[in] output: path to the output text file to be created.
Note
To reduce output file size, trained weights are not included.