OpenCV 5.0.0-alpha
Open Source Computer Vision
cv::dnn::Net Class Reference

This class allows creating and manipulating comprehensive artificial neural networks. More...

#include <opencv2/dnn/dnn.hpp>


Public Types

typedef DictValue LayerId
 Container for strings and integers.
 

Public Member Functions

 Net ()
 Default constructor.
 
 ~Net ()
 The destructor frees the net only when there are no more references to it.
 
int addLayer (const String &name, const String &type, const int &dtype, LayerParams &params)
 Adds new layer to the net.
 
int addLayer (const String &name, const String &type, LayerParams &params)
 
int addLayerToPrev (const String &name, const String &type, const int &dtype, LayerParams &params)
 Adds new layer and connects its first input to the first output of previously added layer.
 
int addLayerToPrev (const String &name, const String &type, LayerParams &params)
 
const ArgData & argData (Arg arg) const
 
ArgKind argKind (Arg arg) const
 
const std::string & argName (Arg arg) const
 
Mat & argTensor (Arg arg) const
 
int argType (Arg arg) const
 
void connect (int outLayerId, int outNum, int inpLayerId, int inpNum)
 Connects #outNum output of the first layer to #inNum input of the second layer.
 
void connect (String outPin, String inpPin)
 Connects output of the first layer to input of the second layer.
 
String dump ()
 Dump net to String.
 
std::ostream & dumpArg (std::ostream &strm, Arg arg, int indent, bool comma=true, bool dump_details=false) const
 
std::ostream & dumpDim (std::ostream &strm, int value) const
 
void dumpToFile (CV_WRAP_FILE_PATH const String &path)
 Dump net structure, hyperparameters, backend, target and fusion to dot file.
 
void dumpToPbtxt (CV_WRAP_FILE_PATH const String &path)
 Dump net structure, hyperparameters, backend, target and fusion to pbtxt file.
 
void dumpToStream (std::ostream &strm) const
 Dump net structure, hyperparameters, backend, target and fusion to the specified output stream.
 
bool empty () const
 
void enableFusion (bool fusion)
 Enables or disables layer fusion in the network.
 
void enableWinograd (bool useWinograd)
 Enables or disables the Winograd compute branch. The Winograd compute branch can speed up 3x3 Convolution at a small loss of accuracy.
 
int findDim (const std::string &name, bool insert=false)
 
Mat forward (const String &outputName=String())
 Runs forward pass to compute output of layer with name outputName.
 
void forward (CV_ND OutputArrayOfArrays outputBlobs, const std::vector< String > &outBlobNames)
 Runs forward pass to compute outputs of layers listed in outBlobNames.
 
void forward (CV_ND OutputArrayOfArrays outputBlobs, const String &outputName=String())
 Runs forward pass to compute output of layer with name outputName.
 
void forward (std::vector< std::vector< Mat > > &outputBlobs, const std::vector< String > &outBlobNames)
 Runs forward pass to compute outputs of layers listed in outBlobNames.
 
AsyncArray forwardAsync (const String &outputName=String())
 Runs forward pass to compute output of layer with name outputName.
 
Arg getArg (const std::string &name)
 
int64 getFLOPS (const int layerId, const MatShape &netInputShape, const int &netInputType) const
 
int64 getFLOPS (const int layerId, const std::vector< MatShape > &netInputShapes, const std::vector< int > &netInputTypes) const
 
int64 getFLOPS (const MatShape &netInputShape, const int &netInputType) const
 
int64 getFLOPS (const std::vector< MatShape > &netInputShapes, const std::vector< int > &netInputTypes) const
 Computes FLOP for whole loaded model with specified input shapes.
 
Impl * getImpl () const
 
Impl & getImplRef () const
 
Ptr< Layer > getLayer (const LayerId &layerId) const
 
Ptr< Layer > getLayer (const String &layerName) const
 
Ptr< Layer > getLayer (int layerId) const
 Returns a pointer to the layer with the specified id or name which the network uses.
 
int getLayerId (const String &layer) const
 Converts string name of the layer to the integer identifier.
 
std::vector< Ptr< Layer > > getLayerInputs (int layerId) const
 Returns pointers to input layers of specific layer.
 
std::vector< String > getLayerNames () const
 
int getLayersCount (const String &layerType) const
 Returns count of layers of specified type.
 
void getLayerShapes (const MatShape &netInputShape, const int &netInputType, const int layerId, std::vector< MatShape > &inLayerShapes, std::vector< MatShape > &outLayerShapes) const
 Returns input and output shapes for layer with specified id in loaded model; preliminary inferencing isn't necessary.
 
void getLayerShapes (const std::vector< MatShape > &netInputShapes, const std::vector< int > &netInputTypes, const int layerId, std::vector< MatShape > &inLayerShapes, std::vector< MatShape > &outLayerShapes) const
 
void getLayersShapes (const MatShape &netInputShape, const int &netInputType, std::vector< int > &layersIds, std::vector< std::vector< MatShape > > &inLayersShapes, std::vector< std::vector< MatShape > > &outLayersShapes) const
 
void getLayersShapes (const std::vector< MatShape > &netInputShapes, const std::vector< int > &netInputTypes, std::vector< int > &layersIds, std::vector< std::vector< MatShape > > &inLayersShapes, std::vector< std::vector< MatShape > > &outLayersShapes) const
 Returns input and output shapes for all layers in loaded model; preliminary inferencing isn't necessary.
 
void getLayerTypes (std::vector< String > &layersTypes) const
 Returns the list of layer types used in the model.
 
Ptr< Graph > getMainGraph () const
 
void getMemoryConsumption (const int layerId, const MatShape &netInputShape, const int &netInputType, size_t &weights, size_t &blobs) const
 
void getMemoryConsumption (const int layerId, const std::vector< MatShape > &netInputShapes, const std::vector< int > &netInputTypes, size_t &weights, size_t &blobs) const
 
void getMemoryConsumption (const MatShape &netInputShape, const int &netInputType, size_t &weights, size_t &blobs) const
 
void getMemoryConsumption (const MatShape &netInputShape, const int &netInputType, std::vector< int > &layerIds, std::vector< size_t > &weights, std::vector< size_t > &blobs) const
 
void getMemoryConsumption (const std::vector< MatShape > &netInputShapes, const std::vector< int > &netInputTypes, size_t &weights, size_t &blobs) const
 Computes the number of bytes required to store all weights and intermediate blobs for the model.
 
void getMemoryConsumption (const std::vector< MatShape > &netInputShapes, const std::vector< int > &netInputTypes, std::vector< int > &layerIds, std::vector< size_t > &weights, std::vector< size_t > &blobs) const
 Computes the number of bytes required to store all weights and intermediate blobs for each layer.
 
ModelFormat getModelFormat () const
 Retrieve the current model format, see DNN_MODEL_*.
 
Mat getParam (const String &layerName, int numParam=0) const
 
Mat getParam (int layer, int numParam=0) const
 Returns parameter blob of the layer.
 
int64 getPerfProfile (std::vector< double > &timings)
 Returns overall time for inference and timings (in ticks) for layers.
 
ProfilingMode getProfilingMode () const
 Retrieve the current profiling mode.
 
TracingMode getTracingMode () const
 Retrieve the current tracing mode.
 
std::vector< int > getUnconnectedOutLayers () const
 Returns indexes of layers with unconnected outputs.
 
std::vector< String > getUnconnectedOutLayersNames () const
 Returns names of layers with unconnected outputs.
 
bool haveArg (const std::string &name) const
 
bool isConstArg (Arg arg) const
 
int registerOutput (const std::string &outputName, int layerId, int outputPort)
 Registers network output with name.
 
void setInput (CV_ND InputArray blob, const String &name="", double scalefactor=1.0, const Scalar &mean=Scalar())
 Sets the new input value for the network.
 
void setInputShape (const String &inputName, const MatShape &shape)
 Specify shape of network input.
 
void setInputsNames (const std::vector< String > &inputBlobNames)
 Sets the output names of the network input pseudo-layer.
 
void setParam (const String &layerName, int numParam, CV_ND const Mat &blob)
 
void setParam (int layer, int numParam, CV_ND const Mat &blob)
 Sets the new value for the learned param of the layer.
 
void setPreferableBackend (int backendId)
 Ask the network to use a specific computation backend where supported.
 
void setPreferableTarget (int targetId)
 Ask the network to run computations on a specific target device.
 
void setProfilingMode (ProfilingMode profilingMode)
 Set the profiling mode.
 
void setTracingMode (TracingMode tracingMode)
 Set the tracing mode.
 

Static Public Member Functions

static Net readFromModelOptimizer (const std::vector< uchar > &bufferModelConfig, const std::vector< uchar > &bufferWeights)
 Create a network from Intel's Model Optimizer in-memory buffers with intermediate representation (IR).
 
static Net readFromModelOptimizer (const uchar *bufferModelConfigPtr, size_t bufferModelConfigSize, const uchar *bufferWeightsPtr, size_t bufferWeightsSize)
 Create a network from Intel's Model Optimizer in-memory buffers with intermediate representation (IR).
 
static Net readFromModelOptimizer (CV_WRAP_FILE_PATH const String &xml, CV_WRAP_FILE_PATH const String &bin)
 Create a network from Intel's Model Optimizer intermediate representation (IR).
 

Protected Attributes

Ptr< Impl > impl
 

Friends

class accessor::DnnNetAccessor
 

Detailed Description

This class allows creating and manipulating comprehensive artificial neural networks.

A neural network is represented as a directed acyclic graph (DAG), where vertices are Layer instances and edges specify relationships between layer inputs and outputs.

Each network layer has a unique integer id and a unique string name within its network. LayerId can store either a layer name or a layer id.

This class supports reference counting of its instances, i.e. copies point to the same instance.

Examples
samples/dnn/colorization.cpp, and samples/dnn/openpose.cpp.

Member Typedef Documentation

◆ LayerId

Container for strings and integers.

Deprecated
Use getLayerId() with int result.

Constructor & Destructor Documentation

◆ Net()

cv::dnn::Net::Net ( )
Python:
cv.dnn.Net() -> <dnn_Net object>

Default constructor.

◆ ~Net()

cv::dnn::Net::~Net ( )

The destructor frees the net only when there are no more references to it.

Member Function Documentation

◆ addLayer() [1/2]

int cv::dnn::Net::addLayer ( const String & name,
const String & type,
const int & dtype,
LayerParams & params )
Python:
cv.dnn.Net.addLayer(name, type, dtype, params) -> retval

Adds a new layer to the net.

Parameters
name: unique name of the layer being added.
type: typename of the layer being added (the type must be registered in LayerRegister).
dtype: datatype of output blobs.
params: parameters used to initialize the created layer.
Returns
unique identifier of the created layer, or -1 on failure.

◆ addLayer() [2/2]

int cv::dnn::Net::addLayer ( const String & name,
const String & type,
LayerParams & params )
Python:
cv.dnn.Net.addLayer(name, type, dtype, params) -> retval

◆ addLayerToPrev() [1/2]

int cv::dnn::Net::addLayerToPrev ( const String & name,
const String & type,
const int & dtype,
LayerParams & params )
Python:
cv.dnn.Net.addLayerToPrev(name, type, dtype, params) -> retval

Adds new layer and connects its first input to the first output of previously added layer.

See also
addLayer()

◆ addLayerToPrev() [2/2]

int cv::dnn::Net::addLayerToPrev ( const String & name,
const String & type,
LayerParams & params )
Python:
cv.dnn.Net.addLayerToPrev(name, type, dtype, params) -> retval

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

◆ argData()

const ArgData & cv::dnn::Net::argData ( Arg arg) const

◆ argKind()

ArgKind cv::dnn::Net::argKind ( Arg arg) const

◆ argName()

const std::string & cv::dnn::Net::argName ( Arg arg) const

◆ argTensor()

Mat & cv::dnn::Net::argTensor ( Arg arg) const

◆ argType()

int cv::dnn::Net::argType ( Arg arg) const

◆ connect() [1/2]

void cv::dnn::Net::connect ( int outLayerId,
int outNum,
int inpLayerId,
int inpNum )
Python:
cv.dnn.Net.connect(outPin, inpPin) -> None

Connects #outNum output of the first layer to #inNum input of the second layer.

Parameters
outLayerId: identifier of the first layer
outNum: number of the first layer output
inpLayerId: identifier of the second layer
inpNum: number of the second layer input

◆ connect() [2/2]

void cv::dnn::Net::connect ( String outPin,
String inpPin )
Python:
cv.dnn.Net.connect(outPin, inpPin) -> None

Connects output of the first layer to input of the second layer.

Parameters
outPin: descriptor of the first layer output.
inpPin: descriptor of the second layer input.

Descriptors have the following template: <layer_name>[.input_number]:

  • the first part of the template, layer_name, is the string name of the added layer. If this part is empty, the network input pseudo-layer is used;
  • the second, optional part of the template, input_number, is either the number of the layer input or its label. If this part is omitted, the first layer input is used.

    See also
    setNetInputs(), Layer::inputNameToIndex(), Layer::outputNameToIndex()

◆ dump()

String cv::dnn::Net::dump ( )
Python:
cv.dnn.Net.dump() -> retval

Dump net to String.

Returns
String with structure, hyperparameters, backend, target and fusion. Call this method after setInput(). To see the correct backend, target and fusion, call it after forward().

◆ dumpArg()

std::ostream & cv::dnn::Net::dumpArg ( std::ostream & strm,
Arg arg,
int indent,
bool comma = true,
bool dump_details = false ) const

◆ dumpDim()

std::ostream & cv::dnn::Net::dumpDim ( std::ostream & strm,
int value ) const

◆ dumpToFile()

void cv::dnn::Net::dumpToFile ( CV_WRAP_FILE_PATH const String & path)
Python:
cv.dnn.Net.dumpToFile(path) -> None

Dump net structure, hyperparameters, backend, target and fusion to dot file.

Parameters
path: path to the output file with .dot extension
See also
dump()

◆ dumpToPbtxt()

void cv::dnn::Net::dumpToPbtxt ( CV_WRAP_FILE_PATH const String & path)
Python:
cv.dnn.Net.dumpToPbtxt(path) -> None

Dump net structure, hyperparameters, backend, target and fusion to pbtxt file.

Parameters
path: path to the output file with .pbtxt extension

Use Netron (https://netron.app) to open the target file and visualize the model. Call this method after setInput(). To see the correct backend, target and fusion, call it after forward().

◆ dumpToStream()

void cv::dnn::Net::dumpToStream ( std::ostream & strm) const

Dump net structure, hyperparameters, backend, target and fusion to the specified output stream.

Parameters
strm: the target stream

◆ empty()

bool cv::dnn::Net::empty ( ) const
Python:
cv.dnn.Net.empty() -> retval

Returns true if there are no layers in the network.

◆ enableFusion()

void cv::dnn::Net::enableFusion ( bool fusion)
Python:
cv.dnn.Net.enableFusion(fusion) -> None

Enables or disables layer fusion in the network.

Parameters
fusion: true to enable fusion, false to disable it. Fusion is enabled by default.

◆ enableWinograd()

void cv::dnn::Net::enableWinograd ( bool useWinograd)
Python:
cv.dnn.Net.enableWinograd(useWinograd) -> None

Enables or disables the Winograd compute branch. The Winograd compute branch can speed up 3x3 Convolution at a small loss of accuracy.

Parameters
useWinograd: true to enable the Winograd compute branch. The default is true.

◆ findDim()

int cv::dnn::Net::findDim ( const std::string & name,
bool insert = false )

◆ forward() [1/4]

Mat cv::dnn::Net::forward ( const String & outputName = String())
Python:
cv.dnn.Net.forward([, outputName]) -> retval
cv.dnn.Net.forward([, outputBlobs[, outputName]]) -> outputBlobs
cv.dnn.Net.forward(outBlobNames[, outputBlobs]) -> outputBlobs
cv.dnn.Net.forwardAndRetrieve(outBlobNames) -> outputBlobs

Runs forward pass to compute output of layer with name outputName.

Parameters
outputName: name of the layer whose output is needed
Returns
blob for the first output of the specified layer.

By default runs forward pass for the whole network.

Examples
samples/dnn/colorization.cpp, and samples/dnn/openpose.cpp.

◆ forward() [2/4]

void cv::dnn::Net::forward ( CV_ND OutputArrayOfArrays outputBlobs,
const std::vector< String > & outBlobNames )
Python:
cv.dnn.Net.forward([, outputName]) -> retval
cv.dnn.Net.forward([, outputBlobs[, outputName]]) -> outputBlobs
cv.dnn.Net.forward(outBlobNames[, outputBlobs]) -> outputBlobs
cv.dnn.Net.forwardAndRetrieve(outBlobNames) -> outputBlobs

Runs forward pass to compute outputs of layers listed in outBlobNames.

Parameters
outputBlobs: contains blobs for the first outputs of the specified layers.
outBlobNames: names of the layers whose outputs are needed

◆ forward() [3/4]

void cv::dnn::Net::forward ( CV_ND OutputArrayOfArrays outputBlobs,
const String & outputName = String() )
Python:
cv.dnn.Net.forward([, outputName]) -> retval
cv.dnn.Net.forward([, outputBlobs[, outputName]]) -> outputBlobs
cv.dnn.Net.forward(outBlobNames[, outputBlobs]) -> outputBlobs
cv.dnn.Net.forwardAndRetrieve(outBlobNames) -> outputBlobs

Runs forward pass to compute output of layer with name outputName.

Parameters
outputBlobs: contains all output blobs for the specified layer.
outputName: name of the layer whose output is needed

If outputName is empty, runs forward pass for the whole network.

◆ forward() [4/4]

void cv::dnn::Net::forward ( std::vector< std::vector< Mat > > & outputBlobs,
const std::vector< String > & outBlobNames )
Python:
cv.dnn.Net.forward([, outputName]) -> retval
cv.dnn.Net.forward([, outputBlobs[, outputName]]) -> outputBlobs
cv.dnn.Net.forward(outBlobNames[, outputBlobs]) -> outputBlobs
cv.dnn.Net.forwardAndRetrieve(outBlobNames) -> outputBlobs

Runs forward pass to compute outputs of layers listed in outBlobNames.

Parameters
outputBlobs: contains all output blobs for each layer specified in outBlobNames.
outBlobNames: names of the layers whose outputs are needed

◆ forwardAsync()

AsyncArray cv::dnn::Net::forwardAsync ( const String & outputName = String())
Python:
cv.dnn.Net.forwardAsync([, outputName]) -> retval

Runs forward pass to compute output of layer with name outputName.

Parameters
outputName: name of the layer whose output is needed

By default runs forward pass for the whole network.

This is an asynchronous version of forward(const String&). dnn::DNN_BACKEND_INFERENCE_ENGINE backend is required.

◆ getArg()

Arg cv::dnn::Net::getArg ( const std::string & name)

◆ getFLOPS() [1/4]

int64 cv::dnn::Net::getFLOPS ( const int layerId,
const MatShape & netInputShape,
const int & netInputType ) const
Python:
cv.dnn.Net.getFLOPS(netInputShapes, netInputTypes) -> retval

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

◆ getFLOPS() [2/4]

int64 cv::dnn::Net::getFLOPS ( const int layerId,
const std::vector< MatShape > & netInputShapes,
const std::vector< int > & netInputTypes ) const
Python:
cv.dnn.Net.getFLOPS(netInputShapes, netInputTypes) -> retval

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

◆ getFLOPS() [3/4]

int64 cv::dnn::Net::getFLOPS ( const MatShape & netInputShape,
const int & netInputType ) const
Python:
cv.dnn.Net.getFLOPS(netInputShapes, netInputTypes) -> retval

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. These overloads should be deprecated

◆ getFLOPS() [4/4]

int64 cv::dnn::Net::getFLOPS ( const std::vector< MatShape > & netInputShapes,
const std::vector< int > & netInputTypes ) const
Python:
cv.dnn.Net.getFLOPS(netInputShapes, netInputTypes) -> retval

Computes FLOPs for the whole loaded model with the specified input shapes.

Parameters
netInputShapes: vector of shapes for all net inputs.
netInputTypes: vector of types for all net inputs.
Returns
computed FLOPs.

◆ getImpl()

Impl * cv::dnn::Net::getImpl ( ) const
inline

◆ getImplRef()

Impl & cv::dnn::Net::getImplRef ( ) const
inline

◆ getLayer() [1/3]

Ptr< Layer > cv::dnn::Net::getLayer ( const LayerId & layerId) const
Python:
cv.dnn.Net.getLayer(layerId) -> retval
cv.dnn.Net.getLayer(layerName) -> retval

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

Deprecated
to be removed

◆ getLayer() [2/3]

Ptr< Layer > cv::dnn::Net::getLayer ( const String & layerName) const
inline
Python:
cv.dnn.Net.getLayer(layerId) -> retval
cv.dnn.Net.getLayer(layerName) -> retval

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

Deprecated
Use int getLayerId(const String &layer)

◆ getLayer() [3/3]

Ptr< Layer > cv::dnn::Net::getLayer ( int layerId) const
Python:
cv.dnn.Net.getLayer(layerId) -> retval
cv.dnn.Net.getLayer(layerName) -> retval

Returns a pointer to the layer with the specified id or name which the network uses.

◆ getLayerId()

int cv::dnn::Net::getLayerId ( const String & layer) const
Python:
cv.dnn.Net.getLayerId(layer) -> retval

Converts string name of the layer to the integer identifier.

Returns
id of the layer, or -1 if the layer wasn't found.

◆ getLayerInputs()

std::vector< Ptr< Layer > > cv::dnn::Net::getLayerInputs ( int layerId) const

Returns pointers to input layers of specific layer.

◆ getLayerNames()

std::vector< String > cv::dnn::Net::getLayerNames ( ) const
Python:
cv.dnn.Net.getLayerNames() -> retval

◆ getLayersCount()

int cv::dnn::Net::getLayersCount ( const String & layerType) const
Python:
cv.dnn.Net.getLayersCount(layerType) -> retval

Returns count of layers of specified type.

Parameters
layerType: type of the layers to count.
Returns
count of layers

◆ getLayerShapes() [1/2]

void cv::dnn::Net::getLayerShapes ( const MatShape & netInputShape,
const int & netInputType,
const int layerId,
std::vector< MatShape > & inLayerShapes,
std::vector< MatShape > & outLayerShapes ) const
Python:
cv.dnn.Net.getLayerShapes(netInputShapes, netInputTypes, layerId) -> inLayerShapes, outLayerShapes

Returns input and output shapes for layer with specified id in loaded model; preliminary inferencing isn't necessary.

Parameters
netInputShape: shape of the input blob in the net input layer.
netInputType: type of the input in the net input layer.
layerId: id of the layer.
inLayerShapes: output parameter for input layer shapes; order is the same as in layersIds
outLayerShapes: output parameter for output layer shapes; order is the same as in layersIds

This overload should be deprecated

◆ getLayerShapes() [2/2]

void cv::dnn::Net::getLayerShapes ( const std::vector< MatShape > & netInputShapes,
const std::vector< int > & netInputTypes,
const int layerId,
std::vector< MatShape > & inLayerShapes,
std::vector< MatShape > & outLayerShapes ) const
Python:
cv.dnn.Net.getLayerShapes(netInputShapes, netInputTypes, layerId) -> inLayerShapes, outLayerShapes

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

The only overload of getLayerShapes that should be kept in 5.x

◆ getLayersShapes() [1/2]

void cv::dnn::Net::getLayersShapes ( const MatShape & netInputShape,
const int & netInputType,
std::vector< int > & layersIds,
std::vector< std::vector< MatShape > > & inLayersShapes,
std::vector< std::vector< MatShape > > & outLayersShapes ) const

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

This overload should be deprecated

◆ getLayersShapes() [2/2]

void cv::dnn::Net::getLayersShapes ( const std::vector< MatShape > & netInputShapes,
const std::vector< int > & netInputTypes,
std::vector< int > & layersIds,
std::vector< std::vector< MatShape > > & inLayersShapes,
std::vector< std::vector< MatShape > > & outLayersShapes ) const

Returns input and output shapes for all layers in loaded model; preliminary inferencing isn't necessary.

Parameters
netInputShapes: shapes for all input blobs in the net input layer.
netInputTypes: types for all input blobs in the net input layer.
layersIds: output parameter for layer IDs.
inLayersShapes: output parameter for input layer shapes; order is the same as in layersIds.
outLayersShapes: output parameter for output layer shapes; order is the same as in layersIds.

This overload should be deprecated

◆ getLayerTypes()

void cv::dnn::Net::getLayerTypes ( std::vector< String > & layersTypes) const
Python:
cv.dnn.Net.getLayerTypes() -> layersTypes

Returns the list of layer types used in the model.

Parameters
layersTypes: output parameter for the returned types.

◆ getMainGraph()

Ptr< Graph > cv::dnn::Net::getMainGraph ( ) const

◆ getMemoryConsumption() [1/6]

void cv::dnn::Net::getMemoryConsumption ( const int layerId,
const MatShape & netInputShape,
const int & netInputType,
size_t & weights,
size_t & blobs ) const
Python:
cv.dnn.Net.getMemoryConsumption(netInputShapes, netInputTypes) -> weights, blobs

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. It should be deprecated

◆ getMemoryConsumption() [2/6]

void cv::dnn::Net::getMemoryConsumption ( const int layerId,
const std::vector< MatShape > & netInputShapes,
const std::vector< int > & netInputTypes,
size_t & weights,
size_t & blobs ) const
Python:
cv.dnn.Net.getMemoryConsumption(netInputShapes, netInputTypes) -> weights, blobs

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. It should be deprecated

◆ getMemoryConsumption() [3/6]

void cv::dnn::Net::getMemoryConsumption ( const MatShape & netInputShape,
const int & netInputType,
size_t & weights,
size_t & blobs ) const
Python:
cv.dnn.Net.getMemoryConsumption(netInputShapes, netInputTypes) -> weights, blobs

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. It should be deprecated

◆ getMemoryConsumption() [4/6]

void cv::dnn::Net::getMemoryConsumption ( const MatShape & netInputShape,
const int & netInputType,
std::vector< int > & layerIds,
std::vector< size_t > & weights,
std::vector< size_t > & blobs ) const
Python:
cv.dnn.Net.getMemoryConsumption(netInputShapes, netInputTypes) -> weights, blobs

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

It should be deprecated

◆ getMemoryConsumption() [5/6]

void cv::dnn::Net::getMemoryConsumption ( const std::vector< MatShape > & netInputShapes,
const std::vector< int > & netInputTypes,
size_t & weights,
size_t & blobs ) const
Python:
cv.dnn.Net.getMemoryConsumption(netInputShapes, netInputTypes) -> weights, blobs

Computes the number of bytes required to store all weights and intermediate blobs for the model.

Parameters
netInputShapes: vector of shapes for all net inputs.
netInputTypes: vector of types for all net inputs.
weights: output parameter to store the resulting bytes for weights.
blobs: output parameter to store the resulting bytes for intermediate blobs.

◆ getMemoryConsumption() [6/6]

void cv::dnn::Net::getMemoryConsumption ( const std::vector< MatShape > & netInputShapes,
const std::vector< int > & netInputTypes,
std::vector< int > & layerIds,
std::vector< size_t > & weights,
std::vector< size_t > & blobs ) const
Python:
cv.dnn.Net.getMemoryConsumption(netInputShapes, netInputTypes) -> weights, blobs

Computes the number of bytes required to store all weights and intermediate blobs for each layer.

Parameters
netInputShapes: vector of shapes for all net inputs.
netInputTypes: vector of types for all net inputs.
layerIds: output vector to save layer IDs.
weights: output parameter to store the resulting bytes for weights.
blobs: output parameter to store the resulting bytes for intermediate blobs.

It should be deprecated

◆ getModelFormat()

ModelFormat cv::dnn::Net::getModelFormat ( ) const
Python:
cv.dnn.Net.getModelFormat() -> retval

Retrieve the current model format, see DNN_MODEL_*.

◆ getParam() [1/2]

Mat cv::dnn::Net::getParam ( const String & layerName,
int numParam = 0 ) const
inline
Python:
cv.dnn.Net.getParam(layer[, numParam]) -> retval
cv.dnn.Net.getParam(layerName[, numParam]) -> retval

◆ getParam() [2/2]

Mat cv::dnn::Net::getParam ( int layer,
int numParam = 0 ) const
Python:
cv.dnn.Net.getParam(layer[, numParam]) -> retval
cv.dnn.Net.getParam(layerName[, numParam]) -> retval

Returns parameter blob of the layer.

Parameters
layer: name or id of the layer.
numParam: index of the layer parameter in the Layer::blobs array.
See also
Layer::blobs

◆ getPerfProfile()

int64 cv::dnn::Net::getPerfProfile ( std::vector< double > & timings)
Python:
cv.dnn.Net.getPerfProfile() -> retval, timings

Returns overall time for inference and timings (in ticks) for layers.

Indexes in the returned vector correspond to layer ids. Some layers can be fused with others; in that case a zero tick count is returned for the skipped layers. Supported by DNN_BACKEND_OPENCV on DNN_TARGET_CPU only.

Parameters
[out] timings: vector of tick timings for all layers.
Returns
overall ticks for model inference.

◆ getProfilingMode()

ProfilingMode cv::dnn::Net::getProfilingMode ( ) const
Python:
cv.dnn.Net.getProfilingMode() -> retval

Retrieve the current profiling mode.

◆ getTracingMode()

TracingMode cv::dnn::Net::getTracingMode ( ) const
Python:
cv.dnn.Net.getTracingMode() -> retval

Retrieve the current tracing mode.

◆ getUnconnectedOutLayers()

std::vector< int > cv::dnn::Net::getUnconnectedOutLayers ( ) const
Python:
cv.dnn.Net.getUnconnectedOutLayers() -> retval

Returns indexes of layers with unconnected outputs.

FIXIT: Rework API to registerOutput() approach, deprecate this call

◆ getUnconnectedOutLayersNames()

std::vector< String > cv::dnn::Net::getUnconnectedOutLayersNames ( ) const
Python:
cv.dnn.Net.getUnconnectedOutLayersNames() -> retval

Returns names of layers with unconnected outputs.

FIXIT: Rework API to registerOutput() approach, deprecate this call

◆ haveArg()

bool cv::dnn::Net::haveArg ( const std::string & name) const

◆ isConstArg()

bool cv::dnn::Net::isConstArg ( Arg arg) const

◆ readFromModelOptimizer() [1/3]

static Net cv::dnn::Net::readFromModelOptimizer ( const std::vector< uchar > & bufferModelConfig,
const std::vector< uchar > & bufferWeights )
static
Python:
cv.dnn.Net.readFromModelOptimizer(xml, bin) -> retval
cv.dnn.Net.readFromModelOptimizer(bufferModelConfig, bufferWeights) -> retval
cv.dnn.Net_readFromModelOptimizer(xml, bin) -> retval
cv.dnn.Net_readFromModelOptimizer(bufferModelConfig, bufferWeights) -> retval

Create a network from Intel's Model Optimizer in-memory buffers with intermediate representation (IR).

Parameters
[in] bufferModelConfig: buffer with the model's configuration.
[in] bufferWeights: buffer with the model's trained weights.
Returns
Net object.

◆ readFromModelOptimizer() [2/3]

static Net cv::dnn::Net::readFromModelOptimizer ( const uchar * bufferModelConfigPtr,
size_t bufferModelConfigSize,
const uchar * bufferWeightsPtr,
size_t bufferWeightsSize )
static
Python:
cv.dnn.Net.readFromModelOptimizer(xml, bin) -> retval
cv.dnn.Net.readFromModelOptimizer(bufferModelConfig, bufferWeights) -> retval
cv.dnn.Net_readFromModelOptimizer(xml, bin) -> retval
cv.dnn.Net_readFromModelOptimizer(bufferModelConfig, bufferWeights) -> retval

Create a network from Intel's Model Optimizer in-memory buffers with intermediate representation (IR).

Parameters
[in] bufferModelConfigPtr: buffer pointer to the model's configuration.
[in] bufferModelConfigSize: buffer size of the model's configuration.
[in] bufferWeightsPtr: buffer pointer to the model's trained weights.
[in] bufferWeightsSize: buffer size of the model's trained weights.
Returns
Net object.

◆ readFromModelOptimizer() [3/3]

static Net cv::dnn::Net::readFromModelOptimizer ( CV_WRAP_FILE_PATH const String & xml,
CV_WRAP_FILE_PATH const String & bin )
static
Python:
cv.dnn.Net.readFromModelOptimizer(xml, bin) -> retval
cv.dnn.Net.readFromModelOptimizer(bufferModelConfig, bufferWeights) -> retval
cv.dnn.Net_readFromModelOptimizer(xml, bin) -> retval
cv.dnn.Net_readFromModelOptimizer(bufferModelConfig, bufferWeights) -> retval

Create a network from Intel's Model Optimizer intermediate representation (IR).

Parameters
[in]    xml    XML configuration file with network's topology.
[in]    bin    Binary file with trained weights.

Networks imported from Intel's Model Optimizer are launched in Intel's Inference Engine backend.

◆ registerOutput()

int cv::dnn::Net::registerOutput ( const std::string & outputName,
int layerId,
int outputPort )

Registers network output with name.

Function may create additional 'Identity' layer.

Parameters
outputName    identifier of the output
layerId       identifier of the layer whose output is registered
outputPort    index of that layer's output
Returns
index of the bound layer (the same as layerId, or the id of a newly created 'Identity' layer)

◆ setInput()

void cv::dnn::Net::setInput ( CV_ND InputArray blob,
const String & name = "",
double scalefactor = 1.0,
const Scalar & mean = Scalar() )
Python:
cv.dnn.Net.setInput(blob[, name[, scalefactor[, mean]]]) -> None

Sets the new input value for the network.

Parameters
blob           A new blob. Should have CV_32F or CV_8U depth.
name           A name of the input layer.
scalefactor    An optional normalization scale.
mean           Optional per-channel mean subtraction values.
See also
connect(String, String) for the format of the descriptor.

If scale or mean values are specified, a final input blob is computed as:

\[input(n,c,h,w) = scalefactor \times (blob(n,c,h,w) - mean_c)\]

Examples
samples/dnn/colorization.cpp, and samples/dnn/openpose.cpp.
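To make the formula above concrete, here is a small NumPy sketch of the same per-channel normalization (values are illustrative; NumPy stands in for the internal computation):

```python
# NumPy sketch of: input(n,c,h,w) = scalefactor * (blob(n,c,h,w) - mean_c)
import numpy as np

blob = np.full((1, 3, 2, 2), 100.0, dtype=np.float32)  # NCHW input blob
scalefactor = 1.0 / 255.0
mean = np.array([10.0, 20.0, 30.0], dtype=np.float32)  # one value per channel

# Broadcast the per-channel mean over (n, c, h, w), then apply the scale.
normalized = scalefactor * (blob - mean.reshape(1, 3, 1, 1))

print(normalized[0, 0, 0, 0])  # (100 - 10) / 255
```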

◆ setInputShape()

void cv::dnn::Net::setInputShape ( const String & inputName,
const MatShape & shape )
Python:
cv.dnn.Net.setInputShape(inputName, shape) -> None

Specify shape of network input.

◆ setInputsNames()

void cv::dnn::Net::setInputsNames ( const std::vector< String > & inputBlobNames)
Python:
cv.dnn.Net.setInputsNames(inputBlobNames) -> None

Sets the output names of the network input pseudo-layer.

Each net always has a special network input pseudo-layer with id=0. This layer only stores the user blobs and performs no computations; in fact, it is the only way to pass user data into the network. Like any other layer, it can label its outputs, and this function provides an easy way to do so.

◆ setParam() [1/2]

void cv::dnn::Net::setParam ( const String & layerName,
int numParam,
CV_ND const Mat & blob )
inline
Python:
cv.dnn.Net.setParam(layer, numParam, blob) -> None
cv.dnn.Net.setParam(layerName, numParam, blob) -> None

◆ setParam() [2/2]

void cv::dnn::Net::setParam ( int layer,
int numParam,
CV_ND const Mat & blob )
Python:
cv.dnn.Net.setParam(layer, numParam, blob) -> None
cv.dnn.Net.setParam(layerName, numParam, blob) -> None

Sets the new value for the learned param of the layer.

Parameters
layer       name or id of the layer.
numParam    index of the layer parameter in the Layer::blobs array.
blob        the new value.
See also
Layer::blobs
Note
If shape of the new blob differs from the previous shape, then the following forward pass may fail.

◆ setPreferableBackend()

void cv::dnn::Net::setPreferableBackend ( int backendId)
Python:
cv.dnn.Net.setPreferableBackend(backendId) -> None

Asks the network to use a specific computation backend where it is supported.

Parameters
[in]    backendId    backend identifier.
See also
Backend
Examples
samples/dnn/colorization.cpp.

◆ setPreferableTarget()

void cv::dnn::Net::setPreferableTarget ( int targetId)
Python:
cv.dnn.Net.setPreferableTarget(targetId) -> None

Asks the network to make computations on a specific target device.

Parameters
[in]    targetId    target identifier.
See also
Target

List of supported combinations backend / target:

                          DNN_BACKEND_OPENCV   DNN_BACKEND_INFERENCE_ENGINE   DNN_BACKEND_CUDA
DNN_TARGET_CPU                    +                          +
DNN_TARGET_OPENCL                 +                          +
DNN_TARGET_OPENCL_FP16            +                          +
DNN_TARGET_MYRIAD                                            +
DNN_TARGET_FPGA                                              +
DNN_TARGET_CUDA                                                                      +
DNN_TARGET_CUDA_FP16                                                                 +
DNN_TARGET_HDDL                                              +
Examples
samples/dnn/colorization.cpp.
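The support table above can be encoded as a small lookup for sanity-checking a backend/target pair before calling these setters. The structure below is a plain-Python illustration of the table, not an OpenCV API:

```python
# Plain-Python encoding of the backend/target support table above:
# each backend maps to the set of targets marked '+' in its column.
SUPPORTED = {
    "DNN_BACKEND_OPENCV": {
        "DNN_TARGET_CPU", "DNN_TARGET_OPENCL", "DNN_TARGET_OPENCL_FP16",
    },
    "DNN_BACKEND_INFERENCE_ENGINE": {
        "DNN_TARGET_CPU", "DNN_TARGET_OPENCL", "DNN_TARGET_OPENCL_FP16",
        "DNN_TARGET_MYRIAD", "DNN_TARGET_FPGA", "DNN_TARGET_HDDL",
    },
    "DNN_BACKEND_CUDA": {
        "DNN_TARGET_CUDA", "DNN_TARGET_CUDA_FP16",
    },
}


def is_supported(backend, target):
    # True iff the table marks this backend/target combination with '+'.
    return target in SUPPORTED.get(backend, set())


print(is_supported("DNN_BACKEND_CUDA", "DNN_TARGET_CUDA"))      # True
print(is_supported("DNN_BACKEND_OPENCV", "DNN_TARGET_MYRIAD"))  # False
```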

◆ setProfilingMode()

void cv::dnn::Net::setProfilingMode ( ProfilingMode profilingMode)
Python:
cv.dnn.Net.setProfilingMode(profilingMode) -> None

Set the profiling mode.

Parameters
[in]    profilingMode    the profiling mode, see DNN_PROFILE_*

◆ setTracingMode()

void cv::dnn::Net::setTracingMode ( TracingMode tracingMode)
Python:
cv.dnn.Net.setTracingMode(tracingMode) -> None

Set the tracing mode.

Parameters
[in]    tracingMode    the tracing mode, see DNN_TRACE_*

Friends And Related Symbol Documentation

◆ accessor::DnnNetAccessor

friend class accessor::DnnNetAccessor
friend

Member Data Documentation

◆ impl

Ptr<Impl> cv::dnn::Net::impl
protected

The documentation for this class was generated from the following file:
opencv2/dnn/dnn.hpp