OpenCV  4.8.0
Open Source Computer Vision
cv::dnn::Layer Class Reference

This interface class allows building new Layers, which are the building blocks of networks. More...

#include <opencv2/dnn/dnn.hpp>

Inheritance diagram for cv::dnn::Layer:
Inherits cv::Algorithm. Inherited by cv::dnn::AccumLayer, cv::dnn::ActivationLayer, cv::dnn::ArgLayer, cv::dnn::BaseConvolutionLayer, cv::dnn::BlankLayer, cv::dnn::CompareLayer, cv::dnn::ConcatLayer, cv::dnn::ConstLayer, cv::dnn::CorrelationLayer, cv::dnn::CropAndResizeLayer, cv::dnn::CropLayer, cv::dnn::CumSumLayer, cv::dnn::DataAugmentationLayer, cv::dnn::DequantizeLayer, cv::dnn::DetectionOutputLayer, cv::dnn::EltwiseLayer, cv::dnn::EltwiseLayerInt8, cv::dnn::FlattenLayer, cv::dnn::FlowWarpLayer, cv::dnn::GatherLayer, cv::dnn::GRULayer, cv::dnn::InnerProductLayer, cv::dnn::InterpLayer, cv::dnn::LayerNormLayer, cv::dnn::LRNLayer, cv::dnn::LSTMLayer, cv::dnn::MaxUnpoolLayer, cv::dnn::MVNLayer, cv::dnn::NaryEltwiseLayer, cv::dnn::NormalizeBBoxLayer, cv::dnn::PaddingLayer, cv::dnn::PermuteLayer, cv::dnn::PoolingLayer, cv::dnn::PriorBoxLayer, cv::dnn::ProposalLayer, cv::dnn::QuantizeLayer, cv::dnn::ReduceLayer, cv::dnn::RegionLayer, cv::dnn::ReorgLayer, cv::dnn::RequantizeLayer, cv::dnn::ReshapeLayer, cv::dnn::ResizeLayer, cv::dnn::RNNLayer, cv::dnn::ScaleLayer, cv::dnn::ScatterLayer, cv::dnn::ScatterNDLayer, cv::dnn::ShiftLayer, cv::dnn::ShiftLayerInt8, cv::dnn::ShuffleChannelLayer, cv::dnn::SliceLayer, cv::dnn::SoftmaxLayer, cv::dnn::SplitLayer, cv::dnn::TileLayer.

Public Member Functions

 Layer ()
 
 Layer (const LayerParams &params)
 Initializes only name, type and blobs fields. More...
 
virtual ~Layer ()
 
virtual void applyHalideScheduler (Ptr< BackendNode > &node, const std::vector< Mat *> &inputs, const std::vector< Mat > &outputs, int targetId) const
 Automatic Halide scheduling based on layer hyper-parameters. More...
 
virtual void finalize (const std::vector< Mat *> &input, std::vector< Mat > &output)
 Computes and sets internal parameters according to inputs, outputs and blobs. More...
 
virtual void finalize (InputArrayOfArrays inputs, OutputArrayOfArrays outputs)
 Computes and sets internal parameters according to inputs, outputs and blobs. More...
 
void finalize (const std::vector< Mat > &inputs, std::vector< Mat > &outputs)
 This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. More...
 
std::vector< Mat > finalize (const std::vector< Mat > &inputs)
 This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. More...
 
virtual void forward (std::vector< Mat *> &input, std::vector< Mat > &output, std::vector< Mat > &internals)
 Given the input blobs, computes the output blobs. More...
 
virtual void forward (InputArrayOfArrays inputs, OutputArrayOfArrays outputs, OutputArrayOfArrays internals)
 Given the input blobs, computes the output blobs. More...
 
void forward_fallback (InputArrayOfArrays inputs, OutputArrayOfArrays outputs, OutputArrayOfArrays internals)
 Given the input blobs, computes the output blobs. More...
 
virtual int64 getFLOPS (const std::vector< MatShape > &inputs, const std::vector< MatShape > &outputs) const
 
virtual bool getMemoryShapes (const std::vector< MatShape > &inputs, const int requiredOutputs, std::vector< MatShape > &outputs, std::vector< MatShape > &internals) const
 
virtual void getScaleShift (Mat &scale, Mat &shift) const
 Returns parameters of layers with channel-wise multiplication and addition. More...
 
virtual void getScaleZeropoint (float &scale, int &zeropoint) const
 Returns the scale and zeropoint of the layer. More...
 
virtual Ptr< BackendNode > initCann (const std::vector< Ptr< BackendWrapper > > &inputs, const std::vector< Ptr< BackendWrapper > > &outputs, const std::vector< Ptr< BackendNode > > &nodes)
 Returns a CANN backend node. More...
 
virtual Ptr< BackendNode > initCUDA (void *context, const std::vector< Ptr< BackendWrapper >> &inputs, const std::vector< Ptr< BackendWrapper >> &outputs)
 Returns a CUDA backend node. More...
 
virtual Ptr< BackendNode > initHalide (const std::vector< Ptr< BackendWrapper > > &inputs)
 Returns Halide backend node. More...
 
virtual Ptr< BackendNode > initNgraph (const std::vector< Ptr< BackendWrapper > > &inputs, const std::vector< Ptr< BackendNode > > &nodes)
 
virtual Ptr< BackendNode > initTimVX (void *timVxInfo, const std::vector< Ptr< BackendWrapper > > &inputsWrapper, const std::vector< Ptr< BackendWrapper > > &outputsWrapper, bool isLast)
 Returns a TimVX backend node. More...
 
virtual Ptr< BackendNode > initVkCom (const std::vector< Ptr< BackendWrapper > > &inputs, std::vector< Ptr< BackendWrapper > > &outputs)
 
virtual Ptr< BackendNode > initWebnn (const std::vector< Ptr< BackendWrapper > > &inputs, const std::vector< Ptr< BackendNode > > &nodes)
 
virtual int inputNameToIndex (String inputName)
 Returns the index of an input blob in the input array. More...
 
virtual int outputNameToIndex (const String &outputName)
 Returns the index of an output blob in the output array. More...
 
void run (const std::vector< Mat > &inputs, std::vector< Mat > &outputs, std::vector< Mat > &internals)
 Allocates layer and computes output. More...
 
virtual bool setActivation (const Ptr< ActivationLayer > &layer)
 Tries to attach the subsequent activation layer to this layer, i.e. performs partial layer fusion. More...
 
void setParamsFrom (const LayerParams &params)
 Initializes only name, type and blobs fields. More...
 
virtual bool supportBackend (int backendId)
 Asks the layer whether it supports a specific backend for computations. More...
 
virtual Ptr< BackendNode > tryAttach (const Ptr< BackendNode > &node)
 Implements layer fusion. More...
 
virtual bool tryFuse (Ptr< Layer > &top)
 Tries to fuse the current layer with the next one. More...
 
virtual bool tryQuantize (const std::vector< std::vector< float > > &scales, const std::vector< std::vector< int > > &zeropoints, LayerParams &params)
 Tries to quantize the given layer and compute the quantization parameters required for fixed point implementation. More...
 
virtual void unsetAttached ()
 "Detaches" all the layers, attached to particular layer. More...
 
virtual bool updateMemoryShapes (const std::vector< MatShape > &inputs)
 
- Public Member Functions inherited from cv::Algorithm
 Algorithm ()
 
virtual ~Algorithm ()
 
virtual void clear ()
 Clears the algorithm state. More...
 
virtual bool empty () const
 Returns true if the Algorithm is empty (e.g. in the very beginning or after an unsuccessful read). More...
 
virtual String getDefaultName () const
 
virtual void read (const FileNode &fn)
 Reads algorithm parameters from a file storage. More...
 
virtual void save (const String &filename) const
 
virtual void write (FileStorage &fs) const
 Stores algorithm parameters in a file storage. More...
 
void write (FileStorage &fs, const String &name) const
 
void write (const Ptr< FileStorage > &fs, const String &name=String()) const
 

Public Attributes

std::vector< Mat > blobs
 List of learned parameters; they must be stored here so that they can be read via Net::getParam(). More...
 
String name
 Name of the layer instance; can be used for logging or other internal purposes. More...
 
int preferableTarget
 Preferred target for layer forwarding. More...
 
String type
 Type name that was used to create the layer by the layer factory. More...
 

Additional Inherited Members

- Static Public Member Functions inherited from cv::Algorithm
template<typename _Tp >
static Ptr< _Tp > load (const String &filename, const String &objname=String())
 Loads algorithm from the file. More...
 
template<typename _Tp >
static Ptr< _Tp > loadFromString (const String &strModel, const String &objname=String())
 Loads algorithm from a String. More...
 
template<typename _Tp >
static Ptr< _Tp > read (const FileNode &fn)
 Reads algorithm from the file node. More...
 
- Protected Member Functions inherited from cv::Algorithm
void writeFormat (FileStorage &fs) const
 

Detailed Description

This interface class allows building new Layers, which are the building blocks of networks.

Each class derived from Layer must implement the allocate() method to declare its own outputs and forward() to compute the outputs. Before using the new layer in a network, you must also register it using one of the LayerFactory macros.
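
A minimal sketch of such a custom layer is shown below. It is illustrative only: the class name MyIdentityLayer and the registered type string "MyIdentity" are made up, and the layer simply copies its inputs to its outputs. It overrides getMemoryShapes() and forward() and registers the class with the CV_DNN_REGISTER_LAYER_CLASS macro from opencv2/dnn/layer.details.hpp.

#include <opencv2/dnn.hpp>
#include <opencv2/dnn/layer.details.hpp>

class MyIdentityLayer : public cv::dnn::Layer
{
public:
    MyIdentityLayer(const cv::dnn::LayerParams &params) : Layer(params) {}

    // The layer factory calls this to instantiate the layer from its parameters.
    static cv::Ptr<cv::dnn::Layer> create(cv::dnn::LayerParams &params)
    {
        return cv::Ptr<cv::dnn::Layer>(new MyIdentityLayer(params));
    }

    // Declare output shapes from input shapes (identity: same shapes).
    bool getMemoryShapes(const std::vector<cv::dnn::MatShape> &inputs,
                         const int requiredOutputs,
                         std::vector<cv::dnn::MatShape> &outputs,
                         std::vector<cv::dnn::MatShape> &internals) const CV_OVERRIDE
    {
        CV_UNUSED(requiredOutputs); CV_UNUSED(internals);
        outputs = inputs;
        return false;
    }

    // Compute the outputs from the inputs.
    void forward(cv::InputArrayOfArrays inputs_arr,
                 cv::OutputArrayOfArrays outputs_arr,
                 cv::OutputArrayOfArrays /*internals_arr*/) CV_OVERRIDE
    {
        std::vector<cv::Mat> inputs, outputs;
        inputs_arr.getMatVector(inputs);
        outputs_arr.getMatVector(outputs);
        for (size_t i = 0; i < inputs.size(); ++i)
            inputs[i].copyTo(outputs[i]);
    }
};

int main()
{
    // Register the layer by its type name before parsing any model that uses it.
    CV_DNN_REGISTER_LAYER_CLASS(MyIdentity, MyIdentityLayer);
    // ... load and run a network containing a layer of type "MyIdentity" ...
    return 0;
}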

Constructor & Destructor Documentation

◆ Layer() [1/2]

cv::dnn::Layer::Layer ( )

◆ Layer() [2/2]

cv::dnn::Layer::Layer ( const LayerParams &  params )
explicit

Initializes only name, type and blobs fields.

◆ ~Layer()

virtual cv::dnn::Layer::~Layer ( )
virtual

Member Function Documentation

◆ applyHalideScheduler()

virtual void cv::dnn::Layer::applyHalideScheduler ( Ptr< BackendNode > &  node,
const std::vector< Mat *> &  inputs,
const std::vector< Mat > &  outputs,
int  targetId 
) const
virtual

Automatic Halide scheduling based on layer hyper-parameters.

Parameters
[in]  node      Backend node with Halide functions.
[in]  inputs    Blobs that will be used in forward invocations.
[in]  outputs   Blobs that will be used in forward invocations.
[in]  targetId  Target identifier.
See also
BackendNode, Target

The layer does not use its own Halide::Func members because layer fusion may have been applied, in which case the fused function should be scheduled.

◆ finalize() [1/4]

virtual void cv::dnn::Layer::finalize ( const std::vector< Mat *> &  input,
std::vector< Mat > &  output 
)
virtual
Python:
cv.dnn.Layer.finalize(inputs[, outputs]) -> outputs

Computes and sets internal parameters according to inputs, outputs and blobs.

Deprecated:
Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead
Parameters
[in]  input   vector of already allocated input blobs
[out] output  vector of already allocated output blobs

This method is called after the network has allocated all memory for input and output blobs and before inference.

◆ finalize() [2/4]

virtual void cv::dnn::Layer::finalize ( InputArrayOfArrays  inputs,
OutputArrayOfArrays  outputs 
)
virtual
Python:
cv.dnn.Layer.finalize(inputs[, outputs]) -> outputs

Computes and sets internal parameters according to inputs, outputs and blobs.

Parameters
[in]  inputs   vector of already allocated input blobs
[out] outputs  vector of already allocated output blobs

This method is called after the network has allocated all memory for input and output blobs and before inference.
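
For illustration, a custom layer might override this non-deprecated variant to cache shape-dependent state once the blobs are allocated. This is a hedged sketch: MyCustomLayer and its member channels_ are hypothetical.

void MyCustomLayer::finalize(cv::InputArrayOfArrays inputs_arr,
                             cv::OutputArrayOfArrays /*outputs_arr*/)
{
    std::vector<cv::Mat> inputs;
    inputs_arr.getMatVector(inputs);          // materialize the input blobs
    CV_Assert(!inputs.empty() && inputs[0].dims >= 2);
    channels_ = inputs[0].size[1];            // NCHW layout: size[1] is C
}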

◆ finalize() [3/4]

void cv::dnn::Layer::finalize ( const std::vector< Mat > &  inputs,
std::vector< Mat > &  outputs 
)
Python:
cv.dnn.Layer.finalize(inputs[, outputs]) -> outputs

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

Deprecated:
Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead

◆ finalize() [4/4]

std::vector<Mat> cv::dnn::Layer::finalize ( const std::vector< Mat > &  inputs)
Python:
cv.dnn.Layer.finalize(inputs[, outputs]) -> outputs

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

Deprecated:
Use Layer::finalize(InputArrayOfArrays, OutputArrayOfArrays) instead

◆ forward() [1/2]

virtual void cv::dnn::Layer::forward ( std::vector< Mat *> &  input,
std::vector< Mat > &  output,
std::vector< Mat > &  internals 
)
virtual

Given the input blobs, computes the output blobs.

Deprecated:
Use Layer::forward(InputArrayOfArrays, OutputArrayOfArrays, OutputArrayOfArrays) instead
Parameters
[in]  input      the input blobs.
[out] output     allocated output blobs, which will store results of the computation.
[out] internals  allocated internal blobs

◆ forward() [2/2]

virtual void cv::dnn::Layer::forward ( InputArrayOfArrays  inputs,
OutputArrayOfArrays  outputs,
OutputArrayOfArrays  internals 
)
virtual

Given the input blobs, computes the output blobs.

Parameters
[in]  inputs     the input blobs.
[out] outputs    allocated output blobs, which will store results of the computation.
[out] internals  allocated internal blobs

◆ forward_fallback()

void cv::dnn::Layer::forward_fallback ( InputArrayOfArrays  inputs,
OutputArrayOfArrays  outputs,
OutputArrayOfArrays  internals 
)

Given the input blobs, computes the output blobs.

Parameters
[in]  inputs     the input blobs.
[out] outputs    allocated output blobs, which will store results of the computation.
[out] internals  allocated internal blobs
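
A common pattern, sketched here with hedging because the exact depth checks vary between layers and OpenCV versions, is for a custom forward() to delegate to forward_fallback() when the blobs arrive in a depth it does not handle:

void MyCustomLayer::forward(cv::InputArrayOfArrays inputs_arr,
                            cv::OutputArrayOfArrays outputs_arr,
                            cv::OutputArrayOfArrays internals_arr)
{
    if (inputs_arr.depth() != CV_32F)
    {
        // Let the fallback convert the blobs and call the deprecated
        // forward(std::vector<Mat*>&, ...) overload of this layer.
        forward_fallback(inputs_arr, outputs_arr, internals_arr);
        return;
    }
    // ... the CV_32F computation path goes here ...
}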

◆ getFLOPS()

virtual int64 cv::dnn::Layer::getFLOPS ( const std::vector< MatShape > &  inputs,
const std::vector< MatShape > &  outputs 
) const
inlinevirtual

◆ getMemoryShapes()

virtual bool cv::dnn::Layer::getMemoryShapes ( const std::vector< MatShape > &  inputs,
const int  requiredOutputs,
std::vector< MatShape > &  outputs,
std::vector< MatShape > &  internals 
) const
virtual

◆ getScaleShift()

virtual void cv::dnn::Layer::getScaleShift ( Mat &  scale,
Mat &  shift 
) const
virtual

Returns parameters of layers with channel-wise multiplication and addition.

Parameters
[out] scale  Channel-wise multipliers. Total number of values should be equal to number of channels.
[out] shift  Channel-wise offsets. Total number of values should be equal to number of channels.

Some layers can fuse their transformations with subsequent layers, for example convolution followed by batch normalization. In that case the base layer uses the weights of the layer that follows it, and the fused layer is skipped. By default, scale and shift are empty, which means the layer has no element-wise multiplications or additions.
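
As a hedged illustration of this contract, a hypothetical layer that multiplies each channel by weights_ and adds bias_ (both cv::Mat members holding one value per channel) could report them as follows:

void MyScaleLikeLayer::getScaleShift(cv::Mat &scale, cv::Mat &shift) const
{
    scale = weights_;   // one multiplier per channel, or empty if none
    shift = bias_;      // one offset per channel, or empty if none
}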

◆ getScaleZeropoint()

virtual void cv::dnn::Layer::getScaleZeropoint ( float &  scale,
int &  zeropoint 
) const
virtual

Returns the scale and zeropoint of the layer.

Parameters
[out] scale      Output scale
[out] zeropoint  Output zeropoint

By default, scale is 1 and zeropoint is 0.
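
A hedged sketch for a hypothetical int8 layer whose output quantization parameters are stored in members outputScale_ and outputZeropoint_:

void MyInt8Layer::getScaleZeropoint(float &scale, int &zeropoint) const
{
    scale = outputScale_;
    zeropoint = outputZeropoint_;
}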

◆ initCann()

virtual Ptr<BackendNode> cv::dnn::Layer::initCann ( const std::vector< Ptr< BackendWrapper > > &  inputs,
const std::vector< Ptr< BackendWrapper > > &  outputs,
const std::vector< Ptr< BackendNode > > &  nodes 
)
virtual

Returns a CANN backend node.

Parameters
inputs   input tensors of CANN operator
outputs  output tensors of CANN operator
nodes    nodes of input tensors

◆ initCUDA()

virtual Ptr<BackendNode> cv::dnn::Layer::initCUDA ( void *  context,
const std::vector< Ptr< BackendWrapper >> &  inputs,
const std::vector< Ptr< BackendWrapper >> &  outputs 
)
virtual

Returns a CUDA backend node.

Parameters
context  void pointer to CSLContext object
inputs   layer inputs
outputs  layer outputs

◆ initHalide()

virtual Ptr<BackendNode> cv::dnn::Layer::initHalide ( const std::vector< Ptr< BackendWrapper > > &  inputs)
virtual

Returns Halide backend node.

Parameters
[in]  inputs  Input Halide buffers.
See also
BackendNode, BackendWrapper

Input buffers should be exactly the same ones that will be used in forward invocations. Although a Halide::ImageParam could be built from the input shape alone, passing the actual buffers helps prevent some memory management issues (if something goes wrong, the Halide tests will fail).

◆ initNgraph()

virtual Ptr<BackendNode> cv::dnn::Layer::initNgraph ( const std::vector< Ptr< BackendWrapper > > &  inputs,
const std::vector< Ptr< BackendNode > > &  nodes 
)
virtual

◆ initTimVX()

virtual Ptr<BackendNode> cv::dnn::Layer::initTimVX ( void *  timVxInfo,
const std::vector< Ptr< BackendWrapper > > &  inputsWrapper,
const std::vector< Ptr< BackendWrapper > > &  outputsWrapper,
bool  isLast 
)
virtual

Returns a TimVX backend node.

Parameters
timVxInfo       void pointer to CSLContext object
inputsWrapper   layer inputs
outputsWrapper  layer outputs
isLast          if the node is the last one of the TimVX Graph.

◆ initVkCom()

virtual Ptr<BackendNode> cv::dnn::Layer::initVkCom ( const std::vector< Ptr< BackendWrapper > > &  inputs,
std::vector< Ptr< BackendWrapper > > &  outputs 
)
virtual

◆ initWebnn()

virtual Ptr<BackendNode> cv::dnn::Layer::initWebnn ( const std::vector< Ptr< BackendWrapper > > &  inputs,
const std::vector< Ptr< BackendNode > > &  nodes 
)
virtual

◆ inputNameToIndex()

virtual int cv::dnn::Layer::inputNameToIndex ( String  inputName)
virtual

Returns the index of an input blob in the input array.

Parameters
inputName  label of input blob

Each layer input and output can be labeled for easy identification using the "<layer_name>[.output_name]" notation. This method maps the label of an input blob to its index in the input vector.

Reimplemented in cv::dnn::LSTMLayer.
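
The same pin notation is used when wiring layers manually with cv::dnn::Net::connect; a hedged sketch (the layer and pin names are illustrative):

cv::dnn::Net net;
// ... add layers named "lstm1" and "fc1" to the network ...
net.connect("lstm1.h", "fc1");   // "<layer_name>[.output_name]" notation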

◆ outputNameToIndex()

virtual int cv::dnn::Layer::outputNameToIndex ( const String &  outputName)
virtual
Python:
cv.dnn.Layer.outputNameToIndex(outputName) -> retval

Returns the index of an output blob in the output array.

See also
inputNameToIndex()

Reimplemented in cv::dnn::LSTMLayer.

◆ run()

void cv::dnn::Layer::run ( const std::vector< Mat > &  inputs,
std::vector< Mat > &  outputs,
std::vector< Mat > &  internals 
)
Python:
cv.dnn.Layer.run(inputs, internals[, outputs]) -> outputs, internals

Allocates layer and computes output.

Deprecated:
This method will be removed in a future release.

◆ setActivation()

virtual bool cv::dnn::Layer::setActivation ( const Ptr< ActivationLayer > &  layer)
virtual

Tries to attach the subsequent activation layer to this layer, i.e. performs partial layer fusion.

Parameters
[in]  layer  The subsequent activation layer.

Returns true if the activation layer has been attached successfully.
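
A hedged sketch of how a fusable layer might implement this; fusedActivation is a hypothetical member, and real OpenCV layers apply additional eligibility checks:

bool MyConvLikeLayer::setActivation(const cv::Ptr<cv::dnn::ActivationLayer> &layer)
{
    fusedActivation = layer;           // applied at the end of forward()
    return !fusedActivation.empty();   // true signals that fusion was accepted
}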

◆ setParamsFrom()

void cv::dnn::Layer::setParamsFrom ( const LayerParams &  params)

Initializes only name, type and blobs fields.

◆ supportBackend()

virtual bool cv::dnn::Layer::supportBackend ( int  backendId)
virtual

Asks the layer whether it supports a specific backend for computations.

Parameters
[in]  backendId  computation backend identifier.
See also
Backend
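
A custom layer typically claims only the default backend; a minimal hedged sketch:

bool MyCustomLayer::supportBackend(int backendId)
{
    return backendId == cv::dnn::DNN_BACKEND_OPENCV;
}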

◆ tryAttach()

virtual Ptr<BackendNode> cv::dnn::Layer::tryAttach ( const Ptr< BackendNode > &  node)
virtual

Implements layer fusion.

Parameters
[in]  node  Backend node of bottom layer.
See also
BackendNode

Relevant for graph-based backends. If the layer is attached successfully, returns a non-empty cv::Ptr to a node of the same backend. Fusion is performed only over the last function.

◆ tryFuse()

virtual bool cv::dnn::Layer::tryFuse ( Ptr< Layer > &  top)
virtual

Tries to fuse the current layer with the next one.

Parameters
[in]  top  Next layer to be fused.
Returns
True if fusion was performed.
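
A hedged sketch of one possible implementation: absorb a following channel-wise scale/shift layer, using getScaleShift() described above (the weight-folding itself is omitted):

bool MyConvLikeLayer::tryFuse(cv::Ptr<cv::dnn::Layer> &top)
{
    cv::Mat scale, shift;
    top->getScaleShift(scale, shift);      // both empty if `top` has none
    if (scale.empty() && shift.empty())
        return false;
    // ... fold scale/shift into this layer's weights and bias ...
    return true;
}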

◆ tryQuantize()

virtual bool cv::dnn::Layer::tryQuantize ( const std::vector< std::vector< float > > &  scales,
const std::vector< std::vector< int > > &  zeropoints,
LayerParams params 
)
virtual

Tries to quantize the given layer and compute the quantization parameters required for fixed point implementation.

Parameters
[in]  scales      input and output scales.
[in]  zeropoints  input and output zeropoints.
[out] params      Quantized parameters required for fixed point implementation of that layer.
Returns
True if layer can be quantized.

◆ unsetAttached()

virtual void cv::dnn::Layer::unsetAttached ( )
virtual

"Detaches" all the layers, attached to particular layer.

◆ updateMemoryShapes()

virtual bool cv::dnn::Layer::updateMemoryShapes ( const std::vector< MatShape > &  inputs)
virtual

Member Data Documentation

◆ blobs

std::vector<Mat> cv::dnn::Layer::blobs

List of learned parameters; they must be stored here so that they can be read via Net::getParam().
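
For example (a hedged sketch; the model path and layer name are illustrative), the learned blobs of a layer can be read back from a loaded network:

cv::dnn::Net net = cv::dnn::readNet("model.onnx");
int layerId = net.getLayerId("conv1");
cv::Mat weights = net.getParam(layerId, 0);   // corresponds to Layer::blobs[0]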

◆ name

String cv::dnn::Layer::name

Name of the layer instance; can be used for logging or other internal purposes.

◆ preferableTarget

int cv::dnn::Layer::preferableTarget

Preferred target for layer forwarding.

◆ type

String cv::dnn::Layer::type

Type name that was used to create the layer by the layer factory.


The documentation for this class was generated from the following file: opencv2/dnn/dnn.hpp