Class Net


  • public class Net
    extends java.lang.Object
    This class allows creating and manipulating comprehensive artificial neural networks. A neural network is represented as a directed acyclic graph (DAG) whose vertices are Layer instances and whose edges specify the relationships between layer inputs and outputs. Each network layer has a unique integer id and a unique string name within its network. A LayerId can store either a layer name or a layer id. This class supports reference counting of its instances, i.e. copies point to the same instance.
    • Field Summary

      Fields 
      Modifier and Type Field Description
      protected long nativeObj  
    • Constructor Summary

      Constructors 
      Modifier Constructor Description
        Net()  
      protected Net​(long addr)  
    • Method Summary

      All Methods Static Methods Instance Methods Concrete Methods Deprecated Methods 
      Modifier and Type Method Description
      static Net __fromPtr__​(long addr)  
      void connect​(java.lang.String outPin, java.lang.String inpPin)
      Connects output of the first layer to input of the second layer.
      java.lang.String dump()
      Dump net to String
      void dumpToFile​(java.lang.String path)
      Dump net structure, hyperparameters, backend, target and fusion to a .dot file
      boolean empty()
      Returns true if there are no layers in the network.
      void enableFusion​(boolean fusion)
      Enables or disables layer fusion in the network.
      void enableWinograd​(boolean useWinograd)
      Enables or disables the Winograd compute branch.
      protected void finalize()  
      Mat forward()
      Runs a forward pass for the whole network.
      Mat forward​(java.lang.String outputName)
      Runs a forward pass to compute the output of the layer with name outputName.
      void forward​(java.util.List<Mat> outputBlobs)
      Runs a forward pass for the whole network.
      void forward​(java.util.List<Mat> outputBlobs, java.lang.String outputName)
      Runs a forward pass to compute the output of the layer with name outputName.
      void forward​(java.util.List<Mat> outputBlobs, java.util.List<java.lang.String> outBlobNames)
      Runs a forward pass to compute the outputs of the layers listed in outBlobNames.
      long getFLOPS​(int layerId, java.util.List<MatOfInt> netInputShapes)  
      long getFLOPS​(int layerId, MatOfInt netInputShape)  
      long getFLOPS​(java.util.List<MatOfInt> netInputShapes)
      Computes FLOPs for the whole loaded model with the specified input shapes.
      long getFLOPS​(MatOfInt netInputShape)  
      void getInputDetails​(MatOfFloat scales, MatOfInt zeropoints)
      Returns input scale and zeropoint for a quantized Net.
      Layer getLayer​(int layerId)
      Returns a pointer to the layer with the specified id or name used by the network.
      Layer getLayer​(java.lang.String layerName)
      Deprecated.
      Use int getLayerId(const String &layer)
      Layer getLayer​(DictValue layerId)
      Deprecated.
      to be removed
      int getLayerId​(java.lang.String layer)
      Converts string name of the layer to the integer identifier.
      java.util.List<java.lang.String> getLayerNames()  
      int getLayersCount​(java.lang.String layerType)
      Returns count of layers of specified type.
      void getLayerTypes​(java.util.List<java.lang.String> layersTypes)
      Returns the list of layer types used in the model.
      void getMemoryConsumption​(int layerId, java.util.List<MatOfInt> netInputShapes, long[] weights, long[] blobs)  
      void getMemoryConsumption​(int layerId, MatOfInt netInputShape, long[] weights, long[] blobs)  
      void getMemoryConsumption​(MatOfInt netInputShape, long[] weights, long[] blobs)  
      long getNativeObjAddr()  
      void getOutputDetails​(MatOfFloat scales, MatOfInt zeropoints)
      Returns output scale and zeropoint for a quantized Net.
      Mat getParam​(int layer)
      Returns parameter blob of the layer.
      Mat getParam​(int layer, int numParam)
      Returns parameter blob of the layer.
      Mat getParam​(java.lang.String layerName)  
      Mat getParam​(java.lang.String layerName, int numParam)  
      long getPerfProfile​(MatOfDouble timings)
      Returns overall time for inference and timings (in ticks) for layers.
      MatOfInt getUnconnectedOutLayers()
      Returns indexes of layers with unconnected outputs.
      java.util.List<java.lang.String> getUnconnectedOutLayersNames()
      Returns names of layers with unconnected outputs.
      Net quantize​(java.util.List<Mat> calibData, int inputsDtype, int outputsDtype)
      Returns a quantized Net from a floating-point Net.
      Net quantize​(java.util.List<Mat> calibData, int inputsDtype, int outputsDtype, boolean perChannel)
      Returns a quantized Net from a floating-point Net.
      static Net readFromModelOptimizer​(java.lang.String xml, java.lang.String bin)
      Create a network from Intel's Model Optimizer intermediate representation (IR).
      static Net readFromModelOptimizer​(MatOfByte bufferModelConfig, MatOfByte bufferWeights)
      Create a network from Intel's Model Optimizer in-memory buffers with intermediate representation (IR).
      void setHalideScheduler​(java.lang.String scheduler)
      Compile Halide layers.
      void setInput​(Mat blob)
      Sets the new input value for the network
      void setInput​(Mat blob, java.lang.String name)
      Sets the new input value for the network
      void setInput​(Mat blob, java.lang.String name, double scalefactor)
      Sets the new input value for the network
      void setInput​(Mat blob, java.lang.String name, double scalefactor, Scalar mean)
      Sets the new input value for the network
      void setInputShape​(java.lang.String inputName, MatOfInt shape)
      Specify shape of network input.
      void setInputsNames​(java.util.List<java.lang.String> inputBlobNames)
      Sets the output names of the network input pseudo layer.
      void setParam​(int layer, int numParam, Mat blob)
      Sets the new value for the learned param of the layer.
      void setParam​(java.lang.String layerName, int numParam, Mat blob)  
      void setPreferableBackend​(int backendId)
      Asks the network to use a specific computation backend where supported.
      void setPreferableTarget​(int targetId)
      Asks the network to run computations on a specific target device.
      • Methods inherited from class java.lang.Object

        clone, equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
    • Field Detail

      • nativeObj

        protected final long nativeObj
    • Constructor Detail

      • Net

        protected Net​(long addr)
      • Net

        public Net()
    • Method Detail

      • getNativeObjAddr

        public long getNativeObjAddr()
      • __fromPtr__

        public static Net __fromPtr__​(long addr)
      • readFromModelOptimizer

        public static Net readFromModelOptimizer​(java.lang.String xml,
                                                 java.lang.String bin)
        Create a network from Intel's Model Optimizer intermediate representation (IR).
        Parameters:
        xml - XML configuration file with network's topology.
        bin - Binary file with trained weights. Networks imported from Intel's Model Optimizer are launched in Intel's Inference Engine backend.
        Returns:
        automatically generated
      • readFromModelOptimizer

        public static Net readFromModelOptimizer​(MatOfByte bufferModelConfig,
                                                 MatOfByte bufferWeights)
        Create a network from Intel's Model Optimizer in-memory buffers with intermediate representation (IR).
        Parameters:
        bufferModelConfig - buffer with model's configuration.
        bufferWeights - buffer with model's trained weights.
        Returns:
        Net object.
      • empty

        public boolean empty()
        Returns true if there are no layers in the network.
        Returns:
        automatically generated
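        For orientation, a minimal sketch of constructing and checking a Net. It assumes the OpenCV Java bindings are on the classpath, the native library is loadable, and `model.onnx` is a hypothetical model path:

        ```java
        import org.opencv.core.Core;
        import org.opencv.dnn.Dnn;
        import org.opencv.dnn.Net;

        public class NetEmptyExample {
            public static void main(String[] args) {
                // The native library must be loaded before any OpenCV class is used.
                System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

                // A default-constructed Net contains no layers.
                Net net = new Net();
                System.out.println("empty: " + net.empty());

                // Reading a model populates the network (file path is hypothetical).
                Net loaded = Dnn.readNetFromONNX("model.onnx");
                System.out.println("loaded empty: " + loaded.empty());
            }
        }
        ```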
      • dump

        public java.lang.String dump()
        Dump net to String
        Returns:
        String with structure, hyperparameters, backend, target and fusion. Call this method after setInput(). To see the correct backend, target and fusion, call it after forward().
      • dumpToFile

        public void dumpToFile​(java.lang.String path)
        Dump net structure, hyperparameters, backend, target and fusion to a .dot file.
        Parameters:
        path - path to the output file with .dot extension. SEE: dump()
      • getLayerId

        public int getLayerId​(java.lang.String layer)
        Converts string name of the layer to the integer identifier.
        Parameters:
        layer - automatically generated
        Returns:
        id of the layer, or -1 if the layer wasn't found.
      • getLayerNames

        public java.util.List<java.lang.String> getLayerNames()
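        To illustrate the name/id mapping, a sketch that enumerates the layers of a loaded network and resolves each name back to its integer id (the model path is hypothetical):

        ```java
        import java.util.List;
        import org.opencv.core.Core;
        import org.opencv.dnn.Dnn;
        import org.opencv.dnn.Net;

        public class LayerLookupExample {
            public static void main(String[] args) {
                System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
                Net net = Dnn.readNetFromONNX("model.onnx"); // hypothetical model

                // Walk all layer names and resolve each back to its integer id.
                List<String> names = net.getLayerNames();
                for (String name : names) {
                    int id = net.getLayerId(name); // -1 if the layer is not found
                    System.out.println(id + " -> " + name);
                }
            }
        }
        ```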
      • getLayer

        public Layer getLayer​(int layerId)
        Returns a pointer to the layer with the specified id or name used by the network.
        Parameters:
        layerId - automatically generated
        Returns:
        automatically generated
      • getLayer

        @Deprecated
        public Layer getLayer​(java.lang.String layerName)
        Deprecated.
        Use int getLayerId(const String &layer)
        Parameters:
        layerName - automatically generated
        Returns:
        automatically generated
      • getLayer

        @Deprecated
        public Layer getLayer​(DictValue layerId)
        Deprecated.
        to be removed
        Parameters:
        layerId - automatically generated
        Returns:
        automatically generated
      • connect

        public void connect​(java.lang.String outPin,
                            java.lang.String inpPin)
        Connects output of the first layer to input of the second layer.
        Parameters:
        outPin - descriptor of the first layer output.
        inpPin - descriptor of the second layer input. Descriptors have the following template &lt;layer_name&gt;[.input_number]: the first part of the template, layer_name, is the string name of the added layer (if this part is empty, the network input pseudo layer is used); the second, optional part, input_number, is either the number of the layer input or its label (if this part is omitted, the first layer input is used). SEE: setNetInputs(), Layer::inputNameToIndex(), Layer::outputNameToIndex()
      • setInputsNames

        public void setInputsNames​(java.util.List<java.lang.String> inputBlobNames)
        Sets the output names of the network input pseudo layer. Each net always has its own special network input pseudo layer with id=0. This layer stores user blobs only and does not perform any computations; in fact, it provides the only way to pass user data into the network. Like any other layer, it can label its outputs, and this function provides an easy way to do so.
        Parameters:
        inputBlobNames - automatically generated
      • setInputShape

        public void setInputShape​(java.lang.String inputName,
                                  MatOfInt shape)
        Specify shape of network input.
        Parameters:
        inputName - automatically generated
        shape - automatically generated
      • forward

        public Mat forward​(java.lang.String outputName)
        Runs a forward pass to compute the output of the layer with name outputName.
        Parameters:
        outputName - name of the layer whose output is needed
        Returns:
        blob for the first output of the specified layer. By default, runs a forward pass for the whole network.
      • forward

        public Mat forward()
        Runs a forward pass for the whole network.
        Returns:
        blob for the first output of the network's default output layer.
      • forward

        public void forward​(java.util.List<Mat> outputBlobs,
                            java.lang.String outputName)
        Runs a forward pass to compute the output of the layer with name outputName.
        Parameters:
        outputBlobs - contains all output blobs for the specified layer.
        outputName - name of the layer whose output is needed. If outputName is empty, runs a forward pass for the whole network.
      • forward

        public void forward​(java.util.List<Mat> outputBlobs)
        Runs a forward pass for the whole network.
        Parameters:
        outputBlobs - contains all output blobs for the network's default output layer.
      • forward

        public void forward​(java.util.List<Mat> outputBlobs,
                            java.util.List<java.lang.String> outBlobNames)
        Runs a forward pass to compute the outputs of the layers listed in outBlobNames.
        Parameters:
        outputBlobs - contains blobs for the first outputs of the specified layers.
        outBlobNames - names of the layers whose outputs are needed
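        A typical inference flow tying these overloads together. The model and image paths are hypothetical, and the input size and normalization constants depend on the model:

        ```java
        import java.util.ArrayList;
        import java.util.List;
        import org.opencv.core.Core;
        import org.opencv.core.Mat;
        import org.opencv.core.Scalar;
        import org.opencv.core.Size;
        import org.opencv.dnn.Dnn;
        import org.opencv.dnn.Net;
        import org.opencv.imgcodecs.Imgcodecs;

        public class ForwardExample {
            public static void main(String[] args) {
                System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
                Net net = Dnn.readNetFromONNX("model.onnx");   // hypothetical model
                Mat image = Imgcodecs.imread("input.jpg");     // hypothetical image

                // Pack the image into a 4D NCHW blob and feed it to the network.
                Mat blob = Dnn.blobFromImage(image, 1.0 / 255.0, new Size(224, 224),
                        new Scalar(0, 0, 0), /*swapRB=*/true, /*crop=*/false);
                net.setInput(blob);

                // Single default output:
                Mat out = net.forward();

                // Outputs of several layers in one pass, e.g. all unconnected outputs:
                List<Mat> outs = new ArrayList<>();
                net.forward(outs, net.getUnconnectedOutLayersNames());
                System.out.println("collected outputs: " + outs.size());
            }
        }
        ```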
      • quantize

        public Net quantize​(java.util.List<Mat> calibData,
                            int inputsDtype,
                            int outputsDtype,
                            boolean perChannel)
        Returns a quantized Net from a floating-point Net.
        Parameters:
        calibData - Calibration data to compute the quantization parameters.
        inputsDtype - Datatype of quantized net's inputs. Can be CV_32F or CV_8S.
        outputsDtype - Datatype of quantized net's outputs. Can be CV_32F or CV_8S.
        perChannel - Quantization granularity of the quantized Net. The default is true, which quantizes the model per channel (channel-wise). Set it to false to quantize the model per tensor (tensor-wise).
        Returns:
        automatically generated
      • quantize

        public Net quantize​(java.util.List<Mat> calibData,
                            int inputsDtype,
                            int outputsDtype)
        Returns a quantized Net from a floating-point Net.
        Parameters:
        calibData - Calibration data to compute the quantization parameters.
        inputsDtype - Datatype of quantized net's inputs. Can be CV_32F or CV_8S.
        outputsDtype - Datatype of the quantized net's outputs. Can be CV_32F or CV_8S. This overload quantizes the model per channel (channel-wise).
        Returns:
        automatically generated
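        A sketch of quantizing a float model. The model path and input shape are hypothetical, and the dummy calibration blob stands in for what should be representative real inputs:

        ```java
        import java.util.ArrayList;
        import java.util.List;
        import org.opencv.core.Core;
        import org.opencv.core.CvType;
        import org.opencv.core.Mat;
        import org.opencv.core.MatOfFloat;
        import org.opencv.core.MatOfInt;
        import org.opencv.core.Scalar;
        import org.opencv.dnn.Dnn;
        import org.opencv.dnn.Net;

        public class QuantizeExample {
            public static void main(String[] args) {
                System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
                Net net = Dnn.readNetFromONNX("model.onnx"); // hypothetical float model

                // Calibration blobs: a few representative NCHW inputs (dummy here).
                Mat sample = new Mat(new int[]{1, 3, 224, 224}, CvType.CV_32F);
                sample.setTo(new Scalar(0.5));
                List<Mat> calibData = new ArrayList<>();
                calibData.add(sample);

                // Keep float inputs/outputs so existing pre/post-processing still works.
                Net qnet = net.quantize(calibData, CvType.CV_32F, CvType.CV_32F);

                // Inspect the chosen quantization parameters of the quantized net.
                MatOfFloat scales = new MatOfFloat();
                MatOfInt zeropoints = new MatOfInt();
                qnet.getInputDetails(scales, zeropoints);
                System.out.println("input scales: " + scales.dump());
            }
        }
        ```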
      • getInputDetails

        public void getInputDetails​(MatOfFloat scales,
                                    MatOfInt zeropoints)
        Returns input scale and zeropoint for a quantized Net.
        Parameters:
        scales - output parameter for returning input scales.
        zeropoints - output parameter for returning input zeropoints.
      • getOutputDetails

        public void getOutputDetails​(MatOfFloat scales,
                                     MatOfInt zeropoints)
        Returns output scale and zeropoint for a quantized Net.
        Parameters:
        scales - output parameter for returning output scales.
        zeropoints - output parameter for returning output zeropoints.
      • setHalideScheduler

        public void setHalideScheduler​(java.lang.String scheduler)
        Compile Halide layers.
        Parameters:
        scheduler - Path to a YAML file with scheduling directives. SEE: setPreferableBackend. Schedules layers that support the Halide backend, then compiles them for a specific target. For layers not represented in the scheduling file, or if no manual scheduling is used at all, automatic scheduling is applied.
      • setPreferableBackend

        public void setPreferableBackend​(int backendId)
        Asks the network to use a specific computation backend where supported.
        Parameters:
        backendId - backend identifier. SEE: Backend. If OpenCV is compiled with Intel's Inference Engine library, DNN_BACKEND_DEFAULT means DNN_BACKEND_INFERENCE_ENGINE; otherwise it equals DNN_BACKEND_OPENCV.
      • setPreferableTarget

        public void setPreferableTarget​(int targetId)
        Asks the network to run computations on a specific target device.
        Parameters:
        targetId - target identifier. SEE: Target
        List of supported backend / target combinations:

        |                        | DNN_BACKEND_OPENCV | DNN_BACKEND_INFERENCE_ENGINE | DNN_BACKEND_HALIDE | DNN_BACKEND_CUDA |
        |------------------------|--------------------|------------------------------|--------------------|------------------|
        | DNN_TARGET_CPU         | +                  | +                            | +                  |                  |
        | DNN_TARGET_OPENCL      | +                  | +                            | +                  |                  |
        | DNN_TARGET_OPENCL_FP16 | +                  | +                            |                    |                  |
        | DNN_TARGET_MYRIAD      |                    | +                            |                    |                  |
        | DNN_TARGET_FPGA        |                    | +                            |                    |                  |
        | DNN_TARGET_CUDA        |                    |                              |                    | +                |
        | DNN_TARGET_CUDA_FP16   |                    |                              |                    | +                |
        | DNN_TARGET_HDDL        |                    | +                            |                    |                  |
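        Selecting a backend/target pair can be sketched as follows (model path hypothetical; the CUDA pair is commented out because it requires a CUDA-enabled build):

        ```java
        import org.opencv.core.Core;
        import org.opencv.dnn.Dnn;
        import org.opencv.dnn.Net;

        public class BackendTargetExample {
            public static void main(String[] args) {
                System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
                Net net = Dnn.readNetFromONNX("model.onnx"); // hypothetical model

                // OpenCV's own backend on CPU: supported in every build.
                net.setPreferableBackend(Dnn.DNN_BACKEND_OPENCV);
                net.setPreferableTarget(Dnn.DNN_TARGET_CPU);

                // With a CUDA-enabled build, pick a matching pair instead:
                // net.setPreferableBackend(Dnn.DNN_BACKEND_CUDA);
                // net.setPreferableTarget(Dnn.DNN_TARGET_CUDA);
            }
        }
        ```

        Note that the backend and target must form a supported combination; an unsupported pair falls back or fails at inference time rather than at the call site.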
      • setInput

        public void setInput​(Mat blob,
                             java.lang.String name,
                             double scalefactor,
                             Scalar mean)
        Sets the new input value for the network.
        Parameters:
        blob - A new blob. Should have CV_32F or CV_8U depth.
        name - A name of input layer.
        scalefactor - An optional normalization scale.
        mean - Optional mean subtraction values. SEE: connect(String, String) for the format of the descriptor. If scale or mean values are specified, the final input blob is computed as: \(input(n,c,h,w) = scalefactor \times (blob(n,c,h,w) - mean_c)\)
      • setInput

        public void setInput​(Mat blob,
                             java.lang.String name,
                             double scalefactor)
        Sets the new input value for the network.
        Parameters:
        blob - A new blob. Should have CV_32F or CV_8U depth.
        name - A name of input layer.
        scalefactor - An optional normalization scale. SEE: connect(String, String) for the format of the descriptor. If scale or mean values are specified, the final input blob is computed as: \(input(n,c,h,w) = scalefactor \times (blob(n,c,h,w) - mean_c)\)
      • setInput

        public void setInput​(Mat blob,
                             java.lang.String name)
        Sets the new input value for the network.
        Parameters:
        blob - A new blob. Should have CV_32F or CV_8U depth.
        name - A name of input layer. SEE: connect(String, String) for the format of the descriptor. If scale or mean values are specified, the final input blob is computed as: \(input(n,c,h,w) = scalefactor \times (blob(n,c,h,w) - mean_c)\)
      • setInput

        public void setInput​(Mat blob)
        Sets the new input value for the network.
        Parameters:
        blob - A new blob. Should have CV_32F or CV_8U depth. SEE: connect(String, String) for the format of the descriptor. If scale or mean values are specified, the final input blob is computed as: \(input(n,c,h,w) = scalefactor \times (blob(n,c,h,w) - mean_c)\)
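        The scale/mean behavior can be sketched as follows. The input name "data" and the normalization constants are hypothetical and model-dependent:

        ```java
        import org.opencv.core.Core;
        import org.opencv.core.Mat;
        import org.opencv.core.Scalar;
        import org.opencv.core.Size;
        import org.opencv.dnn.Dnn;
        import org.opencv.dnn.Net;
        import org.opencv.imgcodecs.Imgcodecs;

        public class SetInputExample {
            public static void main(String[] args) {
                System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
                Net net = Dnn.readNetFromONNX("model.onnx");  // hypothetical model
                Mat image = Imgcodecs.imread("input.jpg");    // hypothetical image

                // Build a raw blob without normalization...
                Mat blob = Dnn.blobFromImage(image, 1.0, new Size(224, 224),
                        new Scalar(0, 0, 0), true, false);

                // ...and let setInput apply it instead:
                // input(n,c,h,w) = (1/255) * (blob(n,c,h,w) - mean_c)
                net.setInput(blob, "data", 1.0 / 255.0, new Scalar(104, 117, 123));
            }
        }
        ```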
      • setParam

        public void setParam​(int layer,
                             int numParam,
                             Mat blob)
        Sets the new value for the learned param of the layer.
        Parameters:
        layer - name or id of the layer.
        numParam - index of the layer parameter in the Layer::blobs array.
        blob - the new value. SEE: Layer::blobs. Note: if the shape of the new blob differs from the previous shape, the following forward pass may fail.
      • setParam

        public void setParam​(java.lang.String layerName,
                             int numParam,
                             Mat blob)
      • getParam

        public Mat getParam​(int layer,
                            int numParam)
        Returns parameter blob of the layer.
        Parameters:
        layer - name or id of the layer.
        numParam - index of the layer parameter in the Layer::blobs array. SEE: Layer::blobs
        Returns:
        automatically generated
      • getParam

        public Mat getParam​(int layer)
        Returns parameter blob of the layer.
        Parameters:
        layer - name or id of the layer. SEE: Layer::blobs
        Returns:
        automatically generated
      • getParam

        public Mat getParam​(java.lang.String layerName,
                            int numParam)
      • getParam

        public Mat getParam​(java.lang.String layerName)
      • getUnconnectedOutLayers

        public MatOfInt getUnconnectedOutLayers()
        Returns indexes of layers with unconnected outputs. FIXIT: Rework API to registerOutput() approach, deprecate this call
        Returns:
        automatically generated
      • getUnconnectedOutLayersNames

        public java.util.List<java.lang.String> getUnconnectedOutLayersNames()
        Returns names of layers with unconnected outputs. FIXIT: Rework API to registerOutput() approach, deprecate this call
        Returns:
        automatically generated
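        The two accessors are commonly used together when driving detection models. A sketch, assuming (as is typical) that ids and names are returned in the same order and the model path is hypothetical:

        ```java
        import java.util.List;
        import org.opencv.core.Core;
        import org.opencv.core.MatOfInt;
        import org.opencv.dnn.Dnn;
        import org.opencv.dnn.Net;

        public class OutLayersExample {
            public static void main(String[] args) {
                System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
                Net net = Dnn.readNetFromONNX("model.onnx"); // hypothetical model

                // Layers whose outputs nothing else consumes, i.e. the
                // network's effective outputs.
                MatOfInt outIds = net.getUnconnectedOutLayers();
                List<String> outNames = net.getUnconnectedOutLayersNames();
                int[] ids = outIds.toArray();
                for (int i = 0; i < ids.length; i++) {
                    System.out.println("output " + ids[i] + ": " + outNames.get(i));
                }
            }
        }
        ```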
      • getFLOPS

        public long getFLOPS​(java.util.List<MatOfInt> netInputShapes)
        Computes FLOPs for the whole loaded model with the specified input shapes.
        Parameters:
        netInputShapes - vector of shapes for all net inputs.
        Returns:
        computed FLOPs.
      • getFLOPS

        public long getFLOPS​(MatOfInt netInputShape)
      • getFLOPS

        public long getFLOPS​(int layerId,
                             java.util.List<MatOfInt> netInputShapes)
      • getFLOPS

        public long getFLOPS​(int layerId,
                             MatOfInt netInputShape)
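        Input shapes are passed as flat MatOfInt vectors. A sketch of estimating model complexity (model path and NCHW shape are hypothetical):

        ```java
        import org.opencv.core.Core;
        import org.opencv.core.MatOfInt;
        import org.opencv.dnn.Dnn;
        import org.opencv.dnn.Net;

        public class FlopsExample {
            public static void main(String[] args) {
                System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
                Net net = Dnn.readNetFromONNX("model.onnx"); // hypothetical model

                // One NCHW input shape, expressed as a flat int vector.
                MatOfInt shape = new MatOfInt(1, 3, 224, 224);
                long flops = net.getFLOPS(shape);
                System.out.println("estimated FLOPs: " + flops);
            }
        }
        ```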
      • getLayerTypes

        public void getLayerTypes​(java.util.List<java.lang.String> layersTypes)
        Returns the list of layer types used in the model.
        Parameters:
        layersTypes - output parameter for returning types.
      • getLayersCount

        public int getLayersCount​(java.lang.String layerType)
        Returns count of layers of specified type.
        Parameters:
        layerType - type.
        Returns:
        count of layers
      • getMemoryConsumption

        public void getMemoryConsumption​(MatOfInt netInputShape,
                                         long[] weights,
                                         long[] blobs)
      • getMemoryConsumption

        public void getMemoryConsumption​(int layerId,
                                         java.util.List<MatOfInt> netInputShapes,
                                         long[] weights,
                                         long[] blobs)
      • getMemoryConsumption

        public void getMemoryConsumption​(int layerId,
                                         MatOfInt netInputShape,
                                         long[] weights,
                                         long[] blobs)
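        The weights/blobs figures are returned through one-element long arrays acting as out-parameters. A sketch (model path and shape hypothetical):

        ```java
        import org.opencv.core.Core;
        import org.opencv.core.MatOfInt;
        import org.opencv.dnn.Dnn;
        import org.opencv.dnn.Net;

        public class MemoryExample {
            public static void main(String[] args) {
                System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
                Net net = Dnn.readNetFromONNX("model.onnx"); // hypothetical model

                // Out-parameters: byte counts for weights and intermediate blobs.
                long[] weights = new long[1];
                long[] blobs = new long[1];
                net.getMemoryConsumption(new MatOfInt(1, 3, 224, 224), weights, blobs);
                System.out.println("weights: " + weights[0] + " B, blobs: " + blobs[0] + " B");
            }
        }
        ```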
      • enableFusion

        public void enableFusion​(boolean fusion)
        Enables or disables layer fusion in the network.
        Parameters:
        fusion - true to enable the fusion, false to disable. The fusion is enabled by default.
      • enableWinograd

        public void enableWinograd​(boolean useWinograd)
        Enables or disables the Winograd compute branch. The Winograd compute branch can speed up 3x3 Convolution at a small loss of accuracy.
        Parameters:
        useWinograd - true to enable the Winograd compute branch. The default is true.
      • getPerfProfile

        public long getPerfProfile​(MatOfDouble timings)
        Returns overall time for inference and timings (in ticks) for layers. Indexes in the returned vector correspond to layer ids. Some layers can be fused with others; in that case a zero tick count is returned for the skipped layers. Supported by DNN_BACKEND_OPENCV on DNN_TARGET_CPU only.
        Parameters:
        timings - vector for tick timings for all layers.
        Returns:
        overall ticks for model inference.
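        Converting the returned ticks to milliseconds uses Core.getTickFrequency(). A sketch meant to run after a forward pass (model setup elided; path hypothetical):

        ```java
        import org.opencv.core.Core;
        import org.opencv.core.MatOfDouble;
        import org.opencv.dnn.Dnn;
        import org.opencv.dnn.Net;

        public class PerfProfileExample {
            public static void main(String[] args) {
                System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
                Net net = Dnn.readNetFromONNX("model.onnx"); // hypothetical model
                // ... setInput(...) and forward() would run here ...

                MatOfDouble timings = new MatOfDouble();
                long totalTicks = net.getPerfProfile(timings);

                // Ticks -> milliseconds; fused (skipped) layers report zero ticks.
                double ms = totalTicks * 1000.0 / Core.getTickFrequency();
                System.out.println("inference time: " + ms + " ms");
            }
        }
        ```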
      • finalize

        protected void finalize()
                         throws java.lang.Throwable
        Overrides:
        finalize in class java.lang.Object
        Throws:
        java.lang.Throwable