Package org.opencv.ml
Class ANN_MLP

java.lang.Object
  org.opencv.core.Algorithm
    org.opencv.ml.StatModel
      org.opencv.ml.ANN_MLP

public class ANN_MLP extends StatModel

Artificial Neural Networks - Multi-Layer Perceptrons.

Unlike many other models in ML that are constructed and trained at once, in the MLP model these steps are separated. First, a network with the specified topology is created using the non-default constructor or the method ANN_MLP::create. All the weights are set to zeros. Then, the network is trained using a set of input and output vectors. The training procedure can be repeated more than once, that is, the weights can be adjusted based on the new training data.

Additional flags for StatModel::train are available: ANN_MLP::TrainFlags.

SEE: REF: ml_intro_ann
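
The typical call sequence is sketched below. This is a minimal illustration rather than part of the reference: the class name AnnMlpSketch, the 2-8-1 topology, the parameter values, and the tiny XOR-style data set are all arbitrary choices.

    import org.opencv.core.Core;
    import org.opencv.core.CvType;
    import org.opencv.core.Mat;
    import org.opencv.core.MatOfInt;
    import org.opencv.core.TermCriteria;
    import org.opencv.ml.ANN_MLP;
    import org.opencv.ml.Ml;

    public class AnnMlpSketch {
        static { System.loadLibrary(Core.NATIVE_LIBRARY_NAME); }

        public static void main(String[] args) {
            // 1. Create an empty model and define its topology:
            //    2 input neurons, one hidden layer of 8 neurons, 1 output neuron.
            ANN_MLP ann = ANN_MLP.create();
            ann.setLayerSizes(new MatOfInt(2, 8, 1));

            // 2. Configure activation function, training method and termination criteria.
            ann.setActivationFunction(ANN_MLP.SIGMOID_SYM, 1.0, 1.0);
            ann.setTrainMethod(ANN_MLP.RPROP, 0.1, 1e-7);
            ann.setTermCriteria(new TermCriteria(
                    TermCriteria.MAX_ITER + TermCriteria.EPS, 1000, 1e-6));

            // 3. Train on a toy XOR-style set: 4 row samples, 2 features each, CV_32F.
            Mat samples = new Mat(4, 2, CvType.CV_32F);
            samples.put(0, 0, new float[]{0, 0,  0, 1,  1, 0,  1, 1});
            Mat responses = new Mat(4, 1, CvType.CV_32F);
            responses.put(0, 0, new float[]{0, 1, 1, 0});
            ann.train(samples, Ml.ROW_SAMPLE, responses);

            // 4. Predict on a single row sample.
            Mat input = new Mat(1, 2, CvType.CV_32F);
            input.put(0, 0, new float[]{1, 0});
            Mat output = new Mat();
            ann.predict(input, output, 0);
            System.out.println("prediction: " + output.dump());
        }
    }

Because training can be repeated, calling train again with new data adjusts the existing weights; the ANN_MLP::TrainFlags mentioned above (for example UPDATE_WEIGHTS) are passed through the flags argument of the TrainData-based StatModel::train overload.
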
Field Summary

Fields
Modifier and Type   Field
static int          ANNEAL
static int          BACKPROP
static int          GAUSSIAN
static int          IDENTITY
static int          LEAKYRELU
static int          NO_INPUT_SCALE
static int          NO_OUTPUT_SCALE
static int          RELU
static int          RPROP
static int          SIGMOID_SYM
static int          UPDATE_WEIGHTS

Fields inherited from class org.opencv.ml.StatModel
COMPRESSED_INPUT, PREPROCESSED_INPUT, RAW_OUTPUT, UPDATE_MODEL
 
Constructor Summary

Constructors
Modifier    Constructor
protected   ANN_MLP(long addr)

Method Summary

Modifier and Type   Method and Description
static ANN_MLP      __fromPtr__(long addr)
static ANN_MLP      create() : Creates empty model. Use StatModel::train to train the model, Algorithm::load<ANN_MLP>(filename) to load the pre-trained model.
protected void      finalize()
double              getAnnealCoolingRatio() : SEE: setAnnealCoolingRatio
double              getAnnealFinalT() : SEE: setAnnealFinalT
double              getAnnealInitialT() : SEE: setAnnealInitialT
int                 getAnnealItePerStep() : SEE: setAnnealItePerStep
double              getBackpropMomentumScale() : SEE: setBackpropMomentumScale
double              getBackpropWeightScale() : SEE: setBackpropWeightScale
Mat                 getLayerSizes() : Integer vector specifying the number of neurons in each layer including the input and output layers.
double              getRpropDW0() : SEE: setRpropDW0
double              getRpropDWMax() : SEE: setRpropDWMax
double              getRpropDWMin() : SEE: setRpropDWMin
double              getRpropDWMinus() : SEE: setRpropDWMinus
double              getRpropDWPlus() : SEE: setRpropDWPlus
TermCriteria        getTermCriteria() : SEE: setTermCriteria
int                 getTrainMethod() : Returns current training method
Mat                 getWeights(int layerIdx)
static ANN_MLP      load(java.lang.String filepath) : Loads and creates a serialized ANN from a file. Use ANN::save to serialize and store an ANN to disk.
void                setActivationFunction(int type) : Initialize the activation function for each neuron.
void                setActivationFunction(int type, double param1) : Initialize the activation function for each neuron.
void                setActivationFunction(int type, double param1, double param2) : Initialize the activation function for each neuron.
void                setAnnealCoolingRatio(double val) : SEE: getAnnealCoolingRatio
void                setAnnealFinalT(double val) : SEE: getAnnealFinalT
void                setAnnealInitialT(double val) : SEE: getAnnealInitialT
void                setAnnealItePerStep(int val) : SEE: getAnnealItePerStep
void                setBackpropMomentumScale(double val) : SEE: getBackpropMomentumScale
void                setBackpropWeightScale(double val) : SEE: getBackpropWeightScale
void                setLayerSizes(Mat _layer_sizes) : Integer vector specifying the number of neurons in each layer including the input and output layers.
void                setRpropDW0(double val) : SEE: getRpropDW0
void                setRpropDWMax(double val) : SEE: getRpropDWMax
void                setRpropDWMin(double val) : SEE: getRpropDWMin
void                setRpropDWMinus(double val) : SEE: getRpropDWMinus
void                setRpropDWPlus(double val) : SEE: getRpropDWPlus
void                setTermCriteria(TermCriteria val) : SEE: getTermCriteria
void                setTrainMethod(int method) : Sets training method and common parameters.
void                setTrainMethod(int method, double param1) : Sets training method and common parameters.
void                setTrainMethod(int method, double param1, double param2) : Sets training method and common parameters.

Methods inherited from class org.opencv.ml.StatModel
calcError, empty, getVarCount, isClassifier, isTrained, predict, predict, predict, train, train, train

Methods inherited from class org.opencv.core.Algorithm
clear, getDefaultName, getNativeObjAddr, save
 
Field Detail

BACKPROP
public static final int BACKPROP
See Also: Constant Field Values

RPROP
public static final int RPROP
See Also: Constant Field Values

ANNEAL
public static final int ANNEAL
See Also: Constant Field Values

IDENTITY
public static final int IDENTITY
See Also: Constant Field Values

SIGMOID_SYM
public static final int SIGMOID_SYM
See Also: Constant Field Values

GAUSSIAN
public static final int GAUSSIAN
See Also: Constant Field Values

RELU
public static final int RELU
See Also: Constant Field Values

LEAKYRELU
public static final int LEAKYRELU
See Also: Constant Field Values

UPDATE_WEIGHTS
public static final int UPDATE_WEIGHTS
See Also: Constant Field Values

NO_INPUT_SCALE
public static final int NO_INPUT_SCALE
See Also: Constant Field Values

NO_OUTPUT_SCALE
public static final int NO_OUTPUT_SCALE
See Also: Constant Field Values
 
Method Detail

__fromPtr__
public static ANN_MLP __fromPtr__(long addr)

getLayerSizes
public Mat getLayerSizes()
Integer vector specifying the number of neurons in each layer including the input and output layers. The very first element specifies the number of elements in the input layer; the last element is the number of elements in the output layer.
SEE: setLayerSizes
Returns: automatically generated
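
A small sketch of reading the topology back (assuming ann is an ANN_MLP whose layer sizes have already been set; the variable names are illustrative):

    // The returned Mat holds one integer per layer, so its total element
    // count is the number of layers, including the input and output layers.
    Mat sizes = ann.getLayerSizes();
    int numLayers = (int) sizes.total();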
 
getWeights
public Mat getWeights(int layerIdx)

create
public static ANN_MLP create()
Creates empty model. Use StatModel::train to train the model, Algorithm::load<ANN_MLP>(filename) to load the pre-trained model. Note that the train method has optional flags: ANN_MLP::TrainFlags.
Returns: automatically generated
 
load
public static ANN_MLP load(java.lang.String filepath)
Loads and creates a serialized ANN from a file. Use ANN::save to serialize and store an ANN to disk. Load the ANN from this file again by calling this function with the path to the file.
Parameters:
filepath - path to serialized ANN
Returns: automatically generated
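
A minimal round-trip sketch (the file name mlp.yml is hypothetical and ann is assumed to be a trained ANN_MLP):

    ann.save("mlp.yml");                         // Algorithm::save serializes the model to disk
    ANN_MLP restored = ANN_MLP.load("mlp.yml");  // recreate the model from the same file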
 
getTermCriteria
public TermCriteria getTermCriteria()
SEE: setTermCriteria
Returns: automatically generated

getAnnealCoolingRatio
public double getAnnealCoolingRatio()
SEE: setAnnealCoolingRatio
Returns: automatically generated

getAnnealFinalT
public double getAnnealFinalT()
SEE: setAnnealFinalT
Returns: automatically generated

getAnnealInitialT
public double getAnnealInitialT()
SEE: setAnnealInitialT
Returns: automatically generated

getBackpropMomentumScale
public double getBackpropMomentumScale()
SEE: setBackpropMomentumScale
Returns: automatically generated

getBackpropWeightScale
public double getBackpropWeightScale()
SEE: setBackpropWeightScale
Returns: automatically generated

getRpropDW0
public double getRpropDW0()
SEE: setRpropDW0
Returns: automatically generated

getRpropDWMax
public double getRpropDWMax()
SEE: setRpropDWMax
Returns: automatically generated

getRpropDWMin
public double getRpropDWMin()
SEE: setRpropDWMin
Returns: automatically generated

getRpropDWMinus
public double getRpropDWMinus()
SEE: setRpropDWMinus
Returns: automatically generated

getRpropDWPlus
public double getRpropDWPlus()
SEE: setRpropDWPlus
Returns: automatically generated

getAnnealItePerStep
public int getAnnealItePerStep()
SEE: setAnnealItePerStep
Returns: automatically generated

getTrainMethod
public int getTrainMethod()
Returns current training method
Returns: automatically generated
 
setActivationFunction
public void setActivationFunction(int type, double param1, double param2)
Initialize the activation function for each neuron. Currently the default and the only fully supported activation function is ANN_MLP::SIGMOID_SYM.
Parameters:
type - The type of activation function. See ANN_MLP::ActivationFunctions.
param1 - The first parameter of the activation function, \(\alpha\). Default value is 0.
param2 - The second parameter of the activation function, \(\beta\). Default value is 0.
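
For example, a minimal sketch (assuming ann is an ANN_MLP instance; the parameter values 1.0 are arbitrary illustration choices, not defaults):

    // Symmetrical sigmoid activation with alpha = 1.0 and beta = 1.0 (arbitrary values).
    ann.setActivationFunction(ANN_MLP.SIGMOID_SYM, 1.0, 1.0);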
 
setActivationFunction
public void setActivationFunction(int type, double param1)
Initialize the activation function for each neuron. Currently the default and the only fully supported activation function is ANN_MLP::SIGMOID_SYM.
Parameters:
type - The type of activation function. See ANN_MLP::ActivationFunctions.
param1 - The first parameter of the activation function, \(\alpha\). Default value is 0.

setActivationFunction
public void setActivationFunction(int type)
Initialize the activation function for each neuron. Currently the default and the only fully supported activation function is ANN_MLP::SIGMOID_SYM.
Parameters:
type - The type of activation function. See ANN_MLP::ActivationFunctions.
 
setAnnealCoolingRatio
public void setAnnealCoolingRatio(double val)
SEE: getAnnealCoolingRatio
Parameters:
val - automatically generated

setAnnealFinalT
public void setAnnealFinalT(double val)
SEE: getAnnealFinalT
Parameters:
val - automatically generated

setAnnealInitialT
public void setAnnealInitialT(double val)
SEE: getAnnealInitialT
Parameters:
val - automatically generated

setAnnealItePerStep
public void setAnnealItePerStep(int val)
SEE: getAnnealItePerStep
Parameters:
val - automatically generated

setBackpropMomentumScale
public void setBackpropMomentumScale(double val)
SEE: getBackpropMomentumScale
Parameters:
val - automatically generated

setBackpropWeightScale
public void setBackpropWeightScale(double val)
SEE: getBackpropWeightScale
Parameters:
val - automatically generated

setLayerSizes
public void setLayerSizes(Mat _layer_sizes)
Integer vector specifying the number of neurons in each layer including the input and output layers. The very first element specifies the number of elements in the input layer; the last element is the number of elements in the output layer. Default value is empty Mat.
SEE: getLayerSizes
Parameters:
_layer_sizes - automatically generated
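
A sketch of building the vector with MatOfInt from org.opencv.core (assuming ann is an ANN_MLP instance; the sizes are arbitrary):

    // 10 input neurons, hidden layers of 20 and 10 neurons, 3 output neurons.
    ann.setLayerSizes(new MatOfInt(10, 20, 10, 3));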
 
setRpropDW0
public void setRpropDW0(double val)
SEE: getRpropDW0
Parameters:
val - automatically generated

setRpropDWMax
public void setRpropDWMax(double val)
SEE: getRpropDWMax
Parameters:
val - automatically generated

setRpropDWMin
public void setRpropDWMin(double val)
SEE: getRpropDWMin
Parameters:
val - automatically generated

setRpropDWMinus
public void setRpropDWMinus(double val)
SEE: getRpropDWMinus
Parameters:
val - automatically generated

setRpropDWPlus
public void setRpropDWPlus(double val)
SEE: getRpropDWPlus
Parameters:
val - automatically generated

setTermCriteria
public void setTermCriteria(TermCriteria val)
SEE: getTermCriteria
Parameters:
val - automatically generated
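
A small construction sketch (assuming ann is an ANN_MLP instance; treating the criteria as training termination limits and the values 1000 and 1e-6 are illustration assumptions, not taken from this page):

    ann.setTermCriteria(new TermCriteria(TermCriteria.MAX_ITER + TermCriteria.EPS, 1000, 1e-6));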
 
setTrainMethod
public void setTrainMethod(int method, double param1, double param2)
Sets training method and common parameters.
Parameters:
method - Default value is ANN_MLP::RPROP. See ANN_MLP::TrainingMethods.
param1 - passed to setRpropDW0 for ANN_MLP::RPROP, to setBackpropWeightScale for ANN_MLP::BACKPROP, and to initialT for ANN_MLP::ANNEAL.
param2 - passed to setRpropDWMin for ANN_MLP::RPROP, to setBackpropMomentumScale for ANN_MLP::BACKPROP, and to finalT for ANN_MLP::ANNEAL.
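
For example, a sketch with arbitrary parameter values (assuming ann is an ANN_MLP instance):

    // RPROP: param1 goes to setRpropDW0, param2 to setRpropDWMin.
    ann.setTrainMethod(ANN_MLP.RPROP, 0.1, 1e-7);
    // BACKPROP: param1 goes to setBackpropWeightScale, param2 to setBackpropMomentumScale.
    ann.setTrainMethod(ANN_MLP.BACKPROP, 0.1, 0.1);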
 
setTrainMethod
public void setTrainMethod(int method, double param1)
Sets training method and common parameters.
Parameters:
method - Default value is ANN_MLP::RPROP. See ANN_MLP::TrainingMethods.
param1 - passed to setRpropDW0 for ANN_MLP::RPROP, to setBackpropWeightScale for ANN_MLP::BACKPROP, and to initialT for ANN_MLP::ANNEAL.

setTrainMethod
public void setTrainMethod(int method)
Sets training method and common parameters.
Parameters:
method - Default value is ANN_MLP::RPROP. See ANN_MLP::TrainingMethods.
 
 