org.opencv.ml
public class CvANN_MLP extends CvStatModel
MLP model.
Unlike many other models in ML that are constructed and trained at once, the MLP model separates these steps. First, a network with the specified topology is created using the non-default constructor or the method "CvANN_MLP.create". All the weights are set to zero. Then, the network is trained using a set of input and output vectors. The training procedure can be repeated more than once; that is, the weights can be adjusted based on new training data.
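The create/train/predict workflow described above might look like the following sketch in the Java bindings. The topology, XOR-style data, and parameter values here are illustrative assumptions, not part of the original reference; running it requires the OpenCV native library to be loaded.

```java
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.ml.CvANN_MLP;

public class MlpSketch {
    public static void main(String[] args) {
        // Topology: 2 inputs, one hidden layer of 4 neurons, 1 output.
        Mat layerSizes = new Mat(1, 3, CvType.CV_32SC1);
        layerSizes.put(0, 0, 2, 4, 1);

        // Create the network, then train it in a separate step.
        CvANN_MLP mlp = new CvANN_MLP();
        mlp.create(layerSizes, CvANN_MLP.SIGMOID_SYM, 0, 0);

        // One training sample per row; targets in [-1, 1] for SIGMOID_SYM.
        Mat inputs = new Mat(4, 2, CvType.CV_32FC1);
        inputs.put(0, 0, 0, 0, 0, 1, 1, 0, 1, 1);
        Mat outputs = new Mat(4, 1, CvType.CV_32FC1);
        outputs.put(0, 0, -1, 1, 1, -1);

        // Empty Mat for sampleWeights: all samples weighted equally.
        int iterations = mlp.train(inputs, outputs, new Mat());

        // Predict fills one response row per input row.
        Mat results = new Mat();
        mlp.predict(inputs, results);
    }
}
```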
Modifier and Type | Field and Description
---|---
static int | GAUSSIAN
static int | IDENTITY
static int | NO_INPUT_SCALE
static int | NO_OUTPUT_SCALE
static int | SIGMOID_SYM
static int | UPDATE_WEIGHTS
Constructor and Description |
---|
CvANN_MLP()
The constructors. |
CvANN_MLP(Mat layerSizes)
The constructors. |
CvANN_MLP(Mat layerSizes, int activateFunc, double fparam1, double fparam2)
The constructors. |
Modifier and Type | Method and Description |
---|---|
void | clear() |
void | create(Mat layerSizes)
Constructs MLP with the specified topology. |
void | create(Mat layerSizes, int activateFunc, double fparam1, double fparam2)
Constructs MLP with the specified topology. |
float | predict(Mat inputs, Mat outputs)
Predicts responses for input samples. |
int | train(Mat inputs, Mat outputs, Mat sampleWeights)
Trains/updates MLP. |
int | train(Mat inputs, Mat outputs, Mat sampleWeights, Mat sampleIdx, CvANN_MLP_TrainParams params, int flags)
Trains/updates MLP. |
Methods inherited from class CvStatModel: load, load, save, save
public static final int GAUSSIAN
public static final int IDENTITY
public static final int NO_INPUT_SCALE
public static final int NO_OUTPUT_SCALE
public static final int SIGMOID_SYM
public static final int UPDATE_WEIGHTS
public CvANN_MLP()
The constructors.
The advanced constructor allows creating an MLP with the specified topology. See "CvANN_MLP.create" for details.
public CvANN_MLP(Mat layerSizes)
The constructors.
The advanced constructor allows creating an MLP with the specified topology. See "CvANN_MLP.create" for details.
layerSizes - Integer vector specifying the number of neurons in each layer, including the input and output layers.

public CvANN_MLP(Mat layerSizes, int activateFunc, double fparam1, double fparam2)
The constructors.
The advanced constructor allows creating an MLP with the specified topology. See "CvANN_MLP.create" for details.
layerSizes - Integer vector specifying the number of neurons in each layer, including the input and output layers.
activateFunc - Parameter specifying the activation function for each neuron.
fparam1 - Free parameter of the activation function, alpha.
fparam2 - Free parameter of the activation function, beta.

public void clear()
public void create(Mat layerSizes)
Constructs MLP with the specified topology.
The method creates an MLP network with the specified topology and assigns the same activation function to all the neurons.
layerSizes - Integer vector specifying the number of neurons in each layer, including the input and output layers.

public void create(Mat layerSizes, int activateFunc, double fparam1, double fparam2)
Constructs MLP with the specified topology.
The method creates an MLP network with the specified topology and assigns the same activation function to all the neurons.
layerSizes - Integer vector specifying the number of neurons in each layer, including the input and output layers.
activateFunc - Parameter specifying the activation function for each neuron: one of CvANN_MLP.IDENTITY, CvANN_MLP.SIGMOID_SYM, and CvANN_MLP.GAUSSIAN.
fparam1 - Free parameter of the activation function, alpha. See the formulas in the introduction section.
fparam2 - Free parameter of the activation function, beta. See the formulas in the introduction section.

public float predict(Mat inputs, Mat outputs)
Predicts responses for input samples.
The method returns a dummy value which should be ignored.
If you are using the default CvANN_MLP.SIGMOID_SYM activation function with the default parameter values fparam1=0 and fparam2=0, then the function used is y = 1.7159*tanh(2/3 * x), so the output ranges over [-1.7159, 1.7159] instead of [0, 1].
inputs - Input samples.
outputs - Predicted responses for the corresponding samples.

public int train(Mat inputs, Mat outputs, Mat sampleWeights)
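The output range quoted above follows directly from the default symmetric sigmoid formula. A small self-contained check of that formula (plain Java, no OpenCV dependency; the class and method names are illustrative):

```java
public class SigmoidSym {
    // Default SIGMOID_SYM activation with fparam1=0, fparam2=0:
    // y = 1.7159 * tanh(2/3 * x).
    public static double f(double x) {
        return 1.7159 * Math.tanh(2.0 / 3.0 * x);
    }

    public static void main(String[] args) {
        // Outputs saturate toward +/-1.7159, not [0, 1].
        System.out.println(f(0.0));   // exactly 0.0
        System.out.println(f(10.0));  // close to 1.7159
        System.out.println(f(-10.0)); // close to -1.7159
    }
}
```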
Trains/updates MLP.
This method applies the specified training algorithm to computing/adjusting the network weights. It returns the number of iterations performed.
The RPROP training algorithm is parallelized with the TBB library.
If you are using the default CvANN_MLP.SIGMOID_SYM activation function, then the output should be in the range [-1, 1], instead of [0, 1], for optimal results.
inputs - Floating-point matrix of input vectors, one vector per row.
outputs - Floating-point matrix of the corresponding output vectors, one vector per row.
sampleWeights - (RPROP only) Optional floating-point vector of weights for each sample. Some samples may be more important than others for training. You may want to raise the weight of certain classes to find the right balance between hit-rate and false-alarm rate, and so on.

public int train(Mat inputs, Mat outputs, Mat sampleWeights, Mat sampleIdx, CvANN_MLP_TrainParams params, int flags)
Trains/updates MLP.
This method applies the specified training algorithm to computing/adjusting the network weights. It returns the number of iterations performed.
The RPROP training algorithm is parallelized with the TBB library.
If you are using the default CvANN_MLP.SIGMOID_SYM activation function, then the output should be in the range [-1, 1], instead of [0, 1], for optimal results.
inputs - Floating-point matrix of input vectors, one vector per row.
outputs - Floating-point matrix of the corresponding output vectors, one vector per row.
sampleWeights - (RPROP only) Optional floating-point vector of weights for each sample. Some samples may be more important than others for training. You may want to raise the weight of certain classes to find the right balance between hit-rate and false-alarm rate, and so on.
sampleIdx - Optional integer vector indicating the samples (rows of inputs and outputs) that are taken into account.
params - Training parameters. See the "CvANN_MLP_TrainParams" description.
flags - Various parameters to control the training algorithm. A combination of the following parameters is possible:
CvANN_MLP.UPDATE_WEIGHTS - Algorithm updates the network weights, rather than computes them from scratch.
CvANN_MLP.NO_INPUT_SCALE - Algorithm does not normalize the input vectors.
CvANN_MLP.NO_OUTPUT_SCALE - Algorithm does not normalize the output vectors.
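Incremental retraining with these flags might look like the following sketch. The topology, data, and default CvANN_MLP_TrainParams are illustrative assumptions, and running it requires the OpenCV native library to be loaded.

```java
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.ml.CvANN_MLP;
import org.opencv.ml.CvANN_MLP_TrainParams;

public class MlpUpdateSketch {
    public static void main(String[] args) {
        // 2 inputs, 4 hidden neurons, 1 output.
        Mat layerSizes = new Mat(1, 3, CvType.CV_32SC1);
        layerSizes.put(0, 0, 2, 4, 1);

        CvANN_MLP mlp = new CvANN_MLP();
        mlp.create(layerSizes, CvANN_MLP.SIGMOID_SYM, 0, 0);

        Mat inputs = new Mat(4, 2, CvType.CV_32FC1);
        inputs.put(0, 0, 0, 0, 0, 1, 1, 0, 1, 1);
        Mat outputs = new Mat(4, 1, CvType.CV_32FC1);
        outputs.put(0, 0, -1, 1, 1, -1);

        // Initial training from scratch (flags = 0); empty Mats stand in
        // for the optional sampleWeights and sampleIdx arguments.
        mlp.train(inputs, outputs, new Mat(), new Mat(),
                  new CvANN_MLP_TrainParams(), 0);

        // Later: adjust the existing weights with new data instead of
        // reinitializing them, and skip input/output normalization.
        int flags = CvANN_MLP.UPDATE_WEIGHTS
                  | CvANN_MLP.NO_INPUT_SCALE
                  | CvANN_MLP.NO_OUTPUT_SCALE;
        mlp.train(inputs, outputs, new Mat(), new Mat(),
                  new CvANN_MLP_TrainParams(), flags);
    }
}
```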