public class FaceRecognizer extends Algorithm
// Let's say we want to keep 10 Eigenfaces and have a threshold value of 10.0
int num_components = 10;
double threshold = 10.0;
// Then if you want to have a cv::FaceRecognizer with a confidence threshold,
// create the concrete implementation with the appropriate parameters:
Ptr<EigenFaceRecognizer> model = EigenFaceRecognizer::create(num_components, threshold);
Sometimes you cannot retrain the model and just want to experiment with threshold values. Because the
concrete recognizers expose their threshold through getter and setter methods, it is possible to adjust
it at runtime. Let's see how we would get/set the threshold of the Eigenfaces model we've created above:
// The following line reads the threshold from the Eigenfaces model:
double current_threshold = model->getThreshold();
// And this line sets the threshold to 0.0:
model->setThreshold(0.0);
If you've set the threshold to 0.0 as we did above, then:
//
Mat img = imread("person1/3.jpg", IMREAD_GRAYSCALE);
// Get a prediction from the model. Note: We've set a threshold of 0.0 above;
// since the distance is almost always larger than 0.0, you'll get -1 as the
// label, which indicates that this face is unknown
int predicted_label = model->predict(img);
// ...
is going to yield -1 as the predicted label, which indicates that this face is unknown.
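For the Java bindings documented on this page, the same experiment might look like the following sketch. It is only a sketch: it assumes the native library can be loaded, that the sample image person1/3.jpg from the C++ snippets exists, and that the model has been trained beforehand.
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.face.EigenFaceRecognizer;
import org.opencv.imgcodecs.Imgcodecs;

public class ThresholdDemo {
    public static void main(String[] args) {
        // Load the native OpenCV library (name/path depends on your setup).
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // Keep 10 Eigenfaces and start with a distance threshold of 10.0:
        EigenFaceRecognizer model = EigenFaceRecognizer.create(10, 10.0);

        // ... train the model here with model.train(images, labels) ...

        // Lower the threshold to 0.0 at runtime; almost every distance will
        // now exceed it, so the prediction comes back as -1 (unknown):
        model.setThreshold(0.0);

        Mat img = Imgcodecs.imread("person1/3.jpg", Imgcodecs.IMREAD_GRAYSCALE);
        int predictedLabel = model.predict_label(img);
        System.out.println("Predicted label: " + predictedLabel); // expected: -1
    }
}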
### Getting the name of a FaceRecognizer
Since every FaceRecognizer is an Algorithm, you can use Algorithm::getDefaultName to get the name of a
FaceRecognizer:
// Create a FaceRecognizer:
Ptr<FaceRecognizer> model = EigenFaceRecognizer::create();
// And here's how to get its name:
String name = model->getDefaultName();
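In the Java bindings the same role is played by the inherited Algorithm.getDefaultName(), listed in the inherited-methods summary below; a minimal sketch:
import org.opencv.core.Core;
import org.opencv.face.EigenFaceRecognizer;
import org.opencv.face.FaceRecognizer;

public class NameDemo {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // Create a FaceRecognizer:
        FaceRecognizer model = EigenFaceRecognizer.create();

        // And here's how to get its identifier string:
        String name = model.getDefaultName();
        System.out.println(name);
    }
}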
Modifier | Constructor and Description |
---|---|
protected | FaceRecognizer(long addr) |
Modifier and Type | Method and Description |
---|---|
static FaceRecognizer | __fromPtr__(long addr) |
protected void | finalize() |
String | getLabelInfo(int label) Gets string information by label. |
MatOfInt | getLabelsByString(String str) Gets vector of labels by string. |
void | predict_collect(Mat src, PredictCollector collector) If implemented, sends all prediction results to the collector for custom result handling. |
int | predict_label(Mat src) |
void | predict(Mat src, int[] label, double[] confidence) Predicts a label and associated confidence (e.g. distance) for a given input image. |
void | read(String filename) Loads a FaceRecognizer and its model state. |
void | setLabelInfo(int label, String strInfo) Sets string info for the specified model's label. |
void | train(List<Mat> src, Mat labels) Trains a FaceRecognizer with given data and associated labels. |
void | update(List<Mat> src, Mat labels) Updates a FaceRecognizer with given data and associated labels. |
void | write(String filename) Saves a FaceRecognizer and its model state. |

Methods inherited from class org.opencv.core.Algorithm: clear, empty, getDefaultName, getNativeObjAddr, save
public static FaceRecognizer __fromPtr__(long addr)

public String getLabelInfo(int label)
Gets string information by label.
Parameters:
label - automatically generated

public int predict_label(Mat src)

public MatOfInt getLabelsByString(String str)
Gets vector of labels by string.
Parameters:
str - automatically generated

public void predict_collect(Mat src, PredictCollector collector)
If implemented, sends all prediction results to the collector, which can be used for custom result handling.
Parameters:
src - Sample image to get a prediction from.
collector - User-defined collector object that accepts all results.
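A possible usage sketch through the Java bindings, assuming an already trained model and using StandardCollector (an assumption; any PredictCollector implementation works) to keep the best match:
import org.opencv.core.Mat;
import org.opencv.face.FaceRecognizer;
import org.opencv.face.StandardCollector;
import org.opencv.imgcodecs.Imgcodecs;

public class CollectDemo {
    static void collectPredictions(FaceRecognizer trainedModel) {
        Mat img = Imgcodecs.imread("person1/3.jpg", Imgcodecs.IMREAD_GRAYSCALE);

        // StandardCollector keeps track of the best (minimum-distance) match:
        StandardCollector collector = StandardCollector.create();
        trainedModel.predict_collect(img, collector);

        System.out.println("best label:    " + collector.getMinLabel());
        System.out.println("best distance: " + collector.getMinDist());
    }
}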
public void predict(Mat src, int[] label, double[] confidence)
Predicts a label and associated confidence (e.g. distance) for a given input image.
Parameters:
src - Sample image to get a prediction from.
label - The predicted label for the given image.
confidence - Associated confidence (e.g. distance) for the predicted label.
The suffix const means that prediction does not affect the internal model state, so the method can
be safely called from within different threads.
The following example shows how to get a prediction from a trained model:
using namespace cv;
// Do your initialization here (create the cv::FaceRecognizer model) ...
// ...
// Read in a sample image:
Mat img = imread("person1/3.jpg", IMREAD_GRAYSCALE);
// And get a prediction from the cv::FaceRecognizer:
int predicted = model->predict(img);
Or to get a prediction and the associated confidence (e.g. distance):
using namespace cv;
// Do your initialization here (create the cv::FaceRecognizer model) ...
// ...
Mat img = imread("person1/3.jpg", IMREAD_GRAYSCALE);
// Some variables for the predicted label and associated confidence (e.g. distance):
int predicted_label = -1;
double predicted_confidence = 0.0;
// Get the prediction and associated confidence from the model
model->predict(img, predicted_label, predicted_confidence);
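Through the Java bindings documented here, the output parameters are passed as single-element arrays; a minimal sketch, assuming a trained model:
import org.opencv.core.Mat;
import org.opencv.face.FaceRecognizer;
import org.opencv.imgcodecs.Imgcodecs;

public class PredictDemo {
    static void predictWithConfidence(FaceRecognizer trainedModel) {
        // Read in a sample image:
        Mat img = Imgcodecs.imread("person1/3.jpg", Imgcodecs.IMREAD_GRAYSCALE);

        // Output parameters are passed as single-element arrays:
        int[] predictedLabel = new int[1];
        double[] predictedConfidence = new double[1];

        // Get the prediction and associated confidence (e.g. distance):
        trainedModel.predict(img, predictedLabel, predictedConfidence);

        System.out.println("label = " + predictedLabel[0]
                + ", confidence = " + predictedConfidence[0]);
    }
}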
public void read(String filename)
Loads a FaceRecognizer and its model state.
Parameters:
filename - automatically generated

public void setLabelInfo(int label, String strInfo)
Sets string info for the specified model's label.
Parameters:
label - automatically generated
strInfo - automatically generated

public void train(List<Mat> src, Mat labels)
Trains a FaceRecognizer with given data and associated labels.
Parameters:
src - The training images, that is, the faces you want to learn. The data has to be given as a vector<Mat>.
labels - The labels corresponding to the images, given either as a vector<int> or a Mat of type CV_32SC1.
The following source code snippet shows you how to learn a Fisherfaces model on a given set of
images. The images are read with imread and pushed into a std::vector<Mat>. The labels of each
image are stored within a std::vector<int> (you could also use a Mat of type CV_32SC1). Think of
the label as the subject (the person) this image belongs to, so the same subjects (persons) should have
the same label. For the available FaceRecognizer implementations you don't have to pay attention to the
order of the labels; just make sure the same persons have the same label:
// holds images and labels
vector<Mat> images;
vector<int> labels;
// using Mat of type CV_32SC1
// Mat labels(number_of_samples, 1, CV_32SC1);
// images for first person
images.push_back(imread("person0/0.jpg", IMREAD_GRAYSCALE)); labels.push_back(0);
images.push_back(imread("person0/1.jpg", IMREAD_GRAYSCALE)); labels.push_back(0);
images.push_back(imread("person0/2.jpg", IMREAD_GRAYSCALE)); labels.push_back(0);
// images for second person
images.push_back(imread("person1/0.jpg", IMREAD_GRAYSCALE)); labels.push_back(1);
images.push_back(imread("person1/1.jpg", IMREAD_GRAYSCALE)); labels.push_back(1);
images.push_back(imread("person1/2.jpg", IMREAD_GRAYSCALE)); labels.push_back(1);
Now that you have read some images, we can create a new FaceRecognizer. In this example I'll create
a Fisherfaces model and decide to keep all of the possible Fisherfaces:
// Create a new Fisherfaces model and retain all available Fisherfaces,
// this is the most common usage of this specific FaceRecognizer:
//
Ptr<FaceRecognizer> model = FisherFaceRecognizer::create();
And finally train it on the given dataset (the face images and labels):
// This is the common interface to train all of the available cv::FaceRecognizer
// implementations:
//
model->train(images, labels);
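A corresponding sketch through the Java bindings, assuming the same hypothetical image paths and using MatOfInt (a CV_32SC1 Mat) for the labels:
import java.util.ArrayList;
import java.util.List;

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfInt;
import org.opencv.face.FisherFaceRecognizer;
import org.opencv.imgcodecs.Imgcodecs;

public class TrainDemo {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // Training images; the same person must get the same label.
        List<Mat> images = new ArrayList<>();
        for (String path : new String[] {
                "person0/0.jpg", "person0/1.jpg", "person0/2.jpg",
                "person1/0.jpg", "person1/1.jpg", "person1/2.jpg" }) {
            images.add(Imgcodecs.imread(path, Imgcodecs.IMREAD_GRAYSCALE));
        }
        // MatOfInt is a CV_32SC1 Mat, which is what train() expects for labels:
        MatOfInt labels = new MatOfInt(0, 0, 0, 1, 1, 1);

        // Create a Fisherfaces model, retaining all available Fisherfaces:
        FisherFaceRecognizer model = FisherFaceRecognizer.create();
        model.train(images, labels);
    }
}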
public void update(List<Mat> src, Mat labels)
Updates a FaceRecognizer with given data and associated labels.
Parameters:
src - The training images, that is, the faces you want to learn. The data has to be given as a vector<Mat>.
labels - The labels corresponding to the images, given either as a vector<int> or a Mat of type CV_32SC1.
This method updates a (possibly already trained) FaceRecognizer, but only if the algorithm supports it. The
Local Binary Patterns Histograms (LBPH) recognizer (see LBPHFaceRecognizer::create) can be updated.
For the Eigenfaces and Fisherfaces methods this is algorithmically not possible, and you have to
re-estimate the model with FaceRecognizer::train. In any case, a call to train empties the existing
model and learns a new model, while update does not delete any model data.
// Create a new LBPH model (it can be updated) and use the default parameters,
// this is the most common usage of this specific FaceRecognizer:
//
Ptr<FaceRecognizer> model = LBPHFaceRecognizer::create();
// This is the common interface to train all of the available cv::FaceRecognizer
// implementations:
//
model->train(images, labels);
// Some containers to hold the new images:
vector<Mat> newImages;
vector<int> newLabels;
// You should add some images to the containers:
//
// ...
//
// Now updating the model is as easy as calling:
model->update(newImages,newLabels);
// This will preserve the old model data and extend the existing model
// with the new features extracted from newImages!
Calling update on an Eigenfaces model (see EigenFaceRecognizer::create), which doesn't support
updating, will throw an error similar to:
OpenCV Error: The function/feature is not implemented (This FaceRecognizer (FaceRecognizer.Eigenfaces) does not support updating, you have to use FaceRecognizer::train to update it.) in update, file /home/philipp/git/opencv/modules/contrib/src/facerec.cpp, line 305
terminate called after throwing an instance of 'cv::Exception'
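For completeness, an equivalent update flow through the Java bindings might look like the following sketch, assuming the image and label containers have been filled as in the training example above:
import java.util.List;

import org.opencv.core.Mat;
import org.opencv.core.MatOfInt;
import org.opencv.face.LBPHFaceRecognizer;

public class UpdateDemo {
    static void trainThenUpdate(List<Mat> images, MatOfInt labels,
                                List<Mat> newImages, MatOfInt newLabels) {
        // LBPH supports incremental updates, unlike Eigen-/Fisherfaces:
        LBPHFaceRecognizer model = LBPHFaceRecognizer.create();
        model.train(images, labels);

        // Extends the existing model with features from newImages;
        // the previously learned model data is preserved:
        model.update(newImages, newLabels);
    }
}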
Note: The FaceRecognizer does not store your training images, because this would be very
memory intensive and it's not the responsibility of the FaceRecognizer to do so. The caller is
responsible for maintaining the dataset they want to work with.

public void write(String filename)
Saves a FaceRecognizer and its model state.
Parameters:
filename - The filename to store this FaceRecognizer to (either XML or YAML).
Every FaceRecognizer overrides FaceRecognizer::write(FileStorage& fs) to save the internal model
state. FaceRecognizer::write(const String& filename) saves the state of a model to the given
filename.
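A brief sketch of the save/load round trip through the Java bindings, assuming a trained model and write access to the hypothetical file face-model.yaml:
import org.opencv.face.FaceRecognizer;
import org.opencv.face.LBPHFaceRecognizer;

public class PersistDemo {
    static void saveAndReload(FaceRecognizer trainedModel) {
        // Serialize the model state to YAML (an .xml extension gives XML):
        trainedModel.write("face-model.yaml");

        // Later, restore the state into a fresh recognizer of the same type:
        FaceRecognizer restored = LBPHFaceRecognizer.create();
        restored.read("face-model.yaml");
    }
}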