Using the Facemark API

Goals

In this tutorial you will learn how to:

- create a Facemark object
- set a user defined face detector for the facemark algorithm
- train the algorithm
- use the trained model to detect facial landmarks from a given image

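All of the C++ snippets below assume that OpenCV was built with the face contrib module and that the following headers and namespaces are in scope (a typical preamble; adapt it to your project):

#include <opencv2/opencv.hpp>
#include <opencv2/face.hpp>

#include <iostream>
#include <vector>

using namespace cv;
using namespace cv::face;
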
Preparation

Before you continue with this tutorial, you should download a facial landmark detection dataset. We suggest downloading the Helen dataset, which can be retrieved at http://www.ifp.illinois.edu/~vuongle2/helen/ (caution: the algorithm requires around 9 GB of RAM to train on this dataset).

Make sure that the annotation format is supported by the API; the contents of an annotation file should look like the following snippet:

version: 1
n_points: 68
{
212.716603 499.771793
230.232816 566.290071
...
}
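
If you want to check that a single annotation file in this format can be parsed, its points can be read with cv::face::loadFacePoints. A minimal sketch (the file path is just an example from the lists below):

// read the landmark points of one annotation file (example path)
std::vector<Point2f> facial_points;
loadFacePoints("/home/user/helen/trainset/100032540_1.pts", facial_points);
std::cout << "loaded " << facial_points.size() << " points" << std::endl;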

The next thing to do is to create two text files containing the list of image files and annotation files, respectively. Make sure that the order of images and annotations in both files matches. Furthermore, it is advised to use absolute paths instead of relative paths. Example of creating the file lists on a Linux machine:

ls $PWD/trainset/*.jpg > images_train.txt
ls $PWD/trainset/*.pts > annotation_train.txt

Example of the content of images_train.txt:

/home/user/helen/trainset/100032540_1.jpg
/home/user/helen/trainset/100040721_1.jpg
/home/user/helen/trainset/100040721_2.jpg
/home/user/helen/trainset/1002681492_1.jpg

Example of the content of annotation_train.txt:

/home/user/helen/trainset/100032540_1.pts
/home/user/helen/trainset/100040721_1.pts
/home/user/helen/trainset/100040721_2.pts
/home/user/helen/trainset/1002681492_1.pts
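
Both lists can later be loaded and paired in a single call with cv::face::loadDatasetList. A minimal sketch (using the file names created above):

std::vector<String> images_train;
std::vector<String> landmarks_train;
// pair each image path with its annotation path; returns false if the lists cannot be read
if (!loadDatasetList("images_train.txt", "annotation_train.txt", images_train, landmarks_train))
    std::cout << "failed to load the dataset lists" << std::endl;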

Creating the facemark object

/* create the facemark instance */
FacemarkLBF::Params params;
params.model_filename = "helen.model"; // the trained model will be saved using this filename
// FacemarkTrain gives access to the training and face detector functions used below
Ptr<FacemarkTrain> facemark = FacemarkLBF::create(params);

Set a custom face detector function

First, you need to create your own face detector function. You might also need to create a struct to store custom parameters; alternatively, you can hard-code these parameters within the myDetector function.

struct Conf {
    cv::String model_path;
    double scaleFactor;
    Conf(cv::String s, double d){
        model_path = s;
        scaleFactor = d;
        face_detector.load(model_path);
    }
    CascadeClassifier face_detector;
};

// matches the FN_FaceDetector signature expected by setFaceDetector()
bool myDetector(InputArray image, OutputArray faces, void *userData){
    Conf *conf = (Conf*)userData;

    Mat gray;
    if (image.channels() > 1)
        cvtColor(image, gray, COLOR_BGR2GRAY);
    else
        gray = image.getMat().clone();
    equalizeHist(gray, gray);

    std::vector<Rect> faces_;
    conf->face_detector.detectMultiScale(gray, faces_, conf->scaleFactor, 2, CASCADE_SCALE_IMAGE, Size(30, 30));
    Mat(faces_).copyTo(faces);
    return true;
}

The following snippet demonstrates how to set the custom detector on the facemark object and use it to detect faces. Keep in mind that some facemark objects might use the face detector during the training process.

Conf config("../data/lbpcascade_frontalface.xml", 1.4);
facemark->setFaceDetector(myDetector, &config); // we must guarantee proper lifetime of "config" object

Here is the snippet for detecting faces using the user defined face detector function.

Mat img = imread("../data/himym3.jpg");
std::vector<cv::Rect> faces;
facemark->getFaces(img, faces); // the custom detector set above is used internally
for (size_t j = 0; j < faces.size(); j++) {
    cv::rectangle(img, faces[j], cv::Scalar(255, 0, 255));
}
imshow("result", img);
waitKey(0);

Training a facemark object

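To train the algorithm, every image and its annotation are added to the facemark object as a training sample, and training() is then called. A minimal sketch using the FacemarkTrain interface (reusing the images_train and landmarks_train vectors filled by loadDatasetList above):

// add the training samples one by one
Mat image;
std::vector<Point2f> facial_points;
for (size_t i = 0; i < images_train.size(); i++) {
    image = imread(images_train[i]);
    loadFacePoints(landmarks_train[i], facial_points);
    facemark->addTrainingSample(image, facial_points);
}

/* train the algorithm; the trained model is saved to params.model_filename ("helen.model") */
facemark->training();
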
Use the trained model to detect the facial landmarks from a given image.
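
Once a trained model file is available, it is loaded with loadModel() and the landmarks for each detected face are obtained with fit(). A minimal sketch that draws the result with cv::face::drawFacemarks (the image path is just an example, as above):

/* load the trained model */
facemark->loadModel("helen.model");

/* detect the faces using the custom detector set earlier */
Mat img = imread("../data/himym3.jpg");
std::vector<Rect> faces;
facemark->getFaces(img, faces);

/* fit the landmarks on every detected face and draw them */
std::vector<std::vector<Point2f> > landmarks;
if (facemark->fit(img, faces, landmarks)) {
    for (size_t i = 0; i < landmarks.size(); i++)
        drawFacemarks(img, landmarks[i], Scalar(0, 0, 255));
}
imshow("landmarks", img);
waitKey(0);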