OpenCV
Open Source Computer Vision
Meanshift and Camshift

Prev Tutorial: How to Use Background Subtraction Methods
Next Tutorial: Optical Flow

Goal

In this chapter,

  • We will learn about the Meanshift and Camshift algorithms to track objects in videos.

Meanshift

The intuition behind meanshift is simple. Suppose you have a set of points (it could be a pixel distribution such as a histogram backprojection). You are given a small window (possibly a circle) and you have to move that window to the area of maximum pixel density (or the maximum number of points). This is illustrated in the simple image below:

image

The initial window is shown as the blue circle named "C1". Its original center is marked by the blue rectangle named "C1_o". But if you find the centroid of the points inside that window, you will get the point "C1_r" (marked by a small blue circle), which is the actual centroid of the window's points. Clearly the two don't match. So move the window such that its center coincides with the previously computed centroid, then find the new centroid. Most probably, it won't match either. Keep iterating until the center of the window and its centroid fall on the same location (or within a small desired error). What you finally obtain is a window over the region of maximum pixel distribution. It is marked with a green circle named "C2". As you can see in the image, it contains the maximum number of points. The whole process is demonstrated on a static image below:

image

So we normally pass the histogram backprojected image and the initial target location. When the object moves, the movement is reflected in the histogram backprojected image. As a result, the meanshift algorithm moves our window to the new location of maximum density.

Meanshift in OpenCV

To use meanshift in OpenCV, we first need to set up the target and compute its histogram, so that on each frame we can backproject the target for the meanshift calculation. We also need to provide the initial location of the window. For the histogram, only the Hue channel is considered here. Also, to avoid false values due to low light, low-light pixels are discarded using the cv::inRange() function.

  • Code at glance:
    #include <iostream>
    #include <opencv2/imgcodecs.hpp>
    #include <opencv2/imgproc.hpp>
    #include <opencv2/videoio.hpp>
    #include <opencv2/highgui.hpp>
    #include <opencv2/video/tracking.hpp>
    
    using namespace cv;
    using namespace std;
    
    int main(int argc, char **argv)
    {
        const string about =
            "This sample demonstrates the meanshift algorithm.\n"
            "The example file can be downloaded from:\n"
            "  https://www.bogotobogo.com/python/OpenCV_Python/images/mean_shift_tracking/slow_traffic_small.mp4";
        const string keys =
            "{ h help | | print this help message }"
            "{ @image |<none>| path to image file }";
        CommandLineParser parser(argc, argv, keys);
        parser.about(about);
        if (parser.has("help"))
        {
            parser.printMessage();
            return 0;
        }
        string filename = parser.get<string>("@image");
        if (!parser.check())
        {
            parser.printErrors();
            return 0;
        }
        VideoCapture capture(filename);
        if (!capture.isOpened()){
            // error in opening the video input
            cerr << "Unable to open file!" << endl;
            return 0;
        }
        Mat frame, roi, hsv_roi, mask;
        // take first frame of the video
        capture >> frame;
        // setup initial location of window
        Rect track_window(300, 200, 100, 50); // simply hardcoded the values
        // set up the ROI for tracking
        roi = frame(track_window);
        cvtColor(roi, hsv_roi, COLOR_BGR2HSV);
        inRange(hsv_roi, Scalar(0, 60, 32), Scalar(180, 255, 255), mask);
        float range_[] = {0, 180};
        const float* range[] = {range_};
        Mat roi_hist;
        int histSize[] = {180};
        int channels[] = {0};
        calcHist(&hsv_roi, 1, channels, mask, roi_hist, 1, histSize, range);
        normalize(roi_hist, roi_hist, 0, 255, NORM_MINMAX);
        // Setup the termination criteria, either 10 iterations or move by at least 1 pt
        TermCriteria term_crit(TermCriteria::EPS | TermCriteria::COUNT, 10, 1);
        while(true){
            Mat hsv, dst;
            capture >> frame;
            if (frame.empty())
                break;
            cvtColor(frame, hsv, COLOR_BGR2HSV);
            calcBackProject(&hsv, 1, channels, roi_hist, dst, range);
            // apply meanshift to get the new location
            meanShift(dst, track_window, term_crit);
            // Draw it on image
            rectangle(frame, track_window, 255, 2);
            imshow("img2", frame);
            int keyboard = waitKey(30);
            if (keyboard == 'q' || keyboard == 27)
                break;
        }
    }

Three frames of the video I used are shown below:

image

Camshift

Did you closely watch the last result? There is a problem: our window always has the same size, whether the car is very far from or very close to the camera. That is not good. We need to adapt the window size to the size and rotation of the target. Once again, the solution came from "OpenCV Labs" and it is called CAMShift (Continuously Adaptive Meanshift), published by Gary Bradski in his 1998 paper "Computer Vision Face Tracking for Use in a Perceptual User Interface" [39].

It applies meanshift first. Once meanshift converges, it updates the size of the window as s = 2 × √(M00/256), where M00 is the zeroth moment of the back-projection inside the window. It also calculates the orientation of the best-fitting ellipse. Then it applies meanshift again with the newly scaled search window and the previous window location. The process continues until the required accuracy is met.

image

Camshift in OpenCV

It is similar to meanshift, but it returns a rotated rectangle (that is our result) along with box parameters (which are passed as the search window in the next iteration). See the code below:

  • Code at glance:
    #include <iostream>
    #include <opencv2/imgcodecs.hpp>
    #include <opencv2/imgproc.hpp>
    #include <opencv2/videoio.hpp>
    #include <opencv2/highgui.hpp>
    #include <opencv2/video/tracking.hpp>
    
    using namespace cv;
    using namespace std;
    
    int main(int argc, char **argv)
    {
        const string about =
            "This sample demonstrates the camshift algorithm.\n"
            "The example file can be downloaded from:\n"
            "  https://www.bogotobogo.com/python/OpenCV_Python/images/mean_shift_tracking/slow_traffic_small.mp4";
        const string keys =
            "{ h help | | print this help message }"
            "{ @image |<none>| path to image file }";
        CommandLineParser parser(argc, argv, keys);
        parser.about(about);
        if (parser.has("help"))
        {
            parser.printMessage();
            return 0;
        }
        string filename = parser.get<string>("@image");
        if (!parser.check())
        {
            parser.printErrors();
            return 0;
        }
        VideoCapture capture(filename);
        if (!capture.isOpened()){
            // error in opening the video input
            cerr << "Unable to open file!" << endl;
            return 0;
        }
        Mat frame, roi, hsv_roi, mask;
        // take first frame of the video
        capture >> frame;
        // setup initial location of window
        Rect track_window(300, 200, 100, 50); // simply hardcoded the values
        // set up the ROI for tracking
        roi = frame(track_window);
        cvtColor(roi, hsv_roi, COLOR_BGR2HSV);
        inRange(hsv_roi, Scalar(0, 60, 32), Scalar(180, 255, 255), mask);
        float range_[] = {0, 180};
        const float* range[] = {range_};
        Mat roi_hist;
        int histSize[] = {180};
        int channels[] = {0};
        calcHist(&hsv_roi, 1, channels, mask, roi_hist, 1, histSize, range);
        normalize(roi_hist, roi_hist, 0, 255, NORM_MINMAX);
        // Setup the termination criteria, either 10 iterations or move by at least 1 pt
        TermCriteria term_crit(TermCriteria::EPS | TermCriteria::COUNT, 10, 1);
        while(true){
            Mat hsv, dst;
            capture >> frame;
            if (frame.empty())
                break;
            cvtColor(frame, hsv, COLOR_BGR2HSV);
            calcBackProject(&hsv, 1, channels, roi_hist, dst, range);
            // apply camshift to get the new location
            RotatedRect rot_rect = CamShift(dst, track_window, term_crit);
            // Draw it on image
            Point2f points[4];
            rot_rect.points(points);
            for (int i = 0; i < 4; i++)
                line(frame, points[i], points[(i+1)%4], 255, 2);
            imshow("img2", frame);
            int keyboard = waitKey(30);
            if (keyboard == 'q' || keyboard == 27)
                break;
        }
    }

Three frames of the result are shown below:

image

Additional Resources

  1. French Wikipedia page on Camshift. (The two animations are taken from there)
  2. Bradski, G.R., "Real time face and object tracking as a component of a perceptual user interface," Proceedings of the Fourth IEEE Workshop on Applications of Computer Vision (WACV '98), pp. 214-219, 19-21 Oct 1998.

Exercises

  1. OpenCV comes with a Python sample for an interactive demo of camshift. Use it, hack it, understand it.