Common Interfaces of Feature Detectors

Feature detectors in OpenCV have wrappers with a common interface that enables you to easily switch between different algorithms that solve the same problem. All objects that implement keypoint detectors inherit the FeatureDetector interface.

Note

  • An example explaining keypoint detection can be found at opencv_source_code/samples/cpp/descriptor_extractor_matcher.cpp

FeatureDetector

class FeatureDetector : public Algorithm

Abstract base class for 2D image feature detectors.

class CV_EXPORTS FeatureDetector : public Algorithm
{
public:
    virtual ~FeatureDetector();

    void detect( InputArray image, vector<KeyPoint>& keypoints,
                 InputArray mask=noArray() ) const;

    void detect( InputArrayOfArrays images,
                 vector<vector<KeyPoint> >& keypoints,
                 InputArrayOfArrays masks=noArray() ) const;

    virtual void read(const FileNode&);
    virtual void write(FileStorage&) const;

    static Ptr<FeatureDetector> create( const String& detectorType );

protected:
...
};

FeatureDetector::detect

Detects keypoints in an image (first variant) or image set (second variant).

C++: void FeatureDetector::detect(InputArray image, vector<KeyPoint>& keypoints, InputArray mask=noArray() ) const
C++: void FeatureDetector::detect(InputArrayOfArrays images, vector<vector<KeyPoint>>& keypoints, InputArrayOfArrays masks=noArray() ) const
Python: cv2.FeatureDetector_create.detect(image[, mask]) → keypoints
Parameters:
  • image – Image.
  • images – Image set.
  • keypoints – The detected keypoints. In the second variant of the method keypoints[i] is a set of keypoints detected in images[i] .
  • mask – Mask specifying where to look for keypoints (optional). It must be an 8-bit integer matrix with non-zero values in the region of interest.
  • masks – Masks for each input image specifying where to look for keypoints (optional). masks[i] is a mask for images[i].
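
For example, the following sketch runs both variants of detect on a locally loaded image (the file name "building.jpg" and the "FAST" detector type are placeholders; any readable image and any supported detector type work the same way):

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/features2d/features2d.hpp>

#include <vector>
#include <iostream>

int main()
{
    cv::Mat image = cv::imread("building.jpg", 0);   // placeholder path, loaded as grayscale
    if( image.empty() )
        return -1;

    cv::Ptr<cv::FeatureDetector> detector = cv::FeatureDetector::create("FAST");
    if( detector.empty() )
        return -1;

    // First variant: a single image with an optional mask that restricts
    // detection to the left half of the image.
    cv::Mat mask(image.size(), CV_8UC1, cv::Scalar(0));
    cv::Mat roi(mask, cv::Rect(0, 0, image.cols/2, image.rows));
    roi = cv::Scalar(255);

    std::vector<cv::KeyPoint> keypoints;
    detector->detect(image, keypoints, mask);
    std::cout << "single image: " << keypoints.size() << " keypoints" << std::endl;

    // Second variant: an image set; keypoints[i] holds the keypoints of images[i].
    std::vector<cv::Mat> images(2, image);
    std::vector<std::vector<cv::KeyPoint> > keypointsPerImage;
    detector->detect(images, keypointsPerImage);
    std::cout << "image set: " << keypointsPerImage.size() << " keypoint vectors" << std::endl;

    return 0;
}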

FeatureDetector::create

Creates a feature detector by its name.

C++: Ptr<FeatureDetector> FeatureDetector::create(const String& detectorType)
Python: cv2.FeatureDetector_create(detectorType) → retval
Parameters:
  • detectorType – Feature detector type.

The following detector types are supported:

  • "FAST" – FastFeatureDetector
  • "STAR" – StarFeatureDetector
  • "SIFT" – SIFT (nonfree module)
  • "SURF" – SURF (nonfree module)
  • "ORB" – ORB
  • "BRISK" – BRISK
  • "MSER" – MSER
  • "GFTT" – GoodFeaturesToTrackDetector
  • "HARRIS" – GoodFeaturesToTrackDetector with the Harris detector enabled
  • "Dense" – DenseFeatureDetector
  • "SimpleBlob" – SimpleBlobDetector

A combined format is also supported: the detector name can be prefixed with "Grid" or "Pyramid" (for example, "GridFAST" or "PyramidSTAR") to wrap the detector in the corresponding adapter.
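
A short sketch of selecting detectors by name at run time ("GFTT" and the grid-adapted "GridFAST" follow the naming scheme above; availability depends on the modules built into your OpenCV):

#include <opencv2/features2d/features2d.hpp>
#include <iostream>

int main()
{
    // Plain detector selected by name at run time.
    cv::Ptr<cv::FeatureDetector> gftt = cv::FeatureDetector::create("GFTT");

    // Adapted detector: the "Grid" prefix partitions the image into a grid
    // and runs the wrapped detector ("FAST") independently in every cell.
    cv::Ptr<cv::FeatureDetector> gridFast = cv::FeatureDetector::create("GridFAST");

    // create() returns an empty pointer for an unknown name, so check before use.
    std::cout << "GFTT ok: " << !gftt.empty()
              << ", GridFAST ok: " << !gridFast.empty() << std::endl;
    return 0;
}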

FastFeatureDetector

class FastFeatureDetector : public FeatureDetector

Wrapping class for feature detection using the FAST() function.

class FastFeatureDetector : public FeatureDetector
{
public:
    FastFeatureDetector( int threshold=1, bool nonmaxSuppression=true, int type=FastFeatureDetector::TYPE_9_16 );
    virtual void read( const FileNode& fn );
    virtual void write( FileStorage& fs ) const;
protected:
    ...
};
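
A minimal sketch of constructing the wrapper directly (the file name and the threshold value 40 are illustration values, not defaults):

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/features2d/features2d.hpp>

#include <vector>

int main()
{
    cv::Mat image = cv::imread("scene.jpg", 0);   // placeholder path, loaded as grayscale
    if( image.empty() )
        return -1;

    // A higher threshold yields fewer, stronger corners; non-maximum suppression
    // keeps only local maxima of the FAST response.
    cv::FastFeatureDetector detector(40, true);

    std::vector<cv::KeyPoint> keypoints;
    detector.detect(image, keypoints);

    cv::Mat output;
    cv::drawKeypoints(image, keypoints, output);
    cv::imwrite("fast_keypoints.png", output);
    return 0;
}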

GoodFeaturesToTrackDetector

class GoodFeaturesToTrackDetector : public FeatureDetector

Wrapping class for feature detection using the goodFeaturesToTrack() function.

class GoodFeaturesToTrackDetector : public FeatureDetector
{
public:
    class Params
    {
    public:
        Params( int maxCorners=1000, double qualityLevel=0.01,
                double minDistance=1., int blockSize=3,
                bool useHarrisDetector=false, double k=0.04 );
        void read( const FileNode& fn );
        void write( FileStorage& fs ) const;

        int maxCorners;
        double qualityLevel;
        double minDistance;
        int blockSize;
        bool useHarrisDetector;
        double k;
    };

    GoodFeaturesToTrackDetector( const GoodFeaturesToTrackDetector::Params& params=
                                            GoodFeaturesToTrackDetector::Params() );
    GoodFeaturesToTrackDetector( int maxCorners, double qualityLevel,
                                 double minDistance, int blockSize=3,
                                 bool useHarrisDetector=false, double k=0.04 );
    virtual void read( const FileNode& fn );
    virtual void write( FileStorage& fs ) const;
protected:
    ...
};
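
A sketch using the parameter-wise constructor (the numbers are illustration values; they mirror the Params fields listed above):

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/features2d/features2d.hpp>

#include <vector>

int main()
{
    cv::Mat image = cv::imread("scene.jpg", 0);   // placeholder path, loaded as grayscale
    if( image.empty() )
        return -1;

    // At most 500 corners, quality level at 1% of the strongest response,
    // at least 10 pixels between corners; use the Harris measure (k=0.04)
    // instead of the minimum eigenvalue.
    cv::GoodFeaturesToTrackDetector detector(500, 0.01, 10, 3, true, 0.04);

    std::vector<cv::KeyPoint> corners;
    detector.detect(image, corners);
    return 0;
}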

MserFeatureDetector

class MserFeatureDetector : public FeatureDetector

Wrapping class for feature detection using the MSER class.

class MserFeatureDetector : public FeatureDetector
{
public:
    MserFeatureDetector( CvMSERParams params=cvMSERParams() );
    MserFeatureDetector( int delta, int minArea, int maxArea,
                         double maxVariation, double minDiversity,
                         int maxEvolution, double areaThreshold,
                         double minMargin, int edgeBlurSize );
    virtual void read( const FileNode& fn );
    virtual void write( FileStorage& fs ) const;
protected:
    ...
};
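
A sketch that obtains the detector through the named-creation path, which avoids depending on the exact constructor defaults (the file name is a placeholder):

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/features2d/features2d.hpp>

#include <vector>

int main()
{
    cv::Mat image = cv::imread("text.png", 0);   // placeholder path, loaded as grayscale
    if( image.empty() )
        return -1;

    // "MSER" is the detectorType string for this wrapper (see FeatureDetector::create above).
    cv::Ptr<cv::FeatureDetector> detector = cv::FeatureDetector::create("MSER");
    if( detector.empty() )
        return -1;

    std::vector<cv::KeyPoint> keypoints;
    detector->detect(image, keypoints);
    return 0;
}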

SimpleBlobDetector

class SimpleBlobDetector : public FeatureDetector

Class for extracting blobs from an image.

class SimpleBlobDetector : public FeatureDetector
{
public:
struct Params
{
    Params();
    float thresholdStep;
    float minThreshold;
    float maxThreshold;
    size_t minRepeatability;
    float minDistBetweenBlobs;

    bool filterByColor;
    uchar blobColor;

    bool filterByArea;
    float minArea, maxArea;

    bool filterByCircularity;
    float minCircularity, maxCircularity;

    bool filterByInertia;
    float minInertiaRatio, maxInertiaRatio;

    bool filterByConvexity;
    float minConvexity, maxConvexity;
};

SimpleBlobDetector(const SimpleBlobDetector::Params &parameters = SimpleBlobDetector::Params());

protected:
    ...
};

The class implements a simple algorithm for extracting blobs from an image:

  1. Convert the source image to binary images by applying thresholding with several thresholds from minThreshold (inclusive) to maxThreshold (exclusive) with distance thresholdStep between neighboring thresholds.
  2. Extract connected components from every binary image by findContours() and calculate their centers.
  3. Group centers from several binary images by their coordinates. Centers that lie close together form one group corresponding to one blob; the grouping distance is controlled by the minDistBetweenBlobs parameter.
  4. From the groups, estimate the final centers of blobs and their radii and return them as the locations and sizes of keypoints.

This class performs several filtering steps on the returned blobs. Set the corresponding filterBy* field to true or false to turn each filter on or off. Available filters:

  • By color. This filter compares the intensity of a binary image at the center of a blob to blobColor. If they differ, the blob is filtered out. Use blobColor = 0 to extract dark blobs and blobColor = 255 to extract light blobs.
  • By area. Extracted blobs have an area between minArea (inclusive) and maxArea (exclusive).
  • By circularity. Extracted blobs have circularity (\frac{4*\pi*Area}{perimeter * perimeter}) between minCircularity (inclusive) and maxCircularity (exclusive).
  • By ratio of the minimum inertia to maximum inertia. Extracted blobs have this ratio between minInertiaRatio (inclusive) and maxInertiaRatio (exclusive).
  • By convexity. Extracted blobs have convexity (area / area of blob convex hull) between minConvexity (inclusive) and maxConvexity (exclusive).

Default values of parameters are tuned to extract dark circular blobs.
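
For example, the following sketch overrides the defaults to look for light blobs in a given area range (the file name and the area limits are arbitrary illustration values):

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/features2d/features2d.hpp>

#include <vector>

int main()
{
    cv::Mat image = cv::imread("dots.png", 0);   // placeholder path, loaded as grayscale
    if( image.empty() )
        return -1;

    cv::SimpleBlobDetector::Params params;
    params.filterByColor = true;
    params.blobColor = 255;          // keep light blobs instead of the default dark ones
    params.filterByArea = true;
    params.minArea = 50.0f;          // reject blobs smaller than 50 pixels
    params.maxArea = 5000.0f;        // reject blobs of 5000 pixels or more
    params.filterByCircularity = false;

    cv::SimpleBlobDetector detector(params);

    std::vector<cv::KeyPoint> blobs;
    detector.detect(image, blobs);

    cv::Mat output;
    cv::drawKeypoints(image, blobs, output, cv::Scalar(0, 0, 255),
                      cv::DrawMatchesFlags::DRAW_RICH_KEYPOINTS);  // circles sized by blob radius
    cv::imwrite("blobs.png", output);
    return 0;
}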