Feature detectors in OpenCV have wrappers with a common interface that enables you to easily switch between different algorithms solving the same problem. All objects that implement keypoint detectors inherit the FeatureDetector interface.
KeyPoint is the data structure for salient point detectors. It has the following members:
- Point2f pt: coordinates of the keypoint
- float size: diameter of the meaningful keypoint neighborhood
- float angle: computed orientation of the keypoint (-1 if not applicable)
- float response: the response by which the strongest keypoints have been selected; can be used for further sorting or subsampling
- int octave: octave (pyramid layer) from which the keypoint has been extracted
- int class_id: object id that can be used to cluster keypoints by the object they belong to
KeyPoint also provides constructors (KeyPoint::KeyPoint) that initialize these fields.
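As noted for the response field, keypoints can be sorted or subsampled by strength. A minimal sketch of this (the detector choice, threshold, and image file name are placeholders, not part of the reference above):

#include <algorithm>
#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/highgui/highgui.hpp>

using namespace cv;

// Order keypoints so that the strongest responses come first.
static bool byResponse( const KeyPoint& a, const KeyPoint& b )
{
    return a.response > b.response;
}

int main()
{
    Mat img = imread( "image.png", 0 );   // placeholder file name, loaded as grayscale
    std::vector<KeyPoint> keypoints;

    FastFeatureDetector detector( 40 );   // any FeatureDetector works here
    detector.detect( img, keypoints );

    // Keep only the 100 strongest keypoints, using KeyPoint::response.
    std::sort( keypoints.begin(), keypoints.end(), byResponse );
    if( keypoints.size() > 100 )
        keypoints.resize( 100 );
    return 0;
}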
Abstract base class for 2D image feature detectors.
class CV_EXPORTS FeatureDetector
{
public:
virtual ~FeatureDetector();
void detect( const Mat& image, vector<KeyPoint>& keypoints,
const Mat& mask=Mat() ) const;
void detect( const vector<Mat>& images,
vector<vector<KeyPoint> >& keypoints,
const vector<Mat>& masks=vector<Mat>() ) const;
virtual void read(const FileNode&);
virtual void write(FileStorage&) const;
static Ptr<FeatureDetector> create( const string& detectorType );
protected:
...
};
FeatureDetector::detect detects keypoints in an image (first variant) or image set (second variant).
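A minimal sketch of both variants, called through the base-class interface (the image file names and the mask region are placeholders):

#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/highgui/highgui.hpp>

using namespace cv;

int main()
{
    // First variant: a single image, optionally restricted by a mask.
    Mat img = imread( "frame0.png", 0 );                        // placeholder file names
    Mat mask = Mat::zeros( img.size(), CV_8U );
    mask( Rect( 0, 0, img.cols/2, img.rows ) ).setTo( Scalar(255) );  // search only the left half

    Ptr<FeatureDetector> detector = FeatureDetector::create( "FAST" );
    std::vector<KeyPoint> keypoints;
    detector->detect( img, keypoints, mask );

    // Second variant: an image set with one keypoint vector per image.
    std::vector<Mat> images;
    images.push_back( img );
    images.push_back( imread( "frame1.png", 0 ) );
    std::vector<std::vector<KeyPoint> > allKeypoints;
    detector->detect( images, allKeypoints );
    return 0;
}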
FeatureDetector::read reads a feature detector object from a file node.
FeatureDetector::write writes a feature detector object to a file storage.
FeatureDetector::create creates a feature detector by its name.
The following detector types are supported:
- "FAST" – FastFeatureDetector
- "STAR" – StarFeatureDetector
- "SIFT" – SiftFeatureDetector
- "SURF" – SurfFeatureDetector
- "ORB" – OrbFeatureDetector
- "MSER" – MserFeatureDetector
- "GFTT" – GoodFeaturesToTrackDetector
- "HARRIS" – GoodFeaturesToTrackDetector with the Harris detector enabled
- "Dense" – DenseFeatureDetector
- "SimpleBlob" – SimpleBlobDetector
A combined format is also supported: a feature detector adapter name ( "Grid" – GridAdaptedFeatureDetector, "Pyramid" – PyramidAdaptedFeatureDetector ) followed by a feature detector name (see above), for example "GridFAST" or "PyramidSTAR".
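A short sketch of creating detectors by name, including the combined forms (the image file name is a placeholder):

#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/highgui/highgui.hpp>

using namespace cv;

int main()
{
    Mat img = imread( "scene.png", 0 );   // placeholder file name
    std::vector<KeyPoint> keypoints;

    // Plain FAST detector.
    Ptr<FeatureDetector> fast = FeatureDetector::create( "FAST" );
    fast->detect( img, keypoints );

    // FAST wrapped in a GridAdaptedFeatureDetector.
    Ptr<FeatureDetector> gridFast = FeatureDetector::create( "GridFAST" );
    gridFast->detect( img, keypoints );

    // STAR run over a Gaussian pyramid (PyramidAdaptedFeatureDetector).
    Ptr<FeatureDetector> pyramidStar = FeatureDetector::create( "PyramidSTAR" );
    pyramidStar->detect( img, keypoints );
    return 0;
}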
Wrapping class for feature detection using the FAST() method.
class FastFeatureDetector : public FeatureDetector
{
public:
FastFeatureDetector( int threshold=1, bool nonmaxSuppression=true );
virtual void read( const FileNode& fn );
virtual void write( FileStorage& fs ) const;
protected:
...
};
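A brief sketch showing how the two constructor parameters affect the result (the image file name and the threshold values are placeholders):

#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/highgui/highgui.hpp>

using namespace cv;

int main()
{
    Mat img = imread( "corridor.png", 0 );   // placeholder file name
    std::vector<KeyPoint> strongCorners, denseCorners;

    // Higher threshold with non-maximum suppression: fewer, stronger corners.
    FastFeatureDetector strict( 60, true );
    strict.detect( img, strongCorners );

    // Lower threshold without suppression: many clustered responses.
    FastFeatureDetector loose( 10, false );
    loose.detect( img, denseCorners );
    return 0;
}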
Wrapping class for feature detection using the goodFeaturesToTrack() function.
class GoodFeaturesToTrackDetector : public FeatureDetector
{
public:
class Params
{
public:
Params( int maxCorners=1000, double qualityLevel=0.01,
double minDistance=1., int blockSize=3,
bool useHarrisDetector=false, double k=0.04 );
void read( const FileNode& fn );
void write( FileStorage& fs ) const;
int maxCorners;
double qualityLevel;
double minDistance;
int blockSize;
bool useHarrisDetector;
double k;
};
GoodFeaturesToTrackDetector( const GoodFeaturesToTrackDetector::Params& params=
GoodFeaturesToTrackDetector::Params() );
GoodFeaturesToTrackDetector( int maxCorners, double qualityLevel,
double minDistance, int blockSize=3,
bool useHarrisDetector=false, double k=0.04 );
virtual void read( const FileNode& fn );
virtual void write( FileStorage& fs ) const;
protected:
...
};
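A short sketch constructing the detector with the default Shi-Tomasi measure and with the Harris measure (the image file name and parameter values are placeholders):

#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/highgui/highgui.hpp>

using namespace cv;

int main()
{
    Mat img = imread( "building.png", 0 );   // placeholder file name

    // Shi-Tomasi ("good features to track") corners: at most 500 corners,
    // quality level 0.01, at least 10 pixels between corners.
    GoodFeaturesToTrackDetector gftt( 500, 0.01, 10.0 );

    // The same detector switched to the Harris measure (k = 0.04).
    GoodFeaturesToTrackDetector harris( 500, 0.01, 10.0, 3, true, 0.04 );

    std::vector<KeyPoint> gfttKeypoints, harrisKeypoints;
    gftt.detect( img, gfttKeypoints );
    harris.detect( img, harrisKeypoints );
    return 0;
}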
Wrapping class for feature detection using the MSER class.
class MserFeatureDetector : public FeatureDetector
{
public:
MserFeatureDetector( CvMSERParams params=cvMSERParams() );
MserFeatureDetector( int delta, int minArea, int maxArea,
double maxVariation, double minDiversity,
int maxEvolution, double areaThreshold,
double minMargin, int edgeBlurSize );
virtual void read( const FileNode& fn );
virtual void write( FileStorage& fs ) const;
protected:
...
};
Wrapping class for feature detection using the StarDetector class.
class StarFeatureDetector : public FeatureDetector
{
public:
StarFeatureDetector( int maxSize=16, int responseThreshold=30,
int lineThresholdProjected = 10,
int lineThresholdBinarized=8, int suppressNonmaxSize=5 );
virtual void read( const FileNode& fn );
virtual void write( FileStorage& fs ) const;
protected:
...
};
Wrapping class for feature detection using the SIFT class.
class SiftFeatureDetector : public FeatureDetector
{
public:
SiftFeatureDetector(
const SIFT::DetectorParams& detectorParams=SIFT::DetectorParams(),
const SIFT::CommonParams& commonParams=SIFT::CommonParams() );
SiftFeatureDetector( double threshold, double edgeThreshold,
int nOctaves=SIFT::CommonParams::DEFAULT_NOCTAVES,
int nOctaveLayers=SIFT::CommonParams::DEFAULT_NOCTAVE_LAYERS,
int firstOctave=SIFT::CommonParams::DEFAULT_FIRST_OCTAVE,
int angleMode=SIFT::CommonParams::FIRST_ANGLE );
virtual void read( const FileNode& fn );
virtual void write( FileStorage& fs ) const;
protected:
...
};
Wrapping class for feature detection using the SURF class.
class SurfFeatureDetector : public FeatureDetector
{
public:
SurfFeatureDetector( double hessianThreshold = 400., int octaves = 3,
int octaveLayers = 4 );
virtual void read( const FileNode& fn );
virtual void write( FileStorage& fs ) const;
protected:
...
};
Wrapping class for feature detection using the ORB class.
class OrbFeatureDetector : public FeatureDetector
{
public:
OrbFeatureDetector( size_t n_features );
virtual void read( const FileNode& fn );
virtual void write( FileStorage& fs ) const;
protected:
...
};
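The SIFT, SURF, and ORB wrappers above are used the same way; a minimal sketch (the image file name is a placeholder, and depending on the OpenCV 2.x version SIFT and SURF may be declared in the nonfree module header rather than features2d.hpp):

#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/highgui/highgui.hpp>
// Note: in some OpenCV 2.x versions SIFT and SURF are declared in
// opencv2/nonfree/features2d.hpp instead of the header above.

using namespace cv;

int main()
{
    Mat img = imread( "graffiti.png", 0 );   // placeholder file name
    std::vector<KeyPoint> keypoints;

    SurfFeatureDetector surf( 400.0 );       // Hessian threshold
    surf.detect( img, keypoints );

    SiftFeatureDetector sift;                // default DetectorParams and CommonParams
    sift.detect( img, keypoints );

    OrbFeatureDetector orb( 500 );           // number of features to retain
    orb.detect( img, keypoints );
    return 0;
}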
Class for generation of image features which are distributed densely and regularly over the image.
class DenseFeatureDetector : public FeatureDetector
{
public:
DenseFeatureDetector( float initFeatureScale=1.f, int featureScaleLevels=1,
float featureScaleMul=0.1f,
int initXyStep=6, int initImgBound=0,
bool varyXyStepWithScale=true,
bool varyImgBoundWithScale=false );
protected:
...
};
The detector generates several levels (featureScaleLevels of them) of features. Features of each level are located at the nodes of a regular grid over the image (excluding an image boundary of the given size). The level parameters (the feature scale, the grid step, and the boundary size) are multiplied by featureScaleMul as the level index grows, depending on the varyXyStepWithScale and varyImgBoundWithScale flags, as illustrated in the sketch below.
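The following stand-alone sketch (an illustration, not the library code) shows how the per-level parameters evolve under the description above; the starting values match the constructor defaults, while featureScaleMul and the flags are example settings:

#include <cstdio>

// Illustration only: how the per-level grid parameters evolve. The scale is
// multiplied by featureScaleMul at every level; the step and the boundary are
// multiplied only when the corresponding vary* flag is set.
int main()
{
    const int   featureScaleLevels    = 3;
    const float featureScaleMul       = 0.5f;   // example value (the default is 0.1f)
    const bool  varyXyStepWithScale   = true;
    const bool  varyImgBoundWithScale = false;

    float scale = 1.f;   // initFeatureScale
    float step  = 6.f;   // initXyStep
    float bound = 0.f;   // initImgBound

    for( int level = 0; level < featureScaleLevels; level++ )
    {
        std::printf( "level %d: scale=%g step=%g bound=%g\n", level, scale, step, bound );
        scale *= featureScaleMul;
        if( varyXyStepWithScale )   step  *= featureScaleMul;
        if( varyImgBoundWithScale ) bound *= featureScaleMul;
    }
    return 0;
}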
Class for extracting blobs from an image.
class SimpleBlobDetector : public FeatureDetector
{
public:
struct Params
{
Params();
float thresholdStep;
float minThreshold;
float maxThreshold;
size_t minRepeatability;
float minDistBetweenBlobs;
bool filterByColor;
uchar blobColor;
bool filterByArea;
float minArea, maxArea;
bool filterByCircularity;
float minCircularity, maxCircularity;
bool filterByInertia;
float minInertiaRatio, maxInertiaRatio;
bool filterByConvexity;
float minConvexity, maxConvexity;
};
SimpleBlobDetector(const SimpleBlobDetector::Params &parameters = SimpleBlobDetector::Params());
protected:
...
};
The class implements a simple algorithm for extracting blobs from an image:
- Convert the source image to binary images by applying thresholding with several thresholds from minThreshold (inclusive) to maxThreshold (exclusive), with the distance thresholdStep between neighboring thresholds.
- Extract connected components from every binary image and calculate their centers.
- Group centers from several binary images by their coordinates. Close centers form one group that corresponds to one blob; this is controlled by the minDistBetweenBlobs parameter.
- From the groups, estimate the final centers of blobs and their radiuses and return them as locations and sizes of keypoints.
The class performs several filtering steps on the returned blobs. To turn a filter on or off, set the corresponding filterBy* flag to true or false. Available filters:
- By color. This filter compares the intensity of a binary image at the center of a blob to blobColor. If they differ, the blob is filtered out. Use blobColor = 0 to extract dark blobs and blobColor = 255 to extract light blobs.
- By area. Extracted blobs have an area between minArea (inclusive) and maxArea (exclusive).
- By circularity. Extracted blobs have circularity (4*pi*Area / perimeter^2) between minCircularity (inclusive) and maxCircularity (exclusive).
- By ratio of the minimum inertia to maximum inertia. Extracted blobs have this ratio between minInertiaRatio (inclusive) and maxInertiaRatio (exclusive).
- By convexity. Extracted blobs have convexity (area / area of blob convex hull) between minConvexity (inclusive) and maxConvexity (exclusive).
Default values of parameters are tuned to extract dark circular blobs.
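A short sketch that keeps the default dark-blob behaviour but adds an area filter (the image file name and filter values are placeholders):

#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/highgui/highgui.hpp>

using namespace cv;

int main()
{
    Mat img = imread( "dots.png", 0 );   // placeholder file name

    // Keep the default dark-blob behaviour, add an area filter, and
    // drop the circularity requirement.
    SimpleBlobDetector::Params params;
    params.filterByArea = true;
    params.minArea = 100.0f;             // inclusive lower bound
    params.maxArea = 5000.0f;            // exclusive upper bound
    params.filterByCircularity = false;

    SimpleBlobDetector detector( params );
    std::vector<KeyPoint> blobs;
    detector.detect( img, blobs );
    return 0;
}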
Class adapting a detector to partition the source image into a grid and detect points in each cell.
class GridAdaptedFeatureDetector : public FeatureDetector
{
public:
/*
* detector Detector that will be adapted.
* maxTotalKeypoints Maximum count of keypoints detected on the image.
* Only the strongest keypoints will be kept.
* gridRows Grid row count.
* gridCols Grid column count.
*/
GridAdaptedFeatureDetector( const Ptr<FeatureDetector>& detector,
int maxTotalKeypoints, int gridRows=4,
int gridCols=4 );
virtual void read( const FileNode& fn );
virtual void write( FileStorage& fs ) const;
protected:
...
};
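A minimal sketch wrapping FAST so that keypoints are distributed over a 4x4 grid (the image file name and parameter values are placeholders):

#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/highgui/highgui.hpp>

using namespace cv;

int main()
{
    Mat img = imread( "aerial.png", 0 );   // placeholder file name

    // Run FAST independently in every cell of a 4x4 grid and keep at most
    // 1000 keypoints in total, so detections are spread over the whole image.
    Ptr<FeatureDetector> gridDetector(
        new GridAdaptedFeatureDetector( new FastFeatureDetector( 20 ), 1000, 4, 4 ) );

    std::vector<KeyPoint> keypoints;
    gridDetector->detect( img, keypoints );
    return 0;
}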
Class adapting a detector to detect points over multiple levels of a Gaussian pyramid. Consider using this class for detectors that are not inherently scaled.
class PyramidAdaptedFeatureDetector : public FeatureDetector
{
public:
PyramidAdaptedFeatureDetector( const Ptr<FeatureDetector>& detector,
int levels=2 );
virtual void read( const FileNode& fn );
virtual void write( FileStorage& fs ) const;
protected:
...
};
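A minimal sketch running FAST over a small Gaussian pyramid (the image file name and the number of levels are placeholders):

#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/highgui/highgui.hpp>

using namespace cv;

int main()
{
    Mat img = imread( "scene.png", 0 );   // placeholder file name

    // FAST has no scale selection of its own; running it over a 3-level
    // Gaussian pyramid produces keypoints at several scales.
    Ptr<FeatureDetector> pyramidFast(
        new PyramidAdaptedFeatureDetector( new FastFeatureDetector( 20 ), 3 ) );

    std::vector<KeyPoint> keypoints;
    pyramidFast->detect( img, keypoints );
    return 0;
}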
Adaptively adjusting detector that iteratively detects features until the desired number is found.
class DynamicAdaptedFeatureDetector: public FeatureDetector
{
public:
DynamicAdaptedFeatureDetector( const Ptr<AdjusterAdapter>& adjuster,
int min_features=400, int max_features=500, int max_iters=5 );
...
};
If the detector is persisted, it “remembers” the parameters used for the last detection. In this case, the detector may be used for consistent numbers of keypoints in a set of temporally related images, such as video streams or panorama series.
DynamicAdaptedFeatureDetector uses another detector, such as FAST or SURF, to do the dirty work, with the help of AdjusterAdapter. If the detected number of features is outside the desired range, AdjusterAdapter adjusts the detection parameters so that the next detection produces a larger or smaller number of features. This is repeated until either the desired number of features is found or the parameters are maxed out.
Adapters can be easily implemented for any detector via the AdjusterAdapter interface.
Beware that this is not thread-safe since the adjustment of parameters requires modification of the feature detector class instance.
Example of creating DynamicAdaptedFeatureDetector:
//sample usage:
//will create a detector that attempts to find
//100 - 110 FAST Keypoints, and will at most run
//FAST feature detection 10 times until that
//number of keypoints is found
Ptr<FeatureDetector> detector(new DynamicAdaptedFeatureDetector(new FastAdjuster(20,true),
                                                                100, 110, 10));
The DynamicAdaptedFeatureDetector constructor takes the AdjusterAdapter that detects features and adjusts the parameters, the minimum and maximum desired number of features, and the maximum number of adjustment iterations (max_iters).
Class providing an interface for adjusting parameters of a feature detector. This interface is used by DynamicAdaptedFeatureDetector . It is a wrapper for FeatureDetector that enables adjusting parameters after feature detection.
class AdjusterAdapter: public FeatureDetector
{
public:
virtual ~AdjusterAdapter() {}
virtual void tooFew(int min, int n_detected) = 0;
virtual void tooMany(int max, int n_detected) = 0;
virtual bool good() const = 0;
virtual Ptr<AdjusterAdapter> clone() const = 0;
static Ptr<AdjusterAdapter> create( const string& detectorType );
};
See FastAdjuster, StarAdjuster, and SurfAdjuster for concrete implementations.
AdjusterAdapter::tooFew adjusts the detector parameters to detect more features. It is called with the minimum desired number of features (min) and the number of features detected in the last run (n_detected).
Example:
void FastAdjuster::tooFew(int min, int n_detected)
{
thresh_--;
}
AdjusterAdapter::tooMany adjusts the detector parameters to detect fewer features. It is called with the maximum desired number of features (max) and the number of features detected in the last run (n_detected).
Example:
void FastAdjuster::tooMany(int max, int n_detected)
{
thresh_++;
}
AdjusterAdapter::good returns false if the detector parameters cannot be adjusted any further.
Example:
bool FastAdjuster::good() const
{
return (thresh_ > 1) && (thresh_ < 200);
}
AdjusterAdapter::create creates an adjuster adapter by name. The detectorType name is the same as in FeatureDetector::create(), but only "FAST", "STAR", and "SURF" are currently supported.
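A short sketch creating an adjuster by name and plugging it into DynamicAdaptedFeatureDetector (the image file name and the target feature counts are placeholders):

#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/highgui/highgui.hpp>

using namespace cv;

int main()
{
    Mat img = imread( "frame.png", 0 );   // placeholder file name

    // Build an adjuster by name and let DynamicAdaptedFeatureDetector iterate
    // until roughly 400-500 SURF keypoints are found (at most 5 iterations).
    Ptr<AdjusterAdapter> adjuster = AdjusterAdapter::create( "SURF" );
    Ptr<FeatureDetector> detector(
        new DynamicAdaptedFeatureDetector( adjuster, 400, 500, 5 ) );

    std::vector<KeyPoint> keypoints;
    detector->detect( img, keypoints );
    return 0;
}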
AdjusterAdapter for FastFeatureDetector. This class decreases or increases the threshold value by 1.
class FastAdjuster: public AdjusterAdapter
{
public:
FastAdjuster(int init_thresh = 20, bool nonmax = true);
...
};
AdjusterAdapter for StarFeatureDetector. This class adjusts the responseThreshold of StarFeatureDetector.
class StarAdjuster: public AdjusterAdapter
{
public:
StarAdjuster(double initial_thresh = 30.0);
...
};
AdjusterAdapter for SurfFeatureDetector. This class adjusts the hessianThreshold of SurfFeatureDetector.
class SurfAdjuster: public AdjusterAdapter
{
public:
SurfAdjuster();
...
};