Feature Detection and Description

FAST

Detects corners using the FAST algorithm

C++: void FAST(const Mat& image, vector<KeyPoint>& keypoints, int threshold, bool nonmaxSupression=true )
Parameters:
  • image – Image where keypoints (corners) are detected.
  • keypoints – Keypoints detected on the image.
  • threshold – Threshold on difference between intensity of the central pixel and pixels on a circle around this pixel. See the algorithm description below.
  • nonmaxSupression – If it is true, non-maximum suppression is applied to the detected corners (keypoints).

Detects corners using the FAST algorithm by E. Rosten (Machine Learning for High-speed Corner Detection, 2006).
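The intensity test above can be sketched in plain C++: a candidate pixel passes when some contiguous arc of the 16 pixels on the surrounding Bresenham circle (at least 12 in this sketch; the paper also uses arcs of 9) is uniformly brighter or darker than the center by more than threshold. This is an illustrative reimplementation, not the OpenCV code:

```cpp
#include <array>

// Illustrative FAST segment test (not the OpenCV implementation).
// `circle` holds the intensities of the 16 pixels on the Bresenham circle
// of radius 3 around the candidate pixel, in clockwise order.
bool fastSegmentTest(int center, const std::array<int, 16>& circle,
                     int threshold, int arc_len = 12)
{
    for (int sign = -1; sign <= 1; sign += 2)  // darker arc, then brighter arc
    {
        int run = 0;
        // Scan the circle twice so arcs that wrap past index 15 are found.
        for (int k = 0; k < 32; ++k)
        {
            int diff = sign * (circle[k % 16] - center);
            run = (diff > threshold) ? run + 1 : 0;
            if (run >= arc_len)
                return true;  // found a long enough uniform arc: corner
        }
    }
    return false;
}
```

Non-maximum suppression (the nonmaxSupression flag) would then keep only the strongest corner within each neighborhood.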

MSER

class MSER

Maximally stable extremal region extractor.

class MSER : public CvMSERParams
{
public:
    // default constructor
    MSER();
    // constructor that initializes all the algorithm parameters
    MSER( int _delta, int _min_area, int _max_area,
          float _max_variation, float _min_diversity,
          int _max_evolution, double _area_threshold,
          double _min_margin, int _edge_blur_size );
    // runs the extractor on the specified image; returns the MSERs,
    // each encoded as a contour (vector<Point>, see findContours)
    // the optional mask marks the area where MSERs are searched for
    void operator()( const Mat& image, vector<vector<Point> >& msers, const Mat& mask ) const;
};

The class encapsulates all the parameters of the MSER extraction algorithm (see http://en.wikipedia.org/wiki/Maximally_stable_extremal_regions). Also see http://opencv.willowgarage.com/wiki/documentation/cpp/features2d/MSER for useful comments and parameter descriptions.

StarDetector

class StarDetector

Class implementing the Star keypoint detector, a modified version of the CenSurE keypoint detector described in [Agrawal08].

[Agrawal08] Agrawal, M., Konolige, K., and Blas, M.R. “CenSurE: Center Surround Extremas for Realtime Feature Detection and Matching”, ECCV08, 2008

StarDetector::StarDetector

The Star Detector constructor

C++: StarDetector::StarDetector()
C++: StarDetector::StarDetector(int maxSize, int responseThreshold, int lineThresholdProjected, int lineThresholdBinarized, int suppressNonmaxSize)
Python: cv2.StarDetector(maxSize, responseThreshold, lineThresholdProjected, lineThresholdBinarized, suppressNonmaxSize) → <StarDetector object>
Parameters:
  • maxSize – maximum size of the features. The following values are supported: 4, 6, 8, 11, 12, 16, 22, 23, 32, 45, 46, 64, 90, 128. In the case of a different value the result is undefined.
  • responseThreshold – threshold for the approximated Laplacian, used to eliminate weak features. The larger it is, the fewer features will be retrieved.
  • lineThresholdProjected – another threshold for the Laplacian to eliminate edges.
  • lineThresholdBinarized – yet another threshold for the feature size to eliminate edges. The larger the threshold, the more points you get.

StarDetector::operator()

Finds keypoints in an image

C++: void StarDetector::operator()(const Mat& image, vector<KeyPoint>& keypoints)
Python: cv2.StarDetector.detect(image) → keypoints
C: CvSeq* cvGetStarKeypoints(const CvArr* image, CvMemStorage* storage, CvStarDetectorParams params=cvStarDetectorParams() )
Python: cv.GetStarKeypoints(image, storage, params) → keypoints
Parameters:
  • image – The input 8-bit grayscale image
  • keypoints – The output vector of keypoints
  • storage – The memory storage used to store the keypoints (OpenCV 1.x API only)
  • params – The algorithm parameters stored in CvStarDetectorParams (OpenCV 1.x API only)

SIFT

class SIFT

Class for extracting keypoints and computing descriptors using the Scale Invariant Feature Transform (SIFT) approach.

class CV_EXPORTS SIFT
{
public:
    struct CommonParams
    {
        static const int DEFAULT_NOCTAVES = 4;
        static const int DEFAULT_NOCTAVE_LAYERS = 3;
        static const int DEFAULT_FIRST_OCTAVE = -1;
        enum{ FIRST_ANGLE = 0, AVERAGE_ANGLE = 1 };

        CommonParams();
        CommonParams( int _nOctaves, int _nOctaveLayers, int _firstOctave,
                                          int _angleMode );
        int nOctaves, nOctaveLayers, firstOctave;
        int angleMode;
    };

    struct DetectorParams
    {
        static double GET_DEFAULT_THRESHOLD()
          { return 0.04 / SIFT::CommonParams::DEFAULT_NOCTAVE_LAYERS / 2.0; }
        static double GET_DEFAULT_EDGE_THRESHOLD() { return 10.0; }

        DetectorParams();
        DetectorParams( double _threshold, double _edgeThreshold );
        double threshold, edgeThreshold;
    };

    struct DescriptorParams
    {
        static double GET_DEFAULT_MAGNIFICATION() { return 3.0; }
        static const bool DEFAULT_IS_NORMALIZE = true;
        static const int DESCRIPTOR_SIZE = 128;

        DescriptorParams();
        DescriptorParams( double _magnification, bool _isNormalize,
                                                  bool _recalculateAngles );
        double magnification;
        bool isNormalize;
        bool recalculateAngles;
    };

    SIFT();
    //! sift-detector constructor
    SIFT( double _threshold, double _edgeThreshold,
          int _nOctaves=CommonParams::DEFAULT_NOCTAVES,
          int _nOctaveLayers=CommonParams::DEFAULT_NOCTAVE_LAYERS,
          int _firstOctave=CommonParams::DEFAULT_FIRST_OCTAVE,
          int _angleMode=CommonParams::FIRST_ANGLE );
    //! sift-descriptor constructor
    SIFT( double _magnification, bool _isNormalize=true,
          bool _recalculateAngles = true,
          int _nOctaves=CommonParams::DEFAULT_NOCTAVES,
          int _nOctaveLayers=CommonParams::DEFAULT_NOCTAVE_LAYERS,
          int _firstOctave=CommonParams::DEFAULT_FIRST_OCTAVE,
          int _angleMode=CommonParams::FIRST_ANGLE );
    SIFT( const CommonParams& _commParams,
          const DetectorParams& _detectorParams = DetectorParams(),
          const DescriptorParams& _descriptorParams = DescriptorParams() );

    //! returns the descriptor size in floats (128)
    int descriptorSize() const { return DescriptorParams::DESCRIPTOR_SIZE; }
    //! finds the keypoints using the SIFT algorithm
    void operator()(const Mat& img, const Mat& mask,
                    vector<KeyPoint>& keypoints) const;
    //! finds the keypoints and computes descriptors for them using SIFT algorithm.
    //! Optionally it can compute descriptors for the user-provided keypoints
    void operator()(const Mat& img, const Mat& mask,
                    vector<KeyPoint>& keypoints,
                    Mat& descriptors,
                    bool useProvidedKeypoints=false) const;

    CommonParams getCommonParams () const { return commParams; }
    DetectorParams getDetectorParams () const { return detectorParams; }
    DescriptorParams getDescriptorParams () const { return descriptorParams; }
protected:
    ...
};

SURF

class SURF

Class for extracting Speeded Up Robust Features from an image [Bay06]. The class is derived from CvSURFParams structure, which specifies the algorithm parameters:

int extended
  • 0 means that the basic descriptors (64 elements each) shall be computed
  • 1 means that the extended descriptors (128 elements each) shall be computed
int upright
  • 0 means that detector computes orientation of each feature.
  • 1 means that the orientation is not computed (which is much, much faster). For example, if you match images from a stereo pair, or do image stitching, the matched features likely have very similar angles, and you can speed up feature extraction by setting upright=1.
double hessianThreshold

Threshold for the keypoint detector. Only features whose Hessian is larger than hessianThreshold are retained by the detector. Therefore, the larger the value, the fewer keypoints you will get. A good default value lies between 300 and 500, depending on the image contrast.

int nOctaves

The number of Gaussian pyramid octaves that the detector uses. It is set to 4 by default. If you want to get very large features, use a larger value. If you want just small features, decrease it.

int nOctaveLayers

The number of images within each octave of a Gaussian pyramid. It is set to 2 by default.

[Bay06] Bay, H., Tuytelaars, T., and Van Gool, L. “SURF: Speeded Up Robust Features”, 9th European Conference on Computer Vision, 2006

SURF::SURF

The SURF extractor constructors.

C++: SURF::SURF()
C++: SURF::SURF(double hessianThreshold, int nOctaves=4, int nOctaveLayers=2, bool extended=false, bool upright=false)
Python: cv2.SURF(_hessianThreshold[, _nOctaves[, _nOctaveLayers[, _extended[, _upright]]]]) → <SURF object>
Parameters:
  • hessianThreshold – Threshold for the Hessian keypoint detector used in SURF.
  • nOctaves – Number of pyramid octaves the keypoint detector will use.
  • nOctaveLayers – Number of octave layers within each octave.
  • extended – Extended descriptor flag (true - use extended 128-element descriptors; false - use 64-element descriptors).
  • upright – Up-right or rotated features flag (true - do not compute orientation of features; false - compute orientation).

SURF::operator()

Detects keypoints and computes SURF descriptors for them.

C++: void SURF::operator()(const Mat& image, const Mat& mask, vector<KeyPoint>& keypoints)
C++: void SURF::operator()(const Mat& image, const Mat& mask, vector<KeyPoint>& keypoints, vector<float>& descriptors, bool useProvidedKeypoints=false)
Python: cv2.SURF.detect(img, mask) → keypoints
Python: cv2.SURF.detect(img, mask[, useProvidedKeypoints]) → keypoints, descriptors
C: void cvExtractSURF(const CvArr* image, const CvArr* mask, CvSeq** keypoints, CvSeq** descriptors, CvMemStorage* storage, CvSURFParams params)
Python: cv.ExtractSURF(image, mask, storage, params)-> (keypoints, descriptors)
Parameters:
  • image – Input 8-bit grayscale image
  • mask – Optional input mask that marks the regions where we should detect features.
  • keypoints – The input/output vector of keypoints
  • descriptors – The output concatenated vectors of descriptors. Each descriptor is a 64- or 128-element vector, as returned by SURF::descriptorSize(). So the total size of descriptors will be keypoints.size()*descriptorSize().
  • useProvidedKeypoints – Boolean flag. If it is true, the keypoint detector is not run. Instead, the provided vector of keypoints is used and the algorithm just computes their descriptors.
  • storage – Memory storage for the output keypoints and descriptors in OpenCV 1.x API.
  • params – SURF algorithm parameters in OpenCV 1.x API.
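Because the C++ interface returns descriptors as one flat vector<float>, descriptor i occupies elements [i*descriptorSize(), (i+1)*descriptorSize()), where descriptorSize() is 64 or 128 depending on extended. The helper below slices one descriptor out of the flat vector; it is an illustrative utility, not part of the OpenCV API:

```cpp
#include <cstddef>
#include <vector>

// Extract descriptor `i` from the flat layout produced by SURF::operator().
// `dsize` is SURF::descriptorSize(): 64 (extended=false) or 128 (extended=true).
std::vector<float> nthDescriptor(const std::vector<float>& descriptors,
                                 std::size_t i, std::size_t dsize)
{
    std::size_t begin = i * dsize;
    return std::vector<float>(descriptors.begin() + begin,
                              descriptors.begin() + begin + dsize);
}
```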

ORB

class ORB

Class for extracting ORB features and descriptors from an image.

class ORB
{
public:
    /** The patch sizes that can be used (only one right now) */
    struct CommonParams
    {
        enum { DEFAULT_N_LEVELS = 3, DEFAULT_FIRST_LEVEL = 0};

        /** default constructor */
        CommonParams(float scale_factor = 1.2f, unsigned int n_levels = DEFAULT_N_LEVELS,
             int edge_threshold = 31, unsigned int first_level = DEFAULT_FIRST_LEVEL);
        void read(const FileNode& fn);
        void write(FileStorage& fs) const;

        /** Coefficient by which we divide the dimensions from one scale pyramid level to the next */
        float scale_factor_;
        /** The number of levels in the scale pyramid */
        unsigned int n_levels_;
        /** The level at which the image is given
         * if 1, that means we will also look at the image scale_factor_ times bigger
         */
        unsigned int first_level_;
        /** How far from the boundary the points should be */
        int edge_threshold_;
    };

    // c:function::default constructor
    ORB();
    // constructor that initializes all the algorithm parameters
    ORB( const CommonParams detector_params );
    // returns the number of elements in each descriptor (32 bytes)
    int descriptorSize() const;
    // detects keypoints using ORB
    void operator()(const Mat& img, const Mat& mask,
                    vector<KeyPoint>& keypoints) const;
    // detects ORB keypoints and computes the ORB descriptors for them;
    // output vector "descriptors" stores elements of descriptors and has size
    // equal descriptorSize()*keypoints.size() as each descriptor is
    // descriptorSize() elements of this vector.
    void operator()(const Mat& img, const Mat& mask,
                    vector<KeyPoint>& keypoints,
                    cv::Mat& descriptors,
                    bool useProvidedKeypoints=false) const;
};

The class implements the ORB keypoint detector and descriptor extractor.
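The scale pyramid described by CommonParams can be sketched numerically: starting from the input size at first_level_ = 0, each subsequent level divides the dimensions by scale_factor_ (1.2 by default). The helper below is a hypothetical illustration, not an ORB method:

```cpp
#include <cmath>
#include <utility>
#include <vector>

// Compute the (width, height) of each pyramid level, dividing by
// scale_factor at every step, as CommonParams::scale_factor_ describes.
std::vector<std::pair<int, int> > pyramidSizes(int width, int height,
                                               float scale_factor,
                                               unsigned n_levels)
{
    std::vector<std::pair<int, int> > sizes;
    double s = 1.0;  // cumulative scale at the current level
    for (unsigned level = 0; level < n_levels; ++level, s *= scale_factor)
        sizes.push_back(std::make_pair((int)std::lround(width / s),
                                       (int)std::lround(height / s)));
    return sizes;
}
```

With DEFAULT_N_LEVELS = 3 and the default factor, a 640x480 input yields levels of roughly 640x480, 533x400, and 444x333.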

RandomizedTree

class RandomizedTree

Class containing a base structure for RTreeClassifier.

class CV_EXPORTS RandomizedTree
{
public:
        friend class RTreeClassifier;

        RandomizedTree();
        ~RandomizedTree();

        void train(std::vector<BaseKeypoint> const& base_set,
                 RNG &rng, int depth, int views,
                 size_t reduced_num_dim, int num_quant_bits);
        void train(std::vector<BaseKeypoint> const& base_set,
                 RNG &rng, PatchGenerator &make_patch, int depth,
                 int views, size_t reduced_num_dim, int num_quant_bits);

        // next two functions are EXPERIMENTAL
        //(do not use unless you know exactly what you do)
        static void quantizeVector(float *vec, int dim, int N, float bnds[2],
                 int clamp_mode=0);
        static void quantizeVector(float *src, int dim, int N, float bnds[2],
                 uchar *dst);

        // patch_data must be a 32x32 array (no row padding)
        float* getPosterior(uchar* patch_data);
        const float* getPosterior(uchar* patch_data) const;
        uchar* getPosterior2(uchar* patch_data);

        void read(const char* file_name, int num_quant_bits);
        void read(std::istream &is, int num_quant_bits);
        void write(const char* file_name) const;
        void write(std::ostream &os) const;

        int classes() { return classes_; }
        int depth() { return depth_; }

        void discardFloatPosteriors() { freePosteriors(1); }

        inline void applyQuantization(int num_quant_bits)
                 { makePosteriors2(num_quant_bits); }

private:
        int classes_;
        int depth_;
        int num_leaves_;
        std::vector<RTreeNode> nodes_;
        float **posteriors_;        // 16-byte aligned posteriors
        uchar **posteriors2_;     // 16-byte aligned posteriors
        std::vector<int> leaf_counts_;

        void createNodes(int num_nodes, RNG &rng);
        void allocPosteriorsAligned(int num_leaves, int num_classes);
        void freePosteriors(int which);
                 // which: 1=posteriors_, 2=posteriors2_, 3=both
        void init(int classes, int depth, RNG &rng);
        void addExample(int class_id, uchar* patch_data);
        void finalize(size_t reduced_num_dim, int num_quant_bits);
        int getIndex(uchar* patch_data) const;
        inline float* getPosteriorByIndex(int index);
        inline uchar* getPosteriorByIndex2(int index);
        inline const float* getPosteriorByIndex(int index) const;
        void convertPosteriorsToChar();
        void makePosteriors2(int num_quant_bits);
        void compressLeaves(size_t reduced_num_dim);
        void estimateQuantPercForPosteriors(float perc[2]);
};

RandomizedTree::train

Trains a randomized tree using an input set of keypoints.

C++: void train(std::vector<BaseKeypoint> const& base_set, RNG& rng, int depth, int views, size_t reduced_num_dim, int num_quant_bits)
C++: void train(std::vector<BaseKeypoint> const& base_set, RNG& rng, PatchGenerator& make_patch, int depth, int views, size_t reduced_num_dim, int num_quant_bits)
Parameters:
  • base_set – Vector of the BaseKeypoint type. It contains image keypoints used for training.
  • rng – Random-number generator used for training.
  • make_patch – Patch generator used for training.
  • depth – Maximum tree depth.
  • views – Number of random views of each keypoint neighborhood to generate.
  • reduced_num_dim – Number of dimensions used in the compressed signature.
  • num_quant_bits – Number of bits used for quantization.

RandomizedTree::read

Reads a pre-saved randomized tree from a file or stream.

C++: void read(const char* file_name, int num_quant_bits)
C++: void read(std::istream& is, int num_quant_bits)
Parameters:
  • file_name – Name of the file that contains randomized tree data.
  • is – Input stream associated with the file that contains randomized tree data.
  • num_quant_bits – Number of bits used for quantization.

RandomizedTree::write

Writes the current randomized tree to a file or stream.

C++: void write(const char* file_name) const
C++: void write(std::ostream& os) const
Parameters:
  • file_name – Name of the file where randomized tree data is stored.
  • os – Output stream associated with the file where randomized tree data is stored.

RandomizedTree::applyQuantization

C++: void applyQuantization(int num_quant_bits)

Applies quantization to the current randomized tree.

Parameters:
  • num_quant_bits – Number of bits used for quantization.

RTreeNode

class RTreeNode

Class containing a base structure for RandomizedTree.

struct RTreeNode
{
        short offset1, offset2;

        RTreeNode() {}

        RTreeNode(uchar x1, uchar y1, uchar x2, uchar y2)
                : offset1(y1*PATCH_SIZE + x1),
                offset2(y2*PATCH_SIZE + x2)
        {}

        //! Left child on 0, right child on 1
        inline bool operator() (uchar* patch_data) const
        {
                return patch_data[offset1] > patch_data[offset2];
        }
};
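The node's binary test can be exercised in isolation. The sketch below assumes PATCH_SIZE is 32, matching the 32x32 patches that RandomizedTree::getPosterior expects, and mirrors the struct above outside OpenCV:

```cpp
// Standalone mirror of RTreeNode's pixel-pair test on a flat 32x32 patch.
const int PATCH_SIZE = 32;  // assumed; matches the 32x32 getPosterior patches

struct Node
{
    short offset1, offset2;

    // (x1, y1) and (x2, y2) are pixel coordinates inside the patch,
    // flattened into row-major offsets.
    Node(unsigned char x1, unsigned char y1, unsigned char x2, unsigned char y2)
        : offset1(short(y1 * PATCH_SIZE + x1)),
          offset2(short(y2 * PATCH_SIZE + x2)) {}

    // Routes the patch right (true) if the first pixel is brighter.
    bool operator()(const unsigned char* patch_data) const
    {
        return patch_data[offset1] > patch_data[offset2];
    }
};
```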

RTreeClassifier

class RTreeClassifier

Class containing RTreeClassifier. It represents the Calonder descriptor originally introduced by Michael Calonder.

class CV_EXPORTS RTreeClassifier
{
public:
        static const int DEFAULT_TREES = 48;
        static const size_t DEFAULT_NUM_QUANT_BITS = 4;

        RTreeClassifier();

        void train(std::vector<BaseKeypoint> const& base_set,
                RNG &rng,
                int num_trees = RTreeClassifier::DEFAULT_TREES,
                int depth = DEFAULT_DEPTH,
                int views = DEFAULT_VIEWS,
                size_t reduced_num_dim = DEFAULT_REDUCED_NUM_DIM,
                int num_quant_bits = DEFAULT_NUM_QUANT_BITS,
                         bool print_status = true);
        void train(std::vector<BaseKeypoint> const& base_set,
                RNG &rng,
                PatchGenerator &make_patch,
                int num_trees = RTreeClassifier::DEFAULT_TREES,
                int depth = DEFAULT_DEPTH,
                int views = DEFAULT_VIEWS,
                size_t reduced_num_dim = DEFAULT_REDUCED_NUM_DIM,
                int num_quant_bits = DEFAULT_NUM_QUANT_BITS,
                 bool print_status = true);

        // sig must point to a memory block of at least
        //classes()*sizeof(float|uchar) bytes
        void getSignature(IplImage *patch, uchar *sig);
        void getSignature(IplImage *patch, float *sig);
        void getSparseSignature(IplImage *patch, float *sig,
                 float thresh);

        static int countNonZeroElements(float *vec, int n, double tol=1e-10);
        static inline void safeSignatureAlloc(uchar **sig, int num_sig=1,
                        int sig_len=176);
        static inline uchar* safeSignatureAlloc(int num_sig=1,
                         int sig_len=176);

        inline int classes() { return classes_; }
        inline int original_num_classes()
                 { return original_num_classes_; }

        void setQuantization(int num_quant_bits);
        void discardFloatPosteriors();

        void read(const char* file_name);
        void read(std::istream &is);
        void write(const char* file_name) const;
        void write(std::ostream &os) const;

        std::vector<RandomizedTree> trees_;

private:
        int classes_;
        int num_quant_bits_;
        uchar **posteriors_;
        ushort *ptemp_;
        int original_num_classes_;
        bool keep_floats_;
};

RTreeClassifier::train

Trains a randomized tree classifier using an input set of keypoints.

C++: void train(std::vector<BaseKeypoint> const& base_set, RNG& rng, int num_trees=RTreeClassifier::DEFAULT_TREES, int depth=DEFAULT_DEPTH, int views=DEFAULT_VIEWS, size_t reduced_num_dim=DEFAULT_REDUCED_NUM_DIM, int num_quant_bits=DEFAULT_NUM_QUANT_BITS, bool print_status=true)
C++: void train(std::vector<BaseKeypoint> const& base_set, RNG& rng, PatchGenerator& make_patch, int num_trees=RTreeClassifier::DEFAULT_TREES, int depth=DEFAULT_DEPTH, int views=DEFAULT_VIEWS, size_t reduced_num_dim=DEFAULT_REDUCED_NUM_DIM, int num_quant_bits=DEFAULT_NUM_QUANT_BITS, bool print_status=true)
Parameters:
  • base_set – Vector of the BaseKeypoint type. It contains image keypoints used for training.
  • rng – Random-number generator used for training.
  • make_patch – Patch generator used for training.
  • num_trees – Number of randomized trees used in RTreeClassifier.
  • depth – Maximum tree depth.
  • views – Number of random views of each keypoint neighborhood to generate.
  • reduced_num_dim – Number of dimensions used in the compressed signature.
  • num_quant_bits – Number of bits used for quantization.
  • print_status – If true, the current training status is printed to the console.

RTreeClassifier::getSignature

Returns a signature for an image patch.

C++: void getSignature(IplImage* patch, uchar* sig)
C++: void getSignature(IplImage* patch, float* sig)
Parameters:
  • patch – Image patch to calculate the signature for.
  • sig – Output signature (array dimension is reduced_num_dim).

RTreeClassifier::getSparseSignature

Returns a sparse signature for an image patch.

C++: void getSparseSignature(IplImage* patch, float* sig, float thresh)
Parameters:
  • patch – Image patch to calculate the signature for.
  • sig – Output signature (array dimension is reduced_num_dim).
  • thresh – Threshold used for compressing the signature.

Returns a signature for an image patch similarly to getSignature but uses a threshold for removing all signature elements below the threshold so that the signature is compressed.

RTreeClassifier::countNonZeroElements

Returns the number of non-zero elements in an input array.

C++: static int countNonZeroElements(float* vec, int n, double tol=1e-10)
Parameters:
  • vec – Input vector containing float elements.
  • n – Input vector size.
  • tol – Threshold used for counting elements. All elements less than tol are considered as zero elements.
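A plausible standalone rendering of this helper (assuming the tolerance is applied to the absolute value of each element) is:

```cpp
#include <cmath>

// Count entries whose magnitude exceeds tol; everything with |v| <= tol
// is treated as a zero element, per the parameter description above.
int countNonZero(const float* vec, int n, double tol = 1e-10)
{
    int count = 0;
    for (int i = 0; i < n; ++i)
        if (std::fabs(vec[i]) > tol)
            ++count;
    return count;
}
```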

RTreeClassifier::read

Reads a pre-saved RTreeClassifier from a file or stream.

C++: void read(const char* file_name)
C++: void read(std::istream& is)
Parameters:
  • file_name – Name of the file that contains randomized tree data.
  • is – Input stream associated with the file that contains randomized tree data.

RTreeClassifier::write

Writes the current RTreeClassifier to a file or stream.

C++: void write(const char* file_name) const
C++: void write(std::ostream& os) const
Parameters:
  • file_name – Name of the file where randomized tree data is stored.
  • os – Output stream associated with the file where randomized tree data is stored.

RTreeClassifier::setQuantization

Applies quantization to the current randomized tree.

C++: void setQuantization(int num_quant_bits)
Parameters:
  • num_quant_bits – Number of bits used for quantization.

The example below demonstrates the usage of RTreeClassifier for matching features. The features are extracted from the test and train images with SURF. The output is the best_corr and best_corr_idx arrays, which keep the best probabilities and the corresponding class indices for every test-image feature.

CvMemStorage* storage = cvCreateMemStorage(0);
CvSeq *objectKeypoints = 0, *objectDescriptors = 0;
CvSeq *imageKeypoints = 0, *imageDescriptors = 0;
CvSURFParams params = cvSURFParams(500, 1);
cvExtractSURF( test_image, 0, &imageKeypoints, &imageDescriptors,
                 storage, params );
cvExtractSURF( train_image, 0, &objectKeypoints, &objectDescriptors,
                 storage, params );

RTreeClassifier detector;
int patch_width = PATCH_SIZE;
int patch_height = PATCH_SIZE;
vector<BaseKeypoint> base_set;
int i=0;
CvSURFPoint* point;
for (i=0;i<(n_points > 0 ? n_points : objectKeypoints->total);i++)
{
        point=(CvSURFPoint*)cvGetSeqElem(objectKeypoints,i);
        base_set.push_back(
                BaseKeypoint(point->pt.x,point->pt.y,train_image));
}

// Detector training
RNG rng( cvGetTickCount() );
PatchGenerator gen(0,255,2,false,0.7,1.3,-CV_PI/3,CV_PI/3,
                        -CV_PI/3,CV_PI/3);

printf("RTree Classifier training...\n");
detector.train(base_set,rng,gen,24,DEFAULT_DEPTH,2000,
        (int)base_set.size(), detector.DEFAULT_NUM_QUANT_BITS);
printf("Done\n");

float* signature = new float[detector.original_num_classes()];
float* best_corr;
int* best_corr_idx;
if (imageKeypoints->total > 0)
{
        best_corr = new float[imageKeypoints->total];
        best_corr_idx = new int[imageKeypoints->total];
}

for(i=0; i < imageKeypoints->total; i++)
{
        point=(CvSURFPoint*)cvGetSeqElem(imageKeypoints,i);
        int part_idx = -1;
        float prob = 0.0f;

        CvRect roi = cvRect((int)(point->pt.x) - patch_width/2,
                (int)(point->pt.y) - patch_height/2,
                 patch_width, patch_height);
        cvSetImageROI(test_image, roi);
        roi = cvGetImageROI(test_image);
        if(roi.width != patch_width || roi.height != patch_height)
        {
                best_corr_idx[i] = part_idx;
                best_corr[i] = prob;
        }
        else
        {
                cvSetImageROI(test_image, roi);
                IplImage* roi_image =
                         cvCreateImage(cvSize(roi.width, roi.height),
                         test_image->depth, test_image->nChannels);
                cvCopy(test_image,roi_image);

                detector.getSignature(roi_image, signature);
                for (int j = 0; j< detector.original_num_classes();j++)
                {
                        if (prob < signature[j])
                        {
                                part_idx = j;
                                prob = signature[j];
                        }
                }

                best_corr_idx[i] = part_idx;
                best_corr[i] = prob;

                if (roi_image)
                        cvReleaseImage(&roi_image);
        }
        cvResetImageROI(test_image);
}