OpenCV 5.0.0alpha
Open Source Computer Vision
Feature Detection

Detailed Description

Classes

class  cv::LineSegmentDetector
 Line segment detector class. More...
 

Enumerations

enum  cv::HoughModes {
  cv::HOUGH_STANDARD = 0 ,
  cv::HOUGH_PROBABILISTIC = 1 ,
  cv::HOUGH_MULTI_SCALE = 2 ,
  cv::HOUGH_GRADIENT = 3 ,
  cv::HOUGH_GRADIENT_ALT = 4
}
 Variants of a Hough transform. More...
 
enum  cv::LineSegmentDetectorModes {
  cv::LSD_REFINE_NONE = 0 ,
  cv::LSD_REFINE_STD = 1 ,
  cv::LSD_REFINE_ADV = 2
}
 Variants of Line Segment Detector. More...
 

Functions

void cv::Canny (InputArray dx, InputArray dy, OutputArray edges, double threshold1, double threshold2, bool L2gradient=false)
 
void cv::Canny (InputArray image, OutputArray edges, double threshold1, double threshold2, int apertureSize=3, bool L2gradient=false)
 Finds edges in an image using the Canny algorithm [48] .
 
void cv::cornerEigenValsAndVecs (InputArray src, OutputArray dst, int blockSize, int ksize, int borderType=BORDER_DEFAULT)
 Calculates eigenvalues and eigenvectors of image blocks for corner detection.
 
void cv::cornerHarris (InputArray src, OutputArray dst, int blockSize, int ksize, double k, int borderType=BORDER_DEFAULT)
 Harris corner detector.
 
void cv::cornerMinEigenVal (InputArray src, OutputArray dst, int blockSize, int ksize=3, int borderType=BORDER_DEFAULT)
 Calculates the minimal eigenvalue of gradient matrices for corner detection.
 
void cv::cornerSubPix (InputArray image, InputOutputArray corners, Size winSize, Size zeroZone, TermCriteria criteria)
 Refines the corner locations.
 
Ptr< LineSegmentDetector > cv::createLineSegmentDetector (LineSegmentDetectorModes refine=LSD_REFINE_STD, double scale=0.8, double sigma_scale=0.6, double quant=2.0, double ang_th=22.5, double log_eps=0, double density_th=0.7, int n_bins=1024)
 Creates a smart pointer to a LineSegmentDetector object and initializes it.
 
void cv::goodFeaturesToTrack (InputArray image, OutputArray corners, int maxCorners, double qualityLevel, double minDistance, InputArray mask, int blockSize, int gradientSize, bool useHarrisDetector=false, double k=0.04)
 
void cv::goodFeaturesToTrack (InputArray image, OutputArray corners, int maxCorners, double qualityLevel, double minDistance, InputArray mask, OutputArray cornersQuality, int blockSize=3, int gradientSize=3, bool useHarrisDetector=false, double k=0.04)
 Same as above, but also returns a quality measure for the detected corners.
 
void cv::goodFeaturesToTrack (InputArray image, OutputArray corners, int maxCorners, double qualityLevel, double minDistance, InputArray mask=noArray(), int blockSize=3, bool useHarrisDetector=false, double k=0.04)
 Determines strong corners on an image.
 
void cv::HoughCircles (InputArray image, OutputArray circles, int method, double dp, double minDist, double param1=100, double param2=100, int minRadius=0, int maxRadius=0)
 Finds circles in a grayscale image using the Hough transform.
 
void cv::HoughLines (InputArray image, OutputArray lines, double rho, double theta, int threshold, double srn=0, double stn=0, double min_theta=0, double max_theta=CV_PI)
 Finds lines in a binary image using the standard Hough transform.
 
void cv::HoughLinesP (InputArray image, OutputArray lines, double rho, double theta, int threshold, double minLineLength=0, double maxLineGap=0)
 Finds line segments in a binary image using the probabilistic Hough transform.
 
void cv::HoughLinesPointSet (InputArray point, OutputArray lines, int lines_max, int threshold, double min_rho, double max_rho, double rho_step, double min_theta, double max_theta, double theta_step)
 Finds lines in a set of points using the standard Hough transform.
 
void cv::preCornerDetect (InputArray src, OutputArray dst, int ksize, int borderType=BORDER_DEFAULT)
 Calculates a feature map for corner detection.
 

Enumeration Type Documentation

◆ HoughModes

#include <opencv2/imgproc.hpp>

Variants of a Hough transform.

Enumerator
HOUGH_STANDARD 
Python: cv.HOUGH_STANDARD

classical or standard Hough transform. Every line is represented by two floating-point numbers \((\rho, \theta)\), where \(\rho\) is the distance from the origin (0,0) to the line, and \(\theta\) is the angle between the x-axis and the normal to the line. Thus, the matrix must be (the created sequence will be) of CV_32FC2 type

HOUGH_PROBABILISTIC 
Python: cv.HOUGH_PROBABILISTIC

probabilistic Hough transform (more efficient when the picture contains a few long linear segments). It returns line segments rather than whole lines. Each segment is represented by its starting and ending points, and the matrix must be (the created sequence will be) of the CV_32SC4 type.

HOUGH_MULTI_SCALE 
Python: cv.HOUGH_MULTI_SCALE

multi-scale variant of the classical Hough transform. The lines are encoded the same way as HOUGH_STANDARD.

HOUGH_GRADIENT 
Python: cv.HOUGH_GRADIENT

basically 21HT, described in [314]

HOUGH_GRADIENT_ALT 
Python: cv.HOUGH_GRADIENT_ALT

variation of HOUGH_GRADIENT to get better accuracy

◆ LineSegmentDetectorModes

#include <opencv2/imgproc.hpp>

Variants of Line Segment Detector.

Enumerator
LSD_REFINE_NONE 
Python: cv.LSD_REFINE_NONE

No refinement applied.

LSD_REFINE_STD 
Python: cv.LSD_REFINE_STD

Standard refinement is applied, e.g., breaking arcs into smaller, straighter line approximations.

LSD_REFINE_ADV 
Python: cv.LSD_REFINE_ADV

Advanced refinement. The number of false alarms is calculated, and lines are refined through an increase of precision, a decrease in size, etc.

Function Documentation

◆ Canny() [1/2]

void cv::Canny ( InputArray dx,
InputArray dy,
OutputArray edges,
double threshold1,
double threshold2,
bool L2gradient = false )
Python:
cv.Canny(image, threshold1, threshold2[, edges[, apertureSize[, L2gradient]]]) -> edges
cv.Canny(dx, dy, threshold1, threshold2[, edges[, L2gradient]]) -> edges

#include <opencv2/imgproc.hpp>

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

Finds edges in an image using the Canny algorithm with custom image gradient.

Parameters
dx: 16-bit x derivative of the input image (CV_16SC1 or CV_16SC3).
dy: 16-bit y derivative of the input image (same type as dx).
edges: output edge map; single-channel 8-bit image, which has the same size as image.
threshold1: first threshold for the hysteresis procedure.
threshold2: second threshold for the hysteresis procedure.
L2gradient: a flag indicating whether a more accurate \(L_2\) norm \(=\sqrt{(dI/dx)^2 + (dI/dy)^2}\) should be used to calculate the image gradient magnitude ( L2gradient=true ), or whether the default \(L_1\) norm \(=|dI/dx|+|dI/dy|\) is enough ( L2gradient=false ).
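
Example (a minimal sketch, not part of the original reference): the 16-bit Sobel derivatives are computed explicitly and passed to this overload. The file name and threshold values are illustrative only.

#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>

int main()
{
    // Hypothetical input; any 8-bit grayscale image works.
    cv::Mat gray = cv::imread("input.png", cv::IMREAD_GRAYSCALE);
    if (gray.empty())
        return -1;
    // 16-bit signed derivatives, as required by this overload (CV_16SC1).
    cv::Mat dx, dy;
    cv::Sobel(gray, dx, CV_16S, 1, 0, 3);
    cv::Sobel(gray, dy, CV_16S, 0, 1, 3);
    // Hysteresis thresholds 50/150; L2gradient=true selects the more accurate L2 norm.
    cv::Mat edges;
    cv::Canny(dx, dy, edges, 50, 150, true);
    cv::imwrite("edges.png", edges);
    return 0;
}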

◆ Canny() [2/2]

void cv::Canny ( InputArray image,
OutputArray edges,
double threshold1,
double threshold2,
int apertureSize = 3,
bool L2gradient = false )
Python:
cv.Canny(image, threshold1, threshold2[, edges[, apertureSize[, L2gradient]]]) -> edges
cv.Canny(dx, dy, threshold1, threshold2[, edges[, L2gradient]]) -> edges

#include <opencv2/imgproc.hpp>

Finds edges in an image using the Canny algorithm [48] .

The function finds edges in the input image and marks them in the output map edges using the Canny algorithm. The smallest value between threshold1 and threshold2 is used for edge linking. The largest value is used to find initial segments of strong edges. See http://en.wikipedia.org/wiki/Canny_edge_detector

Parameters
image: 8-bit input image.
edges: output edge map; single-channel 8-bit image, which has the same size as image.
threshold1: first threshold for the hysteresis procedure.
threshold2: second threshold for the hysteresis procedure.
apertureSize: aperture size for the Sobel operator.
L2gradient: a flag indicating whether a more accurate \(L_2\) norm \(=\sqrt{(dI/dx)^2 + (dI/dy)^2}\) should be used to calculate the image gradient magnitude ( L2gradient=true ), or whether the default \(L_1\) norm \(=|dI/dx|+|dI/dy|\) is enough ( L2gradient=false ).
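
Example (a minimal sketch, not part of the original reference; the file name and thresholds are illustrative):

#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>

int main()
{
    cv::Mat gray = cv::imread("input.png", cv::IMREAD_GRAYSCALE);
    if (gray.empty())
        return -1;
    // A light blur usually suppresses noise-induced edge fragments.
    cv::GaussianBlur(gray, gray, cv::Size(3, 3), 0);
    // Hysteresis thresholds 50/150 with the default 3x3 Sobel aperture.
    cv::Mat edges;
    cv::Canny(gray, edges, 50, 150, 3);
    cv::imwrite("edges.png", edges);
    return 0;
}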

◆ cornerEigenValsAndVecs()

void cv::cornerEigenValsAndVecs ( InputArray src,
OutputArray dst,
int blockSize,
int ksize,
int borderType = BORDER_DEFAULT )
Python:
cv.cornerEigenValsAndVecs(src, blockSize, ksize[, dst[, borderType]]) -> dst

#include <opencv2/imgproc.hpp>

Calculates eigenvalues and eigenvectors of image blocks for corner detection.

For every pixel \(p\) , the function cornerEigenValsAndVecs considers a blockSize \(\times\) blockSize neighborhood \(S(p)\) . It calculates the covariation matrix of derivatives over the neighborhood as:

\[M = \begin{bmatrix} \sum _{S(p)}(dI/dx)^2 & \sum _{S(p)}dI/dx dI/dy \\ \sum _{S(p)}dI/dx dI/dy & \sum _{S(p)}(dI/dy)^2 \end{bmatrix}\]

where the derivatives are computed using the Sobel operator.

After that, it finds eigenvectors and eigenvalues of \(M\) and stores them in the destination image as \((\lambda_1, \lambda_2, x_1, y_1, x_2, y_2)\) where

  • \(\lambda_1, \lambda_2\) are the non-sorted eigenvalues of \(M\)
  • \(x_1, y_1\) are the components of the eigenvector corresponding to \(\lambda_1\)
  • \(x_2, y_2\) are the components of the eigenvector corresponding to \(\lambda_2\)

The output of the function can be used for robust edge or corner detection.

Parameters
src: Input single-channel 8-bit or floating-point image.
dst: Image to store the results. It has the same size as src and the type CV_32FC(6).
blockSize: Neighborhood size (see the details above).
ksize: Aperture parameter for the Sobel operator.
borderType: Pixel extrapolation method. See BorderTypes. BORDER_WRAP is not supported.
See also
cornerMinEigenVal, cornerHarris, preCornerDetect
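
Example (a minimal sketch, not part of the original reference): the 6-channel output is read back to rebuild the Harris response \(\lambda_1\lambda_2 - k(\lambda_1+\lambda_2)^2\) by hand; the file name and the value of k are illustrative.

#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>

int main()
{
    cv::Mat gray = cv::imread("input.png", cv::IMREAD_GRAYSCALE);
    if (gray.empty())
        return -1;
    // 6-channel output: (lambda1, lambda2, x1, y1, x2, y2) per pixel.
    cv::Mat eig;
    cv::cornerEigenValsAndVecs(gray, eig, 3 /*blockSize*/, 3 /*ksize*/);
    // Rebuild the Harris response det(M) - k*tr(M)^2 from the eigenvalues.
    const float k = 0.04f;
    cv::Mat response(gray.size(), CV_32F);
    for (int y = 0; y < eig.rows; y++)
        for (int x = 0; x < eig.cols; x++)
        {
            cv::Vec6f v = eig.at<cv::Vec6f>(y, x);
            float l1 = v[0], l2 = v[1];
            response.at<float>(y, x) = l1 * l2 - k * (l1 + l2) * (l1 + l2);
        }
    return 0;
}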

◆ cornerHarris()

void cv::cornerHarris ( InputArray src,
OutputArray dst,
int blockSize,
int ksize,
double k,
int borderType = BORDER_DEFAULT )
Python:
cv.cornerHarris(src, blockSize, ksize, k[, dst[, borderType]]) -> dst

#include <opencv2/imgproc.hpp>

Harris corner detector.

The function runs the Harris corner detector on the image. Similarly to cornerMinEigenVal and cornerEigenValsAndVecs , for each pixel \((x, y)\) it calculates a \(2\times2\) gradient covariance matrix \(M^{(x,y)}\) over a \(\texttt{blockSize} \times \texttt{blockSize}\) neighborhood. Then, it computes the following characteristic:

\[\texttt{dst} (x,y) = \mathrm{det} M^{(x,y)} - k \cdot \left ( \mathrm{tr} M^{(x,y)} \right )^2\]

Corners in the image can be found as the local maxima of this response map.

Parameters
src: Input single-channel 8-bit or floating-point image.
dst: Image to store the Harris detector responses. It has the type CV_32FC1 and the same size as src.
blockSize: Neighborhood size (see the details on cornerEigenValsAndVecs).
ksize: Aperture parameter for the Sobel operator.
k: Harris detector free parameter. See the formula above.
borderType: Pixel extrapolation method. See BorderTypes. BORDER_WRAP is not supported.
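
Example (a minimal sketch, not part of the original reference): corners are marked where the response exceeds 1% of the strongest response; the file names and parameter values are illustrative.

#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>

int main()
{
    cv::Mat img = cv::imread("input.png", cv::IMREAD_COLOR);
    if (img.empty())
        return -1;
    cv::Mat gray;
    cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
    cv::Mat response;
    cv::cornerHarris(gray, response, 2 /*blockSize*/, 3 /*ksize*/, 0.04 /*k*/);
    // Keep pixels whose response exceeds 1% of the global maximum.
    double maxVal;
    cv::minMaxLoc(response, nullptr, &maxVal);
    for (int y = 0; y < response.rows; y++)
        for (int x = 0; x < response.cols; x++)
            if (response.at<float>(y, x) > 0.01 * maxVal)
                cv::circle(img, cv::Point(x, y), 3, cv::Scalar(0, 0, 255));
    cv::imwrite("harris.png", img);
    return 0;
}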

◆ cornerMinEigenVal()

void cv::cornerMinEigenVal ( InputArray src,
OutputArray dst,
int blockSize,
int ksize = 3,
int borderType = BORDER_DEFAULT )
Python:
cv.cornerMinEigenVal(src, blockSize[, dst[, ksize[, borderType]]]) -> dst

#include <opencv2/imgproc.hpp>

Calculates the minimal eigenvalue of gradient matrices for corner detection.

The function is similar to cornerEigenValsAndVecs but it calculates and stores only the minimal eigenvalue of the covariance matrix of derivatives, that is, \(\min(\lambda_1, \lambda_2)\) in terms of the formulae in the cornerEigenValsAndVecs description.

Parameters
src: Input single-channel 8-bit or floating-point image.
dst: Image to store the minimal eigenvalues. It has the type CV_32FC1 and the same size as src.
blockSize: Neighborhood size (see the details on cornerEigenValsAndVecs).
ksize: Aperture parameter for the Sobel operator.
borderType: Pixel extrapolation method. See BorderTypes. BORDER_WRAP is not supported.
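
Example (a minimal sketch, not part of the original reference; the file names are illustrative):

#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>

int main()
{
    cv::Mat gray = cv::imread("input.png", cv::IMREAD_GRAYSCALE);
    if (gray.empty())
        return -1;
    // min(lambda1, lambda2) per pixel; large values indicate corner-like pixels.
    cv::Mat minEig;
    cv::cornerMinEigenVal(gray, minEig, 3 /*blockSize*/);
    // Normalize to 8-bit just for visual inspection.
    cv::Mat vis;
    cv::normalize(minEig, vis, 0, 255, cv::NORM_MINMAX, CV_8U);
    cv::imwrite("min_eig.png", vis);
    return 0;
}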

◆ cornerSubPix()

void cv::cornerSubPix ( InputArray image,
InputOutputArray corners,
Size winSize,
Size zeroZone,
TermCriteria criteria )
Python:
cv.cornerSubPix(image, corners, winSize, zeroZone, criteria) -> corners

#include <opencv2/imgproc.hpp>

Refines the corner locations.

The function iterates to find the sub-pixel accurate location of corners or radial saddle points as described in [96], and as shown on the figure below.

(Figure: illustration of sub-pixel corner refinement)

Sub-pixel accurate corner locator is based on the observation that every vector from the center \(q\) to a point \(p\) located within a neighborhood of \(q\) is orthogonal to the image gradient at \(p\) subject to image and measurement noise. Consider the expression:

\[\epsilon _i = {DI_{p_i}}^T \cdot (q - p_i)\]

where \({DI_{p_i}}\) is an image gradient at one of the points \(p_i\) in a neighborhood of \(q\) . The value of \(q\) is to be found so that \(\epsilon_i\) is minimized. A system of equations may be set up with \(\epsilon_i\) set to zero:

\[\sum _i(DI_{p_i} \cdot {DI_{p_i}}^T) \cdot q - \sum _i(DI_{p_i} \cdot {DI_{p_i}}^T \cdot p_i) = 0\]

where the gradients are summed within a neighborhood ("search window") of \(q\) . Calling the first gradient term \(G\) and the second gradient term \(b\) gives:

\[q = G^{-1} \cdot b\]

The algorithm sets the center of the neighborhood window at this new center \(q\) and then iterates until the center stays within a set threshold.

Parameters
image: Input single-channel, 8-bit or float image.
corners: Initial coordinates of the input corners and refined coordinates provided for output.
winSize: Half of the side length of the search window. For example, if winSize=Size(5,5), then a \((5*2+1) \times (5*2+1) = 11 \times 11\) search window is used.
zeroZone: Half of the size of the dead region in the middle of the search zone over which the summation in the formula above is not done. It is sometimes used to avoid possible singularities of the autocorrelation matrix. The value of (-1,-1) indicates that there is no such dead region.
criteria: Criteria for termination of the iterative process of corner refinement. That is, the process of corner position refinement stops either after criteria.maxCount iterations or when the corner position moves by less than criteria.epsilon on some iteration.
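
Example (a minimal sketch, not part of the original reference): coarse corners from goodFeaturesToTrack are refined to sub-pixel accuracy; the file name and detector parameters are illustrative.

#include <vector>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>

int main()
{
    cv::Mat gray = cv::imread("input.png", cv::IMREAD_GRAYSCALE);
    if (gray.empty())
        return -1;
    // Detect coarse corner locations first.
    std::vector<cv::Point2f> corners;
    cv::goodFeaturesToTrack(gray, corners, 100, 0.01, 10);
    if (corners.empty())
        return 0;
    // Refine them in place: 11x11 search window, no dead zone,
    // stop after 30 iterations or once the shift drops below 0.01 px.
    cv::cornerSubPix(gray, corners,
                     cv::Size(5, 5), cv::Size(-1, -1),
                     cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.01));
    return 0;
}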

◆ createLineSegmentDetector()

Ptr< LineSegmentDetector > cv::createLineSegmentDetector ( LineSegmentDetectorModes refine = LSD_REFINE_STD,
double scale = 0.8,
double sigma_scale = 0.6,
double quant = 2.0,
double ang_th = 22.5,
double log_eps = 0,
double density_th = 0.7,
int n_bins = 1024 )
Python:
cv.createLineSegmentDetector([, refine[, scale[, sigma_scale[, quant[, ang_th[, log_eps[, density_th[, n_bins]]]]]]]]) -> retval

#include <opencv2/imgproc.hpp>

Creates a smart pointer to a LineSegmentDetector object and initializes it.

The LineSegmentDetector algorithm is defined using the standard values. Only advanced users may want to edit those, e.g., to tailor the detector to their own application.

Parameters
refine: The way found lines will be refined, see LineSegmentDetectorModes.
scale: The scale of the image that will be used to find the lines. Range (0..1].
sigma_scale: Sigma for the Gaussian filter. It is computed as sigma = sigma_scale/scale.
quant: Bound to the quantization error on the gradient norm.
ang_th: Gradient angle tolerance in degrees.
log_eps: Detection threshold: -log10(NFA) > log_eps. Used only when advanced refinement (LSD_REFINE_ADV) is chosen.
density_th: Minimal density of aligned region points in the enclosing rectangle.
n_bins: Number of bins in the pseudo-ordering of gradient modulus.
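
Example (a minimal sketch, not part of the original reference; default LSD parameters and illustrative file names):

#include <vector>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>

int main()
{
    cv::Mat gray = cv::imread("input.png", cv::IMREAD_GRAYSCALE);
    if (gray.empty())
        return -1;
    // Standard refinement; all other parameters keep their default values.
    cv::Ptr<cv::LineSegmentDetector> lsd =
        cv::createLineSegmentDetector(cv::LSD_REFINE_STD);
    // Each detected segment is stored as (x1, y1, x2, y2).
    std::vector<cv::Vec4f> segments;
    lsd->detect(gray, segments);
    cv::Mat vis;
    cv::cvtColor(gray, vis, cv::COLOR_GRAY2BGR);
    lsd->drawSegments(vis, segments);
    cv::imwrite("segments.png", vis);
    return 0;
}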

◆ goodFeaturesToTrack() [1/3]

void cv::goodFeaturesToTrack ( InputArray image,
OutputArray corners,
int maxCorners,
double qualityLevel,
double minDistance,
InputArray mask,
int blockSize,
int gradientSize,
bool useHarrisDetector = false,
double k = 0.04 )
Python:
cv.goodFeaturesToTrack(image, maxCorners, qualityLevel, minDistance[, corners[, mask[, blockSize[, useHarrisDetector[, k]]]]]) -> corners
cv.goodFeaturesToTrack(image, maxCorners, qualityLevel, minDistance, mask, blockSize, gradientSize[, corners[, useHarrisDetector[, k]]]) -> corners
cv.goodFeaturesToTrackWithQuality(image, maxCorners, qualityLevel, minDistance, mask[, corners[, cornersQuality[, blockSize[, gradientSize[, useHarrisDetector[, k]]]]]]) -> corners, cornersQuality

#include <opencv2/imgproc.hpp>

◆ goodFeaturesToTrack() [2/3]

void cv::goodFeaturesToTrack ( InputArray image,
OutputArray corners,
int maxCorners,
double qualityLevel,
double minDistance,
InputArray mask,
OutputArray cornersQuality,
int blockSize = 3,
int gradientSize = 3,
bool useHarrisDetector = false,
double k = 0.04 )
Python:
cv.goodFeaturesToTrack(image, maxCorners, qualityLevel, minDistance[, corners[, mask[, blockSize[, useHarrisDetector[, k]]]]]) -> corners
cv.goodFeaturesToTrack(image, maxCorners, qualityLevel, minDistance, mask, blockSize, gradientSize[, corners[, useHarrisDetector[, k]]]) -> corners
cv.goodFeaturesToTrackWithQuality(image, maxCorners, qualityLevel, minDistance, mask[, corners[, cornersQuality[, blockSize[, gradientSize[, useHarrisDetector[, k]]]]]]) -> corners, cornersQuality

#include <opencv2/imgproc.hpp>

Same as above, but also returns a quality measure for the detected corners.

Parameters
image: Input 8-bit or floating-point 32-bit, single-channel image.
corners: Output vector of detected corners.
maxCorners: Maximum number of corners to return. If more corners than maxCorners are found, the strongest maxCorners of them are returned. maxCorners <= 0 implies that no limit on the maximum is set and all detected corners are returned.
qualityLevel: Parameter characterizing the minimal accepted quality of image corners. The parameter value is multiplied by the best corner quality measure, which is the minimal eigenvalue (see cornerMinEigenVal) or the Harris function response (see cornerHarris). Corners with a quality measure less than the product are rejected. For example, if the best corner has a quality measure of 1500 and qualityLevel=0.01, then all corners with a quality measure less than 15 are rejected.
minDistance: Minimum possible Euclidean distance between the returned corners.
mask: Region of interest. If the image is not empty (it needs to have the type CV_8UC1 and the same size as image), it specifies the region in which the corners are detected.
cornersQuality: Output vector of quality measures of the detected corners.
blockSize: Size of an average block for computing a derivative covariation matrix over each pixel neighborhood. See cornerEigenValsAndVecs.
gradientSize: Aperture parameter for the Sobel operator used for derivatives computation. See cornerEigenValsAndVecs.
useHarrisDetector: Parameter indicating whether to use a Harris detector (see cornerHarris) or cornerMinEigenVal.
k: Free parameter of the Harris detector.
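
Example (a minimal sketch, not part of the original reference; the file name and parameter values are illustrative):

#include <vector>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>

int main()
{
    cv::Mat gray = cv::imread("input.png", cv::IMREAD_GRAYSCALE);
    if (gray.empty())
        return -1;
    // Detect up to 100 corners and also retrieve their quality measures;
    // corners[i] and quality[i] correspond to each other.
    std::vector<cv::Point2f> corners;
    std::vector<float> quality;
    cv::goodFeaturesToTrack(gray, corners, 100, 0.01, 10,
                            cv::noArray(), quality,
                            3 /*blockSize*/, 3 /*gradientSize*/);
    return 0;
}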

◆ goodFeaturesToTrack() [3/3]

void cv::goodFeaturesToTrack ( InputArray image,
OutputArray corners,
int maxCorners,
double qualityLevel,
double minDistance,
InputArray mask = noArray(),
int blockSize = 3,
bool useHarrisDetector = false,
double k = 0.04 )
Python:
cv.goodFeaturesToTrack(image, maxCorners, qualityLevel, minDistance[, corners[, mask[, blockSize[, useHarrisDetector[, k]]]]]) -> corners
cv.goodFeaturesToTrack(image, maxCorners, qualityLevel, minDistance, mask, blockSize, gradientSize[, corners[, useHarrisDetector[, k]]]) -> corners
cv.goodFeaturesToTrackWithQuality(image, maxCorners, qualityLevel, minDistance, mask[, corners[, cornersQuality[, blockSize[, gradientSize[, useHarrisDetector[, k]]]]]]) -> corners, cornersQuality

#include <opencv2/imgproc.hpp>

Determines strong corners on an image.

The function finds the most prominent corners in the image or in the specified image region, as described in [246]:

  • The function calculates the corner quality measure at every source image pixel using cornerMinEigenVal or cornerHarris.
  • It then performs non-maximum suppression (only the local maxima in a 3 x 3 neighborhood are retained).
  • Corners with a minimal eigenvalue less than \(\texttt{qualityLevel} \cdot \max_{x,y} qualityMeasureMap(x,y)\) are rejected.
  • The remaining corners are sorted by the quality measure in descending order.
  • The function then throws away each corner for which there is a stronger corner at a distance less than minDistance.

The function can be used to initialize a point-based tracker of an object.

Note
If the function is called with different values A and B of the parameter qualityLevel , and A > B, the vector of returned corners with qualityLevel=A will be the prefix of the output vector with qualityLevel=B .
Parameters
image: Input 8-bit or floating-point 32-bit, single-channel image.
corners: Output vector of detected corners.
maxCorners: Maximum number of corners to return. If more corners than maxCorners are found, the strongest maxCorners of them are returned. maxCorners <= 0 implies that no limit on the maximum is set and all detected corners are returned.
qualityLevel: Parameter characterizing the minimal accepted quality of image corners. The parameter value is multiplied by the best corner quality measure, which is the minimal eigenvalue (see cornerMinEigenVal) or the Harris function response (see cornerHarris). Corners with a quality measure less than the product are rejected. For example, if the best corner has a quality measure of 1500 and qualityLevel=0.01, then all corners with a quality measure less than 15 are rejected.
minDistance: Minimum possible Euclidean distance between the returned corners.
mask: Optional region of interest. If the image is not empty (it needs to have the type CV_8UC1 and the same size as image), it specifies the region in which the corners are detected.
blockSize: Size of an average block for computing a derivative covariation matrix over each pixel neighborhood. See cornerEigenValsAndVecs.
useHarrisDetector: Parameter indicating whether to use a Harris detector (see cornerHarris) or cornerMinEigenVal.
k: Free parameter of the Harris detector.
See also
cornerMinEigenVal, cornerHarris, calcOpticalFlowPyrLK, estimateRigidTransform,
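
Example (a minimal sketch, not part of the original reference): up to 200 corners with quality of at least 1% of the strongest and a minimum spacing of 10 pixels; the file names are illustrative.

#include <vector>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>

int main()
{
    cv::Mat gray = cv::imread("input.png", cv::IMREAD_GRAYSCALE);
    if (gray.empty())
        return -1;
    std::vector<cv::Point2f> corners;
    cv::goodFeaturesToTrack(gray, corners, 200, 0.01, 10);
    // Draw the detected corners for inspection.
    cv::Mat vis;
    cv::cvtColor(gray, vis, cv::COLOR_GRAY2BGR);
    for (const cv::Point2f& c : corners)
        cv::circle(vis, c, 4, cv::Scalar(0, 255, 0), -1);
    cv::imwrite("corners.png", vis);
    return 0;
}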

◆ HoughCircles()

void cv::HoughCircles ( InputArray image,
OutputArray circles,
int method,
double dp,
double minDist,
double param1 = 100,
double param2 = 100,
int minRadius = 0,
int maxRadius = 0 )
Python:
cv.HoughCircles(image, method, dp, minDist[, circles[, param1[, param2[, minRadius[, maxRadius]]]]]) -> circles

#include <opencv2/imgproc.hpp>

Finds circles in a grayscale image using the Hough transform.

The function finds circles in a grayscale image using a modification of the Hough transform.

Example:

#include <opencv2/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/highgui.hpp>
using namespace cv;
using namespace std;
int main(int argc, char** argv)
{
    Mat img, gray;
    if( argc != 2 || !(img=imread(argv[1], IMREAD_COLOR)).data)
        return -1;
    cvtColor(img, gray, COLOR_BGR2GRAY);
    // smooth it, otherwise a lot of false circles may be detected
    GaussianBlur( gray, gray, Size(9, 9), 2, 2 );
    vector<Vec3f> circles;
    HoughCircles(gray, circles, HOUGH_GRADIENT,
                 2, gray.rows/4, 200, 100 );
    for( size_t i = 0; i < circles.size(); i++ )
    {
        Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
        int radius = cvRound(circles[i][2]);
        // draw the circle center
        circle( img, center, 3, Scalar(0,255,0), -1, 8, 0 );
        // draw the circle outline
        circle( img, center, radius, Scalar(0,0,255), 3, 8, 0 );
    }
    namedWindow( "circles", WINDOW_AUTOSIZE );
    imshow( "circles", img );
    waitKey(0);
    return 0;
}
Note
Usually the function detects the centers of circles well. However, it may fail to find correct radii. You can assist the function by specifying the radius range ( minRadius and maxRadius ) if you know it. Or, in the case of the HOUGH_GRADIENT method, you may set maxRadius to a negative number to return centers only, without radius search, and find the correct radius using an additional procedure.

It also helps to smooth the image a bit unless it's already soft. For example, GaussianBlur() with a 7x7 kernel and 1.5x1.5 sigma or similar blurring may help.

Parameters
image: 8-bit, single-channel, grayscale input image.
circles: Output vector of found circles. Each vector is encoded as a 3- or 4-element floating-point vector \((x, y, radius)\) or \((x, y, radius, votes)\).
method: Detection method, see HoughModes. The available methods are HOUGH_GRADIENT and HOUGH_GRADIENT_ALT.
dp: Inverse ratio of the accumulator resolution to the image resolution. For example, if dp=1, the accumulator has the same resolution as the input image. If dp=2, the accumulator has half the width and height. For HOUGH_GRADIENT_ALT the recommended value is dp=1.5, unless some very small circles need to be detected.
minDist: Minimum distance between the centers of the detected circles. If the parameter is too small, multiple neighbor circles may be falsely detected in addition to a true one. If it is too large, some circles may be missed.
param1: First method-specific parameter. In case of HOUGH_GRADIENT and HOUGH_GRADIENT_ALT, it is the higher threshold of the two passed to the Canny edge detector (the lower one is twice smaller). Note that HOUGH_GRADIENT_ALT uses the Scharr algorithm to compute image derivatives, so the threshold value should normally be higher, such as 300, for normally exposed and contrasty images.
param2: Second method-specific parameter. In case of HOUGH_GRADIENT, it is the accumulator threshold for the circle centers at the detection stage. The smaller it is, the more false circles may be detected. Circles corresponding to larger accumulator values will be returned first. In the case of the HOUGH_GRADIENT_ALT algorithm, this is the circle "perfectness" measure. The closer it is to 1, the better-shaped circles the algorithm selects. In most cases 0.9 should be fine. If you want to get better detection of small circles, you may decrease it to 0.85, 0.8 or even less. But then also try to limit the search range [minRadius, maxRadius] to avoid many false circles.
minRadius: Minimum circle radius.
maxRadius: Maximum circle radius. If <= 0, uses the maximum image dimension. If < 0, HOUGH_GRADIENT returns centers without finding the radius. HOUGH_GRADIENT_ALT always computes circle radii.
See also
fitEllipse, minEnclosingCircle

◆ HoughLines()

void cv::HoughLines ( InputArray image,
OutputArray lines,
double rho,
double theta,
int threshold,
double srn = 0,
double stn = 0,
double min_theta = 0,
double max_theta = CV_PI )
Python:
cv.HoughLines(image, rho, theta, threshold[, lines[, srn[, stn[, min_theta[, max_theta]]]]]) -> lines

#include <opencv2/imgproc.hpp>

Finds lines in a binary image using the standard Hough transform.

The function implements the standard or standard multi-scale Hough transform algorithm for line detection. See http://homepages.inf.ed.ac.uk/rbf/HIPR2/hough.htm for a good explanation of Hough transform.

Parameters
image: 8-bit, single-channel binary source image. The image may be modified by the function.
lines: Output vector of lines. Each line is represented by a 2- or 3-element vector \((\rho, \theta)\) or \((\rho, \theta, \textrm{votes})\), where \(\rho\) is the distance from the coordinate origin \((0,0)\) (top-left corner of the image), \(\theta\) is the line rotation angle in radians ( \(0 \sim \textrm{vertical line}, \pi/2 \sim \textrm{horizontal line}\) ), and \(\textrm{votes}\) is the value of the accumulator.
rho: Distance resolution of the accumulator in pixels.
theta: Angle resolution of the accumulator in radians.
threshold: Accumulator threshold parameter. Only those lines are returned that get enough votes ( \(>\texttt{threshold}\) ).
srn: For the multi-scale Hough transform, it is a divisor for the distance resolution rho. The coarse accumulator distance resolution is rho and the accurate accumulator resolution is rho/srn. If both srn=0 and stn=0, the classical Hough transform is used. Otherwise, both these parameters should be positive.
stn: For the multi-scale Hough transform, it is a divisor for the angle resolution theta.
min_theta: For standard and multi-scale Hough transform, minimum angle to check for lines. Must fall between 0 and max_theta.
max_theta: For standard and multi-scale Hough transform, an upper bound for the angle. Must fall between min_theta and CV_PI. The actual maximum angle in the accumulator may be slightly less than max_theta, depending on the parameters min_theta and theta.
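
Example (a minimal sketch, not part of the original reference): an edge map from Canny is fed to HoughLines and the detected \((\rho, \theta)\) pairs are converted back to line endpoints for drawing; the file names, thresholds and resolutions are illustrative.

#include <cmath>
#include <vector>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>

int main()
{
    cv::Mat src = cv::imread("input.png", cv::IMREAD_GRAYSCALE);
    if (src.empty())
        return -1;
    // HoughLines expects a binary image, typically an edge map.
    cv::Mat edges;
    cv::Canny(src, edges, 50, 200);
    // 1 px distance resolution, 1 degree angle resolution, 150-vote threshold.
    std::vector<cv::Vec2f> lines;
    cv::HoughLines(edges, lines, 1, CV_PI / 180, 150);
    cv::Mat vis;
    cv::cvtColor(edges, vis, cv::COLOR_GRAY2BGR);
    for (const cv::Vec2f& l : lines)
    {
        // Convert (rho, theta) to two far-apart points on the line.
        float rho = l[0], theta = l[1];
        double a = std::cos(theta), b = std::sin(theta);
        double x0 = a * rho, y0 = b * rho;
        cv::Point pt1(cvRound(x0 + 2000 * (-b)), cvRound(y0 + 2000 * a));
        cv::Point pt2(cvRound(x0 - 2000 * (-b)), cvRound(y0 - 2000 * a));
        cv::line(vis, pt1, pt2, cv::Scalar(0, 0, 255), 2);
    }
    cv::imwrite("hough_lines.png", vis);
    return 0;
}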

◆ HoughLinesP()

void cv::HoughLinesP ( InputArray image,
OutputArray lines,
double rho,
double theta,
int threshold,
double minLineLength = 0,
double maxLineGap = 0 )
Python:
cv.HoughLinesP(image, rho, theta, threshold[, lines[, minLineLength[, maxLineGap]]]) -> lines

#include <opencv2/imgproc.hpp>

Finds line segments in a binary image using the probabilistic Hough transform.

The function implements the probabilistic Hough transform algorithm for line detection, described in [186]

See the line detection example below:

#include <opencv2/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/highgui.hpp>
using namespace cv;
using namespace std;
int main(int argc, char** argv)
{
    Mat src, dst, color_dst;
    if( argc != 2 || !(src=imread(argv[1], IMREAD_GRAYSCALE)).data)
        return -1;
    Canny( src, dst, 50, 200, 3 );
    cvtColor( dst, color_dst, COLOR_GRAY2BGR );
    vector<Vec4i> lines;
    HoughLinesP( dst, lines, 1, CV_PI/180, 80, 30, 10 );
    for( size_t i = 0; i < lines.size(); i++ )
    {
        line( color_dst, Point(lines[i][0], lines[i][1]),
              Point( lines[i][2], lines[i][3]), Scalar(0,0,255), 3, 8 );
    }
    namedWindow( "Source", WINDOW_AUTOSIZE );
    imshow( "Source", src );
    namedWindow( "Detected Lines", WINDOW_AUTOSIZE );
    imshow( "Detected Lines", color_dst );
    waitKey(0);
    return 0;
}

This is a sample picture the function parameters have been tuned for:

(Figure: sample input image)

And this is the output of the above program in case of the probabilistic Hough transform:

(Figure: detected line segments drawn over the input)
Parameters
image: 8-bit, single-channel binary source image. The image may be modified by the function.
lines: Output vector of lines. Each line is represented by a 4-element vector \((x_1, y_1, x_2, y_2)\), where \((x_1,y_1)\) and \((x_2, y_2)\) are the ending points of each detected line segment.
rho: Distance resolution of the accumulator in pixels.
theta: Angle resolution of the accumulator in radians.
threshold: Accumulator threshold parameter. Only those lines are returned that get enough votes ( \(>\texttt{threshold}\) ).
minLineLength: Minimum line length. Line segments shorter than that are rejected.
maxLineGap: Maximum allowed gap between points on the same line to link them.
See also
LineSegmentDetector

◆ HoughLinesPointSet()

void cv::HoughLinesPointSet ( InputArray point,
OutputArray lines,
int lines_max,
int threshold,
double min_rho,
double max_rho,
double rho_step,
double min_theta,
double max_theta,
double theta_step )
Python:
cv.HoughLinesPointSet(point, lines_max, threshold, min_rho, max_rho, rho_step, min_theta, max_theta, theta_step[, lines]) -> lines

#include <opencv2/imgproc.hpp>

Finds lines in a set of points using the standard Hough transform.

The function finds lines in a set of points using a modification of the Hough transform.

#include <cstdio>
#include <vector>
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
using namespace cv;
using namespace std;
int main()
{
    Mat lines;
    vector<Vec3d> line3d;
    vector<Point2f> point;
    const static float Points[20][2] = {
        { 0.0f,   369.0f }, { 10.0f,  364.0f }, { 20.0f,  358.0f }, { 30.0f,  352.0f },
        { 40.0f,  346.0f }, { 50.0f,  341.0f }, { 60.0f,  335.0f }, { 70.0f,  329.0f },
        { 80.0f,  323.0f }, { 90.0f,  318.0f }, { 100.0f, 312.0f }, { 110.0f, 306.0f },
        { 120.0f, 300.0f }, { 130.0f, 295.0f }, { 140.0f, 289.0f }, { 150.0f, 284.0f },
        { 160.0f, 277.0f }, { 170.0f, 271.0f }, { 180.0f, 266.0f }, { 190.0f, 260.0f }
    };
    for (int i = 0; i < 20; i++)
    {
        point.push_back(Point2f(Points[i][0], Points[i][1]));
    }
    double rhoMin = 0.0, rhoMax = 360.0, rhoStep = 1;
    double thetaMin = 0.0, thetaMax = CV_PI / 2.0, thetaStep = CV_PI / 180.0;
    HoughLinesPointSet(point, lines, 20, 1,
                       rhoMin, rhoMax, rhoStep,
                       thetaMin, thetaMax, thetaStep);
    lines.copyTo(line3d);
    printf("votes:%d, rho:%.7f, theta:%.7f\n",
           (int)line3d.at(0).val[0], line3d.at(0).val[1], line3d.at(0).val[2]);
    return 0;
}
Parameters
point: Input vector of points. Each vector must be encoded as a Point vector \((x,y)\). Type must be CV_32FC2 or CV_32SC2.
lines: Output vector of found lines. Each vector is encoded as a vector<Vec3d> \((votes, rho, theta)\). The larger the value of 'votes', the higher the reliability of the Hough line.
lines_max: Maximum count of Hough lines.
threshold: Accumulator threshold parameter. Only those lines are returned that get enough votes ( \(>\texttt{threshold}\) ).
min_rho: Minimum value for \(\rho\) for the accumulator (Note: \(\rho\) can be negative. The absolute value \(|\rho|\) is the distance of a line to the origin.).
max_rho: Maximum value for \(\rho\) for the accumulator.
rho_step: Distance resolution of the accumulator.
min_theta: Minimum angle value of the accumulator in radians.
max_theta: Upper bound for the angle value of the accumulator in radians. The actual maximum angle may be slightly less than max_theta, depending on the parameters min_theta and theta_step.
theta_step: Angle resolution of the accumulator in radians.

◆ preCornerDetect()

void cv::preCornerDetect ( InputArray src,
OutputArray dst,
int ksize,
int borderType = BORDER_DEFAULT )
Python:
cv.preCornerDetect(src, ksize[, dst[, borderType]]) -> dst

#include <opencv2/imgproc.hpp>

Calculates a feature map for corner detection.

The function calculates the complex spatial derivative-based function of the source image

\[\texttt{dst} = (D_x \texttt{src} )^2 \cdot D_{yy} \texttt{src} + (D_y \texttt{src} )^2 \cdot D_{xx} \texttt{src} - 2 D_x \texttt{src} \cdot D_y \texttt{src} \cdot D_{xy} \texttt{src}\]

where \(D_x\), \(D_y\) are the first image derivatives, \(D_{xx}\), \(D_{yy}\) are the second image derivatives, and \(D_{xy}\) is the mixed derivative.

The corners can be found as local maxima of the function, as shown below:

Mat corners, dilated_corners;
preCornerDetect(image, corners, 3);
// dilation with 3x3 rectangular structuring element
dilate(corners, dilated_corners, Mat(), Point(-1,-1), 1);
Mat corner_mask = corners == dilated_corners;
Parameters
src: Source single-channel 8-bit or floating-point image.
dst: Output image that has the type CV_32F and the same size as src.
ksize: Aperture size of the Sobel operator.
borderType: Pixel extrapolation method. See BorderTypes. BORDER_WRAP is not supported.