OpenCV  4.3.0
Open Source Computer Vision
Feature Detection

Classes

class  cv::LineSegmentDetector
 Line segment detector class. More...
 

Enumerations

enum  cv::HoughModes {
  cv::HOUGH_STANDARD = 0,
  cv::HOUGH_PROBABILISTIC = 1,
  cv::HOUGH_MULTI_SCALE = 2,
  cv::HOUGH_GRADIENT = 3,
  cv::HOUGH_GRADIENT_ALT = 4
}
 Variants of a Hough transform. More...
 
enum  cv::LineSegmentDetectorModes {
  cv::LSD_REFINE_NONE = 0,
  cv::LSD_REFINE_STD = 1,
  cv::LSD_REFINE_ADV = 2
}
 Variants of Line Segment Detector. More...
 

Functions

void cv::Canny (InputArray image, OutputArray edges, double threshold1, double threshold2, int apertureSize=3, bool L2gradient=false)
 Finds edges in an image using the Canny algorithm [36] . More...
 
void cv::Canny (InputArray dx, InputArray dy, OutputArray edges, double threshold1, double threshold2, bool L2gradient=false)
 
void cv::cornerEigenValsAndVecs (InputArray src, OutputArray dst, int blockSize, int ksize, int borderType=BORDER_DEFAULT)
 Calculates eigenvalues and eigenvectors of image blocks for corner detection. More...
 
void cv::cornerHarris (InputArray src, OutputArray dst, int blockSize, int ksize, double k, int borderType=BORDER_DEFAULT)
 Harris corner detector. More...
 
void cv::cornerMinEigenVal (InputArray src, OutputArray dst, int blockSize, int ksize=3, int borderType=BORDER_DEFAULT)
 Calculates the minimal eigenvalue of gradient matrices for corner detection. More...
 
void cv::cornerSubPix (InputArray image, InputOutputArray corners, Size winSize, Size zeroZone, TermCriteria criteria)
 Refines the corner locations. More...
 
Ptr<LineSegmentDetector> cv::createLineSegmentDetector (int _refine=LSD_REFINE_STD, double _scale=0.8, double _sigma_scale=0.6, double _quant=2.0, double _ang_th=22.5, double _log_eps=0, double _density_th=0.7, int _n_bins=1024)
 Creates a smart pointer to a LineSegmentDetector object and initializes it. More...
 
void cv::goodFeaturesToTrack (InputArray image, OutputArray corners, int maxCorners, double qualityLevel, double minDistance, InputArray mask=noArray(), int blockSize=3, bool useHarrisDetector=false, double k=0.04)
 Determines strong corners on an image. More...
 
void cv::goodFeaturesToTrack (InputArray image, OutputArray corners, int maxCorners, double qualityLevel, double minDistance, InputArray mask, int blockSize, int gradientSize, bool useHarrisDetector=false, double k=0.04)
 
void cv::HoughCircles (InputArray image, OutputArray circles, int method, double dp, double minDist, double param1=100, double param2=100, int minRadius=0, int maxRadius=0)
 Finds circles in a grayscale image using the Hough transform. More...
 
void cv::HoughLines (InputArray image, OutputArray lines, double rho, double theta, int threshold, double srn=0, double stn=0, double min_theta=0, double max_theta=CV_PI)
 Finds lines in a binary image using the standard Hough transform. More...
 
void cv::HoughLinesP (InputArray image, OutputArray lines, double rho, double theta, int threshold, double minLineLength=0, double maxLineGap=0)
 Finds line segments in a binary image using the probabilistic Hough transform. More...
 
void cv::HoughLinesPointSet (InputArray _point, OutputArray _lines, int lines_max, int threshold, double min_rho, double max_rho, double rho_step, double min_theta, double max_theta, double theta_step)
 Finds lines in a set of points using the standard Hough transform. More...
 
void cv::preCornerDetect (InputArray src, OutputArray dst, int ksize, int borderType=BORDER_DEFAULT)
 Calculates a feature map for corner detection. More...
 

Detailed Description

Enumeration Type Documentation

◆ HoughModes

#include <opencv2/imgproc.hpp>

Variants of a Hough transform.

Enumerator
HOUGH_STANDARD 
Python: cv.HOUGH_STANDARD

classical or standard Hough transform. Every line is represented by two floating-point numbers \((\rho, \theta)\) , where \(\rho\) is a distance between (0,0) point and the line, and \(\theta\) is the angle between x-axis and the normal to the line. Thus, the matrix must be (the created sequence will be) of CV_32FC2 type

HOUGH_PROBABILISTIC 
Python: cv.HOUGH_PROBABILISTIC

probabilistic Hough transform (more efficient in case if the picture contains a few long linear segments). It returns line segments rather than the whole line. Each segment is represented by starting and ending points, and the matrix must be (the created sequence will be) of the CV_32SC4 type.

HOUGH_MULTI_SCALE 
Python: cv.HOUGH_MULTI_SCALE

multi-scale variant of the classical Hough transform. The lines are encoded the same way as HOUGH_STANDARD.

HOUGH_GRADIENT 
Python: cv.HOUGH_GRADIENT

basically 21HT, described in [266]

HOUGH_GRADIENT_ALT 
Python: cv.HOUGH_GRADIENT_ALT

variation of HOUGH_GRADIENT to get better accuracy

◆ LineSegmentDetectorModes

#include <opencv2/imgproc.hpp>

Variants of Line Segment Detector.

Enumerator
LSD_REFINE_NONE 
Python: cv.LSD_REFINE_NONE

No refinement applied.

LSD_REFINE_STD 
Python: cv.LSD_REFINE_STD

Standard refinement is applied, e.g. breaking arcs into smaller, straighter line approximations.

LSD_REFINE_ADV 
Python: cv.LSD_REFINE_ADV

Advanced refinement. The number of false alarms is calculated, and lines are refined by increasing precision, decreasing size, etc.

Function Documentation

◆ Canny() [1/2]

void cv::Canny ( InputArray  image,
OutputArray  edges,
double  threshold1,
double  threshold2,
int  apertureSize = 3,
bool  L2gradient = false 
)
Python:
edges=cv.Canny(image, threshold1, threshold2[, edges[, apertureSize[, L2gradient]]])
edges=cv.Canny(dx, dy, threshold1, threshold2[, edges[, L2gradient]])

#include <opencv2/imgproc.hpp>

Finds edges in an image using the Canny algorithm [36] .

The function finds edges in the input image and marks them in the output map edges using the Canny algorithm. The smallest value between threshold1 and threshold2 is used for edge linking. The largest value is used to find initial segments of strong edges. See http://en.wikipedia.org/wiki/Canny_edge_detector

Parameters
image: 8-bit input image.
edges: output edge map; single-channel 8-bit image, which has the same size as image.
threshold1: first threshold for the hysteresis procedure.
threshold2: second threshold for the hysteresis procedure.
apertureSize: aperture size for the Sobel operator.
L2gradient: a flag indicating whether a more accurate \(L_2\) norm \(=\sqrt{(dI/dx)^2 + (dI/dy)^2}\) should be used to calculate the image gradient magnitude ( L2gradient=true ), or whether the default \(L_1\) norm \(=|dI/dx|+|dI/dy|\) is enough ( L2gradient=false ).
Examples:
samples/cpp/edge.cpp, samples/cpp/squares.cpp, samples/cpp/tutorial_code/ImgTrans/houghlines.cpp, and samples/tapi/squares.cpp.
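
A minimal usage sketch (not taken from the samples above; the file name and threshold values are arbitrary placeholders):

#include <opencv2/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/highgui.hpp>
int main()
{
    cv::Mat src = cv::imread("input.jpg", cv::IMREAD_GRAYSCALE);
    if (src.empty())
        return -1;
    cv::Mat edges;
    // A high:low threshold ratio of about 3:1 is a common starting point;
    // L2gradient=true selects the more accurate L2 gradient norm.
    cv::Canny(src, edges, 50, 150, 3, true);
    cv::imshow("edges", edges);
    cv::waitKey(0);
    return 0;
}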

◆ Canny() [2/2]

void cv::Canny ( InputArray  dx,
InputArray  dy,
OutputArray  edges,
double  threshold1,
double  threshold2,
bool  L2gradient = false 
)
Python:
edges=cv.Canny(image, threshold1, threshold2[, edges[, apertureSize[, L2gradient]]])
edges=cv.Canny(dx, dy, threshold1, threshold2[, edges[, L2gradient]])

#include <opencv2/imgproc.hpp>

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

Finds edges in an image using the Canny algorithm with custom image gradient.

Parameters
dx: 16-bit x derivative of input image (CV_16SC1 or CV_16SC3).
dy: 16-bit y derivative of input image (same type as dx).
edges: output edge map; single-channel 8-bit image, which has the same size as dx.
threshold1: first threshold for the hysteresis procedure.
threshold2: second threshold for the hysteresis procedure.
L2gradient: a flag indicating whether a more accurate \(L_2\) norm \(=\sqrt{(dI/dx)^2 + (dI/dy)^2}\) should be used to calculate the image gradient magnitude ( L2gradient=true ), or whether the default \(L_1\) norm \(=|dI/dx|+|dI/dy|\) is enough ( L2gradient=false ).
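
A brief sketch of this overload (illustrative only; gray is assumed to be an existing 8-bit single-channel cv::Mat and <opencv2/imgproc.hpp> is assumed to be included):

cv::Mat dx, dy, edges;
// 16-bit Sobel derivatives, as required by this overload (CV_16SC1 here)
cv::Sobel(gray, dx, CV_16S, 1, 0);
cv::Sobel(gray, dy, CV_16S, 0, 1);
// the thresholds now apply to the magnitude of the user-supplied gradient
cv::Canny(dx, dy, edges, 50, 150, /*L2gradient=*/true);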

◆ cornerEigenValsAndVecs()

void cv::cornerEigenValsAndVecs ( InputArray  src,
OutputArray  dst,
int  blockSize,
int  ksize,
int  borderType = BORDER_DEFAULT 
)
Python:
dst=cv.cornerEigenValsAndVecs(src, blockSize, ksize[, dst[, borderType]])

#include <opencv2/imgproc.hpp>

Calculates eigenvalues and eigenvectors of image blocks for corner detection.

For every pixel \(p\) , the function cornerEigenValsAndVecs considers a blockSize \(\times\) blockSize neighborhood \(S(p)\) . It calculates the covariation matrix of derivatives over the neighborhood as:

\[M = \begin{bmatrix} \sum _{S(p)}(dI/dx)^2 & \sum _{S(p)}dI/dx dI/dy \\ \sum _{S(p)}dI/dx dI/dy & \sum _{S(p)}(dI/dy)^2 \end{bmatrix}\]

where the derivatives are computed using the Sobel operator.

After that, it finds eigenvectors and eigenvalues of \(M\) and stores them in the destination image as \((\lambda_1, \lambda_2, x_1, y_1, x_2, y_2)\) where

  • \(\lambda_1, \lambda_2\) are the non-sorted eigenvalues of \(M\)
  • \(x_1, y_1\) are the eigenvectors corresponding to \(\lambda_1\)
  • \(x_2, y_2\) are the eigenvectors corresponding to \(\lambda_2\)

The output of the function can be used for robust edge or corner detection.

Parameters
src: Input single-channel 8-bit or floating-point image.
dst: Image to store the results. It has the same size as src and the type CV_32FC(6).
blockSize: Neighborhood size (see the description above).
ksize: Aperture parameter for the Sobel operator.
borderType: Pixel extrapolation method. See BorderTypes. BORDER_WRAP is not supported.
See also
cornerMinEigenVal, cornerHarris, preCornerDetect
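
A short sketch of how the 6-channel output is typically unpacked (gray is assumed to be an existing 8-bit single-channel cv::Mat; the parameter values are arbitrary):

cv::Mat eigen;                              // will be CV_32FC(6): (l1, l2, x1, y1, x2, y2)
cv::cornerEigenValsAndVecs(gray, eigen, 3, 3);
std::vector<cv::Mat> ch;
cv::split(eigen, ch);                       // ch[0] = lambda1, ch[1] = lambda2
cv::Mat minEig = cv::min(ch[0], ch[1]);     // per-pixel minimal eigenvalue, as in cornerMinEigenVal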

◆ cornerHarris()

void cv::cornerHarris ( InputArray  src,
OutputArray  dst,
int  blockSize,
int  ksize,
double  k,
int  borderType = BORDER_DEFAULT 
)
Python:
dst=cv.cornerHarris(src, blockSize, ksize, k[, dst[, borderType]])

#include <opencv2/imgproc.hpp>

Harris corner detector.

The function runs the Harris corner detector on the image. Similarly to cornerMinEigenVal and cornerEigenValsAndVecs , for each pixel \((x, y)\) it calculates a \(2\times2\) gradient covariance matrix \(M^{(x,y)}\) over a \(\texttt{blockSize} \times \texttt{blockSize}\) neighborhood. Then, it computes the following characteristic:

\[\texttt{dst} (x,y) = \mathrm{det} M^{(x,y)} - k \cdot \left ( \mathrm{tr} M^{(x,y)} \right )^2\]

Corners in the image can be found as the local maxima of this response map.

Parameters
src: Input single-channel 8-bit or floating-point image.
dst: Image to store the Harris detector responses. It has the type CV_32FC1 and the same size as src.
blockSize: Neighborhood size (see the details on cornerEigenValsAndVecs).
ksize: Aperture parameter for the Sobel operator.
k: Harris detector free parameter. See the formula above.
borderType: Pixel extrapolation method. See BorderTypes. BORDER_WRAP is not supported.
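
A typical way to use the response map, sketched with illustrative parameter values (gray is assumed to be an existing 8-bit grayscale cv::Mat; the 0.01 factor is arbitrary):

cv::Mat response;
cv::cornerHarris(gray, response, 2, 3, 0.04);       // blockSize=2, ksize=3, k=0.04
double minVal, maxVal;
cv::minMaxLoc(response, &minVal, &maxVal);
// keep pixels whose response is a sizable fraction of the strongest corner
cv::Mat strongCorners = response > 0.01 * maxVal;   // CV_8U mask of strong corners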

◆ cornerMinEigenVal()

void cv::cornerMinEigenVal ( InputArray  src,
OutputArray  dst,
int  blockSize,
int  ksize = 3,
int  borderType = BORDER_DEFAULT 
)
Python:
dst=cv.cornerMinEigenVal(src, blockSize[, dst[, ksize[, borderType]]])

#include <opencv2/imgproc.hpp>

Calculates the minimal eigenvalue of gradient matrices for corner detection.

The function is similar to cornerEigenValsAndVecs but it calculates and stores only the minimal eigenvalue of the covariance matrix of derivatives, that is, \(\min(\lambda_1, \lambda_2)\) in terms of the formulae in the cornerEigenValsAndVecs description.

Parameters
src: Input single-channel 8-bit or floating-point image.
dst: Image to store the minimal eigenvalues. It has the type CV_32FC1 and the same size as src.
blockSize: Neighborhood size (see the details on cornerEigenValsAndVecs).
ksize: Aperture parameter for the Sobel operator.
borderType: Pixel extrapolation method. See BorderTypes. BORDER_WRAP is not supported.
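
The minimal-eigenvalue map can be thresholded in the same way as the Harris response; a brief sketch (gray is assumed to be an 8-bit single-channel cv::Mat, the 0.01 factor is arbitrary):

cv::Mat minEig;
cv::cornerMinEigenVal(gray, minEig, 3 /*blockSize*/, 3 /*ksize*/);
double lo, hi;
cv::minMaxLoc(minEig, &lo, &hi);
cv::Mat candidates = minEig > 0.01 * hi;   // pixels with a large minimal eigenvalue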

◆ cornerSubPix()

void cv::cornerSubPix ( InputArray  image,
InputOutputArray  corners,
Size  winSize,
Size  zeroZone,
TermCriteria  criteria 
)
Python:
corners=cv.cornerSubPix(image, corners, winSize, zeroZone, criteria)

#include <opencv2/imgproc.hpp>

Refines the corner locations.

The function iterates to find the sub-pixel accurate location of corners or radial saddle points, as shown on the figure below.

(Figure cornersubpix.png: illustration of sub-pixel corner refinement)

Sub-pixel accurate corner locator is based on the observation that every vector from the center \(q\) to a point \(p\) located within a neighborhood of \(q\) is orthogonal to the image gradient at \(p\) subject to image and measurement noise. Consider the expression:

\[\epsilon _i = {DI_{p_i}}^T \cdot (q - p_i)\]

where \({DI_{p_i}}\) is an image gradient at one of the points \(p_i\) in a neighborhood of \(q\) . The value of \(q\) is to be found so that \(\epsilon_i\) is minimized. A system of equations may be set up with \(\epsilon_i\) set to zero:

\[\sum _i(DI_{p_i} \cdot {DI_{p_i}}^T) \cdot q - \sum _i(DI_{p_i} \cdot {DI_{p_i}}^T \cdot p_i) = 0\]

where the gradients are summed within a neighborhood ("search window") of \(q\) . Calling the first gradient term \(G\) and the second gradient term \(b\) gives:

\[q = G^{-1} \cdot b\]

The algorithm sets the center of the neighborhood window at this new center \(q\) and then iterates until the center stays within a set threshold.

Parameters
image: Input single-channel, 8-bit or float image.
corners: Initial coordinates of the input corners and refined coordinates provided for output.
winSize: Half of the side length of the search window. For example, if winSize=Size(5,5), then a \((5*2+1) \times (5*2+1) = 11 \times 11\) search window is used.
zeroZone: Half of the size of the dead region in the middle of the search zone over which the summation in the formula above is not done. It is sometimes used to avoid possible singularities of the autocorrelation matrix. The value of (-1,-1) indicates that there is no such size.
criteria: Criteria for termination of the iterative process of corner refinement. That is, the process of corner position refinement stops either after criteria.maxCount iterations or when the corner position moves by less than criteria.epsilon on some iteration.
Examples:
samples/cpp/lkdemo.cpp.
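
The usual pipeline detects coarse corners first (for example with goodFeaturesToTrack) and then refines them; a sketch with illustrative values (gray is assumed to be an existing 8-bit single-channel cv::Mat):

std::vector<cv::Point2f> corners;
cv::goodFeaturesToTrack(gray, corners, 100, 0.01, 10);
// stop after 30 iterations or when the correction drops below 0.01 pixels
cv::TermCriteria criteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.01);
cv::cornerSubPix(gray, corners, cv::Size(5, 5), cv::Size(-1, -1), criteria);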

◆ createLineSegmentDetector()

Ptr<LineSegmentDetector> cv::createLineSegmentDetector ( int  _refine = LSD_REFINE_STD,
double  _scale = 0.8,
double  _sigma_scale = 0.6,
double  _quant = 2.0,
double  _ang_th = 22.5,
double  _log_eps = 0,
double  _density_th = 0.7,
int  _n_bins = 1024 
)
Python:
retval=cv.createLineSegmentDetector([, _refine[, _scale[, _sigma_scale[, _quant[, _ang_th[, _log_eps[, _density_th[, _n_bins]]]]]]]])

#include <opencv2/imgproc.hpp>

Creates a smart pointer to a LineSegmentDetector object and initializes it.

The LineSegmentDetector algorithm is initialized with the standard default values. Only advanced users may want to edit these values to tailor the detector to their own application.

Parameters
_refine: The way the found lines will be refined; see LineSegmentDetectorModes.
_scale: The scale of the image that will be used to find the lines. Range (0..1].
_sigma_scale: Sigma for the Gaussian filter. It is computed as sigma = _sigma_scale/_scale.
_quant: Bound to the quantization error on the gradient norm.
_ang_th: Gradient angle tolerance in degrees.
_log_eps: Detection threshold: -log10(NFA) > log_eps. Used only when advanced refinement is chosen.
_density_th: Minimal density of aligned region points in the enclosing rectangle.
_n_bins: Number of bins in the pseudo-ordering of gradient modulus.
Note
The implementation has been removed due to a license conflict with the original code.

◆ goodFeaturesToTrack() [1/2]

void cv::goodFeaturesToTrack ( InputArray  image,
OutputArray  corners,
int  maxCorners,
double  qualityLevel,
double  minDistance,
InputArray  mask = noArray(),
int  blockSize = 3,
bool  useHarrisDetector = false,
double  k = 0.04 
)
Python:
corners=cv.goodFeaturesToTrack(image, maxCorners, qualityLevel, minDistance[, corners[, mask[, blockSize[, useHarrisDetector[, k]]]]])
corners=cv.goodFeaturesToTrack(image, maxCorners, qualityLevel, minDistance, mask, blockSize, gradientSize[, corners[, useHarrisDetector[, k]]])

#include <opencv2/imgproc.hpp>

Determines strong corners on an image.

The function finds the most prominent corners in the image or in the specified image region, as described in [212]

  • The function calculates the corner quality measure at every source image pixel using cornerMinEigenVal or cornerHarris.
  • It performs non-maximum suppression (only the local maxima in a 3 x 3 neighborhood are retained).
  • The corners with a quality measure less than \(\texttt{qualityLevel} \cdot \max_{x,y} qualityMeasureMap(x,y)\) are rejected.
  • The remaining corners are sorted by the quality measure in descending order.
  • The function then throws away each corner for which there is a stronger corner at a distance less than minDistance.

The function can be used to initialize a point-based tracker of an object.

Note
If the function is called with different values A and B of the parameter qualityLevel , and A > B, the vector of returned corners with qualityLevel=A will be the prefix of the output vector with qualityLevel=B .
Parameters
image: Input 8-bit or floating-point 32-bit, single-channel image.
corners: Output vector of detected corners.
maxCorners: Maximum number of corners to return. If there are more corners than maxCorners, the strongest of them are returned. maxCorners <= 0 implies that no limit on the maximum is set and all detected corners are returned.
qualityLevel: Parameter characterizing the minimal accepted quality of image corners. The parameter value is multiplied by the best corner quality measure, which is the minimal eigenvalue (see cornerMinEigenVal) or the Harris function response (see cornerHarris). The corners with the quality measure less than the product are rejected. For example, if the best corner has the quality measure = 1500, and qualityLevel=0.01, then all the corners with the quality measure less than 15 are rejected.
minDistance: Minimum possible Euclidean distance between the returned corners.
mask: Optional region of interest. If the mask is not empty (it needs to have the type CV_8UC1 and the same size as image), it specifies the region in which the corners are detected.
blockSize: Size of an average block for computing a derivative covariation matrix over each pixel neighborhood. See cornerEigenValsAndVecs.
useHarrisDetector: Parameter indicating whether to use a Harris detector (see cornerHarris) or cornerMinEigenVal.
k: Free parameter of the Harris detector.
See also
cornerMinEigenVal, cornerHarris, calcOpticalFlowPyrLK, estimateRigidTransform
Examples:
samples/cpp/lkdemo.cpp.
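
A brief sketch of a typical call (img is assumed to be an existing BGR cv::Mat; the parameter values are arbitrary starting points):

cv::Mat gray;
cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
std::vector<cv::Point2f> corners;
// up to 200 Shi-Tomasi corners; set useHarrisDetector=true to use the Harris measure instead
cv::goodFeaturesToTrack(gray, corners, 200, 0.01, 10, cv::noArray(), 3, false, 0.04);
for (const cv::Point2f& p : corners)
    cv::circle(img, p, 3, cv::Scalar(0, 255, 0), cv::FILLED);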

◆ goodFeaturesToTrack() [2/2]

void cv::goodFeaturesToTrack ( InputArray  image,
OutputArray  corners,
int  maxCorners,
double  qualityLevel,
double  minDistance,
InputArray  mask,
int  blockSize,
int  gradientSize,
bool  useHarrisDetector = false,
double  k = 0.04 
)
Python:
corners=cv.goodFeaturesToTrack(image, maxCorners, qualityLevel, minDistance[, corners[, mask[, blockSize[, useHarrisDetector[, k]]]]])
corners=cv.goodFeaturesToTrack(image, maxCorners, qualityLevel, minDistance, mask, blockSize, gradientSize[, corners[, useHarrisDetector[, k]]])

#include <opencv2/imgproc.hpp>

◆ HoughCircles()

void cv::HoughCircles ( InputArray  image,
OutputArray  circles,
int  method,
double  dp,
double  minDist,
double  param1 = 100,
double  param2 = 100,
int  minRadius = 0,
int  maxRadius = 0 
)
Python:
circles=cv.HoughCircles(image, method, dp, minDist[, circles[, param1[, param2[, minRadius[, maxRadius]]]]])

#include <opencv2/imgproc.hpp>

Finds circles in a grayscale image using the Hough transform.

The function finds circles in a grayscale image using a modification of the Hough transform.

Example:

#include <opencv2/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/highgui.hpp>
#include <math.h>
using namespace cv;
using namespace std;
int main(int argc, char** argv)
{
    Mat img, gray;
    if( argc != 2 || (img = imread(argv[1], IMREAD_COLOR)).empty() )
        return -1;
    cvtColor(img, gray, COLOR_BGR2GRAY);
    // smooth it, otherwise a lot of false circles may be detected
    GaussianBlur( gray, gray, Size(9, 9), 2, 2 );
    vector<Vec3f> circles;
    HoughCircles( gray, circles, HOUGH_GRADIENT,
                  2, gray.rows/4, 200, 100 );
    for( size_t i = 0; i < circles.size(); i++ )
    {
        Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
        int radius = cvRound(circles[i][2]);
        // draw the circle center
        circle( img, center, 3, Scalar(0,255,0), -1, 8, 0 );
        // draw the circle outline
        circle( img, center, radius, Scalar(0,0,255), 3, 8, 0 );
    }
    namedWindow( "circles", WINDOW_AUTOSIZE );
    imshow( "circles", img );
    waitKey(0);
    return 0;
}
Note
Usually the function detects the centers of circles well. However, it may fail to find correct radii. You can assist the function by specifying the radius range ( minRadius and maxRadius ) if you know it. Or, in the case of the HOUGH_GRADIENT method, you may set maxRadius to a negative number to return centers only without radius search, and find the correct radius using an additional procedure.

It also helps to smooth the image a bit unless it is already smooth. For example, GaussianBlur() with a 7x7 kernel and 1.5x1.5 sigma, or similar blurring, may help.

Parameters
image: 8-bit, single-channel, grayscale input image.
circles: Output vector of found circles. Each vector is encoded as a 3- or 4-element floating-point vector \((x, y, radius)\) or \((x, y, radius, votes)\).
method: Detection method, see HoughModes. The available methods are HOUGH_GRADIENT and HOUGH_GRADIENT_ALT.
dp: Inverse ratio of the accumulator resolution to the image resolution. For example, if dp=1, the accumulator has the same resolution as the input image. If dp=2, the accumulator has half the width and height. For HOUGH_GRADIENT_ALT the recommended value is dp=1.5, unless some very small circles need to be detected.
minDist: Minimum distance between the centers of the detected circles. If the parameter is too small, multiple neighbor circles may be falsely detected in addition to a true one. If it is too large, some circles may be missed.
param1: First method-specific parameter. In case of HOUGH_GRADIENT and HOUGH_GRADIENT_ALT, it is the higher threshold of the two passed to the Canny edge detector (the lower one is twice smaller). Note that HOUGH_GRADIENT_ALT uses the Scharr algorithm to compute image derivatives, so the threshold value should normally be higher, such as 300, for normally exposed and contrasty images.
param2: Second method-specific parameter. In case of HOUGH_GRADIENT, it is the accumulator threshold for the circle centers at the detection stage. The smaller it is, the more false circles may be detected. Circles corresponding to the larger accumulator values will be returned first. In the case of the HOUGH_GRADIENT_ALT algorithm, this is the circle "perfectness" measure. The closer it is to 1, the better-shaped the circles the algorithm selects. In most cases 0.9 should be fine. If you want better detection of small circles, you may decrease it to 0.85, 0.8 or even less. But then also try to limit the search range [minRadius, maxRadius] to avoid many false circles.
minRadius: Minimum circle radius.
maxRadius: Maximum circle radius. If <= 0, uses the maximum image dimension. If < 0, HOUGH_GRADIENT returns centers without finding the radius. HOUGH_GRADIENT_ALT always computes circle radii.
See also
fitEllipse, minEnclosingCircle
Examples:
samples/cpp/tutorial_code/ImgTrans/houghcircles.cpp.
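
A sketch of a HOUGH_GRADIENT_ALT call following the parameter recommendations above (gray is assumed to be an existing 8-bit grayscale cv::Mat; the radius limits are arbitrary):

std::vector<cv::Vec3f> circles;
cv::HoughCircles(gray, circles, cv::HOUGH_GRADIENT_ALT,
                 1.5,                // dp, recommended for HOUGH_GRADIENT_ALT
                 gray.rows / 16.0,   // minDist between circle centers
                 300,                // param1: Canny/Scharr high threshold
                 0.9,                // param2: circle "perfectness"
                 10, 100);           // minRadius, maxRadius in pixels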

◆ HoughLines()

void cv::HoughLines ( InputArray  image,
OutputArray  lines,
double  rho,
double  theta,
int  threshold,
double  srn = 0,
double  stn = 0,
double  min_theta = 0,
double  max_theta = CV_PI 
)
Python:
lines=cv.HoughLines(image, rho, theta, threshold[, lines[, srn[, stn[, min_theta[, max_theta]]]]])

#include <opencv2/imgproc.hpp>

Finds lines in a binary image using the standard Hough transform.

The function implements the standard or multi-scale standard Hough transform algorithm for line detection. See http://homepages.inf.ed.ac.uk/rbf/HIPR2/hough.htm for a good explanation of the Hough transform.

Parameters
image: 8-bit, single-channel binary source image. The image may be modified by the function.
lines: Output vector of lines. Each line is represented by a 2- or 3-element vector \((\rho, \theta)\) or \((\rho, \theta, \textrm{votes})\). \(\rho\) is the distance from the coordinate origin \((0,0)\) (top-left corner of the image). \(\theta\) is the line rotation angle in radians ( \(0 \sim \textrm{vertical line}, \pi/2 \sim \textrm{horizontal line}\) ). \(\textrm{votes}\) is the value of the accumulator.
rho: Distance resolution of the accumulator in pixels.
theta: Angle resolution of the accumulator in radians.
threshold: Accumulator threshold parameter. Only those lines are returned that get enough votes ( \(>\texttt{threshold}\) ).
srn: For the multi-scale Hough transform, it is a divisor for the distance resolution rho. The coarse accumulator distance resolution is rho and the accurate accumulator resolution is rho/srn. If both srn=0 and stn=0, the classical Hough transform is used. Otherwise, both these parameters should be positive.
stn: For the multi-scale Hough transform, it is a divisor for the angle resolution theta.
min_theta: For standard and multi-scale Hough transform, minimum angle to check for lines. Must fall between 0 and max_theta.
max_theta: For standard and multi-scale Hough transform, maximum angle to check for lines. Must fall between min_theta and CV_PI.
Examples:
samples/cpp/tutorial_code/ImgTrans/houghlines.cpp.
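
Since the function returns lines in \((\rho, \theta)\) form, they are usually converted to two far-apart points for drawing. A sketch (edges is assumed to be an existing binary edge map, e.g. Canny output, and color a BGR image to draw on; <cmath> and <opencv2/imgproc.hpp> are assumed to be included):

std::vector<cv::Vec2f> lines;
cv::HoughLines(edges, lines, 1, CV_PI / 180, 150);
for (const cv::Vec2f& l : lines)
{
    const float rho = l[0], theta = l[1];
    const double a = std::cos(theta), b = std::sin(theta);
    const cv::Point2d p0(a * rho, b * rho);   // point on the line closest to the origin
    // walk 1000 pixels along the line direction (-sin(theta), cos(theta)) in both ways
    cv::Point pt1(cvRound(p0.x - 1000 * b), cvRound(p0.y + 1000 * a));
    cv::Point pt2(cvRound(p0.x + 1000 * b), cvRound(p0.y - 1000 * a));
    cv::line(color, pt1, pt2, cv::Scalar(0, 0, 255), 2);
}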

◆ HoughLinesP()

void cv::HoughLinesP ( InputArray  image,
OutputArray  lines,
double  rho,
double  theta,
int  threshold,
double  minLineLength = 0,
double  maxLineGap = 0 
)
Python:
lines=cv.HoughLinesP(image, rho, theta, threshold[, lines[, minLineLength[, maxLineGap]]])

#include <opencv2/imgproc.hpp>

Finds line segments in a binary image using the probabilistic Hough transform.

The function implements the probabilistic Hough transform algorithm for line detection, described in [157]

See the line detection example below:

#include <opencv2/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/highgui.hpp>
using namespace cv;
using namespace std;
int main(int argc, char** argv)
{
    Mat src, dst, color_dst;
    if( argc != 2 || (src = imread(argv[1], IMREAD_GRAYSCALE)).empty() )
        return -1;
    Canny( src, dst, 50, 200, 3 );
    cvtColor( dst, color_dst, COLOR_GRAY2BGR );
    vector<Vec4i> lines;
    HoughLinesP( dst, lines, 1, CV_PI/180, 80, 30, 10 );
    for( size_t i = 0; i < lines.size(); i++ )
    {
        line( color_dst, Point(lines[i][0], lines[i][1]),
              Point(lines[i][2], lines[i][3]), Scalar(0,0,255), 3, 8 );
    }
    namedWindow( "Source", WINDOW_AUTOSIZE );
    imshow( "Source", src );
    namedWindow( "Detected Lines", WINDOW_AUTOSIZE );
    imshow( "Detected Lines", color_dst );
    waitKey(0);
    return 0;
}

This is a sample picture the function parameters have been tuned for:

(Figure building.jpg: sample picture the function parameters have been tuned for)

And this is the output of the above program in case of the probabilistic Hough transform:

(Figure houghp.png: output of the probabilistic Hough transform on the sample picture)
Parameters
image: 8-bit, single-channel binary source image. The image may be modified by the function.
lines: Output vector of lines. Each line is represented by a 4-element vector \((x_1, y_1, x_2, y_2)\), where \((x_1,y_1)\) and \((x_2, y_2)\) are the ending points of each detected line segment.
rho: Distance resolution of the accumulator in pixels.
theta: Angle resolution of the accumulator in radians.
threshold: Accumulator threshold parameter. Only those lines are returned that get enough votes ( \(>\texttt{threshold}\) ).
minLineLength: Minimum line length. Line segments shorter than that are rejected.
maxLineGap: Maximum allowed gap between points on the same line to link them.
See also
LineSegmentDetector
Examples:
samples/cpp/tutorial_code/ImgTrans/houghlines.cpp.

◆ HoughLinesPointSet()

void cv::HoughLinesPointSet ( InputArray  _point,
OutputArray  _lines,
int  lines_max,
int  threshold,
double  min_rho,
double  max_rho,
double  rho_step,
double  min_theta,
double  max_theta,
double  theta_step 
)
Python:
_lines=cv.HoughLinesPointSet(_point, lines_max, threshold, min_rho, max_rho, rho_step, min_theta, max_theta, theta_step[, _lines])

#include <opencv2/imgproc.hpp>

Finds lines in a set of points using the standard Hough transform.

The function finds lines in a set of points using a modification of the Hough transform.

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <cstdio>
using namespace cv;
using namespace std;
int main()
{
    Mat lines;
    vector<Vec3d> line3d;
    vector<Point2f> point;
    const static float Points[20][2] = {
        { 0.0f, 369.0f }, { 10.0f, 364.0f }, { 20.0f, 358.0f }, { 30.0f, 352.0f },
        { 40.0f, 346.0f }, { 50.0f, 341.0f }, { 60.0f, 335.0f }, { 70.0f, 329.0f },
        { 80.0f, 323.0f }, { 90.0f, 318.0f }, { 100.0f, 312.0f }, { 110.0f, 306.0f },
        { 120.0f, 300.0f }, { 130.0f, 295.0f }, { 140.0f, 289.0f }, { 150.0f, 284.0f },
        { 160.0f, 277.0f }, { 170.0f, 271.0f }, { 180.0f, 266.0f }, { 190.0f, 260.0f }
    };
    for (int i = 0; i < 20; i++)
    {
        point.push_back(Point2f(Points[i][0], Points[i][1]));
    }
    double rhoMin = 0.0, rhoMax = 360.0, rhoStep = 1;
    double thetaMin = 0.0, thetaMax = CV_PI / 2.0, thetaStep = CV_PI / 180.0;
    HoughLinesPointSet(point, lines, 20, 1,
                       rhoMin, rhoMax, rhoStep,
                       thetaMin, thetaMax, thetaStep);
    lines.copyTo(line3d);
    // print the strongest line as (votes, rho, theta)
    printf("votes:%d, rho:%.7f, theta:%.7f\n", (int)line3d.at(0).val[0], line3d.at(0).val[1], line3d.at(0).val[2]);
    return 0;
}
Parameters
_point: Input vector of points. Each vector must be encoded as a Point vector \((x,y)\). Type must be CV_32FC2 or CV_32SC2.
_lines: Output vector of found lines. Each vector is encoded as a vector<Vec3d> \((votes, rho, theta)\). The larger the value of 'votes', the higher the reliability of the Hough line.
lines_max: Max count of Hough lines.
threshold: Accumulator threshold parameter. Only those lines are returned that get enough votes ( \(>\texttt{threshold}\) ).
min_rho: Minimum distance value of the accumulator in pixels.
max_rho: Maximum distance value of the accumulator in pixels.
rho_step: Distance resolution of the accumulator in pixels.
min_theta: Minimum angle value of the accumulator in radians.
max_theta: Maximum angle value of the accumulator in radians.
theta_step: Angle resolution of the accumulator in radians.

◆ preCornerDetect()

void cv::preCornerDetect ( InputArray  src,
OutputArray  dst,
int  ksize,
int  borderType = BORDER_DEFAULT 
)
Python:
dst=cv.preCornerDetect(src, ksize[, dst[, borderType]])

#include <opencv2/imgproc.hpp>

Calculates a feature map for corner detection.

The function calculates the complex spatial derivative-based function of the source image

\[\texttt{dst} = (D_x \texttt{src} )^2 \cdot D_{yy} \texttt{src} + (D_y \texttt{src} )^2 \cdot D_{xx} \texttt{src} - 2 D_x \texttt{src} \cdot D_y \texttt{src} \cdot D_{xy} \texttt{src}\]

where \(D_x\), \(D_y\) are the first image derivatives, \(D_{xx}\), \(D_{yy}\) are the second image derivatives, and \(D_{xy}\) is the mixed derivative.

The corners can be found as local maximums of the functions, as shown below:

// "image" is assumed to be a source single-channel 8-bit or floating-point image
Mat corners, dilated_corners;
preCornerDetect(image, corners, 3);
// dilation with the default 3x3 rectangular structuring element
dilate(corners, dilated_corners, Mat(), Point(-1,-1), 1);
Mat corner_mask = corners == dilated_corners;
Parameters
src: Source single-channel 8-bit or floating-point image.
dst: Output image that has the type CV_32F and the same size as src.
ksize: Aperture size of the Sobel operator.
borderType: Pixel extrapolation method. See BorderTypes. BORDER_WRAP is not supported.