OpenCV
4.3.0-pre
Open Source Computer Vision

Classes  
class  cv::line_descriptor::BinaryDescriptor 
Implements both the detection of lines and the computation of their binary descriptors. More...  
class  cv::line_descriptor::BinaryDescriptorMatcher 
Furnishes all functionalities for querying a dataset, either provided by the user or internal to the class (which the user must, in any case, populate), on the model of descriptor matchers. More...  
struct  cv::line_descriptor::DrawLinesMatchesFlags 
struct  cv::line_descriptor::KeyLine 
A class to represent a line. More...  
class  cv::line_descriptor::LSDDetector 
struct  cv::line_descriptor::LSDParam 
Functions  
void  cv::line_descriptor::drawKeylines (const Mat &image, const std::vector< KeyLine > &keylines, Mat &outImage, const Scalar &color=Scalar::all(-1), int flags=DrawLinesMatchesFlags::DEFAULT) 
Draws keylines. More...  
void  cv::line_descriptor::drawLineMatches (const Mat &img1, const std::vector< KeyLine > &keylines1, const Mat &img2, const std::vector< KeyLine > &keylines2, const std::vector< DMatch > &matches1to2, Mat &outImg, const Scalar &matchColor=Scalar::all(-1), const Scalar &singleLineColor=Scalar::all(-1), const std::vector< char > &matchesMask=std::vector< char >(), int flags=DrawLinesMatchesFlags::DEFAULT) 
Draws the found matches of keylines from two images. More...  
One of the most challenging activities in computer vision is the extraction of useful information from a given image. Such information usually comes in the form of points that preserve some kind of property (for instance, they are scale-invariant) and are actually representative of the input image.
The goal of this module is to seek a new kind of representative information inside an image and to provide the functionalities for its extraction and representation. In particular, unlike previous methods for detecting relevant elements inside an image, lines are extracted in place of points; a new class is defined ad hoc to summarize a line's properties, for reuse and plotting purposes.
To obtain a binary descriptor representing a line detected in a certain octave of an image, we first compute a non-binary descriptor as described in [269]. This algorithm works on lines extracted using the EDLine detector, as explained in [249]. Given a line, we consider a rectangular region centered at it, called the line support region (LSR). This region is divided into a set of bands \(\{B_1, B_2, ..., B_m\}\), whose length equals that of the line.
If we denote by \(\bf{d}_L\) the direction of the line, the direction \(\bf{d}_{\perp}\) orthogonal and clockwise to it can be determined; these two directions are used to construct a reference frame centered at the midpoint of the line. The gradient \(\bf{g}\) of each pixel inside the LSR can be projected onto the newly determined frame, obtaining its local equivalent \(\bf{g'} = (\bf{g}^T \cdot \bf{d}_{\perp}, \bf{g}^T \cdot \bf{d}_L)^T \triangleq (\bf{g'}_{d_{\perp}}, \bf{g'}_{d_L})^T\).
Later on, a Gaussian function is applied to all the LSR's pixels along the \(\bf{d}_\perp\) direction: first, we assign a global weighting coefficient \(f_g(i) = (1/\sqrt{2\pi}\sigma_g)e^{-d^2_i/2\sigma^2_g}\) to the i-th row in the LSR, where \(d_i\) is the distance of the i-th row from the center row of the LSR, \(\sigma_g = 0.5(m \cdot w - 1)\) and \(w\) is the width of the bands (the same for every band). Secondly, considering a band \(B_j\) and its neighboring bands \(B_{j-1}, B_{j+1}\), we assign a local weighting \(f_l(k) = (1/\sqrt{2\pi}\sigma_l)e^{-d'^2_k/2\sigma_l^2}\), where \(d'_k\) is the distance of the k-th row from the center row of \(B_j\) and \(\sigma_l = w\). Using the global and local weights we simultaneously reduce the influence of gradients far from the line and the boundary effect between bands.
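The two weighting profiles can be sketched numerically; the band count `m`, band width `w`, and band index below are illustrative values, not defaults taken from the library:

```python
import numpy as np

m, w = 9, 7                      # number of bands and band width (illustrative)
rows = np.arange(m * w)          # row indices across the LSR
center = (m * w - 1) / 2.0       # center row of the LSR

# global weight: Gaussian in the distance d_i from the LSR's center row
sigma_g = 0.5 * (m * w - 1)
d = rows - center
f_g = (1.0 / (np.sqrt(2 * np.pi) * sigma_g)) * np.exp(-d**2 / (2 * sigma_g**2))

# local weight for band B_j: Gaussian in the distance d'_k from B_j's center
# row, over the rows of B_{j-1}, B_j and B_{j+1}
sigma_l = w
j = 4                            # band index (0-based here, illustrative)
band_rows = np.arange((j - 1) * w, (j + 2) * w)
band_center = j * w + (w - 1) / 2.0
d_local = band_rows - band_center
f_l = (1.0 / (np.sqrt(2 * np.pi) * sigma_l)) * np.exp(-d_local**2 / (2 * sigma_l**2))
```

Both profiles peak at their respective center rows and decay symmetrically, which is what damps far-away gradients (global weight) and softens the transition between adjacent bands (local weight).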
Each band \(B_j\) in the LSR has an associated band descriptor (BD), which is computed considering the previous and next bands (the top and bottom neighbors are ignored when computing the descriptor for the first and last band). Once each band has been assigned its BD, the LBD descriptor of the line is simply given by
\[LBD = (BD_1^T, BD_2^T, ... , BD^T_m)^T.\]
To compute the descriptor of a band \(B_j\), each k-th row in it is considered and the gradients in that row are accumulated:
\[\begin{matrix} \bf{V1}^k_j = \lambda \sum\limits_{\bf{g}'_{d_\perp}>0}\bf{g}'_{d_\perp}, & \bf{V2}^k_j = \lambda \sum\limits_{\bf{g}'_{d_\perp}<0} \bf{g}'_{d_\perp}, \\ \bf{V3}^k_j = \lambda \sum\limits_{\bf{g}'_{d_L}>0}\bf{g}'_{d_L}, & \bf{V4}^k_j = \lambda \sum\limits_{\bf{g}'_{d_L}<0} \bf{g}'_{d_L}\end{matrix}.\]
with \(\lambda = f_g(k)f_l(k)\).
By stacking previous results, we obtain the band description matrix (BDM)
\[BDM_j = \left(\begin{matrix} \bf{V1}_j^1 & \bf{V1}_j^2 & \ldots & \bf{V1}_j^n \\ \bf{V2}_j^1 & \bf{V2}_j^2 & \ldots & \bf{V2}_j^n \\ \bf{V3}_j^1 & \bf{V3}_j^2 & \ldots & \bf{V3}_j^n \\ \bf{V4}_j^1 & \bf{V4}_j^2 & \ldots & \bf{V4}_j^n \end{matrix} \right) \in \mathbb{R}^{4\times n},\]
with \(n\) the number of rows in band \(B_j\):
\[n = \begin{cases} 2w, & j = 1 \mbox{ or } m; \\ 3w, & \mbox{else}. \end{cases}\]
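The accumulation into \(BDM_j\) can be sketched with NumPy; random projected gradients stand in for a real image, the per-row weights are placeholders, and all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
w = 7                                 # band width (illustrative)
n = 3 * w                             # rows contributing to an interior band
cols = 40                             # pixels per row along d_L (illustrative)

# projected gradients (g'_{d_perp}, g'_{d_L}) for each pixel of the rows
g_perp = rng.normal(size=(n, cols))
g_dL = rng.normal(size=(n, cols))

# per-row weight lambda = f_g(k) * f_l(k); uniform placeholder here
lam = np.ones(n)

# accumulate the signed sums per row, following the four formulas above
V1 = lam * np.where(g_perp > 0, g_perp, 0).sum(axis=1)
V2 = lam * np.where(g_perp < 0, g_perp, 0).sum(axis=1)
V3 = lam * np.where(g_dL > 0, g_dL, 0).sum(axis=1)
V4 = lam * np.where(g_dL < 0, g_dL, 0).sum(axis=1)

BDM_j = np.vstack([V1, V2, V3, V4])   # shape (4, n)
```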
Each \(BD_j\) can be obtained from the standard deviation vector \(S_j\) and the mean vector \(M_j\) of \(BDM_j\). Thus, finally:
\[LBD = (M_1^T, S_1^T, M_2^T, S_2^T, \ldots, M_m^T, S_m^T)^T \in \mathbb{R}^{8m}\]
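A minimal sketch of assembling the final LBD from per-band mean and standard deviation vectors; the band description matrices here are random placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 9, 21                          # bands and rows per band (illustrative)

# one band description matrix per band, shape (4, n) each
BDMs = [rng.normal(size=(4, n)) for _ in range(m)]

# BD_j = (M_j^T, S_j^T): mean and standard deviation of each BDM_j row,
# stacked over all bands into the 8m-dimensional LBD
LBD = np.concatenate([
    np.concatenate([BDM.mean(axis=1), BDM.std(axis=1)])
    for BDM in BDMs
])
```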
Once the LBD has been obtained, it must be converted into a binary form. For this purpose, we consider 32 possible pairs of BDs inside it; each pair of BDs is compared component by component, and each comparison generates an 8-bit string. Concatenating the 32 comparison strings, we get the 256-bit final binary representation of a single LBD.
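The binarization step can be sketched as pairwise comparisons; the particular choice of the 32 BD pairs is internal to the implementation, so an arbitrary selection is used here:

```python
import numpy as np

rng = np.random.default_rng(2)
m = 9
BDs = rng.normal(size=(m, 8))         # one 8-dimensional BD per band (placeholder)

# pick 32 distinct (a, b) pairs of bands; the real selection is
# implementation-defined, this is just an arbitrary stand-in
pairs = [(a, b) for a in range(m) for b in range(m) if a != b][:32]

# each pair yields 8 bits (component-wise comparison); 32 pairs -> 256 bits
bits = np.concatenate([(BDs[a] > BDs[b]).astype(np.uint8) for a, b in pairs])
descriptor = np.packbits(bits)        # 32 bytes = 256 bits
```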
void cv::line_descriptor::drawKeylines ( const Mat &  image,
const std::vector< KeyLine > &  keylines,
Mat &  outImage,
const Scalar &  color = Scalar::all(-1),
int  flags = DrawLinesMatchesFlags::DEFAULT
) 
Python:  
outImage  =  cv.line_descriptor.drawKeylines(  image, keylines[, outImage[, color[, flags]]]  ) 
#include <opencv2/line_descriptor/descriptor.hpp>
Draws keylines.
image  input image 
keylines  keylines to be drawn 
outImage  output image to draw on 
color  color of lines to be drawn (if set to default value, color is chosen randomly) 
flags  drawing flags 
void cv::line_descriptor::drawLineMatches ( const Mat &  img1,
const std::vector< KeyLine > &  keylines1,
const Mat &  img2,
const std::vector< KeyLine > &  keylines2,
const std::vector< DMatch > &  matches1to2,
Mat &  outImg,
const Scalar &  matchColor = Scalar::all(-1),
const Scalar &  singleLineColor = Scalar::all(-1),
const std::vector< char > &  matchesMask = std::vector< char >(),
int  flags = DrawLinesMatchesFlags::DEFAULT
) 
Python:  
outImg  =  cv.line_descriptor.drawLineMatches(  img1, keylines1, img2, keylines2, matches1to2[, outImg[, matchColor[, singleLineColor[, matchesMask[, flags]]]]]  ) 
#include <opencv2/line_descriptor/descriptor.hpp>
Draws the found matches of keylines from two images.
img1  first image 
keylines1  keylines extracted from first image 
img2  second image 
keylines2  keylines extracted from second image 
matches1to2  vector of matches 
outImg  output matrix to draw on 
matchColor  drawing color for matches (chosen randomly in case of default value) 
singleLineColor  drawing color for keylines (chosen randomly in case of default value) 
matchesMask  mask to indicate which matches must be drawn 
flags  drawing flags, see DrawLinesMatchesFlags 