OpenCV  4.7.0-dev
Open Source Computer Vision

Modules

 Color space processing
 
 Histogram Calculation
 
 Hough Transform
 
 Feature Detection
 

Classes

class  cv::cuda::CannyEdgeDetector
 Base class for Canny Edge Detector. More...
 
class  cv::cuda::TemplateMatching
 Base class for Template Matching. More...
 

Enumerations

enum  cv::cuda::ConnectedComponentsAlgorithmsTypes {
  cv::cuda::CCL_DEFAULT = -1,
  cv::cuda::CCL_BKE = 0
}
 Connected Components Algorithm. More...
 

Functions

void cv::cuda::bilateralFilter (InputArray src, OutputArray dst, int kernel_size, float sigma_color, float sigma_spatial, int borderMode=BORDER_DEFAULT, Stream &stream=Stream::Null())
 Performs bilateral filtering of passed image. More...
 
void cv::cuda::blendLinear (InputArray img1, InputArray img2, InputArray weights1, InputArray weights2, OutputArray result, Stream &stream=Stream::Null())
 Performs linear blending of two images. More...
 
void cv::cuda::connectedComponents (InputArray image, OutputArray labels, int connectivity, int ltype, cv::cuda::ConnectedComponentsAlgorithmsTypes ccltype)
 Computes the Connected Components Labeled image of a binary image. More...
 
void cv::cuda::connectedComponents (InputArray image, OutputArray labels, int connectivity=8, int ltype=CV_32S)
 
Ptr< CannyEdgeDetector > cv::cuda::createCannyEdgeDetector (double low_thresh, double high_thresh, int apperture_size=3, bool L2gradient=false)
 Creates implementation for cuda::CannyEdgeDetector . More...
 
Ptr< TemplateMatching > cv::cuda::createTemplateMatching (int srcType, int method, Size user_block_size=Size())
 Creates implementation for cuda::TemplateMatching . More...
 
void cv::cuda::meanShiftFiltering (InputArray src, OutputArray dst, int sp, int sr, TermCriteria criteria=TermCriteria(TermCriteria::MAX_ITER+TermCriteria::EPS, 5, 1), Stream &stream=Stream::Null())
 Performs mean-shift filtering for each point of the source image. More...
 
void cv::cuda::meanShiftProc (InputArray src, OutputArray dstr, OutputArray dstsp, int sp, int sr, TermCriteria criteria=TermCriteria(TermCriteria::MAX_ITER+TermCriteria::EPS, 5, 1), Stream &stream=Stream::Null())
 Performs a mean-shift procedure and stores information about processed points (their colors and positions) in two images. More...
 
void cv::cuda::meanShiftSegmentation (InputArray src, OutputArray dst, int sp, int sr, int minsize, TermCriteria criteria=TermCriteria(TermCriteria::MAX_ITER+TermCriteria::EPS, 5, 1), Stream &stream=Stream::Null())
 Performs a mean-shift segmentation of the source image and eliminates small segments. More...
 

Detailed Description

Enumeration Type Documentation

◆ ConnectedComponentsAlgorithmsTypes

#include <opencv2/cudaimgproc.hpp>

Connected Components Algorithm.

Enumerator
CCL_DEFAULT 

BKE [11] algorithm for 8-way connectivity.

CCL_BKE 

BKE [11] algorithm for 8-way connectivity.

Function Documentation

◆ bilateralFilter()

void cv::cuda::bilateralFilter ( InputArray  src,
OutputArray  dst,
int  kernel_size,
float  sigma_color,
float  sigma_spatial,
int  borderMode = BORDER_DEFAULT,
Stream &  stream = Stream::Null() 
)

#include <opencv2/cudaimgproc.hpp>

Performs bilateral filtering of passed image.

Parameters
src    Source image. Supports only (channels != 2 && depth() != CV_8S && depth() != CV_32S && depth() != CV_64F).
dst    Destination image.
kernel_size    Kernel window size.
sigma_color    Filter sigma in the color space.
sigma_spatial    Filter sigma in the coordinate space.
borderMode    Border type. See borderInterpolate for details. BORDER_REFLECT101, BORDER_REPLICATE, BORDER_CONSTANT, BORDER_REFLECT and BORDER_WRAP are supported for now.
stream    Stream for the asynchronous version.
See also
bilateralFilter
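
A minimal usage sketch, assuming a hypothetical input file name; the kernel size and sigma values are illustrative only:

    #include <opencv2/core/cuda.hpp>
    #include <opencv2/cudaimgproc.hpp>
    #include <opencv2/imgcodecs.hpp>

    int main()
    {
        cv::Mat host = cv::imread("input.png");        // hypothetical input file
        cv::cuda::GpuMat d_src(host), d_dst;           // upload to the device
        // 9-pixel kernel window; sigma values chosen only for illustration
        cv::cuda::bilateralFilter(d_src, d_dst, 9, 50.0f, 7.0f);
        cv::Mat result;
        d_dst.download(result);                        // copy the filtered image back to host memory
        return 0;
    }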

◆ blendLinear()

void cv::cuda::blendLinear ( InputArray  img1,
InputArray  img2,
InputArray  weights1,
InputArray  weights2,
OutputArray  result,
Stream &  stream = Stream::Null() 
)

#include <opencv2/cudaimgproc.hpp>

Performs linear blending of two images.

Parameters
img1    First image. Supports only CV_8U and CV_32F depth.
img2    Second image. Must have the same size and the same type as img1.
weights1    Weights for the first image. Must have the same size as img1. Supports only CV_32F type.
weights2    Weights for the second image. Must have the same size as img2. Supports only CV_32F type.
result    Destination image.
stream    Stream for the asynchronous version.
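
A short sketch, assuming d_img1 and d_img2 already hold two same-size, same-type images on the device; the 0.3/0.7 weights are arbitrary:

    // requires <opencv2/cudaimgproc.hpp>
    cv::cuda::GpuMat w1(d_img1.size(), CV_32FC1, cv::Scalar(0.3f));  // weight map for the first image
    cv::cuda::GpuMat w2(d_img1.size(), CV_32FC1, cv::Scalar(0.7f));  // weight map for the second image
    cv::cuda::GpuMat d_blended;
    cv::cuda::blendLinear(d_img1, d_img2, w1, w2, d_blended);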

◆ connectedComponents() [1/2]

void cv::cuda::connectedComponents ( InputArray  image,
OutputArray  labels,
int  connectivity,
int  ltype,
cv::cuda::ConnectedComponentsAlgorithmsTypes  ccltype 
)

#include <opencv2/cudaimgproc.hpp>

Computes the Connected Components Labeled image of a binary image.

The function takes as input a binary image and performs Connected Components Labeling. The output is an image where each Connected Component is assigned a unique label (integer value). ltype specifies the output label image type, an important consideration based on the total number of labels or alternatively the total number of pixels in the source image. ccltype specifies the connected components labeling algorithm to use, currently BKE [11] is supported, see the ConnectedComponentsAlgorithmsTypes for details. Note that labels in the output are not required to be sequential.

Parameters
image    The 8-bit single-channel image to be labeled.
labels    Destination labeled image.
connectivity    Connectivity to use for the labeling procedure. 8 for 8-way connectivity is supported.
ltype    Output image label type. Currently CV_32S is supported.
ccltype    Connected components algorithm type (see the ConnectedComponentsAlgorithmsTypes).
Note
A sample program demonstrating Connected Components Labeling in CUDA can be found at
opencv_contrib_source_code/modules/cudaimgproc/samples/connected_components.cpp
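
A minimal sketch of the explicit form:

    // requires <opencv2/cudaimgproc.hpp>
    cv::cuda::GpuMat d_binary;   // assumed to hold an 8-bit single-channel binary image
    cv::cuda::GpuMat d_labels;
    // 8-way connectivity, CV_32S label type, BKE labeling algorithm
    cv::cuda::connectedComponents(d_binary, d_labels, 8, CV_32S, cv::cuda::CCL_BKE);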

◆ connectedComponents() [2/2]

void cv::cuda::connectedComponents ( InputArray  image,
OutputArray  labels,
int  connectivity = 8,
int  ltype = CV_32S 
)

#include <opencv2/cudaimgproc.hpp>

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

Parameters
image    The 8-bit single-channel image to be labeled.
labels    Destination labeled image.
connectivity    Connectivity to use for the labeling procedure. 8 for 8-way connectivity is supported.
ltype    Output image label type. Currently CV_32S is supported.
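
The same labeling with the convenience overload, relying on the default arguments:

    cv::cuda::connectedComponents(d_binary, d_labels);   // connectivity=8, ltype=CV_32S, default algorithm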

◆ createCannyEdgeDetector()

Ptr<CannyEdgeDetector> cv::cuda::createCannyEdgeDetector ( double  low_thresh,
double  high_thresh,
int  apperture_size = 3,
bool  L2gradient = false 
)

#include <opencv2/cudaimgproc.hpp>

Creates implementation for cuda::CannyEdgeDetector .

Parameters
low_thresh    First threshold for the hysteresis procedure.
high_thresh    Second threshold for the hysteresis procedure.
apperture_size    Aperture size for the Sobel operator.
L2gradient    Flag indicating whether a more accurate \(L_2\) norm \(=\sqrt{(dI/dx)^2 + (dI/dy)^2}\) should be used to compute the image gradient magnitude ( L2gradient=true ), or a faster default \(L_1\) norm \(=|dI/dx|+|dI/dy|\) is enough ( L2gradient=false ).
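
A brief sketch; the hysteresis thresholds are illustrative, and the returned detector can be reused across images:

    // requires <opencv2/cudaimgproc.hpp>
    cv::Ptr<cv::cuda::CannyEdgeDetector> canny =
        cv::cuda::createCannyEdgeDetector(50.0, 100.0);   // low and high thresholds chosen for illustration
    cv::cuda::GpuMat d_gray;                              // assumed 8-bit single-channel image on the device
    cv::cuda::GpuMat d_edges;
    canny->detect(d_gray, d_edges);                       // edge map written to d_edges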

◆ createTemplateMatching()

Ptr<TemplateMatching> cv::cuda::createTemplateMatching ( int  srcType,
int  method,
Size  user_block_size = Size() 
)

#include <opencv2/cudaimgproc.hpp>

Creates implementation for cuda::TemplateMatching .

Parameters
srcType    Input source type. CV_32F and CV_8U depth images (1..4 channels) are supported for now.
method    Specifies the way to compare the template with the image.
user_block_size    You can use the user_block_size field to set a specific block size. If you leave its default value Size(0,0), an automatic estimation of the block size is used (optimized for speed). By varying user_block_size you can reduce memory requirements at the cost of speed.

The following methods are supported for the CV_8U depth images for now:

  • CV_TM_SQDIFF
  • CV_TM_SQDIFF_NORMED
  • CV_TM_CCORR
  • CV_TM_CCORR_NORMED
  • CV_TM_CCOEFF
  • CV_TM_CCOEFF_NORMED

The following methods are supported for the CV_32F images for now:

  • CV_TM_SQDIFF
  • CV_TM_CCORR
See also
matchTemplate
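
A sketch using the equivalent C++ constant cv::TM_CCORR_NORMED; the response map is downloaded and the best match located on the host:

    // requires <opencv2/cudaimgproc.hpp>
    cv::Ptr<cv::cuda::TemplateMatching> matcher =
        cv::cuda::createTemplateMatching(CV_8U, cv::TM_CCORR_NORMED);
    cv::cuda::GpuMat d_image, d_templ, d_result;          // assumed CV_8U images already on the device
    matcher->match(d_image, d_templ, d_result);
    cv::Mat h_result;
    d_result.download(h_result);                          // bring the response map back to host memory
    double minVal, maxVal;
    cv::Point minLoc, maxLoc;
    cv::minMaxLoc(h_result, &minVal, &maxVal, &minLoc, &maxLoc);  // maxLoc is the best match for a normalized correlation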

◆ meanShiftFiltering()

void cv::cuda::meanShiftFiltering ( InputArray  src,
OutputArray  dst,
int  sp,
int  sr,
TermCriteria  criteria = TermCriteria(TermCriteria::MAX_ITER+TermCriteria::EPS, 5, 1),
Stream &  stream = Stream::Null() 
)

#include <opencv2/cudaimgproc.hpp>

Performs mean-shift filtering for each point of the source image.

Parameters
src    Source image. Only CV_8UC4 images are supported for now.
dst    Destination image containing the color of mapped points. It has the same size and type as src.
sp    Spatial window radius.
sr    Color window radius.
criteria    Termination criteria. See TermCriteria.
stream    Stream for the asynchronous version.

It maps each point of the source image into another point. As a result, you have a new color and a new position for each point.
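
Because only CV_8UC4 input is supported, a BGR image needs an extra alpha channel first; a minimal sketch with illustrative window radii:

    // requires <opencv2/cudaimgproc.hpp> and <opencv2/imgcodecs.hpp>
    cv::Mat bgr = cv::imread("input.png");                  // hypothetical input file
    cv::cuda::GpuMat d_bgr(bgr), d_bgra, d_dst;
    cv::cuda::cvtColor(d_bgr, d_bgra, cv::COLOR_BGR2BGRA);  // meanShiftFiltering requires CV_8UC4
    cv::cuda::meanShiftFiltering(d_bgra, d_dst, 20, 30);    // spatial radius 20, color radius 30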

◆ meanShiftProc()

void cv::cuda::meanShiftProc ( InputArray  src,
OutputArray  dstr,
OutputArray  dstsp,
int  sp,
int  sr,
TermCriteria  criteria = TermCriteria(TermCriteria::MAX_ITER+TermCriteria::EPS, 5, 1),
Stream &  stream = Stream::Null() 
)

#include <opencv2/cudaimgproc.hpp>

Performs a mean-shift procedure and stores information about processed points (their colors and positions) in two images.

Parameters
src    Source image. Only CV_8UC4 images are supported for now.
dstr    Destination image containing the color of mapped points. The size and type are the same as src.
dstsp    Destination image containing the position of mapped points. The size is the same as src. The type is CV_16SC2.
sp    Spatial window radius.
sr    Color window radius.
criteria    Termination criteria. See TermCriteria.
stream    Stream for the asynchronous version.
See also
cuda::meanShiftFiltering
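
A short sketch, assuming d_bgra already holds a CV_8UC4 image on the device; the radii are illustrative:

    // requires <opencv2/cudaimgproc.hpp>
    cv::cuda::GpuMat d_colors, d_positions;
    cv::cuda::meanShiftProc(d_bgra, d_colors, d_positions, 20, 30);
    // d_colors receives the converged colors (same size and type as the source),
    // d_positions the converged coordinates as CV_16SC2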

◆ meanShiftSegmentation()

void cv::cuda::meanShiftSegmentation ( InputArray  src,
OutputArray  dst,
int  sp,
int  sr,
int  minsize,
TermCriteria  criteria = TermCriteria(TermCriteria::MAX_ITER+TermCriteria::EPS, 5, 1),
Stream &  stream = Stream::Null() 
)

#include <opencv2/cudaimgproc.hpp>

Performs a mean-shift segmentation of the source image and eliminates small segments.

Parameters
src    Source image. Only CV_8UC4 images are supported for now.
dst    Segmented image with the same size and type as src (host or gpu memory).
sp    Spatial window radius.
sr    Color window radius.
minsize    Minimum segment size. Smaller segments are merged.
criteria    Termination criteria. See TermCriteria.
stream    Stream for the asynchronous version.
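
A minimal sketch, assuming d_bgra holds a CV_8UC4 image on the device; note that the destination may live in host memory:

    // requires <opencv2/cudaimgproc.hpp>
    cv::Mat segmented;                                      // host output is accepted here
    // spatial radius 20, color radius 30; segments smaller than 50 pixels are merged
    cv::cuda::meanShiftSegmentation(d_bgra, segmented, 20, 30, 50);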