OpenCV 5.0.0-pre
Open Source Computer Vision
Modules
Color space processing
Histogram Calculation
Structural Analysis and Shape Descriptors
Hough Transform
Feature Detection
Classes
class cv::cuda::CannyEdgeDetector
Base class for Canny Edge Detector.
class cv::cuda::TemplateMatching
Base class for Template Matching.
Enumerations
enum cv::cuda::ConnectedComponentsAlgorithmsTypes { cv::cuda::CCL_DEFAULT = -1, cv::cuda::CCL_BKE = 0 }
Connected Components Algorithm.
Functions
void cv::cuda::bilateralFilter (InputArray src, OutputArray dst, int kernel_size, float sigma_color, float sigma_spatial, int borderMode=BORDER_DEFAULT, Stream &stream=Stream::Null())
Performs bilateral filtering of the passed image.
void cv::cuda::blendLinear (InputArray img1, InputArray img2, InputArray weights1, InputArray weights2, OutputArray result, Stream &stream=Stream::Null())
Performs linear blending of two images.
void cv::cuda::connectedComponents (InputArray image, OutputArray labels, int connectivity, int ltype, cv::cuda::ConnectedComponentsAlgorithmsTypes ccltype)
Computes the Connected Components Labeled image of a binary image.
void cv::cuda::connectedComponents (InputArray image, OutputArray labels, int connectivity=8, int ltype=CV_32S)
Ptr<CannyEdgeDetector> cv::cuda::createCannyEdgeDetector (double low_thresh, double high_thresh, int apperture_size=3, bool L2gradient=false)
Creates implementation for cuda::CannyEdgeDetector.
Ptr<TemplateMatching> cv::cuda::createTemplateMatching (int srcType, int method, Size user_block_size=Size())
Creates implementation for cuda::TemplateMatching.
void cv::cuda::meanShiftFiltering (InputArray src, OutputArray dst, int sp, int sr, TermCriteria criteria=TermCriteria(TermCriteria::MAX_ITER+TermCriteria::EPS, 5, 1), Stream &stream=Stream::Null())
Performs mean-shift filtering for each point of the source image.
void cv::cuda::meanShiftProc (InputArray src, OutputArray dstr, OutputArray dstsp, int sp, int sr, TermCriteria criteria=TermCriteria(TermCriteria::MAX_ITER+TermCriteria::EPS, 5, 1), Stream &stream=Stream::Null())
Performs a mean-shift procedure and stores information about processed points (their colors and positions) in two images.
void cv::cuda::meanShiftSegmentation (InputArray src, OutputArray dst, int sp, int sr, int minsize, TermCriteria criteria=TermCriteria(TermCriteria::MAX_ITER+TermCriteria::EPS, 5, 1), Stream &stream=Stream::Null())
Performs a mean-shift segmentation of the source image and eliminates small segments.
#include <opencv2/cudaimgproc.hpp>
Connected Components Algorithm.
Enumerator | Description
---|---
CCL_DEFAULT | BKE [11] algorithm for 8-way connectivity.
CCL_BKE | BKE [11] algorithm for 8-way connectivity.
void cv::cuda::bilateralFilter (InputArray src, OutputArray dst, int kernel_size, float sigma_color, float sigma_spatial, int borderMode=BORDER_DEFAULT, Stream &stream=Stream::Null())
#include <opencv2/cudaimgproc.hpp>
Performs bilateral filtering of the passed image.
src | Source image. Supports only images for which channels != 2 && depth() != CV_8S && depth() != CV_32S && depth() != CV_64F.
dst | Destination image.
kernel_size | Kernel window size.
sigma_color | Filter sigma in the color space.
sigma_spatial | Filter sigma in the coordinate space.
borderMode | Border type. See borderInterpolate for details. BORDER_REFLECT101, BORDER_REPLICATE, BORDER_CONSTANT, BORDER_REFLECT and BORDER_WRAP are supported for now.
stream | Stream for the asynchronous version.
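For reference, a minimal usage sketch (the file names and the 9/75/75 filter parameters are illustrative choices, not API defaults): upload the image to device memory, filter, and download the result.

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/core/cuda.hpp>
#include <opencv2/cudaimgproc.hpp>

int main()
{
    // "input.png" is a placeholder path; any supported 8-bit 1-, 3- or 4-channel image works.
    cv::Mat host = cv::imread("input.png");
    cv::cuda::GpuMat d_src, d_dst;
    d_src.upload(host);

    // 9-pixel kernel window, sigma 75 in the color space, sigma 75 in the coordinate space.
    cv::cuda::bilateralFilter(d_src, d_dst, 9, 75.0f, 75.0f);

    cv::Mat result;
    d_dst.download(result);
    cv::imwrite("filtered.png", result);
    return 0;
}
```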
void cv::cuda::blendLinear (InputArray img1, InputArray img2, InputArray weights1, InputArray weights2, OutputArray result, Stream &stream=Stream::Null())
#include <opencv2/cudaimgproc.hpp>
Performs linear blending of two images.
img1 | First image. Supports only CV_8U and CV_32F depth.
img2 | Second image. Must have the same size and the same type as img1.
weights1 | Weights for the first image. Must have the same size as img1. Supports only CV_32F type.
weights2 | Weights for the second image. Must have the same size as img2. Supports only CV_32F type.
result | Destination image.
stream | Stream for the asynchronous version.
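A small sketch of a 50/50 blend, assuming two same-sized CV_8UC3 inputs and constant CV_32F weight maps (the sizes, colors and weights are illustrative):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/core/cuda.hpp>
#include <opencv2/cudaimgproc.hpp>

int main()
{
    // Two synthetic same-sized CV_8UC3 images.
    cv::Mat a(480, 640, CV_8UC3, cv::Scalar(255, 0, 0));
    cv::Mat b(480, 640, CV_8UC3, cv::Scalar(0, 0, 255));

    // Per-pixel CV_32F weight maps; here a constant 50/50 blend.
    cv::Mat w1(a.size(), CV_32FC1, cv::Scalar::all(0.5));
    cv::Mat w2(b.size(), CV_32FC1, cv::Scalar::all(0.5));

    cv::cuda::GpuMat d_a(a), d_b(b), d_w1(w1), d_w2(w2), d_result;
    cv::cuda::blendLinear(d_a, d_b, d_w1, d_w2, d_result);

    cv::Mat result;
    d_result.download(result);  // same size and type as the inputs
    return 0;
}
```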
void cv::cuda::connectedComponents (InputArray image, OutputArray labels, int connectivity, int ltype, cv::cuda::ConnectedComponentsAlgorithmsTypes ccltype)
#include <opencv2/cudaimgproc.hpp>
Computes the Connected Components Labeled image of a binary image.
The function takes a binary image as input and performs Connected Components Labeling. The output is an image in which each Connected Component is assigned a unique integer label. ltype specifies the output label image type, an important consideration based on the total number of labels or, alternatively, the total number of pixels in the source image. ccltype specifies the connected components labeling algorithm to use; currently only BKE [11] is supported (see ConnectedComponentsAlgorithmsTypes for details). Note that labels in the output are not required to be sequential.
image | The 8-bit single-channel image to be labeled.
labels | Destination labeled image.
connectivity | Connectivity to use for the labeling procedure. 8 for 8-way connectivity is supported.
ltype | Output image label type. Currently CV_32S is supported.
ccltype | Connected components algorithm type (see ConnectedComponentsAlgorithmsTypes).
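A usage sketch that binarizes a grayscale image on the host and labels it on the device (the file name and threshold value are placeholders):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/core/cuda.hpp>
#include <opencv2/cudaimgproc.hpp>

int main()
{
    cv::Mat gray = cv::imread("blobs.png", cv::IMREAD_GRAYSCALE);
    cv::Mat bin;
    cv::threshold(gray, bin, 128, 255, cv::THRESH_BINARY);  // 8-bit single-channel binary input

    cv::cuda::GpuMat d_bin(bin), d_labels;
    // Explicitly request the BKE algorithm with 8-way connectivity and CV_32S labels.
    cv::cuda::connectedComponents(d_bin, d_labels, 8, CV_32S, cv::cuda::CCL_BKE);

    cv::Mat labels;
    d_labels.download(labels);  // one int32 label per pixel; labels need not be sequential
    return 0;
}
```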
void cv::cuda::connectedComponents (InputArray image, OutputArray labels, int connectivity=8, int ltype=CV_32S)
#include <opencv2/cudaimgproc.hpp>
This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.
image | The 8-bit single-channel image to be labeled.
labels | Destination labeled image.
connectivity | Connectivity to use for the labeling procedure. 8 for 8-way connectivity is supported.
ltype | Output image label type. Currently CV_32S is supported.
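The same labeling via the convenience overload and its default arguments, sketched on a synthetic binary image (the geometry is arbitrary):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/core/cuda.hpp>
#include <opencv2/cudaimgproc.hpp>

int main()
{
    // Synthetic CV_8UC1 binary image with a single filled blob.
    cv::Mat bin(240, 320, CV_8UC1, cv::Scalar(0));
    cv::rectangle(bin, cv::Point(40, 40), cv::Point(100, 100), cv::Scalar(255), cv::FILLED);

    cv::cuda::GpuMat d_bin(bin), d_labels;
    cv::cuda::connectedComponents(d_bin, d_labels);  // defaults: connectivity=8, ltype=CV_32S

    cv::Mat labels;
    d_labels.download(labels);
    return 0;
}
```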
Ptr<CannyEdgeDetector> cv::cuda::createCannyEdgeDetector (double low_thresh, double high_thresh, int apperture_size=3, bool L2gradient=false)
#include <opencv2/cudaimgproc.hpp>
Creates implementation for cuda::CannyEdgeDetector.
low_thresh | First threshold for the hysteresis procedure.
high_thresh | Second threshold for the hysteresis procedure.
apperture_size | Aperture size for the Sobel operator.
L2gradient | Flag indicating whether a more accurate \(L_2\) norm \(=\sqrt{(dI/dx)^2 + (dI/dy)^2}\) should be used to compute the image gradient magnitude (L2gradient=true), or a faster default \(L_1\) norm \(=|dI/dx|+|dI/dy|\) is enough (L2gradient=false).
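A minimal sketch of the detector returned by this factory; the 50/150 thresholds and the file name are arbitrary example values, and detect is the cuda::CannyEdgeDetector method that produces the edge map:

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/core/cuda.hpp>
#include <opencv2/cudaimgproc.hpp>

int main()
{
    cv::Mat gray = cv::imread("input.png", cv::IMREAD_GRAYSCALE);
    cv::cuda::GpuMat d_gray(gray), d_edges;

    cv::Ptr<cv::cuda::CannyEdgeDetector> canny =
        cv::cuda::createCannyEdgeDetector(50.0, 150.0);  // default aperture 3, L1 gradient
    canny->detect(d_gray, d_edges);                      // CV_8UC1 edge map

    cv::Mat edges;
    d_edges.download(edges);
    return 0;
}
```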
Ptr<TemplateMatching> cv::cuda::createTemplateMatching (int srcType, int method, Size user_block_size=Size())
#include <opencv2/cudaimgproc.hpp>
Creates implementation for cuda::TemplateMatching.
srcType | Input source type. CV_32F and CV_8U depth images (1..4 channels) are supported for now.
method | Specifies the way to compare the template with the image.
user_block_size | You can use the user_block_size field to set a specific block size. If you leave its default value Size(0,0), an automatic estimation of the block size will be used (which is optimized for speed). By varying user_block_size you can reduce memory requirements at the cost of speed.
The following methods are supported for the CV_8U depth images for now:
TM_SQDIFF
TM_SQDIFF_NORMED
TM_CCORR
TM_CCORR_NORMED
TM_CCOEFF
TM_CCOEFF_NORMED
The following methods are supported for the CV_32F images for now:
TM_SQDIFF
TM_CCORR
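A usage sketch with TM_CCORR_NORMED on 8-bit grayscale images; it also uses cuda::minMaxLoc from the cudaarithm module to locate the best match (the file names are placeholders):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/core/cuda.hpp>
#include <opencv2/cudaarithm.hpp>
#include <opencv2/cudaimgproc.hpp>

int main()
{
    // "scene.png" is the search image, "patch.png" the template; both CV_8UC1 here.
    cv::Mat image = cv::imread("scene.png", cv::IMREAD_GRAYSCALE);
    cv::Mat templ = cv::imread("patch.png", cv::IMREAD_GRAYSCALE);

    cv::cuda::GpuMat d_image(image), d_templ(templ), d_response;

    cv::Ptr<cv::cuda::TemplateMatching> matcher =
        cv::cuda::createTemplateMatching(CV_8U, cv::TM_CCORR_NORMED);
    matcher->match(d_image, d_templ, d_response);  // CV_32FC1 response map

    // For a normalized correlation method the best match is at the response maximum.
    double minVal = 0.0, maxVal = 0.0;
    cv::Point minLoc, maxLoc;
    cv::cuda::minMaxLoc(d_response, &minVal, &maxVal, &minLoc, &maxLoc);
    return 0;
}
```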
void cv::cuda::meanShiftFiltering (InputArray src, OutputArray dst, int sp, int sr, TermCriteria criteria=TermCriteria(TermCriteria::MAX_ITER+TermCriteria::EPS, 5, 1), Stream &stream=Stream::Null())
#include <opencv2/cudaimgproc.hpp>
Performs mean-shift filtering for each point of the source image.
src | Source image. Only CV_8UC4 images are supported for now.
dst | Destination image containing the color of mapped points. It has the same size and type as src.
sp | Spatial window radius.
sr | Color window radius.
criteria | Termination criteria. See TermCriteria.
stream | Stream for the asynchronous version.
It maps each point of the source image into another point. As a result, you have a new color and new position of each point.
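A minimal sketch: since only CV_8UC4 input is supported, the loaded BGR image is converted to BGRA first (the path and the 20/20 radii are illustrative values):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/core/cuda.hpp>
#include <opencv2/cudaimgproc.hpp>

int main()
{
    cv::Mat bgr = cv::imread("input.png");         // placeholder path
    cv::Mat bgra;
    cv::cvtColor(bgr, bgra, cv::COLOR_BGR2BGRA);   // CV_8UC4 as required

    cv::cuda::GpuMat d_src(bgra), d_dst;
    cv::cuda::meanShiftFiltering(d_src, d_dst, 20, 20);  // spatial radius 20, color radius 20

    cv::Mat filtered;
    d_dst.download(filtered);  // same size and type as the input
    return 0;
}
```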
void cv::cuda::meanShiftProc (InputArray src, OutputArray dstr, OutputArray dstsp, int sp, int sr, TermCriteria criteria=TermCriteria(TermCriteria::MAX_ITER+TermCriteria::EPS, 5, 1), Stream &stream=Stream::Null())
#include <opencv2/cudaimgproc.hpp>
Performs a mean-shift procedure and stores information about processed points (their colors and positions) in two images.
src | Source image. Only CV_8UC4 images are supported for now.
dstr | Destination image containing the color of mapped points. The size and type is the same as src.
dstsp | Destination image containing the position of mapped points. The size is the same as src size. The type is CV_16SC2.
sp | Spatial window radius.
sr | Color window radius.
criteria | Termination criteria. See TermCriteria.
stream | Stream for the asynchronous version.
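A sketch along the same lines as meanShiftFiltering, additionally retrieving the CV_16SC2 position map (the path and radii are illustrative values):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/core/cuda.hpp>
#include <opencv2/cudaimgproc.hpp>

int main()
{
    cv::Mat bgr = cv::imread("input.png");         // placeholder path
    cv::Mat bgra;
    cv::cvtColor(bgr, bgra, cv::COLOR_BGR2BGRA);   // CV_8UC4 as required

    cv::cuda::GpuMat d_src(bgra), d_colors, d_positions;
    cv::cuda::meanShiftProc(d_src, d_colors, d_positions, 20, 20);

    cv::Mat colors, positions;
    d_colors.download(colors);        // CV_8UC4 colors of the mapped points
    d_positions.download(positions);  // CV_16SC2 final (x, y) of each point
    return 0;
}
```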
void cv::cuda::meanShiftSegmentation (InputArray src, OutputArray dst, int sp, int sr, int minsize, TermCriteria criteria=TermCriteria(TermCriteria::MAX_ITER+TermCriteria::EPS, 5, 1), Stream &stream=Stream::Null())
#include <opencv2/cudaimgproc.hpp>
Performs a mean-shift segmentation of the source image and eliminates small segments.
src | Source image. Only CV_8UC4 images are supported for now.
dst | Segmented image with the same size and type as src (host or gpu memory).
sp | Spatial window radius.
sr | Color window radius.
minsize | Minimum segment size. Smaller segments are merged.
criteria | Termination criteria. See TermCriteria.
stream | Stream for the asynchronous version.
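A sketch that writes the segmentation directly to host memory, which the dst parameter allows (the path, radii and minsize are illustrative values):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/core/cuda.hpp>
#include <opencv2/cudaimgproc.hpp>

int main()
{
    cv::Mat bgr = cv::imread("input.png");         // placeholder path
    cv::Mat bgra;
    cv::cvtColor(bgr, bgra, cv::COLOR_BGR2BGRA);   // CV_8UC4 as required

    cv::cuda::GpuMat d_src(bgra);
    cv::Mat segmented;                             // dst may live in host memory
    // Spatial/color radii of 20, merge segments smaller than 50 pixels.
    cv::cuda::meanShiftSegmentation(d_src, segmented, 20, 20, 50);

    cv::imwrite("segmented.png", segmented);
    return 0;
}
```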