OpenCV  5.0.0-pre
Open Source Computer Vision

Detailed Description

Classes

class  cv::cuda::BufferPool
 BufferPool for use with CUDA streams.
 
class  cv::cuda::Event
 
struct  cv::cuda::EventAccessor
 Class that enables getting cudaEvent_t from cuda::Event.
 
struct  cv::cuda::GpuData
 
class  cv::cuda::GpuMat
 Base storage class for GPU memory with reference counting.
 
class  cv::cuda::GpuMatND
 
class  cv::cuda::HostMem
 Class with reference counting wrapping special memory type allocation functions from CUDA.
 
class  cv::cuda::Stream
 This class encapsulates a queue of asynchronous calls.
 
struct  cv::cuda::StreamAccessor
 Class that enables getting cudaStream_t from cuda::Stream.
 

Functions

void cv::cuda::createContinuous (int rows, int cols, int type, OutputArray arr)
 Creates a continuous matrix.
 
GpuMat cv::cuda::createGpuMatFromCudaMemory (int rows, int cols, int type, size_t cudaMemoryAddress, size_t step=Mat::AUTO_STEP)
 Bindings overload to create a GpuMat from existing GPU memory.
 
GpuMat cv::cuda::createGpuMatFromCudaMemory (Size size, int type, size_t cudaMemoryAddress, size_t step=Mat::AUTO_STEP)
 
void cv::cuda::ensureSizeIsEnough (int rows, int cols, int type, OutputArray arr)
 Ensures that the size of a matrix is big enough and the matrix has a proper type.
 
void cv::cuda::registerPageLocked (Mat &m)
 Page-locks the memory of a matrix and maps it for the device(s).
 
void cv::cuda::setBufferPoolConfig (int deviceId, size_t stackSize, int stackCount)
 
void cv::cuda::setBufferPoolUsage (bool on)
 BufferPool management (must be called before Stream creation)
 
void cv::cuda::unregisterPageLocked (Mat &m)
 Unmaps the memory of a matrix and makes it pageable again.
 
Stream cv::cuda::wrapStream (size_t cudaStreamMemoryAddress)
 Bindings overload to create a Stream object from the address stored in an existing CUDA Runtime API stream pointer (cudaStream_t).
 

Function Documentation

◆ createContinuous()

void cv::cuda::createContinuous ( int  rows,
int  cols,
int  type,
OutputArray  arr 
)
Python:
cv.cuda.createContinuous(rows, cols, type[, arr]) -> arr

#include <opencv2/core/cuda.hpp>

Creates a continuous matrix.

Parameters
rows: Row count.
cols: Column count.
type: Type of the matrix.
arr: Destination matrix. This parameter changes only if it has a proper type and area ( \(\texttt{rows} \times \texttt{cols}\) ).

A matrix is called continuous if its elements are stored continuously, that is, without gaps at the end of each row.

◆ createGpuMatFromCudaMemory() [1/2]

GpuMat cv::cuda::createGpuMatFromCudaMemory ( int  rows,
int  cols,
int  type,
size_t  cudaMemoryAddress,
size_t  step = Mat::AUTO_STEP 
)
inline
Python:
cv.cuda.createGpuMatFromCudaMemory(rows, cols, type, cudaMemoryAddress[, step]) -> retval
cv.cuda.createGpuMatFromCudaMemory(size, type, cudaMemoryAddress[, step]) -> retval

#include <opencv2/core/cuda.hpp>

Bindings overload to create a GpuMat from existing GPU memory.

Parameters
rows: Row count.
cols: Column count.
type: Type of the matrix.
cudaMemoryAddress: Address of the allocated GPU memory on the device. This does not allocate matrix data; it only initializes a matrix header that points to the specified cudaMemoryAddress, so no data is copied. This operation is very efficient and can be used to process external data with OpenCV functions. The external data is not deallocated automatically, so you must manage its lifetime yourself.
step: Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to Mat::AUTO_STEP), no padding is assumed and the actual step is calculated as cols*elemSize(). See GpuMat::elemSize.
Note
This overload exists only for generating bindings; it is not exported and is not intended for use from C++.

◆ createGpuMatFromCudaMemory() [2/2]

GpuMat cv::cuda::createGpuMatFromCudaMemory ( Size  size,
int  type,
size_t  cudaMemoryAddress,
size_t  step = Mat::AUTO_STEP 
)
inline
Python:
cv.cuda.createGpuMatFromCudaMemory(rows, cols, type, cudaMemoryAddress[, step]) -> retval
cv.cuda.createGpuMatFromCudaMemory(size, type, cudaMemoryAddress[, step]) -> retval

#include <opencv2/core/cuda.hpp>

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

Parameters
size: 2D array size: Size(cols, rows). In the Size() constructor, the number of rows and the number of columns go in the reverse order.
type: Type of the matrix.
cudaMemoryAddress: Address of the allocated GPU memory on the device. This does not allocate matrix data; it only initializes a matrix header that points to the specified cudaMemoryAddress, so no data is copied. This operation is very efficient and can be used to process external data with OpenCV functions. The external data is not deallocated automatically, so you must manage its lifetime yourself.
step: Number of bytes each matrix row occupies. The value should include the padding bytes at the end of each row, if any. If the parameter is missing (set to Mat::AUTO_STEP), no padding is assumed and the actual step is calculated as cols*elemSize(). See GpuMat::elemSize.
Note
This overload exists only for generating bindings; it is not exported and is not intended for use from C++.

◆ ensureSizeIsEnough()

void cv::cuda::ensureSizeIsEnough ( int  rows,
int  cols,
int  type,
OutputArray  arr 
)
Python:
cv.cuda.ensureSizeIsEnough(rows, cols, type[, arr]) -> arr

#include <opencv2/core/cuda.hpp>

Ensures that the size of a matrix is big enough and the matrix has a proper type.

Parameters
rows: Minimum desired number of rows.
cols: Minimum desired number of columns.
type: Desired matrix type.
arr: Destination matrix.

The function does not reallocate memory if the matrix has proper attributes already.

◆ registerPageLocked()

void cv::cuda::registerPageLocked ( Mat &  m )
Python:
cv.cuda.registerPageLocked(m) -> None

#include <opencv2/core/cuda.hpp>

Page-locks the memory of a matrix and maps it for the device(s).

Parameters
m: Input matrix.

◆ setBufferPoolConfig()

void cv::cuda::setBufferPoolConfig ( int  deviceId,
size_t  stackSize,
int  stackCount 
)
Python:
cv.cuda.setBufferPoolConfig(deviceId, stackSize, stackCount) -> None

◆ setBufferPoolUsage()

void cv::cuda::setBufferPoolUsage ( bool  on)
Python:
cv.cuda.setBufferPoolUsage(on) -> None

#include <opencv2/core/cuda.hpp>

BufferPool management (must be called before Stream creation)

◆ unregisterPageLocked()

void cv::cuda::unregisterPageLocked ( Mat &  m )
Python:
cv.cuda.unregisterPageLocked(m) -> None

#include <opencv2/core/cuda.hpp>

Unmaps the memory of a matrix and makes it pageable again.

Parameters
m: Input matrix.

◆ wrapStream()

Stream cv::cuda::wrapStream ( size_t  cudaStreamMemoryAddress)
Python:
cv.cuda.wrapStream(cudaStreamMemoryAddress) -> retval

#include <opencv2/core/cuda.hpp>

Bindings overload to create a Stream object from the address stored in an existing CUDA Runtime API stream pointer (cudaStream_t).

Parameters
cudaStreamMemoryAddress: Memory address stored in a CUDA Runtime API stream pointer (cudaStream_t). The created Stream object does not perform any allocation or deallocation; it simply wraps the existing raw CUDA Runtime API stream pointer.
Note
This overload exists only for generating bindings; it is not exported and is not intended for use from C++.