This class encapsulates a queue of asynchronous calls.
#include <opencv2/core/cuda.hpp>
Stream ()
    Creates a new asynchronous stream.

Stream (const Ptr< GpuMat::Allocator > &allocator)
    Creates a new asynchronous stream with a custom allocator.

Stream (const size_t cudaFlags)
    Creates a new Stream, using the cudaFlags argument to determine the behavior of the stream.

void * cudaPtr () const
    Returns a pointer to the underlying CUDA stream.

void enqueueHostCallback (StreamCallback callback, void *userData)
    Adds a callback to be called on the host after all currently enqueued items in the stream have completed.

operator bool_type () const
    Returns true if the stream object is not the default stream (!= 0).

bool queryIfComplete () const
    Returns true if the current stream queue is finished; otherwise returns false.

void waitEvent (const Event &event)
    Makes a compute stream wait on an event.

void waitForCompletion ()
    Blocks the current CPU thread until all operations in the stream are complete.
This class encapsulates a queue of asynchronous calls.
- Note
- Currently, you may face problems if an operation is enqueued twice with different data. Some functions use constant GPU memory, and the next call may update that memory before the previous call has finished. Calling different operations asynchronously is safe, however, because each operation has its own constant buffer. Memory copy/upload/download/set operations on the buffers you hold are also safe.
The Stream class is not thread-safe. Please use different Stream objects for different CPU threads.
void thread1()
{
    cv::cuda::func1(..., stream1);
}

void thread2()
{
    cv::cuda::func2(..., stream2);
}
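A more concrete sketch of the same per-thread-stream pattern is shown below. The worker function, the matrix sizes, and the cv::cuda::add call are illustrative choices only (not part of this class's API) and assume the cudaarithm module is built.

#include <opencv2/core/cuda.hpp>
#include <opencv2/cudaarithm.hpp>
#include <functional>
#include <thread>

// Each CPU thread owns its own Stream, so the two pipelines never share a queue.
static void worker(const cv::cuda::GpuMat& src, cv::cuda::GpuMat& dst)
{
    cv::cuda::Stream stream;                                  // stream local to this thread
    cv::cuda::add(src, src, dst, cv::noArray(), -1, stream);  // enqueue work asynchronously
    stream.waitForCompletion();                               // synchronize before returning
}

int main()
{
    cv::cuda::GpuMat a(480, 640, CV_8UC1, cv::Scalar(1)), ra;
    cv::cuda::GpuMat b(480, 640, CV_8UC1, cv::Scalar(2)), rb;

    std::thread t1(worker, std::cref(a), std::ref(ra));
    std::thread t2(worker, std::cref(b), std::ref(rb));
    t1.join();
    t2.join();
    return 0;
}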
- Note
- By default, all CUDA routines are launched in the Stream::Null() object if no stream is specified by the user. In a multi-threaded environment the stream objects must be passed explicitly (see the previous note and the sketch below).
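To illustrate the default behaviour, the snippet below contrasts a call without a stream argument (which uses Stream::Null()) with a call on an explicit stream; the cv::cuda::add call and matrix contents are arbitrary examples and assume the cudaarithm module.

#include <opencv2/core/cuda.hpp>
#include <opencv2/cudaarithm.hpp>

int main()
{
    cv::cuda::GpuMat src(256, 256, CV_8UC1, cv::Scalar(7)), dst;

    // No stream argument: the work runs in Stream::Null(), the default stream.
    cv::cuda::add(src, src, dst);

    // Explicit stream argument: the call only enqueues the work and returns,
    // so the host must synchronize before using 'dst'.
    cv::cuda::Stream stream;
    cv::cuda::add(src, src, dst, cv::noArray(), -1, stream);
    stream.waitForCompletion();
    return 0;
}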
◆ StreamCallback
typedef void (*cv::cuda::Stream::StreamCallback)(int status, void *userData)
◆ Stream() [1/3]
cv::cuda::Stream::Stream()

Python:
    cv.cuda.Stream() -> <cuda_Stream object>
    cv.cuda.Stream(allocator) -> <cuda_Stream object>
    cv.cuda.Stream(cudaFlags) -> <cuda_Stream object>

Creates a new asynchronous stream.
◆ Stream() [2/3]
cv::cuda::Stream::Stream(const Ptr< GpuMat::Allocator > &allocator)

Python:
    cv.cuda.Stream() -> <cuda_Stream object>
    cv.cuda.Stream(allocator) -> <cuda_Stream object>
    cv.cuda.Stream(cudaFlags) -> <cuda_Stream object>

Creates a new asynchronous stream with a custom allocator.
◆ Stream() [3/3]
cv::cuda::Stream::Stream(const size_t cudaFlags)

Python:
    cv.cuda.Stream() -> <cuda_Stream object>
    cv.cuda.Stream(allocator) -> <cuda_Stream object>
    cv.cuda.Stream(cudaFlags) -> <cuda_Stream object>

Creates a new Stream, using the cudaFlags argument to determine the behavior of the stream.
- Note
- The cudaFlags parameter is passed to the underlying API call cudaStreamCreateWithFlags() and supports the same parameter values (see the sketch below).
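For example, a stream that does not implicitly synchronize with the legacy default stream can be requested with the CUDA runtime flag cudaStreamNonBlocking. This is a minimal sketch, not a required usage pattern.

#include <opencv2/core/cuda.hpp>
#include <cuda_runtime.h>   // for the cudaStreamNonBlocking flag

int main()
{
    // The flag value is forwarded to cudaStreamCreateWithFlags().
    cv::cuda::Stream stream(cudaStreamNonBlocking);
    return 0;
}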
◆ cudaPtr()
void * cv::cuda::Stream::cudaPtr() const

Python:
    cv.cuda.Stream.cudaPtr() -> retval

Returns a pointer to the underlying CUDA stream.
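The returned pointer can be handed to raw CUDA runtime calls that OpenCV does not wrap. A minimal interop sketch, assuming the CUDA runtime headers are available; the cudaStreamSynchronize() call is only an example of such interop.

#include <opencv2/core/cuda.hpp>
#include <cuda_runtime.h>

int main()
{
    cv::cuda::Stream stream;

    // The void* returned by cudaPtr() is the raw cudaStream_t handle.
    cudaStream_t raw = static_cast<cudaStream_t>(stream.cudaPtr());
    cudaStreamSynchronize(raw);   // same effect as stream.waitForCompletion()
    return 0;
}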
◆ enqueueHostCallback()
void cv::cuda::Stream::enqueueHostCallback(StreamCallback callback, void *userData)

Adds a callback to be called on the host after all currently enqueued items in the stream have completed.
- Note
- Callbacks must not make any CUDA API calls. Callbacks must not perform any synchronization that may depend on outstanding device work or other callbacks that are not mandated to run earlier. Callbacks without a mandated order (in independent streams) execute in undefined order and may be serialized.
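For illustration only, a host callback matching the StreamCallback signature might look like the sketch below; the onDone function, the tag string, and the cv::cuda::sqrt call are hypothetical choices and assume the cudaarithm module is available.

#include <opencv2/core/cuda.hpp>
#include <opencv2/cudaarithm.hpp>
#include <cstdio>

// Matches StreamCallback: void (*)(int status, void *userData).
static void onDone(int status, void* userData)
{
    // No CUDA API calls are allowed in here (see the note above).
    std::printf("stream finished, status=%d, tag=%s\n",
                status, static_cast<const char*>(userData));
}

int main()
{
    cv::cuda::GpuMat src(512, 512, CV_32FC1, cv::Scalar(4)), dst;
    cv::cuda::Stream stream;

    cv::cuda::sqrt(src, dst, stream);            // enqueue some device work
    static char tag[] = "sqrt-pipeline";
    stream.enqueueHostCallback(onDone, tag);     // runs after the sqrt completes
    stream.waitForCompletion();                  // the callback has run by now
    return 0;
}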
◆ Null()
static Stream & cv::cuda::Stream::Null()

Python:
    cv.cuda.Stream.Null() -> retval
    cv.cuda.Stream_Null() -> retval

Returns a Stream object for the default CUDA stream.
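For illustration, the two calls below are equivalent, since functions called without an explicit stream argument use Stream::Null(); the cv::cuda::add call is only an example and assumes the cudaarithm module.

#include <opencv2/core/cuda.hpp>
#include <opencv2/cudaarithm.hpp>

int main()
{
    cv::cuda::GpuMat src(64, 64, CV_8UC1, cv::Scalar(3)), dst;

    cv::cuda::add(src, src, dst);                                               // implicit default stream
    cv::cuda::add(src, src, dst, cv::noArray(), -1, cv::cuda::Stream::Null());  // explicit default stream
    return 0;
}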
◆ operator bool_type()
cv::cuda::Stream::operator bool_type() const

Returns true if the stream object is not the default stream (!= 0).
◆ queryIfComplete()
bool cv::cuda::Stream::queryIfComplete() const

Python:
    cv.cuda.Stream.queryIfComplete() -> retval

Returns true if the current stream queue is finished. Otherwise, it returns false.
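A sketch of non-blocking polling, e.g. to keep a host thread responsive while the GPU works; the cv::cuda::sqr call and matrix size are arbitrary choices and assume the cudaarithm module.

#include <opencv2/core/cuda.hpp>
#include <opencv2/cudaarithm.hpp>

int main()
{
    cv::cuda::GpuMat src(2048, 2048, CV_32FC1, cv::Scalar(2)), dst;
    cv::cuda::Stream stream;

    cv::cuda::sqr(src, dst, stream);      // asynchronous launch

    while (!stream.queryIfComplete())
    {
        // do other host-side work while the GPU is busy
    }
    return 0;
}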
◆ waitEvent()
void cv::cuda::Stream::waitEvent(const Event &event)

Python:
    cv.cuda.Stream.waitEvent(event) -> None

Makes a compute stream wait on an event.
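A sketch of cross-stream ordering with a cv::cuda::Event; the two arithmetic calls are placeholders for real work and assume the cudaarithm module.

#include <opencv2/core/cuda.hpp>
#include <opencv2/cudaarithm.hpp>

int main()
{
    cv::cuda::GpuMat a(1024, 1024, CV_32FC1, cv::Scalar(1)), b, c;
    cv::cuda::Stream producer, consumer;
    cv::cuda::Event done;

    cv::cuda::sqrt(a, b, producer);   // work in the producer stream
    done.record(producer);            // mark the point the consumer must wait for

    consumer.waitEvent(done);         // work enqueued after this waits for 'done'
    cv::cuda::add(b, b, c, cv::noArray(), -1, consumer);
    consumer.waitForCompletion();
    return 0;
}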
◆ waitForCompletion()
void cv::cuda::Stream::waitForCompletion()

Python:
    cv.cuda.Stream.waitForCompletion() -> None

Blocks the current CPU thread until all operations in the stream are complete.
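A typical use is to guard host access to asynchronously downloaded data. A minimal sketch; the cv::cuda::add call and sizes are illustrative and assume the cudaarithm module.

#include <opencv2/core/cuda.hpp>
#include <opencv2/cudaarithm.hpp>

int main()
{
    cv::cuda::GpuMat src(1024, 1024, CV_8UC1, cv::Scalar(5)), dst;
    cv::Mat result;
    cv::cuda::Stream stream;

    cv::cuda::add(src, src, dst, cv::noArray(), -1, stream);  // asynchronous
    dst.download(result, stream);                             // asynchronous copy to host

    stream.waitForCompletion();   // 'result' is safe to read only after this returns
    return 0;
}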
◆ BufferPool
friend class BufferPool

◆ DefaultDeviceInitializer
friend class DefaultDeviceInitializer

◆ StreamAccessor
friend struct StreamAccessor
The documentation for this class was generated from the following file: opencv2/core/cuda.hpp