Returns the number of installed CUDA-enabled devices. Call this function before any other GPU function. If OpenCV is compiled without GPU support, this function returns 0.
Sets a device and initializes it for the current thread. If this call is omitted, a default device is initialized at the first GPU usage.
Returns the current device index set by gpu::setDevice or initialized by default.
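A minimal usage sketch of these three functions, assuming the legacy opencv2/gpu/gpu.hpp header of the 2.x gpu module (the header name is not stated in this section):

#include <iostream>
#include <opencv2/gpu/gpu.hpp>

int main()
{
    // Number of installed CUDA-enabled devices; 0 if OpenCV was built
    // without GPU support or no device is present.
    int count = cv::gpu::getCudaEnabledDeviceCount();
    if (count == 0)
    {
        std::cout << "No CUDA-enabled devices available" << std::endl;
        return 0;
    }

    // Explicitly select the first device for the current thread before
    // calling any other GPU function.
    cv::gpu::setDevice(0);
    std::cout << "Using device " << cv::gpu::getDevice()
              << " of " << count << std::endl;
    return 0;
}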
Enumeration providing GPU computing features.
enum GpuFeature
{
    COMPUTE_10, COMPUTE_11,  // compute capability 1.0 and 1.1
    COMPUTE_12, COMPUTE_13,  // compute capability 1.2 and 1.3
    COMPUTE_20, COMPUTE_21,  // compute capability 2.0 and 2.1
    ATOMICS, NATIVE_DOUBLE   // atomic operations and native double-precision support
};
Class providing functionality for querying the specified GPU properties.
class CV_EXPORTS DeviceInfo
{
public:
    DeviceInfo();
    DeviceInfo(int device_id);

    string name() const;
    int majorVersion() const;
    int minorVersion() const;
    int multiProcessorCount() const;
    size_t freeMemory() const;
    size_t totalMemory() const;
    bool supports(GpuFeature feature) const;
    bool isCompatible() const;
};
Constructs the DeviceInfo object for the specified device. If the device_id parameter is omitted, it constructs an object for the current device.
Parameters:
- device_id – System index of the GPU device the object points to.
Returns the major compute capability version.
Returns the minor compute capability version.
Returns the number of streaming multiprocessors.
Returns the amount of free memory in bytes.
Returns the amount of total memory in bytes.
Provides information on GPU feature support. This function returns true if the device has the specified GPU feature. Otherwise, it returns false.
Parameters:
- feature – Feature to be checked. See gpu::GpuFeature.
Checks the GPU module and device compatibility. This function returns true if the GPU module can be run on the specified device. Otherwise, it returns false.
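A short sketch of querying these properties, again assuming the opencv2/gpu/gpu.hpp header; the device index 0 is an arbitrary example:

#include <iostream>
#include <opencv2/gpu/gpu.hpp>

int main()
{
    // Query properties of device 0; the default constructor would refer
    // to the current device instead.
    cv::gpu::DeviceInfo info(0);

    std::cout << std::boolalpha;
    std::cout << "Name: " << info.name() << std::endl;
    std::cout << "Compute capability: " << info.majorVersion() << "."
              << info.minorVersion() << std::endl;
    std::cout << "Multiprocessors: " << info.multiProcessorCount() << std::endl;
    std::cout << "Memory (free/total, bytes): " << info.freeMemory()
              << " / " << info.totalMemory() << std::endl;

    // Feature support and module/device compatibility checks.
    std::cout << "Native double: " << info.supports(cv::gpu::NATIVE_DOUBLE) << std::endl;
    std::cout << "Compatible with this GPU module build: "
              << info.isCompatible() << std::endl;
    return 0;
}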
Class providing a set of static methods to check which NVIDIA card architecture the GPU module was built for.
The following method checks whether the module was built with the support of the given feature:
- C++: static bool gpu::TargetArchs::builtWith(GpuFeature feature)
Parameters:
- feature – Feature to be checked. See gpu::GpuFeature.
There is a set of methods to check whether the module contains intermediate (PTX) or binary GPU code for the given architecture(s):
- C++: static bool gpu::TargetArchs::has(int major, int minor)
- C++: static bool gpu::TargetArchs::hasPtx(int major, int minor)
- C++: static bool gpu::TargetArchs::hasBin(int major, int minor)
- C++: static bool gpu::TargetArchs::hasEqualOrLessPtx(int major, int minor)
- C++: static bool gpu::TargetArchs::hasEqualOrGreater(int major, int minor)
- C++: static bool gpu::TargetArchs::hasEqualOrGreaterPtx(int major, int minor)
- C++: static bool gpu::TargetArchs::hasEqualOrGreaterBin(int major, int minor)
Parameters:
- major – Major compute capability version.
- minor – Minor compute capability version.
According to the CUDA C Programming Guide Version 3.2: “PTX code produced for some specific compute capability can always be compiled to binary code of greater or equal compute capability”.
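A small sketch of these checks, under the same opencv2/gpu/gpu.hpp header assumption; the queried compute capabilities are arbitrary examples:

#include <iostream>
#include <opencv2/gpu/gpu.hpp>

int main()
{
    std::cout << std::boolalpha;

    // Was the module built with native double-precision support?
    std::cout << "Built with NATIVE_DOUBLE: "
              << cv::gpu::TargetArchs::builtWith(cv::gpu::NATIVE_DOUBLE) << std::endl;

    // Does the module contain binary code for compute capability 2.0?
    std::cout << "Binary for 2.0: "
              << cv::gpu::TargetArchs::hasBin(2, 0) << std::endl;

    // Does it contain PTX for capability 1.3 or lower, i.e. PTX that can be
    // JIT-compiled for a device of capability 1.3?
    std::cout << "PTX for <= 1.3: "
              << cv::gpu::TargetArchs::hasEqualOrLessPtx(1, 3) << std::endl;
    return 0;
}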