Class computing the optical flow for two images using the Brox et al. optical flow algorithm ([Brox2004]).
class BroxOpticalFlow
{
public:
BroxOpticalFlow(float alpha_, float gamma_, float scale_factor_, int inner_iterations_, int outer_iterations_, int solver_iterations_);
//! Compute optical flow
//! frame0 - source frame (supports only CV_32FC1 type)
//! frame1 - frame to track (with the same size and type as frame0)
//! u - flow horizontal component (along x axis)
//! v - flow vertical component (along y axis)
void operator ()(const GpuMat& frame0, const GpuMat& frame1, GpuMat& u, GpuMat& v, Stream& stream = Stream::Null());
//! flow smoothness
float alpha;
//! gradient constancy importance
float gamma;
//! pyramid scale factor
float scale_factor;
//! number of lagged non-linearity iterations (inner loop)
int inner_iterations;
//! number of warping iterations (number of pyramid levels)
int outer_iterations;
//! number of linear system solver iterations
int solver_iterations;
GpuMat buf;
};
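A minimal usage sketch follows. The constructor values (alpha = 0.197, gamma = 50, scale_factor = 0.8, 10 inner, 77 outer, 10 solver iterations) are illustrative choices rather than required defaults, and the input frames are assumed to be 8-bit grayscale images on the CPU.
#include <opencv2/core/core.hpp>
#include <opencv2/gpu/gpu.hpp>

void computeBroxFlow(const cv::Mat& frame0, const cv::Mat& frame1, cv::Mat& u, cv::Mat& v)
{
    // Brox optical flow expects CV_32FC1 input; rescale the 8-bit frames to [0, 1].
    cv::Mat f0, f1;
    frame0.convertTo(f0, CV_32F, 1.0 / 255.0);
    frame1.convertTo(f1, CV_32F, 1.0 / 255.0);

    cv::gpu::GpuMat d_frame0(f0), d_frame1(f1);
    cv::gpu::GpuMat d_u, d_v;

    // alpha, gamma, scale_factor, inner_iterations, outer_iterations, solver_iterations
    cv::gpu::BroxOpticalFlow brox(0.197f, 50.0f, 0.8f, 10, 77, 10);
    brox(d_frame0, d_frame1, d_u, d_v);   // runs on the default stream

    d_u.download(u);                      // horizontal flow component
    d_v.download(v);                      // vertical flow component
}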
Class used for detecting strong corners in an image.
class GoodFeaturesToTrackDetector_GPU
{
public:
explicit GoodFeaturesToTrackDetector_GPU(int maxCorners_ = 1000, double qualityLevel_ = 0.01, double minDistance_ = 0.0,
int blockSize_ = 3, bool useHarrisDetector_ = false, double harrisK_ = 0.04);
void operator ()(const GpuMat& image, GpuMat& corners, const GpuMat& mask = GpuMat());
int maxCorners;
double qualityLevel;
double minDistance;
int blockSize;
bool useHarrisDetector;
double harrisK;
void releaseMemory();
};
The class finds the most prominent corners in the image.
The constructor initializes the detector with the parameters listed above.
operator() finds the most prominent corners in the image.
releaseMemory() releases inner buffer memory.
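For illustration, a short sketch of detecting corners on a GPU image; the parameter values (500 corners, 0.01 quality level, 10-pixel minimum distance) are arbitrary examples.
#include <opencv2/core/core.hpp>
#include <opencv2/gpu/gpu.hpp>

void detectCorners(const cv::gpu::GpuMat& d_frame)   // d_frame: CV_8UC1 grayscale image
{
    cv::gpu::GoodFeaturesToTrackDetector_GPU detector(500, 0.01, 10.0);

    cv::gpu::GpuMat d_corners;            // single-row matrix of detected corner positions
    detector(d_frame, d_corners);

    detector.releaseMemory();             // free inner buffers once the detector is no longer needed
}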
Class computing a dense optical flow using Gunnar Farneback's algorithm.
class CV_EXPORTS FarnebackOpticalFlow
{
public:
FarnebackOpticalFlow()
{
numLevels = 5;
pyrScale = 0.5;
fastPyramids = false;
winSize = 13;
numIters = 10;
polyN = 5;
polySigma = 1.1;
flags = 0;
}
int numLevels;
double pyrScale;
bool fastPyramids;
int winSize;
int numIters;
int polyN;
double polySigma;
int flags;
void operator ()(const GpuMat &frame0, const GpuMat &frame1, GpuMat &flowx, GpuMat &flowy, Stream &s = Stream::Null());
void releaseMemory();
private:
/* hidden */
};
operator() computes a dense optical flow using Gunnar Farneback's algorithm.
releaseMemory() releases unused auxiliary memory buffers.
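A minimal sketch, assuming d_frame0 and d_frame1 are single-channel grayscale frames already uploaded to the GPU; the winSize value is only an example.
#include <opencv2/core/core.hpp>
#include <opencv2/gpu/gpu.hpp>

void computeFarnebackFlow(const cv::gpu::GpuMat& d_frame0, const cv::gpu::GpuMat& d_frame1,
                          cv::Mat& flowx, cv::Mat& flowy)
{
    cv::gpu::FarnebackOpticalFlow farneback;   // members start at the defaults set in the constructor above
    farneback.winSize = 15;                    // parameters can be tuned directly on the object

    cv::gpu::GpuMat d_flowx, d_flowy;
    farneback(d_frame0, d_frame1, d_flowx, d_flowy);

    d_flowx.download(flowx);                   // horizontal flow component
    d_flowy.download(flowy);                   // vertical flow component
    farneback.releaseMemory();                 // drop auxiliary buffers when finished
}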
Class used for calculating an optical flow.
class PyrLKOpticalFlow
{
public:
PyrLKOpticalFlow();
void sparse(const GpuMat& prevImg, const GpuMat& nextImg, const GpuMat& prevPts, GpuMat& nextPts,
GpuMat& status, GpuMat* err = 0);
void dense(const GpuMat& prevImg, const GpuMat& nextImg, GpuMat& u, GpuMat& v, GpuMat* err = 0);
Size winSize;
int maxLevel;
int iters;
bool useInitialFlow;
void releaseMemory();
};
The class can calculate an optical flow for a sparse feature set or dense optical flow using the iterative Lucas-Kanade method with pyramids.
sparse() calculates an optical flow for a sparse feature set.
dense() calculates a dense optical flow.
releaseMemory() releases inner buffer memory.
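The sketch below combines the corner detector above with sparse pyramidal Lucas-Kanade tracking; window size, pyramid depth and iteration count are illustrative values.
#include <opencv2/core/core.hpp>
#include <opencv2/gpu/gpu.hpp>

void trackSparse(const cv::gpu::GpuMat& d_prev, const cv::gpu::GpuMat& d_next)   // 8-bit grayscale frames
{
    cv::gpu::GoodFeaturesToTrackDetector_GPU detector(500, 0.01, 10.0);
    cv::gpu::GpuMat d_prevPts;
    detector(d_prev, d_prevPts);                   // points to track in the first frame

    cv::gpu::PyrLKOpticalFlow pyrLK;
    pyrLK.winSize = cv::Size(21, 21);
    pyrLK.maxLevel = 3;
    pyrLK.iters = 30;

    cv::gpu::GpuMat d_nextPts, d_status;
    pyrLK.sparse(d_prev, d_next, d_prevPts, d_nextPts, d_status);   // d_status flags successfully tracked points
    pyrLK.releaseMemory();
}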
Interpolates frames (images) using provided optical flow (displacement field).
Class used for background/foreground segmentation.
class FGDStatModel
{
public:
struct Params
{
...
};
explicit FGDStatModel(int out_cn = 3);
explicit FGDStatModel(const cv::gpu::GpuMat& firstFrame, const Params& params = Params(), int out_cn = 3);
~FGDStatModel();
void create(const cv::gpu::GpuMat& firstFrame, const Params& params = Params());
void release();
int update(const cv::gpu::GpuMat& curFrame);
//8UC3 or 8UC4 reference background image
cv::gpu::GpuMat background;
//8UC1 foreground image
cv::gpu::GpuMat foreground;
std::vector< std::vector<cv::Point> > foreground_regions;
};
The class discriminates between foreground and background pixels by building and maintaining a model of the background. Any pixel which does not fit this model is then deemed to be foreground. The class implements the algorithm described in [FGD2003].
The results are available through the class fields:
.. ocv:member:: cv::gpu::GpuMat background
The output background image.
.. ocv:member:: cv::gpu::GpuMat foreground
The output foreground mask as an 8-bit binary image.
.. ocv:member:: std::vector< std::vector<cv::Point> > foreground_regions
The output foreground regions calculated by :ocv:func:`findContours`.
The constructors create the model; the overload taking firstFrame also initializes the background model from that frame.
create() initializes the background model.
release() releases all inner buffer memory.
update() updates the background model and returns the number of foreground regions.
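A minimal per-frame loop might look like the following sketch; the video file name is a placeholder.
#include <opencv2/core/core.hpp>
#include <opencv2/gpu/gpu.hpp>
#include <opencv2/highgui/highgui.hpp>

void runFGD()
{
    cv::VideoCapture cap("video.avi");
    cv::Mat frame;
    cap >> frame;

    cv::gpu::GpuMat d_frame(frame);
    cv::gpu::FGDStatModel fgd;
    fgd.create(d_frame);                       // build the initial background model from the first frame

    for (;;)
    {
        cap >> frame;
        if (frame.empty())
            break;

        d_frame.upload(frame);
        int regions = fgd.update(d_frame);     // number of foreground regions found in this frame
        (void)regions;

        // fgd.foreground now holds the 8UC1 foreground mask,
        // fgd.background the reference background image.
    }
}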
Gaussian Mixture-based Background/Foreground Segmentation Algorithm.
class MOG_GPU
{
public:
MOG_GPU(int nmixtures = -1);
void initialize(Size frameSize, int frameType);
void operator()(const GpuMat& frame, GpuMat& fgmask, float learningRate = 0.0f, Stream& stream = Stream::Null());
void getBackgroundImage(GpuMat& backgroundImage, Stream& stream = Stream::Null()) const;
void release();
int history;
float varThreshold;
float backgroundRatio;
float noiseSigma;
};
The class discriminates between foreground and background pixels by building and maintaining a model of the background. Any pixel which does not fit this model is then deemed to be foreground. The class implements the algorithm described in [MOG2001].
The constructor sets all parameters to default values.
operator() updates the background model and returns the foreground mask.
getBackgroundImage() computes a background image.
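A per-frame usage sketch; d_frame is assumed to be the current frame already on the GPU, and the learning rate of 0.01 is only an example.
#include <opencv2/core/core.hpp>
#include <opencv2/gpu/gpu.hpp>

void applyMOG(const cv::gpu::GpuMat& d_frame, cv::gpu::GpuMat& d_fgmask, cv::gpu::GpuMat& d_background)
{
    static cv::gpu::MOG_GPU mog;               // keep the model alive across frames

    mog(d_frame, d_fgmask, 0.01f);             // update the model and get the foreground mask
    mog.getBackgroundImage(d_background);      // optional: current background estimate
}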
Gaussian Mixture-based Background/Foreground Segmentation Algorithm.
class MOG2_GPU
{
public:
MOG2_GPU(int nmixtures = -1);
void initialize(Size frameSize, int frameType);
void operator()(const GpuMat& frame, GpuMat& fgmask, float learningRate = 0.0f, Stream& stream = Stream::Null());
void getBackgroundImage(GpuMat& backgroundImage, Stream& stream = Stream::Null()) const;
void release();
// parameters
...
};
The class discriminates between foreground and background pixels by building and maintaining a model of the background. Any pixel which does not fit this model is then deemed to be foreground. The class implements the algorithm described in [MOG2004].
Here are important members of the class that control the algorithm, which you can set after constructing the class instance:
.. ocv:member:: float backgroundRatio
Threshold defining whether the component is significant enough to be included into the background model (corresponds to ``TB=1-cf`` in [MOG2004]). ``cf=0.1 => TB=0.9`` is the default. For ``alpha=0.001``, it means that the mode should exist for approximately 105 frames before it is considered foreground.
.. ocv:member:: float varThreshold
Threshold for the squared Mahalanobis distance that helps decide when a sample is close to the existing components (corresponds to ``Tg``). If it is not close to any component, a new component is generated. ``3 sigma => Tg=3*3=9`` is default. A smaller ``Tg`` value generates more components. A higher ``Tg`` value may result in a small number of components but they can grow too large.
.. ocv:member:: float fVarInit
Initial variance for the newly generated components. It affects the speed of adaptation. The parameter value is based on your estimate of the typical standard deviation from the images. OpenCV uses 15 as a reasonable value.
.. ocv:member:: float fVarMin
Parameter used to further control the variance.
.. ocv:member:: float fVarMax
Parameter used to further control the variance.
.. ocv:member:: float fCT
Complexity reduction parameter. This parameter defines the number of samples needed to accept that a component actually exists. ``CT=0.05`` is a default value for all the samples. By setting ``CT=0`` you get an algorithm very similar to the standard Stauffer&Grimson algorithm.
.. ocv:member:: uchar nShadowDetection
The value for marking shadow pixels in the output foreground mask. Default value is 127.
.. ocv:member:: float fTau
Shadow threshold. A shadow is detected if the pixel is a darker version of the background. ``Tau`` is a threshold defining how much darker the shadow can be. ``Tau = 0.5`` means that if a pixel is more than twice as dark as the background, it is not considered a shadow. See [ShadowDetect2003].
.. ocv:member:: bool bShadowDetection
Parameter defining whether shadow detection should be enabled.
The constructor sets all parameters to default values.
operator() updates the background model and returns the foreground mask.
getBackgroundImage() computes a background image.
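A sketch analogous to the MOG example above; shadow marking is switched on as an illustration.
#include <opencv2/core/core.hpp>
#include <opencv2/gpu/gpu.hpp>

void applyMOG2(const cv::gpu::GpuMat& d_frame, cv::gpu::GpuMat& d_fgmask, cv::gpu::GpuMat& d_background)
{
    static cv::gpu::MOG2_GPU mog2;             // keep the model alive across frames
    mog2.bShadowDetection = true;              // shadows are marked with the nShadowDetection value

    mog2(d_frame, d_fgmask);                   // uses the default learningRate from the declaration above
    mog2.getBackgroundImage(d_background);
}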
Class used for background/foreground segmentation.
class GMG_GPU
{
public:
GMG_GPU();
void initialize(Size frameSize, float min = 0.0f, float max = 255.0f);
void operator ()(const GpuMat& frame, GpuMat& fgmask, float learningRate = -1.0f, Stream& stream = Stream::Null());
void release();
int maxFeatures;
float learningRate;
int numInitializationFrames;
int quantizationLevels;
float backgroundPrior;
float decisionThreshold;
int smoothingRadius;
...
};
The class discriminates between foreground and background pixels by building and maintaining a model of the background. Any pixel which does not fit this model is then deemed to be foreground. The class implements the algorithm described in [GMG2012].
Here are important members of the class that control the algorithm, which you can set after constructing the class instance:
- int maxFeatures
Total number of distinct colors to maintain in the histogram.
- float learningRate
Set between 0.0 and 1.0; determines how quickly features are “forgotten” from histograms.
- int numInitializationFrames
Number of frames of video to use to initialize histograms.
- int quantizationLevels
Number of discrete levels in each channel to be used in histograms.
- float backgroundPrior
Prior probability that any given pixel is a background pixel. A sensitivity parameter.
- float decisionThreshold
Value above which a pixel is determined to be foreground.
- int smoothingRadius
Smoothing radius, in pixels, for cleaning up the foreground image.
The default constructor sets all parameters to default values.
initialize() initializes the background model and allocates all inner buffers.
operator() updates the background model and returns the foreground mask.
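A per-frame sketch; the number of initialization frames shown is only an example, and d_frame is assumed to be an 8-bit frame already on the GPU.
#include <opencv2/core/core.hpp>
#include <opencv2/gpu/gpu.hpp>

void applyGMG(const cv::gpu::GpuMat& d_frame, cv::gpu::GpuMat& d_fgmask)
{
    static cv::gpu::GMG_GPU gmg;
    static bool initialized = false;

    if (!initialized)
    {
        gmg.numInitializationFrames = 40;      // example: learn histograms over the first 40 frames
        gmg.initialize(d_frame.size());        // default min/max of 0..255 suits 8-bit input
        initialized = true;
    }

    gmg(d_frame, d_fgmask);                    // call once per frame
}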
Video writer class.
The class uses the H.264 video codec.
Note
Currently only the Windows platform is supported.
The constructors initialize the video writer. FFMPEG is used to write videos. Users can implement their own multiplexing with gpu::VideoWriter_GPU::EncoderCallBack.
Initializes or reinitializes the video writer. The parameters are the same as in the constructor gpu::VideoWriter_GPU::VideoWriter_GPU(). The method throws Exception if an error occurs.
Returns true if the video writer has been successfully initialized.
Writes the next video frame. The method writes the specified image to the video file. The image must have the same size and the same surface format as specified when the video writer was opened.
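The full class declaration is not reproduced above, so the sketch below assumes a constructor taking a file name, frame size and FPS, and a write() method accepting a GpuMat; check these against the gpu module header before use.
#include <opencv2/core/core.hpp>
#include <opencv2/gpu/gpu.hpp>

void writeFrames(const cv::gpu::GpuMat& d_frame)
{
    // Assumed constructor form: file name, frame size, frames per second.
    static cv::gpu::VideoWriter_GPU d_writer("output.avi", d_frame.size(), 25.0);

    // d_frame must keep the size and surface format given when the writer was opened.
    d_writer.write(d_frame);
}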
Different parameters for CUDA video encoder.
struct EncoderParams
{
int P_Interval; // NVVE_P_INTERVAL,
int IDR_Period; // NVVE_IDR_PERIOD,
int DynamicGOP; // NVVE_DYNAMIC_GOP,
int RCType; // NVVE_RC_TYPE,
int AvgBitrate; // NVVE_AVG_BITRATE,
int PeakBitrate; // NVVE_PEAK_BITRATE,
int QP_Level_Intra; // NVVE_QP_LEVEL_INTRA,
int QP_Level_InterP; // NVVE_QP_LEVEL_INTER_P,
int QP_Level_InterB; // NVVE_QP_LEVEL_INTER_B,
int DeblockMode; // NVVE_DEBLOCK_MODE,
int ProfileLevel; // NVVE_PROFILE_LEVEL,
int ForceIntra; // NVVE_FORCE_INTRA,
int ForceIDR; // NVVE_FORCE_IDR,
int ClearStat; // NVVE_CLEAR_STAT,
int DIMode; // NVVE_SET_DEINTERLACE,
int Presets; // NVVE_PRESETS,
int DisableCabac; // NVVE_DISABLE_CABAC,
int NaluFramingType; // NVVE_CONFIGURE_NALU_FRAMING_TYPE
int DisableSPSPPS; // NVVE_DISABLE_SPS_PPS
EncoderParams();
explicit EncoderParams(const std::string& configFile);
void load(const std::string& configFile);
void save(const std::string& configFile) const;
};
The constructors create default parameters or read parameters from a config file.
load() reads parameters from a config file.
save() saves parameters to a config file.
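A short sketch of round-tripping the encoder settings through a config file; the struct is assumed to be nested in gpu::VideoWriter_GPU, and the file names are placeholders.
#include <opencv2/gpu/gpu.hpp>

void tuneEncoderParams()
{
    cv::gpu::VideoWriter_GPU::EncoderParams params;   // default parameters
    params.load("encoder.cfg");                       // or construct directly from the file name
    params.AvgBitrate = 4000000;                      // illustrative change
    params.save("encoder_tuned.cfg");
}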
Callbacks for CUDA video encoder.
class EncoderCallBack
{
public:
enum PicType
{
IFRAME = 1,
PFRAME = 2,
BFRAME = 3
};
virtual ~EncoderCallBack() {}
virtual unsigned char* acquireBitStream(int* bufferSize) = 0;
virtual void releaseBitStream(unsigned char* data, int size) = 0;
virtual void onBeginFrame(int frameNumber, PicType picType) = 0;
virtual void onEndFrame(int frameNumber, PicType picType) = 0;
};
acquireBitStream() is called to signal the start of a bitstream that is to be encoded. The callback must allocate a buffer for the CUDA encoder and return a pointer to it and its size.
releaseBitStream() is called to signal that the encoded bitstream is ready to be written to file.
onBeginFrame() is called to signal that the encoding operation on the frame has started.
onEndFrame() is called to signal that the encoding operation on the frame has finished.
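A sketch of a minimal callback implementation that appends every encoded packet to a file; the buffer size and file name are arbitrary, and the nested class name follows the gpu::VideoWriter_GPU::EncoderCallBack reference above.
#include <fstream>
#include <string>
#include <vector>
#include <opencv2/gpu/gpu.hpp>

class FileEncoderCallBack : public cv::gpu::VideoWriter_GPU::EncoderCallBack
{
public:
    explicit FileEncoderCallBack(const std::string& fileName)
        : buf_(1024 * 1024), file_(fileName.c_str(), std::ios::binary) {}

    unsigned char* acquireBitStream(int* bufferSize)
    {
        *bufferSize = static_cast<int>(buf_.size());   // hand the encoder our buffer and its size
        return &buf_[0];
    }

    void releaseBitStream(unsigned char* data, int size)
    {
        file_.write(reinterpret_cast<const char*>(data), size);   // the encoded bitstream is ready: write it out
    }

    void onBeginFrame(int /*frameNumber*/, PicType /*picType*/) {}
    void onEndFrame(int /*frameNumber*/, PicType /*picType*/) {}

private:
    std::vector<unsigned char> buf_;
    std::ofstream file_;
};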
Class for reading video from files.
Note
Currently only Windows and Linux platforms are supported.
Video codecs supported by gpu::VideoReader_GPU include uncompressed frames in the following layouts:
Y,U,V (4:2:0)
Y,V,U (4:2:0)
Y,UV (4:2:0)
YUYV/YUY2 (4:2:2)
UYVY (4:2:2)
Chroma formats supported by gpu::VideoReader_GPU.
Struct providing information about video file format.
struct FormatInfo
{
Codec codec;
ChromaFormat chromaFormat;
int width;
int height;
};
The constructors initialize the video reader. FFMPEG is used to read videos. Users can implement their own demultiplexing with gpu::VideoReader_GPU::VideoSource.
Initializes or reinitializes the video reader. The parameters are the same as in the constructor gpu::VideoReader_GPU::VideoReader_GPU(). The method throws Exception if an error occurs.
Returns true if the video reader has been successfully initialized.
Grabs, decodes and returns the next video frame. If no frame has been grabbed (there are no more frames in the video file), the method returns false. The method throws Exception if an error occurs.
Returns information about the video file format. The method throws Exception if the video reader wasn't initialized.
Dumps information about the video file format to the specified stream. The method throws Exception if the video reader wasn't initialized.
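The full class declaration is not reproduced above, so the sketch below assumes a constructor taking a file name, a read() method that returns false once no more frames are available, and the dumpFormat() method described above; check these against the gpu module header before use.
#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/gpu/gpu.hpp>

void readFrames()
{
    cv::gpu::VideoReader_GPU d_reader("input.avi");   // placeholder file name
    d_reader.dumpFormat(std::cout);                   // print codec, chroma format and frame size

    cv::gpu::GpuMat d_frame;
    while (d_reader.read(d_frame))
    {
        // process d_frame on the GPU
    }
}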
Interface for video demultiplexing.
class VideoSource
{
public:
VideoSource();
virtual ~VideoSource() {}
virtual FormatInfo format() const = 0;
virtual void start() = 0;
virtual void stop() = 0;
virtual bool isStarted() const = 0;
virtual bool hasError() const = 0;
protected:
bool parseVideoData(const unsigned char* data, size_t size, bool endOfStream = false);
};
Users can implement their own demultiplexing by implementing this interface.
Returns information about video file format.
Starts processing.
The implementation must create its own video-processing thread and call gpu::VideoReader_GPU::VideoSource::parseVideoData() periodically.
Stops processing.
Returns true if processing was successfully started.
Returns true if an error occurred during processing.
Parses the next video frame. The implementation must call this method after a new frame is grabbed.
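A skeleton of a custom source; the threading and packet-reading details are left as comments, and the nested class names (gpu::VideoReader_GPU::VideoSource, gpu::VideoReader_GPU::FormatInfo) follow the references above.
#include <opencv2/core/core.hpp>
#include <opencv2/gpu/gpu.hpp>

class MyVideoSource : public cv::gpu::VideoReader_GPU::VideoSource
{
public:
    MyVideoSource() : started_(false), error_(false) {}

    // Fill in codec, chromaFormat, width and height before start() is called.
    cv::gpu::VideoReader_GPU::FormatInfo format() const { return format_; }

    void start()
    {
        started_ = true;
        // Launch a worker thread here that repeatedly reads demultiplexed packets
        // and feeds them to the decoder:
        //     parseVideoData(packetData, packetSize);
        // and signals the end of the stream when the file is exhausted:
        //     parseVideoData(0, 0, true);
    }

    void stop()            { started_ = false; }
    bool isStarted() const { return started_; }
    bool hasError()  const { return error_; }

private:
    cv::gpu::VideoReader_GPU::FormatInfo format_;
    volatile bool started_;
    volatile bool error_;
};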
[Brox2004] T. Brox, A. Bruhn, N. Papenberg, and J. Weickert. High Accuracy Optical Flow Estimation Based on a Theory for Warping. ECCV 2004.
[FGD2003] Liyuan Li, Weimin Huang, Irene Y.H. Gu, and Qi Tian. Foreground Object Detection from Videos Containing Complex Background. ACM MM 2003.
[MOG2001] P. KadewTraKuPong and R. Bowden. An Improved Adaptive Background Mixture Model for Real-time Tracking with Shadow Detection. Proc. 2nd European Workshop on Advanced Video-Based Surveillance Systems, 2001.
[MOG2004] Z. Zivkovic. Improved Adaptive Gaussian Mixture Model for Background Subtraction. ICPR 2004.
[ShadowDetect2003] A. Prati, I. Mikic, M. Trivedi, and R. Cucchiara. Detecting Moving Shadows: Algorithms and Evaluation. IEEE PAMI, 2003.
[GMG2012] A. Godbehere, A. Matsukawa, and K. Goldberg. Visual Tracking of Human Visitors under Variable-Lighting Conditions for a Responsive Audio Art Installation. American Control Conference, 2012.