Optical Flow Algorithms

Classes

class  cv::optflow::DualTVL1OpticalFlow
 "Dual TV L1" Optical Flow Algorithm. More...
 
class  cv::optflow::GPCDetails
 
class  cv::optflow::GPCForest< T >
 
struct  cv::optflow::GPCMatchingParams
 Class encapsulating matching parameters. More...
 
struct  cv::optflow::GPCPatchDescriptor
 
struct  cv::optflow::GPCPatchSample
 
struct  cv::optflow::GPCTrainingParams
 Class encapsulating training parameters. More...
 
class  cv::optflow::GPCTrainingSamples
 Class encapsulating training samples. More...
 
class  cv::optflow::GPCTree
 Class for individual tree. More...
 
class  cv::optflow::OpticalFlowPCAFlow
 PCAFlow algorithm. More...
 
class  cv::optflow::PCAPrior
 This class can be used for imposing a learned prior on the resulting optical flow. The solution will be regularized according to this prior. You need to generate an appropriate prior file with the "learn_prior.py" script beforehand. More...
 

Typedefs

typedef std::vector< GPCPatchSample > cv::optflow::GPCSamplesVector
 

Enumerations

enum  cv::optflow::GPCDescType {
  cv::optflow::GPC_DESCRIPTOR_DCT = 0,
  cv::optflow::GPC_DESCRIPTOR_WHT
}
 Descriptor types for the Global Patch Collider. More...
 

Functions

double cv::motempl::calcGlobalOrientation (InputArray orientation, InputArray mask, InputArray mhi, double timestamp, double duration)
 Calculates a global motion orientation in a selected region. More...
 
void cv::motempl::calcMotionGradient (InputArray mhi, OutputArray mask, OutputArray orientation, double delta1, double delta2, int apertureSize=3)
 Calculates a gradient orientation of a motion history image. More...
 
void cv::optflow::calcOpticalFlowSF (InputArray from, InputArray to, OutputArray flow, int layers, int averaging_block_size, int max_flow)
 
void cv::optflow::calcOpticalFlowSF (InputArray from, InputArray to, OutputArray flow, int layers, int averaging_block_size, int max_flow, double sigma_dist, double sigma_color, int postprocess_window, double sigma_dist_fix, double sigma_color_fix, double occ_thr, int upscale_averaging_radius, double upscale_sigma_dist, double upscale_sigma_color, double speed_up_thr)
 Calculate an optical flow using "SimpleFlow" algorithm. More...
 
void cv::optflow::calcOpticalFlowSparseToDense (InputArray from, InputArray to, OutputArray flow, int grid_step=8, int k=128, float sigma=0.05f, bool use_post_proc=true, float fgs_lambda=500.0f, float fgs_sigma=1.5f)
 Fast dense optical flow based on PyrLK sparse matches interpolation. More...
 
Ptr< DenseOpticalFlow > cv::optflow::createOptFlow_DeepFlow ()
 DeepFlow optical flow algorithm implementation. More...
 
Ptr< DualTVL1OpticalFlow > cv::optflow::createOptFlow_DualTVL1 ()
 Creates an instance of cv::DenseOpticalFlow. More...
 
Ptr< DenseOpticalFlow > cv::optflow::createOptFlow_Farneback ()
 Additional interface to Farneback's algorithm - calcOpticalFlowFarneback() More...
 
Ptr< DenseOpticalFlow > cv::optflow::createOptFlow_PCAFlow ()
 Creates an instance of PCAFlow. More...
 
Ptr< DenseOpticalFlow > cv::optflow::createOptFlow_SimpleFlow ()
 Additional interface to the SimpleFlow algorithm - calcOpticalFlowSF() More...
 
Ptr< DenseOpticalFlow > cv::optflow::createOptFlow_SparseToDense ()
 Additional interface to the SparseToDenseFlow algorithm - calcOpticalFlowSparseToDense() More...
 
void cv::optflow::GPCForest< T >::findCorrespondences (InputArray imgFrom, InputArray imgTo, std::vector< std::pair< Point2i, Point2i > > &corr, const GPCMatchingParams params=GPCMatchingParams()) const
 Find correspondences between two images. More...
 
void cv::motempl::segmentMotion (InputArray mhi, OutputArray segmask, std::vector< Rect > &boundingRects, double timestamp, double segThresh)
 Splits a motion history image into a few parts corresponding to separate independent motions (for example, left hand, right hand). More...
 
void cv::motempl::updateMotionHistory (InputArray silhouette, InputOutputArray mhi, double timestamp, double duration)
 Updates the motion history image by a moving silhouette. More...
 

Detailed Description

Dense optical flow algorithms compute motion for each point.

Motion templates are an alternative technique for detecting motion and computing its direction. See samples/motempl.py.

Functions for reading and writing .flo files in the "Middlebury" format, see: http://vision.middlebury.edu/flow/code/flow-code/README.txt
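
For round-tripping flow fields through this format, a minimal sketch is shown below; it assumes the helpers are exposed as cv::readOpticalFlow and cv::writeOpticalFlow (located in the main video module in OpenCV 4.x), and the file name is a placeholder:

#include <opencv2/core.hpp>
#include <opencv2/video.hpp>  // assumed location of cv::readOpticalFlow / cv::writeOpticalFlow

int main()
{
    // A dummy 2-channel float flow field (dx, dy per pixel).
    cv::Mat flow(240, 320, CV_32FC2, cv::Scalar(1.f, -0.5f));

    // Write it in the "Middlebury" .flo format and read it back.
    cv::writeOpticalFlow("example.flo", flow);
    cv::Mat restored = cv::readOpticalFlow("example.flo");

    CV_Assert(restored.size() == flow.size() && restored.type() == CV_32FC2);
    return 0;
}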

Typedef Documentation

§ GPCSamplesVector

typedef std::vector< GPCPatchSample > cv::optflow::GPCSamplesVector

Enumeration Type Documentation

§ GPCDescType

Descriptor types for the Global Patch Collider.

Enumerator
GPC_DESCRIPTOR_DCT 
Python: cv.optflow.GPC_DESCRIPTOR_DCT

Better quality but slow.

GPC_DESCRIPTOR_WHT 
Python: cv.optflow.GPC_DESCRIPTOR_WHT

Worse quality but much faster.

Function Documentation

§ calcGlobalOrientation()

double cv::motempl::calcGlobalOrientation ( InputArray  orientation,
InputArray  mask,
InputArray  mhi,
double  timestamp,
double  duration 
)
Python:
retval=cv.motempl.calcGlobalOrientation(orientation, mask, mhi, timestamp, duration)

Calculates a global motion orientation in a selected region.

Parameters
orientation - Motion gradient orientation image calculated by the function calcMotionGradient.
mask - Mask image. It may be a conjunction of a valid gradient mask, also calculated by calcMotionGradient, and the mask of a region whose direction needs to be calculated.
mhi - Motion history image calculated by updateMotionHistory.
timestamp - Timestamp passed to updateMotionHistory.
duration - Maximum duration of a motion track in milliseconds, passed to updateMotionHistory.

The function calculates an average motion direction in the selected region and returns the angle between 0 degrees and 360 degrees. The average direction is computed from the weighted orientation histogram, where recent motion has a larger weight and motion that occurred in the past has a smaller weight, as recorded in mhi.
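
A minimal call sketch, assuming mhi, mask and orientation were produced by updateMotionHistory and calcMotionGradient, and that timestamp and duration use the same time units as the MHI (the function and variable names here are illustrative):

#include <iostream>
#include <opencv2/core.hpp>
#include <opencv2/optflow.hpp>  // pulls in the cv::motempl functions

// Report the dominant motion direction over the whole frame.
double globalDirection(const cv::Mat& orientation, const cv::Mat& mask,
                       const cv::Mat& mhi, double timestamp, double duration)
{
    double angle = cv::motempl::calcGlobalOrientation(orientation, mask, mhi,
                                                      timestamp, duration);
    std::cout << "dominant motion direction: " << angle << " degrees" << std::endl;
    return angle;
}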

§ calcMotionGradient()

void cv::motempl::calcMotionGradient ( InputArray  mhi,
OutputArray  mask,
OutputArray  orientation,
double  delta1,
double  delta2,
int  apertureSize = 3 
)
Python:
mask, orientation=cv.motempl.calcMotionGradient(mhi, delta1, delta2[, mask[, orientation[, apertureSize]]])

Calculates a gradient orientation of a motion history image.

Parameters
mhi - Motion history single-channel floating-point image.
mask - Output mask image that has the type CV_8UC1 and the same size as mhi. Its non-zero elements mark pixels where the motion gradient data is correct.
orientation - Output motion gradient orientation image that has the same type and the same size as mhi. Each pixel of the image is a motion orientation, from 0 to 360 degrees.
delta1 - Minimal (or maximal) allowed difference between mhi values within a pixel neighborhood.
delta2 - Maximal (or minimal) allowed difference between mhi values within a pixel neighborhood. That is, the function finds the minimum ( \(m(x,y)\) ) and maximum ( \(M(x,y)\) ) mhi values over a \(3 \times 3\) neighborhood of each pixel and marks the motion orientation at \((x, y)\) as valid only if

\[\min ( \texttt{delta1} , \texttt{delta2} ) \le M(x,y)-m(x,y) \le \max ( \texttt{delta1} , \texttt{delta2} ).\]

apertureSize - Aperture size of the Sobel operator.

The function calculates a gradient orientation at each pixel \((x, y)\) as:

\[\texttt{orientation} (x,y)= \arctan{\frac{d\texttt{mhi}/dy}{d\texttt{mhi}/dx}}\]

In fact, fastAtan2 and phase are used so that the computed angle is measured in degrees and covers the full range 0..360. Also, the mask is filled to indicate pixels where the computed angle is valid.

Note
  • (Python) An example on how to perform a motion template technique can be found at opencv_source_code/samples/python2/motempl.py
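
A minimal sketch of the corresponding C++ call; the delta values are illustrative and assume MHI timestamps in seconds:

#include <opencv2/core.hpp>
#include <opencv2/optflow.hpp>  // pulls in the cv::motempl functions

// Compute the valid-gradient mask and the gradient orientation of an MHI (CV_32FC1).
void motionGradient(const cv::Mat& mhi, cv::Mat& mask, cv::Mat& orientation)
{
    const double MIN_TIME_DELTA = 0.05;  // illustrative, same units as the MHI timestamps
    const double MAX_TIME_DELTA = 0.5;
    cv::motempl::calcMotionGradient(mhi, mask, orientation,
                                    MIN_TIME_DELTA, MAX_TIME_DELTA, 3 /*apertureSize*/);
}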

§ calcOpticalFlowSF() [1/2]

void cv::optflow::calcOpticalFlowSF ( InputArray  from,
InputArray  to,
OutputArray  flow,
int  layers,
int  averaging_block_size,
int  max_flow 
)
Python:
flow=cv.optflow.calcOpticalFlowSF(from, to, layers, averaging_block_size, max_flow[, flow])
flow=cv.optflow.calcOpticalFlowSF(from, to, layers, averaging_block_size, max_flow, sigma_dist, sigma_color, postprocess_window, sigma_dist_fix, sigma_color_fix, occ_thr, upscale_averaging_radius, upscale_sigma_dist, upscale_sigma_color, speed_up_thr[, flow])

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

§ calcOpticalFlowSF() [2/2]

void cv::optflow::calcOpticalFlowSF ( InputArray  from,
InputArray  to,
OutputArray  flow,
int  layers,
int  averaging_block_size,
int  max_flow,
double  sigma_dist,
double  sigma_color,
int  postprocess_window,
double  sigma_dist_fix,
double  sigma_color_fix,
double  occ_thr,
int  upscale_averaging_radius,
double  upscale_sigma_dist,
double  upscale_sigma_color,
double  speed_up_thr 
)
Python:
flow=cv.optflow.calcOpticalFlowSF(from, to, layers, averaging_block_size, max_flow[, flow])
flow=cv.optflow.calcOpticalFlowSF(from, to, layers, averaging_block_size, max_flow, sigma_dist, sigma_color, postprocess_window, sigma_dist_fix, sigma_color_fix, occ_thr, upscale_averaging_radius, upscale_sigma_dist, upscale_sigma_color, speed_up_thr[, flow])

Calculate an optical flow using "SimpleFlow" algorithm.

Parameters
from - First 8-bit 3-channel image.
to - Second 8-bit 3-channel image of the same size as from.
flow - Computed flow image that has the same size as from and type CV_32FC2.
layers - Number of layers.
averaging_block_size - Size of the block over which the cost function for a pixel is summed.
max_flow - Maximal flow that is searched at each level.
sigma_dist - Vector smooth spatial sigma parameter.
sigma_color - Vector smooth color sigma parameter.
postprocess_window - Window size for the post-processing cross bilateral filter.
sigma_dist_fix - Spatial sigma for the post-processing cross bilateral filter.
sigma_color_fix - Color sigma for the post-processing cross bilateral filter.
occ_thr - Threshold for detecting occlusions.
upscale_averaging_radius - Window size for the bilateral upscale operation.
upscale_sigma_dist - Spatial sigma for the bilateral upscale operation.
upscale_sigma_color - Color sigma for the bilateral upscale operation.
speed_up_thr - Threshold to detect points with irregular flow, where the flow should be recalculated after upscaling.

See [190] and the project site: http://graphics.berkeley.edu/papers/Tao-SAN-2012-05/.

Note
  • An example using the simpleFlow algorithm can be found at samples/simpleflow_demo.cpp
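
A minimal call sketch using the short overload; the file names are placeholders and the layer/block/flow values mirror typical demo settings rather than being prescriptive:

#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/optflow.hpp>

int main()
{
    // Two consecutive 8-bit 3-channel frames of the same size.
    cv::Mat from = cv::imread("frame1.png", cv::IMREAD_COLOR);
    cv::Mat to   = cv::imread("frame2.png", cv::IMREAD_COLOR);

    cv::Mat flow;  // will be CV_32FC2, same size as the inputs
    cv::optflow::calcOpticalFlowSF(from, to, flow,
                                   3 /*layers*/, 2 /*averaging_block_size*/, 4 /*max_flow*/);
    return 0;
}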

§ calcOpticalFlowSparseToDense()

void cv::optflow::calcOpticalFlowSparseToDense ( InputArray  from,
InputArray  to,
OutputArray  flow,
int  grid_step = 8,
int  k = 128,
float  sigma = 0.05f,
bool  use_post_proc = true,
float  fgs_lambda = 500.0f,
float  fgs_sigma = 1.5f 
)
Python:
flow=cv.optflow.calcOpticalFlowSparseToDense(from, to[, flow[, grid_step[, k[, sigma[, use_post_proc[, fgs_lambda[, fgs_sigma]]]]]]])

Fast dense optical flow based on PyrLK sparse matches interpolation.

Parameters
from - First 8-bit 3-channel or 1-channel image.
to - Second 8-bit 3-channel or 1-channel image of the same size as from.
flow - Computed flow image that has the same size as from and type CV_32FC2.
grid_step - Stride used in sparse match computation. Lower values usually result in higher quality but slow down the algorithm.
k - Number of nearest-neighbor matches considered when fitting a locally affine model. Lower values can make the algorithm noticeably faster at the cost of some quality degradation.
sigma - Parameter defining how fast the weights decrease in the locally-weighted affine fitting. Higher values can help preserve fine details, lower values can help to get rid of noise in the output flow.
use_post_proc - Defines whether ximgproc::fastGlobalSmootherFilter() is used for post-processing after interpolation.
fgs_lambda - See the respective parameter of ximgproc::fastGlobalSmootherFilter().
fgs_sigma - See the respective parameter of ximgproc::fastGlobalSmootherFilter().
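
A minimal call sketch; the file names are placeholders and only grid_step is overridden here (a denser grid than the default, for illustration):

#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/optflow.hpp>

int main()
{
    cv::Mat from = cv::imread("frame1.png", cv::IMREAD_GRAYSCALE);
    cv::Mat to   = cv::imread("frame2.png", cv::IMREAD_GRAYSCALE);

    cv::Mat flow;  // CV_32FC2 result, same size as the inputs
    cv::optflow::calcOpticalFlowSparseToDense(from, to, flow, /*grid_step=*/4);
    return 0;
}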

§ createOptFlow_DeepFlow()

Ptr<DenseOpticalFlow> cv::optflow::createOptFlow_DeepFlow ( )
Python:
retval=cv.optflow.createOptFlow_DeepFlow()

DeepFlow optical flow algorithm implementation.

The class implements the DeepFlow optical flow algorithm described in [216]. See also http://lear.inrialpes.fr/src/deepmatching/. Parameters (class fields) that may be modified after creating a class instance:

  • float alpha - Smoothness assumption weight
  • float delta - Color constancy assumption weight
  • float gamma - Gradient constancy weight
  • float sigma - Gaussian smoothing parameter
  • int minSize - Minimal dimension of an image in the pyramid (next, smaller images in the pyramid are generated until one of the dimensions reaches this size)
  • float downscaleFactor - Scaling factor in the image pyramid (must be < 1)
  • int fixedPointIterations - Number of iterations on each level of the pyramid
  • int sorIterations - Iterations of Successive Over-Relaxation (solver)
  • float omega - Relaxation factor in SOR
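
A minimal usage sketch through the generic DenseOpticalFlow interface; the file names are placeholders and 8-bit single-channel inputs are assumed:

#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/optflow.hpp>

int main()
{
    cv::Mat from = cv::imread("frame1.png", cv::IMREAD_GRAYSCALE);
    cv::Mat to   = cv::imread("frame2.png", cv::IMREAD_GRAYSCALE);

    cv::Ptr<cv::DenseOpticalFlow> deepFlow = cv::optflow::createOptFlow_DeepFlow();
    cv::Mat flow;  // CV_32FC2 result
    deepFlow->calc(from, to, flow);
    return 0;
}

The same calc() pattern applies to the other createOptFlow_* factories in this module.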

§ createOptFlow_DualTVL1()

Ptr<DualTVL1OpticalFlow> cv::optflow::createOptFlow_DualTVL1 ( )
Python:
retval=cv.optflow.createOptFlow_DualTVL1()

Creates an instance of cv::DenseOpticalFlow using the "Dual TV L1" algorithm.
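
Since the factory returns a Ptr<DualTVL1OpticalFlow>, the algorithm parameters can be tuned through the class setters before calling calc(); a minimal sketch (file names are placeholders, the lambda value is illustrative):

#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/optflow.hpp>

int main()
{
    cv::Mat from = cv::imread("frame1.png", cv::IMREAD_GRAYSCALE);
    cv::Mat to   = cv::imread("frame2.png", cv::IMREAD_GRAYSCALE);

    cv::Ptr<cv::optflow::DualTVL1OpticalFlow> tvl1 = cv::optflow::createOptFlow_DualTVL1();
    tvl1->setLambda(0.1);  // weight of the data attachment term (illustrative value)

    cv::Mat flow;
    tvl1->calc(from, to, flow);
    return 0;
}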

§ createOptFlow_Farneback()

Ptr<DenseOpticalFlow> cv::optflow::createOptFlow_Farneback ( )
Python:
retval=cv.optflow.createOptFlow_Farneback()

Additional interface to Farneback's algorithm - calcOpticalFlowFarneback()

§ createOptFlow_PCAFlow()

Ptr<DenseOpticalFlow> cv::optflow::createOptFlow_PCAFlow ( )
Python:
retval=cv.optflow.createOptFlow_PCAFlow()

Creates an instance of PCAFlow.

§ createOptFlow_SimpleFlow()

Ptr<DenseOpticalFlow> cv::optflow::createOptFlow_SimpleFlow ( )
Python:
retval=cv.optflow.createOptFlow_SimpleFlow()

Additional interface to the SimpleFlow algorithm - calcOpticalFlowSF()

§ createOptFlow_SparseToDense()

Ptr<DenseOpticalFlow> cv::optflow::createOptFlow_SparseToDense ( )
Python:
retval=cv.optflow.createOptFlow_SparseToDense()

Additional interface to the SparseToDenseFlow algorithm - calcOpticalFlowSparseToDense()

§ findCorrespondences()

template<int T>
void cv::optflow::GPCForest< T >::findCorrespondences ( InputArray  imgFrom,
InputArray  imgTo,
std::vector< std::pair< Point2i, Point2i > > &  corr,
const GPCMatchingParams  params = GPCMatchingParams() 
) const

Find correspondences between two images.

Parameters
[in] imgFrom - First image in a sequence.
[in] imgTo - Second image in a sequence.
[out] corr - Output vector with pairs of corresponding points.
[in] params - Additional matching parameters for fine-tuning.
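
A minimal sketch, assuming a forest with 5 trees has already been trained and serialized (for example with the GPC training sample) so that it can be restored with cv::Algorithm::load; the file names are placeholders:

#include <utility>
#include <vector>
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/optflow.hpp>  // pulls in the GPC classes

int main()
{
    const int nTrees = 5;  // must match the forest that was trained
    cv::Ptr<cv::optflow::GPCForest<nTrees>> forest =
        cv::Algorithm::load<cv::optflow::GPCForest<nTrees>>("forest.yml");

    cv::Mat imgFrom = cv::imread("frame1.png", cv::IMREAD_COLOR);
    cv::Mat imgTo   = cv::imread("frame2.png", cv::IMREAD_COLOR);

    std::vector<std::pair<cv::Point2i, cv::Point2i>> corr;
    forest->findCorrespondences(imgFrom, imgTo, corr);
    return 0;
}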

§ segmentMotion()

void cv::motempl::segmentMotion ( InputArray  mhi,
OutputArray  segmask,
std::vector< Rect > &  boundingRects,
double  timestamp,
double  segThresh 
)
Python:
segmask, boundingRects=cv.motempl.segmentMotion(mhi, timestamp, segThresh[, segmask])

Splits a motion history image into a few parts corresponding to separate independent motions (for example, left hand, right hand).

Parameters
mhi - Motion history image.
segmask - Image where the found mask should be stored, single-channel, 32-bit floating-point.
boundingRects - Vector containing ROIs of motion connected components.
timestamp - Current time in milliseconds or other units.
segThresh - Segmentation threshold that is recommended to be equal to the interval between motion history "steps" or greater.

The function finds all of the motion segments and marks them in segmask with individual values (1,2,...). It also computes a vector with ROIs of motion connected components. After that the motion direction for every component can be calculated with calcGlobalOrientation using the extracted mask of the particular component.
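
A minimal sketch of segmenting an MHI and estimating a direction per component; mhi, mask and orientation are assumed to come from updateMotionHistory / calcMotionGradient, and the segmentation threshold is illustrative:

#include <vector>
#include <opencv2/core.hpp>
#include <opencv2/optflow.hpp>

void perComponentDirections(const cv::Mat& mhi, const cv::Mat& mask,
                            const cv::Mat& orientation,
                            double timestamp, double duration)
{
    cv::Mat segmask;                      // CV_32FC1, one label value per motion segment
    std::vector<cv::Rect> boundingRects;  // ROI of each connected motion component
    cv::motempl::segmentMotion(mhi, segmask, boundingRects, timestamp,
                               duration * 0.5 /*segThresh, illustrative*/);

    for (const cv::Rect& roi : boundingRects)
    {
        // Direction of this component only, using the cropped mask and orientation.
        double angle = cv::motempl::calcGlobalOrientation(orientation(roi), mask(roi),
                                                          mhi(roi), timestamp, duration);
        (void)angle;  // e.g. draw an arrow at the ROI center
    }
}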

§ updateMotionHistory()

void cv::motempl::updateMotionHistory ( InputArray  silhouette,
InputOutputArray  mhi,
double  timestamp,
double  duration 
)
Python:
mhi=cv.motempl.updateMotionHistory(silhouette, mhi, timestamp, duration)

Updates the motion history image by a moving silhouette.

Parameters
silhouette - Silhouette mask that has non-zero pixels where the motion occurs.
mhi - Motion history image that is updated by the function (single-channel, 32-bit floating-point).
timestamp - Current time in milliseconds or other units.
duration - Maximal duration of the motion track in the same units as timestamp.

The function updates the motion history image as follows:

\[\texttt{mhi}(x,y)= \begin{cases} \texttt{timestamp} & \text{if } \texttt{silhouette}(x,y) \ne 0 \\ 0 & \text{if } \texttt{silhouette}(x,y) = 0 \text{ and } \texttt{mhi}(x,y) < \texttt{timestamp} - \texttt{duration} \\ \texttt{mhi}(x,y) & \text{otherwise} \end{cases}\]

That is, MHI pixels where the motion occurs are set to the current timestamp, while the pixels where motion happened long ago are cleared.

The function, together with calcMotionGradient and calcGlobalOrientation , implements a motion templates technique described in [42] and [24] .
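
A minimal sketch of the update step in a capture loop, using simple frame differencing to build the silhouette; the threshold and duration values are illustrative, and the downstream processing (calcMotionGradient, calcGlobalOrientation, segmentMotion) is sketched in the sections above:

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/videoio.hpp>
#include <opencv2/optflow.hpp>

int main()
{
    cv::VideoCapture cap(0);
    const double MHI_DURATION = 0.5;  // seconds of motion kept in the MHI (illustrative)

    cv::Mat frame, gray, prevGray, silhouette, mhi;
    while (cap.read(frame))
    {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        if (prevGray.empty())
        {
            prevGray = gray.clone();
            mhi = cv::Mat::zeros(gray.size(), CV_32FC1);
            continue;
        }

        // Silhouette: pixels that changed noticeably since the previous frame.
        cv::absdiff(gray, prevGray, silhouette);
        cv::threshold(silhouette, silhouette, 32, 255, cv::THRESH_BINARY);

        double timestamp = (double)cv::getTickCount() / cv::getTickFrequency();
        cv::motempl::updateMotionHistory(silhouette, mhi, timestamp, MHI_DURATION);

        prevGray = gray.clone();
    }
    return 0;
}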