OpenCV  4.1.2-dev
Open Source Computer Vision
Optical Flow Algorithms

Classes

class  cv::optflow::DenseRLOFOpticalFlow
 Fast dense optical flow computation based on robust local optical flow (RLOF) algorithms and sparse-to-dense interpolation scheme. More...
 
class  cv::optflow::DualTVL1OpticalFlow
 "Dual TV L1" Optical Flow Algorithm. More...
 
class  cv::optflow::GPCDetails
 
class  cv::optflow::GPCForest< T >
 
struct  cv::optflow::GPCMatchingParams
 Class encapsulating matching parameters. More...
 
struct  cv::optflow::GPCPatchDescriptor
 
struct  cv::optflow::GPCPatchSample
 
struct  cv::optflow::GPCTrainingParams
 Class encapsulating training parameters. More...
 
class  cv::optflow::GPCTrainingSamples
 Class encapsulating training samples. More...
 
class  cv::optflow::GPCTree
 Class for individual tree. More...
 
class  cv::optflow::OpticalFlowPCAFlow
 PCAFlow algorithm. More...
 
class  cv::optflow::PCAPrior
 This class can be used for imposing a learned prior on the resulting optical flow. Solution will be regularized according to this prior. You need to generate appropriate prior file with "learn_prior.py" script beforehand. More...
 
class  cv::optflow::RLOFOpticalFlowParameter
 This class is used to store and set up the parameters of the robust local optical flow (RLOF) algorithm. More...
 
class  cv::optflow::SparseRLOFOpticalFlow
 Class used for calculating sparse optical flow and feature tracking with robust local optical flow (RLOF) algorithms. More...
 

Typedefs

typedef std::vector< GPCPatchSample > cv::optflow::GPCSamplesVector
 

Enumerations

enum  cv::optflow::GPCDescType {
  cv::optflow::GPC_DESCRIPTOR_DCT = 0,
  cv::optflow::GPC_DESCRIPTOR_WHT
}
 Descriptor types for the Global Patch Collider. More...
 
enum  cv::optflow::InterpolationType {
  cv::optflow::INTERP_GEO = 0,
  cv::optflow::INTERP_EPIC = 1
}
 
enum  cv::optflow::SolverType {
  cv::optflow::ST_STANDART = 0,
  cv::optflow::ST_BILINEAR = 1
}
 
enum  cv::optflow::SupportRegionType {
  cv::optflow::SR_FIXED = 0,
  cv::optflow::SR_CROSS = 1
}
 

Functions

double cv::motempl::calcGlobalOrientation (InputArray orientation, InputArray mask, InputArray mhi, double timestamp, double duration)
 Calculates a global motion orientation in a selected region. More...
 
void cv::motempl::calcMotionGradient (InputArray mhi, OutputArray mask, OutputArray orientation, double delta1, double delta2, int apertureSize=3)
 Calculates a gradient orientation of a motion history image. More...
 
void cv::optflow::calcOpticalFlowDenseRLOF (InputArray I0, InputArray I1, InputOutputArray flow, Ptr< RLOFOpticalFlowParameter > rlofParam=Ptr< RLOFOpticalFlowParameter >(), float forwardBackwardThreshold=0, Size gridStep=Size(6, 6), InterpolationType interp_type=InterpolationType::INTERP_EPIC, int epicK=128, float epicSigma=0.05f, float epicLambda=100.f, bool use_post_proc=true, float fgsLambda=500.0f, float fgsSigma=1.5f)
 Fast dense optical flow computation based on robust local optical flow (RLOF) algorithms and sparse-to-dense interpolation scheme. More...
 
void cv::optflow::calcOpticalFlowSF (InputArray from, InputArray to, OutputArray flow, int layers, int averaging_block_size, int max_flow)
 
void cv::optflow::calcOpticalFlowSF (InputArray from, InputArray to, OutputArray flow, int layers, int averaging_block_size, int max_flow, double sigma_dist, double sigma_color, int postprocess_window, double sigma_dist_fix, double sigma_color_fix, double occ_thr, int upscale_averaging_radius, double upscale_sigma_dist, double upscale_sigma_color, double speed_up_thr)
 Calculate an optical flow using "SimpleFlow" algorithm. More...
 
void cv::optflow::calcOpticalFlowSparseRLOF (InputArray prevImg, InputArray nextImg, InputArray prevPts, InputOutputArray nextPts, OutputArray status, OutputArray err, Ptr< RLOFOpticalFlowParameter > rlofParam=Ptr< RLOFOpticalFlowParameter >(), float forwardBackwardThreshold=0)
 Calculates fast optical flow for a sparse feature set using the robust local optical flow (RLOF) similar to optflow::calcOpticalFlowPyrLK(). More...
 
void cv::optflow::calcOpticalFlowSparseToDense (InputArray from, InputArray to, OutputArray flow, int grid_step=8, int k=128, float sigma=0.05f, bool use_post_proc=true, float fgs_lambda=500.0f, float fgs_sigma=1.5f)
 Fast dense optical flow based on PyrLK sparse matches interpolation. More...
 
Ptr< DenseOpticalFlow > cv::optflow::createOptFlow_DeepFlow ()
 DeepFlow optical flow algorithm implementation. More...
 
Ptr< DenseOpticalFlow > cv::optflow::createOptFlow_DenseRLOF ()
 Additional interface to the Dense RLOF algorithm - optflow::calcOpticalFlowDenseRLOF() More...
 
Ptr< DualTVL1OpticalFlow > cv::optflow::createOptFlow_DualTVL1 ()
 Creates an instance of cv::DenseOpticalFlow. More...
 
Ptr< DenseOpticalFlow > cv::optflow::createOptFlow_Farneback ()
 Additional interface to Farneback's algorithm - calcOpticalFlowFarneback() More...
 
Ptr< DenseOpticalFlow > cv::optflow::createOptFlow_PCAFlow ()
 Creates an instance of PCAFlow. More...
 
Ptr< DenseOpticalFlow > cv::optflow::createOptFlow_SimpleFlow ()
 Additional interface to the SimpleFlow algorithm - calcOpticalFlowSF() More...
 
Ptr< SparseOpticalFlow > cv::optflow::createOptFlow_SparseRLOF ()
 Additional interface to the Sparse RLOF algorithm - optflow::calcOpticalFlowSparseRLOF() More...
 
Ptr< DenseOpticalFlow > cv::optflow::createOptFlow_SparseToDense ()
 Additional interface to the SparseToDenseFlow algorithm - calcOpticalFlowSparseToDense() More...
 
void cv::optflow::GPCForest< T >::findCorrespondences (InputArray imgFrom, InputArray imgTo, std::vector< std::pair< Point2i, Point2i > > &corr, const GPCMatchingParams params=GPCMatchingParams()) const
 Find correspondences between two images. More...
 
void cv::motempl::segmentMotion (InputArray mhi, OutputArray segmask, std::vector< Rect > &boundingRects, double timestamp, double segThresh)
 Splits a motion history image into a few parts corresponding to separate independent motions (for example, left hand, right hand). More...
 
void cv::motempl::updateMotionHistory (InputArray silhouette, InputOutputArray mhi, double timestamp, double duration)
 Updates the motion history image by a moving silhouette. More...
 

Detailed Description

Dense optical flow algorithms compute motion for each point.

Motion templates are an alternative technique for detecting motion and computing its direction. See samples/motempl.py.

Functions for reading and writing .flo files in the "Middlebury" format; see: http://vision.middlebury.edu/flow/code/flow-code/README.txt

Typedef Documentation

◆ GPCSamplesVector

Enumeration Type Documentation

◆ GPCDescType

#include <opencv2/optflow/sparse_matching_gpc.hpp>

Descriptor types for the Global Patch Collider.

Enumerator
GPC_DESCRIPTOR_DCT 
Python: cv.optflow.GPC_DESCRIPTOR_DCT

Better quality but slow.

GPC_DESCRIPTOR_WHT 
Python: cv.optflow.GPC_DESCRIPTOR_WHT

Worse quality but much faster.

◆ InterpolationType

#include <opencv2/optflow/rlofflow.hpp>

Enumerator
INTERP_GEO 
Python: cv.optflow.INTERP_GEO

Fast geodesic interpolation, see [78]

INTERP_EPIC 
Python: cv.optflow.INTERP_EPIC

Edge-preserving interpolation, see [184], Geistert2016.

◆ SolverType

#include <opencv2/optflow/rlofflow.hpp>

Enumerator
ST_STANDART 
Python: cv.optflow.ST_STANDART

Apply standard iterative refinement

ST_BILINEAR 
Python: cv.optflow.ST_BILINEAR

Apply optimized iterative refinement based on bilinear equation solutions as described in [198]

◆ SupportRegionType

#include <opencv2/optflow/rlofflow.hpp>

Enumerator
SR_FIXED 
Python: cv.optflow.SR_FIXED

Apply a constant support region

SR_CROSS 
Python: cv.optflow.SR_CROSS

Apply an adaptive support region obtained by cross-based segmentation as described in [199]

Function Documentation

◆ calcGlobalOrientation()

double cv::motempl::calcGlobalOrientation ( InputArray  orientation,
InputArray  mask,
InputArray  mhi,
double  timestamp,
double  duration 
)
Python:
retval=cv.motempl.calcGlobalOrientation(orientation, mask, mhi, timestamp, duration)

#include <opencv2/optflow/motempl.hpp>

Calculates a global motion orientation in a selected region.

Parameters
orientation: Motion gradient orientation image calculated by the function calcMotionGradient.
mask: Mask image. It may be a conjunction of a valid gradient mask, also calculated by calcMotionGradient, and the mask of a region whose direction needs to be calculated.
mhi: Motion history image calculated by updateMotionHistory.
timestamp: Timestamp passed to updateMotionHistory.
duration: Maximum duration of a motion track in milliseconds, passed to updateMotionHistory.

The function calculates an average motion direction in the selected region and returns the angle between 0 degrees and 360 degrees. The average direction is computed from the weighted orientation histogram, where a recent motion has a larger weight and the motion occurred in the past has a smaller weight, as recorded in mhi .
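The weighted averaging of directions described above can be illustrated with a small NumPy sketch. This is an illustrative re-implementation of the idea only, not the OpenCV code: the `weights` argument stands in for the recency weighting that the real function derives from mhi, and the averaging is done on the unit circle so that angles near 0/360 degrees combine correctly.

```python
import numpy as np

def weighted_mean_orientation(angles_deg, weights):
    """Circular weighted mean of orientations in degrees, in [0, 360).

    Illustrative sketch of the weighted-histogram averaging idea;
    `weights` stands in for the recency weighting derived from mhi.
    """
    a = np.radians(np.asarray(angles_deg, dtype=float))
    w = np.asarray(weights, dtype=float)
    # Average on the unit circle so that e.g. 350 and 10 degrees
    # average to 0 rather than to 180.
    s = np.sum(w * np.sin(a))
    c = np.sum(w * np.cos(a))
    return np.degrees(np.arctan2(s, c)) % 360.0
```

With equal weights this reduces to the ordinary circular mean; larger weights pull the result toward the more recent motion directions.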

◆ calcMotionGradient()

void cv::motempl::calcMotionGradient ( InputArray  mhi,
OutputArray  mask,
OutputArray  orientation,
double  delta1,
double  delta2,
int  apertureSize = 3 
)
Python:
mask, orientation=cv.motempl.calcMotionGradient(mhi, delta1, delta2[, mask[, orientation[, apertureSize]]])

#include <opencv2/optflow/motempl.hpp>

Calculates a gradient orientation of a motion history image.

Parameters
mhi: Motion history single-channel floating-point image.
mask: Output mask image that has the type CV_8UC1 and the same size as mhi. Its non-zero elements mark pixels where the motion gradient data is correct.
orientation: Output motion gradient orientation image that has the same type and the same size as mhi. Each pixel of the image is a motion orientation, from 0 to 360 degrees.
delta1: Minimal (or maximal) allowed difference between mhi values within a pixel neighborhood.
delta2: Maximal (or minimal) allowed difference between mhi values within a pixel neighborhood. That is, the function finds the minimum ( \(m(x,y)\) ) and maximum ( \(M(x,y)\) ) mhi values over a \(3 \times 3\) neighborhood of each pixel and marks the motion orientation at \((x, y)\) as valid only if

\[\min ( \texttt{delta1} , \texttt{delta2} ) \le M(x,y)-m(x,y) \le \max ( \texttt{delta1} , \texttt{delta2} ).\]

apertureSize: Aperture size of the Sobel operator.

The function calculates a gradient orientation at each pixel \((x, y)\) as:

\[\texttt{orientation} (x,y)= \arctan{\frac{d\texttt{mhi}/dy}{d\texttt{mhi}/dx}}\]

In fact, fastAtan2 and phase are used so that the computed angle is measured in degrees and covers the full range 0..360. Also, the mask is filled to indicate pixels where the computed angle is valid.
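The formulas above can be mirrored in a few lines of NumPy. This is an illustrative sketch, not the OpenCV implementation: `np.gradient` stands in for the Sobel derivatives, and the 3x3 min/max validity test is done with a sliding window.

```python
import numpy as np

def motion_gradient(mhi, delta1, delta2):
    # Orientation of the mhi gradient, in degrees over the full 0..360
    # range, mirroring arctan(d(mhi)/dy / d(mhi)/dx).
    dy, dx = np.gradient(mhi.astype(float))
    orientation = np.degrees(np.arctan2(dy, dx)) % 360.0
    # Validity: min(delta1, delta2) <= M(x,y) - m(x,y) <= max(delta1, delta2),
    # where m/M are the min/max of mhi over each 3x3 neighborhood.
    lo, hi = min(delta1, delta2), max(delta1, delta2)
    padded = np.pad(mhi, 1, mode='edge')
    win = np.lib.stride_tricks.sliding_window_view(padded, (3, 3))
    spread = win.max(axis=(2, 3)) - win.min(axis=(2, 3))
    mask = (spread >= lo) & (spread <= hi)
    return orientation, mask
```

For a horizontal intensity ramp the gradient points along +x, so the orientation is 0 degrees everywhere and the mask marks pixels whose local mhi spread falls within the delta bounds.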

Note
  • (Python) An example on how to perform a motion template technique can be found at opencv_source_code/samples/python2/motempl.py

◆ calcOpticalFlowDenseRLOF()

void cv::optflow::calcOpticalFlowDenseRLOF ( InputArray  I0,
InputArray  I1,
InputOutputArray  flow,
Ptr< RLOFOpticalFlowParameter >  rlofParam = Ptr< RLOFOpticalFlowParameter >(),
float  forwardBackwardThreshold = 0,
Size  gridStep = Size(6, 6),
InterpolationType  interp_type = InterpolationType::INTERP_EPIC,
int  epicK = 128,
float  epicSigma = 0.05f,
float  epicLambda = 100.f,
bool  use_post_proc = true,
float  fgsLambda = 500.0f,
float  fgsSigma = 1.5f 
)
Python:
flow=cv.optflow.calcOpticalFlowDenseRLOF(I0, I1, flow[, rlofParam[, forwardBackwardThreshold[, gridStep[, interp_type[, epicK[, epicSigma[, epicLambda[, use_post_proc[, fgsLambda[, fgsSigma]]]]]]]]]])

#include <opencv2/optflow/rlofflow.hpp>

Fast dense optical flow computation based on robust local optical flow (RLOF) algorithms and sparse-to-dense interpolation scheme.

The RLOF is a fast local optical flow approach described in [197] [198] [199] and [200] similar to the pyramidal iterative Lucas-Kanade method as proposed by [23]. The implementation is derived from optflow::calcOpticalFlowPyrLK().

The sparse-to-dense interpolation scheme allows for fast computation of dense optical flow using RLOF (see [78]). For this scheme the following steps are applied:

  1. Motion vectors seeded at a regularly sampled grid are computed. The sparsity of this grid can be configured with setGridStep.
  2. (optionally) Erroneous motion vectors are filtered based on the forward backward confidence. The threshold can be configured with setForwardBackward. The filter is only applied if the threshold > 0, but then the runtime is doubled due to the estimation of the backward flow.
  3. Vector field interpolation is applied to the motion vector set to obtain a dense vector field.
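Steps 1 and 2 above can be sketched as follows. This is illustrative NumPy code, not the OpenCV implementation: `flow_fwd` and `flow_bwd` stand for precomputed forward and backward flow fields, whereas the real function estimates the motion vectors with the RLOF itself.

```python
import numpy as np

def seed_and_filter(flow_fwd, flow_bwd, grid_step=6, fb_threshold=1.0):
    """Seed motion vectors on a regular grid and drop those failing the
    forward backward check EP_FB = ||d_I0,I1 + d_I1,I0||.

    flow_fwd, flow_bwd: (H, W, 2) flow fields (assumed given here).
    Returns the surviving seed coordinates and their motion vectors.
    """
    h, w = flow_fwd.shape[:2]
    ys, xs = np.mgrid[0:h:grid_step, 0:w:grid_step]
    ys, xs = ys.ravel(), xs.ravel()
    d_fwd = flow_fwd[ys, xs]
    # Backward flow sampled at the (rounded) forward-displaced position.
    tx = np.clip(np.round(xs + d_fwd[:, 0]).astype(int), 0, w - 1)
    ty = np.clip(np.round(ys + d_fwd[:, 1]).astype(int), 0, h - 1)
    d_bwd = flow_bwd[ty, tx]
    ep_fb = np.linalg.norm(d_fwd + d_bwd, axis=1)
    keep = ep_fb <= fb_threshold if fb_threshold > 0 else np.ones(len(xs), bool)
    return np.stack([xs[keep], ys[keep]], axis=1), d_fwd[keep]
```

A consistent backward flow (close to the negated forward flow) keeps every seed; inconsistent seeds are removed before step 3 interpolates the remaining vectors to a dense field.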
Parameters
I0: first 8-bit input image. If the cross-based RLOF is used (by selecting optflow::RLOFOpticalFlowParameter::supportRegionType = SupportRegionType::SR_CROSS), the image has to be an 8-bit 3-channel image.
I1: second 8-bit input image. If the cross-based RLOF is used (by selecting optflow::RLOFOpticalFlowParameter::supportRegionType = SupportRegionType::SR_CROSS), the image has to be an 8-bit 3-channel image.
flow: computed flow image that has the same size as I0 and type CV_32FC2.
rlofParam: see optflow::RLOFOpticalFlowParameter
forwardBackwardThreshold: Threshold for the forward backward confidence check. For each grid point \( \mathbf{x} \) a motion vector \( d_{I0,I1}(\mathbf{x}) \) is computed. If the forward backward error

\[ EP_{FB} = || d_{I0,I1} + d_{I1,I0} || \]

is larger than the threshold given by this parameter, then the motion vector will not be used by the following vector field interpolation. \( d_{I1,I0} \) denotes the backward flow. Note that the forward backward test will only be applied if the threshold > 0. This may result in a doubled runtime for the motion estimation.
gridStep: Size of the grid to spawn the motion vectors. For each grid point a motion vector is computed. Some motion vectors will be removed due to the forward backward threshold (if set > 0). The rest will be the base of the vector field interpolation.
interp_type: interpolation method used to compute the dense optical flow. Two interpolation algorithms are supported:
  • INTERP_GEO applies the fast geodesic interpolation, see [78].
  • INTERP_EPIC applies the edge-preserving interpolation, see [184], Geistert2016.
epicK: see ximgproc::EdgeAwareInterpolator(); sets the respective parameter.
epicSigma: see ximgproc::EdgeAwareInterpolator(); sets the respective parameter.
epicLambda: see ximgproc::EdgeAwareInterpolator(); sets the respective parameter.
use_post_proc: enables the ximgproc::fastGlobalSmootherFilter() post-processing.
fgsLambda: sets the respective ximgproc::fastGlobalSmootherFilter() parameter.
fgsSigma: sets the respective ximgproc::fastGlobalSmootherFilter() parameter.

Parameters have been described in [197], [198], [199], [200]. For the RLOF configuration see optflow::RLOFOpticalFlowParameter for further details.

Note
If the grid size is set to (1,1) and the forward backward threshold <= 0, then the dense optical flow field is computed purely with the RLOF.
SIMD parallelization is only available when compiling with SSE4.1.
See also
optflow::DenseRLOFOpticalFlow, optflow::RLOFOpticalFlowParameter

◆ calcOpticalFlowSF() [1/2]

void cv::optflow::calcOpticalFlowSF ( InputArray  from,
InputArray  to,
OutputArray  flow,
int  layers,
int  averaging_block_size,
int  max_flow 
)
Python:
flow=cv.optflow.calcOpticalFlowSF(from, to, layers, averaging_block_size, max_flow[, flow])
flow=cv.optflow.calcOpticalFlowSF(from, to, layers, averaging_block_size, max_flow, sigma_dist, sigma_color, postprocess_window, sigma_dist_fix, sigma_color_fix, occ_thr, upscale_averaging_radius, upscale_sigma_dist, upscale_sigma_color, speed_up_thr[, flow])

#include <opencv2/optflow.hpp>

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

◆ calcOpticalFlowSF() [2/2]

void cv::optflow::calcOpticalFlowSF ( InputArray  from,
InputArray  to,
OutputArray  flow,
int  layers,
int  averaging_block_size,
int  max_flow,
double  sigma_dist,
double  sigma_color,
int  postprocess_window,
double  sigma_dist_fix,
double  sigma_color_fix,
double  occ_thr,
int  upscale_averaging_radius,
double  upscale_sigma_dist,
double  upscale_sigma_color,
double  speed_up_thr 
)
Python:
flow=cv.optflow.calcOpticalFlowSF(from, to, layers, averaging_block_size, max_flow[, flow])
flow=cv.optflow.calcOpticalFlowSF(from, to, layers, averaging_block_size, max_flow, sigma_dist, sigma_color, postprocess_window, sigma_dist_fix, sigma_color_fix, occ_thr, upscale_averaging_radius, upscale_sigma_dist, upscale_sigma_color, speed_up_thr[, flow])

#include <opencv2/optflow.hpp>

Calculate an optical flow using "SimpleFlow" algorithm.

Parameters
from: First 8-bit 3-channel image.
to: Second 8-bit 3-channel image of the same size as from.
flow: computed flow image that has the same size as from and type CV_32FC2.
layers: Number of layers.
averaging_block_size: Size of the block over which we sum up when calculating the cost function for a pixel.
max_flow: maximal flow that we search at each level.
sigma_dist: vector smooth spatial sigma parameter.
sigma_color: vector smooth color sigma parameter.
postprocess_window: window size for the postprocess cross bilateral filter.
sigma_dist_fix: spatial sigma for the postprocess cross bilateral filter.
sigma_color_fix: color sigma for the postprocess cross bilateral filter.
occ_thr: threshold for detecting occlusions.
upscale_averaging_radius: window size for the bilateral upscale operation.
upscale_sigma_dist: spatial sigma for the bilateral upscale operation.
upscale_sigma_color: color sigma for the bilateral upscale operation.
speed_up_thr: threshold to detect points with irregular flow, where flow should be recalculated after upscale.

See [215] and the project site: http://graphics.berkeley.edu/papers/Tao-SAN-2012-05/.

Note
  • An example using the simpleFlow algorithm can be found at samples/simpleflow_demo.cpp

◆ calcOpticalFlowSparseRLOF()

void cv::optflow::calcOpticalFlowSparseRLOF ( InputArray  prevImg,
InputArray  nextImg,
InputArray  prevPts,
InputOutputArray  nextPts,
OutputArray  status,
OutputArray  err,
Ptr< RLOFOpticalFlowParameter >  rlofParam = Ptr< RLOFOpticalFlowParameter >(),
float  forwardBackwardThreshold = 0 
)
Python:
nextPts, status, err=cv.optflow.calcOpticalFlowSparseRLOF(prevImg, nextImg, prevPts, nextPts[, status[, err[, rlofParam[, forwardBackwardThreshold]]]])

#include <opencv2/optflow/rlofflow.hpp>

Calculates fast optical flow for a sparse feature set using the robust local optical flow (RLOF) similar to optflow::calcOpticalFlowPyrLK().

The RLOF is a fast local optical flow approach described in [197] [198] [199] and [200] similar to the pyramidal iterative Lucas-Kanade method as proposed by [23]. The implementation is derived from optflow::calcOpticalFlowPyrLK().

Parameters
prevImg: first 8-bit input image. If the cross-based RLOF is used (by selecting optflow::RLOFOpticalFlowParameter::supportRegionType = SupportRegionType::SR_CROSS), the image has to be an 8-bit 3-channel image.
nextImg: second 8-bit input image. If the cross-based RLOF is used (by selecting optflow::RLOFOpticalFlowParameter::supportRegionType = SupportRegionType::SR_CROSS), the image has to be an 8-bit 3-channel image.
prevPts: vector of 2D points for which the flow needs to be found; point coordinates must be single-precision floating-point numbers.
nextPts: output vector of 2D points (with single-precision floating-point coordinates) containing the calculated new positions of the input features in the second image; when optflow::RLOFOpticalFlowParameter::useInitialFlow is true, the vector must have the same size as the input and contain the initialization point correspondences.
status: output status vector (of unsigned chars); each element of the vector is set to 1 if the flow for the corresponding features has passed the forward backward check.
err: output vector of errors; each element of the vector is set to the forward backward error for the corresponding feature.
rlofParam: see optflow::RLOFOpticalFlowParameter
forwardBackwardThreshold: Threshold for the forward backward confidence check. If forwardBackwardThreshold <= 0, the forward backward check is not applied.
Note
SIMD parallelization is only available when compiling with SSE4.1.

Parameters have been described in [197], [198], [199] and [200]. For the RLOF configuration see optflow::RLOFOpticalFlowParameter for further details.

◆ calcOpticalFlowSparseToDense()

void cv::optflow::calcOpticalFlowSparseToDense ( InputArray  from,
InputArray  to,
OutputArray  flow,
int  grid_step = 8,
int  k = 128,
float  sigma = 0.05f,
bool  use_post_proc = true,
float  fgs_lambda = 500.0f,
float  fgs_sigma = 1.5f 
)
Python:
flow=cv.optflow.calcOpticalFlowSparseToDense(from, to[, flow[, grid_step[, k[, sigma[, use_post_proc[, fgs_lambda[, fgs_sigma]]]]]]])

#include <opencv2/optflow.hpp>

Fast dense optical flow based on PyrLK sparse matches interpolation.

Parameters
from: first 8-bit 3-channel or 1-channel image.
to: second 8-bit 3-channel or 1-channel image of the same size as from.
flow: computed flow image that has the same size as from and type CV_32FC2.
grid_step: stride used in sparse match computation. Lower values usually result in higher quality but slow down the algorithm.
k: number of nearest-neighbor matches considered, when fitting a locally affine model. Lower values can make the algorithm noticeably faster at the cost of some quality degradation.
sigma: parameter defining how fast the weights decrease in the locally-weighted affine fitting. Higher values can help preserve fine details, lower values can help to get rid of the noise in the output flow.
use_post_proc: defines whether the ximgproc::fastGlobalSmootherFilter() is used for post-processing after interpolation.
fgs_lambda: see the respective parameter of the ximgproc::fastGlobalSmootherFilter().
fgs_sigma: see the respective parameter of the ximgproc::fastGlobalSmootherFilter().

◆ createOptFlow_DeepFlow()

Ptr<DenseOpticalFlow> cv::optflow::createOptFlow_DeepFlow ( )
Python:
retval=cv.optflow.createOptFlow_DeepFlow()

#include <opencv2/optflow.hpp>

DeepFlow optical flow algorithm implementation.

The class implements the DeepFlow optical flow algorithm described in [242] . See also http://lear.inrialpes.fr/src/deepmatching/ . Parameters - class fields - that may be modified after creating a class instance:

  • member float alpha Smoothness assumption weight
  • member float delta Color constancy assumption weight
  • member float gamma Gradient constancy weight
  • member float sigma Gaussian smoothing parameter
  • member int minSize Minimal dimension of an image in the pyramid (next, smaller images in the pyramid are generated until one of the dimensions reaches this size)
  • member float downscaleFactor Scaling factor in the image pyramid (must be < 1)
  • member int fixedPointIterations How many iterations on each level of the pyramid
  • member int sorIterations Iterations of Successive Over-Relaxation (solver)
  • member float omega Relaxation factor in SOR

◆ createOptFlow_DenseRLOF()

Ptr<DenseOpticalFlow> cv::optflow::createOptFlow_DenseRLOF ( )
Python:
retval=cv.optflow.createOptFlow_DenseRLOF()

#include <opencv2/optflow/rlofflow.hpp>

Additional interface to the Dense RLOF algorithm - optflow::calcOpticalFlowDenseRLOF()

◆ createOptFlow_DualTVL1()

Ptr<DualTVL1OpticalFlow> cv::optflow::createOptFlow_DualTVL1 ( )
Python:
retval=cv.optflow.createOptFlow_DualTVL1()

#include <opencv2/optflow.hpp>

Creates an instance of cv::DenseOpticalFlow.

◆ createOptFlow_Farneback()

Ptr<DenseOpticalFlow> cv::optflow::createOptFlow_Farneback ( )
Python:
retval=cv.optflow.createOptFlow_Farneback()

#include <opencv2/optflow.hpp>

Additional interface to Farneback's algorithm - calcOpticalFlowFarneback()

◆ createOptFlow_PCAFlow()

Ptr<DenseOpticalFlow> cv::optflow::createOptFlow_PCAFlow ( )
Python:
retval=cv.optflow.createOptFlow_PCAFlow()

#include <opencv2/optflow/pcaflow.hpp>

Creates an instance of PCAFlow.

◆ createOptFlow_SimpleFlow()

Ptr<DenseOpticalFlow> cv::optflow::createOptFlow_SimpleFlow ( )
Python:
retval=cv.optflow.createOptFlow_SimpleFlow()

#include <opencv2/optflow.hpp>

Additional interface to the SimpleFlow algorithm - calcOpticalFlowSF()

◆ createOptFlow_SparseRLOF()

Ptr<SparseOpticalFlow> cv::optflow::createOptFlow_SparseRLOF ( )
Python:
retval=cv.optflow.createOptFlow_SparseRLOF()

#include <opencv2/optflow/rlofflow.hpp>

Additional interface to the Sparse RLOF algorithm - optflow::calcOpticalFlowSparseRLOF()

◆ createOptFlow_SparseToDense()

Ptr<DenseOpticalFlow> cv::optflow::createOptFlow_SparseToDense ( )
Python:
retval=cv.optflow.createOptFlow_SparseToDense()

#include <opencv2/optflow.hpp>

Additional interface to the SparseToDenseFlow algorithm - calcOpticalFlowSparseToDense()

◆ findCorrespondences()

template<int T>
void cv::optflow::GPCForest< T >::findCorrespondences ( InputArray  imgFrom,
InputArray  imgTo,
std::vector< std::pair< Point2i, Point2i > > &  corr,
const GPCMatchingParams  params = GPCMatchingParams() 
) const

#include <opencv2/optflow/sparse_matching_gpc.hpp>

Find correspondences between two images.

Parameters
[in] imgFrom: First image in a sequence.
[in] imgTo: Second image in a sequence.
[out] corr: Output vector with pairs of corresponding points.
[in] params: Additional matching parameters for fine-tuning.

◆ segmentMotion()

void cv::motempl::segmentMotion ( InputArray  mhi,
OutputArray  segmask,
std::vector< Rect > &  boundingRects,
double  timestamp,
double  segThresh 
)
Python:
segmask, boundingRects=cv.motempl.segmentMotion(mhi, timestamp, segThresh[, segmask])

#include <opencv2/optflow/motempl.hpp>

Splits a motion history image into a few parts corresponding to separate independent motions (for example, left hand, right hand).

Parameters
mhi: Motion history image.
segmask: Image where the found mask should be stored, single-channel, 32-bit floating-point.
boundingRects: Vector containing ROIs of motion connected components.
timestamp: Current time in milliseconds or other units.
segThresh: Segmentation threshold that is recommended to be equal to the interval between motion history "steps" or greater.

The function finds all of the motion segments and marks them in segmask with individual values (1,2,...). It also computes a vector with ROIs of motion connected components. After that the motion direction for every component can be calculated with calcGlobalOrientation using the extracted mask of the particular component.
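The numbering of segments (1, 2, ...) can be illustrated with a small flood-fill labeling sketch. This is illustrative only: it labels 4-connected components of a boolean "active motion" mask, whereas the actual function additionally applies the timestamp/segThresh logic to the motion history image.

```python
import numpy as np
from collections import deque

def label_motion_segments(active):
    # Label 4-connected components of an active-motion mask with values
    # 1, 2, ..., analogous to how segmentMotion fills segmask.
    labels = np.zeros(active.shape, dtype=np.int32)
    current = 0
    for start in zip(*np.nonzero(active)):
        if labels[start]:
            continue  # already assigned to an earlier component
        current += 1
        q = deque([start])
        labels[start] = current
        while q:  # breadth-first flood fill of one component
            y, x = q.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < active.shape[0] and 0 <= nx < active.shape[1]
                        and active[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    q.append((ny, nx))
    return labels
```

Each labeled component then plays the role of one extracted mask whose direction could be estimated with calcGlobalOrientation.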

◆ updateMotionHistory()

void cv::motempl::updateMotionHistory ( InputArray  silhouette,
InputOutputArray  mhi,
double  timestamp,
double  duration 
)
Python:
mhi=cv.motempl.updateMotionHistory(silhouette, mhi, timestamp, duration)

#include <opencv2/optflow/motempl.hpp>

Updates the motion history image by a moving silhouette.

Parameters
silhouette: Silhouette mask that has non-zero pixels where the motion occurs.
mhi: Motion history image that is updated by the function (single-channel, 32-bit floating-point).
timestamp: Current time in milliseconds or other units.
duration: Maximal duration of the motion track in the same units as timestamp.

The function updates the motion history image as follows:

\[\texttt{mhi}(x,y)= \begin{cases} \texttt{timestamp} & \text{if } \texttt{silhouette}(x,y) \ne 0 \\ 0 & \text{if } \texttt{silhouette}(x,y) = 0 \text{ and } \texttt{mhi}(x,y) < (\texttt{timestamp} - \texttt{duration}) \\ \texttt{mhi}(x,y) & \text{otherwise} \end{cases}\]

That is, MHI pixels where the motion occurs are set to the current timestamp , while the pixels where the motion happened last time a long time ago are cleared.
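The update rule reads directly as NumPy (an illustrative sketch, not the OpenCV implementation):

```python
import numpy as np

def update_motion_history(silhouette, mhi, timestamp, duration):
    # Illustrative sketch of the documented three-way update rule.
    mhi = mhi.astype(float).copy()
    # Pixels where motion occurs are stamped with the current time.
    mhi[silhouette != 0] = timestamp
    # Pixels whose last motion is older than `duration` are cleared.
    stale = (silhouette == 0) & (mhi < timestamp - duration)
    mhi[stale] = 0.0
    # All other pixels keep their previous mhi value.
    return mhi
```

Running this repeatedly over a sequence of silhouettes produces the layered "motion history" whose gradient calcMotionGradient analyzes.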

The function, together with calcMotionGradient and calcGlobalOrientation , implements a motion templates technique described in [46] and [25] .