OpenCV  4.4.0-dev
Open Source Computer Vision
silhouette based 3D object tracking


Classes

class  cv::rapid::OLSTracker
class  cv::rapid::Rapid
       wrapper around silhouette based 3D object tracking function for uniform access
class  cv::rapid::Tracker
       Abstract base class for stateful silhouette trackers.


Functions

void cv::rapid::convertCorrespondencies (InputArray cols, InputArray srcLocations, OutputArray pts2d, InputOutputArray pts3d=noArray(), InputArray mask=noArray())
void cv::rapid::drawCorrespondencies (InputOutputArray bundle, InputArray cols, InputArray colors=noArray())
void cv::rapid::drawSearchLines (InputOutputArray img, InputArray locations, const Scalar &color)
void cv::rapid::drawWireframe (InputOutputArray img, InputArray pts2d, InputArray tris, const Scalar &color, int type=LINE_8, bool cullBackface=false)
void cv::rapid::extractControlPoints (int num, int len, InputArray pts3d, InputArray rvec, InputArray tvec, InputArray K, const Size &imsize, InputArray tris, OutputArray ctl2d, OutputArray ctl3d)
void cv::rapid::extractLineBundle (int len, InputArray ctl2d, InputArray img, OutputArray bundle, OutputArray srcLocations)
void cv::rapid::findCorrespondencies (InputArray bundle, OutputArray cols, OutputArray response=noArray())
float cv::rapid::rapid (InputArray img, int num, int len, InputArray pts3d, InputArray tris, InputArray K, InputOutputArray rvec, InputOutputArray tvec, double *rmsd=0)

Detailed Description

Implements "RAPID - a video rate object tracker" [95] with the dynamic control point extraction of [55].

Function Documentation

◆ convertCorrespondencies()

void cv::rapid::convertCorrespondencies (InputArray cols,
                                         InputArray srcLocations,
                                         OutputArray pts2d,
                                         InputOutputArray pts3d = noArray(),
                                         InputArray mask = noArray())

Python:
  pts2d, pts3d = cv.rapid.convertCorrespondencies(cols, srcLocations[, pts2d[, pts3d[, mask]]])

#include <opencv2/rapid.hpp>

Collect corresponding 2d and 3d points based on correspondencies and mask.

Parameters
    cols            correspondence-position per line in line-bundle-space
    srcLocations    the source image locations
    pts2d           the resulting 2d points
    pts3d           the corresponding 3d points
    mask            mask containing non-zero values for the elements to be retained

◆ drawCorrespondencies()

void cv::rapid::drawCorrespondencies (InputOutputArray bundle,
                                      InputArray cols,
                                      InputArray colors = noArray())

Python:
  bundle = cv.rapid.drawCorrespondencies(bundle, cols[, colors])

#include <opencv2/rapid.hpp>

Debug draw markers of matched correspondences onto a lineBundle.

Parameters
    bundle    the lineBundle
    cols      column coordinates in the line bundle
    colors    colors for the markers. Defaults to white.

◆ drawSearchLines()

void cv::rapid::drawSearchLines (InputOutputArray img,
                                 InputArray locations,
                                 const Scalar &color)

Python:
  img = cv.rapid.drawSearchLines(img, locations, color)

#include <opencv2/rapid.hpp>

Debug draw search lines onto an image.

Parameters
    img          the output image
    locations    the source locations of a line bundle
    color        the line color

◆ drawWireframe()

void cv::rapid::drawWireframe (InputOutputArray img,
                               InputArray pts2d,
                               InputArray tris,
                               const Scalar &color,
                               int type = LINE_8,
                               bool cullBackface = false)

Python:
  img = cv.rapid.drawWireframe(img, pts2d, tris, color[, type[, cullBackface]])

#include <opencv2/rapid.hpp>

Draw a wireframe of a triangle mesh.

Parameters
    img             the output image
    pts2d           the 2d points obtained by projectPoints
    tris            triangle face connectivity
    color           line color
    type            line type. See LineTypes.
    cullBackface    enable back-face culling based on CCW order

◆ extractControlPoints()

void cv::rapid::extractControlPoints (int num,
                                      int len,
                                      InputArray pts3d,
                                      InputArray rvec,
                                      InputArray tvec,
                                      InputArray K,
                                      const Size &imsize,
                                      InputArray tris,
                                      OutputArray ctl2d,
                                      OutputArray ctl3d)

Python:
  ctl2d, ctl3d = cv.rapid.extractControlPoints(num, len, pts3d, rvec, tvec, K, imsize, tris[, ctl2d[, ctl3d]])

#include <opencv2/rapid.hpp>

Extract control points from the projected silhouette of a mesh.

See [55], Sec. 2.1, Step b.

Parameters
    num       number of control points
    len       search radius (used to restrict the ROI)
    pts3d     the 3D points of the mesh
    rvec      rotation between mesh and camera
    tvec      translation between mesh and camera
    K         camera intrinsic matrix
    imsize    size of the video frame
    tris      triangle face connectivity
    ctl2d     the 2D locations of the control points
    ctl3d     matching 3D points of the mesh

◆ extractLineBundle()

void cv::rapid::extractLineBundle (int len,
                                   InputArray ctl2d,
                                   InputArray img,
                                   OutputArray bundle,
                                   OutputArray srcLocations)

Python:
  bundle, srcLocations = cv.rapid.extractLineBundle(len, ctl2d, img[, bundle[, srcLocations]])

#include <opencv2/rapid.hpp>

Extract the line bundle from an image.

Parameters
    len             the search radius. The bundle will have 2*len + 1 columns.
    ctl2d           the search lines will be centered at these points and orthogonal to the contour defined by them. The bundle will have as many rows.
    img             the image to read the pixel intensity values from
    bundle          line bundle image with size ctl2d.rows() x (2 * len + 1) and the same type as img
    srcLocations    the source pixel locations of bundle in img as CV_16SC2

◆ findCorrespondencies()

void cv::rapid::findCorrespondencies (InputArray bundle,
                                      OutputArray cols,
                                      OutputArray response = noArray())

Python:
  cols, response = cv.rapid.findCorrespondencies(bundle[, cols[, response]])

#include <opencv2/rapid.hpp>

Find corresponding image locations by searching for a maximal Sobel edge along the search line (a single row in the bundle).

Parameters
    bundle      the line bundle
    cols        correspondence-position per line in line-bundle-space
    response    the Sobel response for the selected point

◆ rapid()

float cv::rapid::rapid (InputArray img,
                        int num,
                        int len,
                        InputArray pts3d,
                        InputArray tris,
                        InputArray K,
                        InputOutputArray rvec,
                        InputOutputArray tvec,
                        double *rmsd = 0)

Python:
  retval, rvec, tvec, rmsd = cv.rapid.rapid(img, num, len, pts3d, tris, K, rvec, tvec)

#include <opencv2/rapid.hpp>

High-level function to execute a single RAPID [95] iteration:

  1. extractControlPoints
  2. extractLineBundle
  3. findCorrespondencies
  4. convertCorrespondencies
  5. solvePnPRefineLM

Parameters
    img      the video frame
    num      number of search lines
    len      search line radius
    pts3d    the 3D points of the mesh
    tris     triangle face connectivity
    K        camera matrix
    rvec     rotation between mesh and camera. Input values are used as an initial solution.
    tvec     translation between mesh and camera. Input values are used as an initial solution.
    rmsd     the 2d reprojection difference

Returns
    ratio of search lines that could be extracted and matched