OpenCV  4.2.0
Open Source Computer Vision
cv::dynafu::DynaFu Class Reference [abstract]

DynamicFusion implementation. More...

#include <opencv2/rgbd/dynafu.hpp>

Public Member Functions

virtual ~DynaFu ()
 
virtual void getCloud (OutputArray points, OutputArray normals) const =0
 Gets points and normals of current 3d mesh. More...
 
virtual std::vector< Point3f > getNodesPos () const =0
 
virtual void getNormals (InputArray points, OutputArray normals) const =0
 Calculates normals for given points. More...
 
virtual const Params & getParams () const =0
 Get current parameters. More...
 
virtual void getPoints (OutputArray points) const =0
 Gets points of current 3d mesh. More...
 
virtual const Affine3f getPose () const =0
 Get current pose in voxel space. More...
 
virtual void marchCubes (OutputArray vertices, OutputArray edges) const =0
 
virtual void render (OutputArray image, const Matx44f &cameraPose=Matx44f::eye()) const =0
 Renders a volume into an image. More...
 
virtual void renderSurface (OutputArray depthImage, OutputArray vertImage, OutputArray normImage, bool warp=true)=0
 
virtual void reset ()=0
 Resets the algorithm. More...
 
virtual bool update (InputArray depth)=0
 Process next depth frame. More...
 

Static Public Member Functions

static Ptr< DynaFu > create (const Ptr< Params > &_params)
 

Detailed Description

DynamicFusion implementation.

This class implements a 3d reconstruction algorithm as described in [171].

It takes a sequence of depth images taken from a depth sensor (or any depth image source, such as a stereo matching algorithm or even a raymarching renderer). The output can be obtained as a vector of points with their normals, or it can be Phong-rendered from a given camera pose.

It extends the KinectFusion algorithm to handle non-rigidly deforming scenes by maintaining a sparse set of nodes covering the geometry such that each node contains a warp to transform it from a canonical space to the live frame.

The internal representation of the model is a voxel cuboid that keeps TSDF values, which are a kind of distance to the surface (for details, see the [109] article about TSDF). There is no interface to that representation yet.

Note that DynamicFusion is based on the KinectFusion algorithm, which is patented; its use may be restricted by the list of patents mentioned in the README.md file in this module's directory.

That is why you need to set the OPENCV_ENABLE_NONFREE option in CMake to use DynamicFusion.
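A typical reconstruction loop looks roughly like the sketch below. This is a minimal, untested sketch: it assumes the Params type shared with the kinfu module (including its defaultParams() factory), and grabDepthFrame() is a hypothetical stand-in for whatever produces your depth frames.

```cpp
#include <opencv2/rgbd/dynafu.hpp>
#include <opencv2/highgui.hpp>

using namespace cv;

// Hypothetical depth source: returns an empty UMat when the stream ends
UMat grabDepthFrame();

int main()
{
    // Params is shared with the kinfu module; defaultParams() is assumed here
    Ptr<dynafu::Params> params = dynafu::Params::defaultParams();
    Ptr<dynafu::DynaFu> df = dynafu::DynaFu::create(params);

    for (;;)
    {
        UMat depth = grabDepthFrame();
        if (depth.empty())
            break;

        if (!df->update(depth))
            df->reset(); // tracking lost: clear the model and start over

        UMat rendered;
        df->render(rendered); // Phong-rendered view from the last camera pose
        imshow("DynaFu", rendered);
        if (waitKey(1) == 27)
            break;
    }
    return 0;
}
```

Note that the example requires OpenCV built with the rgbd contrib module and OPENCV_ENABLE_NONFREE, as described above.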

Constructor & Destructor Documentation

◆ ~DynaFu()

virtual cv::dynafu::DynaFu::~DynaFu ( )
virtual

Member Function Documentation

◆ create()

static Ptr<DynaFu> cv::dynafu::DynaFu::create ( const Ptr< Params > &  _params)
static
Python:
retval=cv.dynafu.DynaFu_create(_params)

◆ getCloud()

virtual void cv::dynafu::DynaFu::getCloud ( OutputArray  points,
OutputArray  normals 
) const
pure virtual
Python:
points, normals=cv.dynafu_DynaFu.getCloud([, points[, normals]])

Gets points and normals of current 3d mesh.

The order of normals corresponds to order of points. The order of points is undefined.

Parameters
points: vector of points which are 4-float vectors
normals: vector of normals which are 4-float vectors
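For instance, the current mesh can be fetched into Mat containers and iterated as 4-float vectors. This is an untested fragment; df stands for an already-created DynaFu instance:

```cpp
// df is an existing Ptr<dynafu::DynaFu>
cv::Mat points, normals;
df->getCloud(points, normals);

// Each element holds one point or normal as a Vec4f; the i-th normal
// corresponds to the i-th point
for (int i = 0; i < (int)points.total(); i++)
{
    cv::Vec4f p = points.at<cv::Vec4f>(i);
    cv::Vec4f n = normals.at<cv::Vec4f>(i);
    // ... consume p and n, e.g. write them to a PLY file
}
```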

◆ getNodesPos()

virtual std::vector<Point3f> cv::dynafu::DynaFu::getNodesPos ( ) const
pure virtual

◆ getNormals()

virtual void cv::dynafu::DynaFu::getNormals ( InputArray  points,
OutputArray  normals 
) const
pure virtual
Python:
normals=cv.dynafu_DynaFu.getNormals(points[, normals])

Calculates normals for given points.

Parameters
points: input vector of points which are 4-float vectors
normals: output vector of corresponding normals which are 4-float vectors
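As a small untested sketch (df again stands for an existing DynaFu instance), one way to use this is to re-query normals for the mesh points obtained from getPoints:

```cpp
// df is an existing Ptr<dynafu::DynaFu>
cv::Mat meshPoints, meshNormals;
df->getPoints(meshPoints);                 // 4-float points of the current mesh
df->getNormals(meshPoints, meshNormals);   // normals in the same order as the input points
```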

◆ getParams()

virtual const Params& cv::dynafu::DynaFu::getParams ( ) const
pure virtual

Get current parameters.

◆ getPoints()

virtual void cv::dynafu::DynaFu::getPoints ( OutputArray  points) const
pure virtual
Python:
points=cv.dynafu_DynaFu.getPoints([, points])

Gets points of current 3d mesh.

The order of points is undefined.

Parameters
points: vector of points which are 4-float vectors

◆ getPose()

virtual const Affine3f cv::dynafu::DynaFu::getPose ( ) const
pure virtual

Get current pose in voxel space.

◆ marchCubes()

virtual void cv::dynafu::DynaFu::marchCubes ( OutputArray  vertices,
OutputArray  edges 
) const
pure virtual

◆ render()

virtual void cv::dynafu::DynaFu::render ( OutputArray  image,
const Matx44f &  cameraPose = Matx44f::eye() 
) const
pure virtual
Python:
image=cv.dynafu_DynaFu.render([, image[, cameraPose]])

Renders a volume into an image.

Renders the 0-isosurface of the TSDF using Phong shading into a CV_8UC4 Mat. The light pose is fixed in the DynaFu params.

Parameters
image: resulting image
cameraPose: pose of the camera to render from. If empty, renders from the current pose, which is the last frame's camera pose.
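For example, to render from a pose other than the last tracked one, pass an explicit camera pose matrix. This is an untested sketch; df is an existing DynaFu instance, and the 0.5 m offset is an arbitrary illustrative value:

```cpp
// Render the volume from a camera shifted 0.5 m along -z
cv::Matx44f pose = cv::Matx44f::eye();
pose(2, 3) = -0.5f; // hypothetical translation; units follow the volume setup

cv::UMat image;
df->render(image, pose); // CV_8UC4 Phong-shaded rendering
```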

◆ renderSurface()

virtual void cv::dynafu::DynaFu::renderSurface ( OutputArray  depthImage,
OutputArray  vertImage,
OutputArray  normImage,
bool  warp = true 
)
pure virtual

◆ reset()

virtual void cv::dynafu::DynaFu::reset ( )
pure virtual
Python:
None=cv.dynafu_DynaFu.reset()

Resets the algorithm.

Clears current model and resets a pose.

◆ update()

virtual bool cv::dynafu::DynaFu::update ( InputArray  depth)
pure virtual
Python:
retval=cv.dynafu_DynaFu.update(depth)

Process next depth frame.

Integrates depth into voxel space with respect to its ICP-calculated pose. The input image is converted to CV_32F internally if it has another type.

Parameters
depth: one-channel image whose size and depth scale are described in the algorithm's parameters
Returns
true if the new frame was successfully aligned with the current scene, false otherwise

The documentation for this class was generated from the following file: opencv2/rgbd/dynafu.hpp