OpenCV  4.7.0
Open Source Computer Vision
cv::kinfu::KinFu Class Reference  [abstract]

KinectFusion implementation.

#include <opencv2/rgbd/kinfu.hpp>

Public Member Functions

virtual ~KinFu ()
 
virtual void 	getCloud (OutputArray points, OutputArray normals) const =0
 	Gets points and normals of current 3d mesh.
 
virtual void 	getNormals (InputArray points, OutputArray normals) const =0
 	Calculates normals for given points.
 
virtual const Params & 	getParams () const =0
 	Get current parameters.
 
virtual void 	getPoints (OutputArray points) const =0
 	Gets points of current 3d mesh.
 
virtual Affine3f 	getPose () const =0
 	Get current pose in voxel space.
 
virtual void 	render (OutputArray image) const =0
 	Renders a volume into an image.
 
virtual void 	render (OutputArray image, const Matx44f &cameraPose) const =0
 	Renders a volume into an image.
 
virtual void 	reset ()=0
 	Resets the algorithm.
 
virtual bool 	update (InputArray depth)=0
 	Process next depth frame.
 

Static Public Member Functions

static Ptr< KinFu > 	create (const Ptr< Params > &_params)
 

Detailed Description

KinectFusion implementation.

This class implements the 3d reconstruction algorithm described in the paper [120].

It takes a sequence of depth images taken from a depth sensor (or any source of depth images, such as a stereo matching algorithm or even a raymarching renderer). The output can be obtained as a vector of points and their normals, or it can be Phong-rendered from a given camera pose.

The internal representation of the model is a voxel cuboid that holds TSDF values, which are a kind of distance to the surface (for details, see the article [120] about TSDF). There is no interface to that representation yet.

KinFu uses OpenCL acceleration automatically if available. To enable or disable it explicitly use cv::setUseOptimized() or cv::ocl::setUseOpenCL().

This implementation is based on kinfu-remake.

Note that the KinectFusion algorithm was patented and its use may be restricted by the list of patents mentioned in README.md file in this module directory.

That's why you need to set the OPENCV_ENABLE_NONFREE option in CMake to use KinectFusion.

Constructor & Destructor Documentation

◆ ~KinFu()

virtual cv::kinfu::KinFu::~KinFu ( )  [virtual]

Member Function Documentation

◆ create()

static Ptr<KinFu> cv::kinfu::KinFu::create ( const Ptr< Params > & _params )  [static]
Python:
cv.kinfu.KinFu.create(_params) -> retval
cv.kinfu.KinFu_create(_params) -> retval

◆ getCloud()

virtual void cv::kinfu::KinFu::getCloud ( OutputArray points, OutputArray normals ) const  [pure virtual]
Python:
cv.kinfu.KinFu.getCloud([, points[, normals]]) -> points, normals

Gets points and normals of current 3d mesh.

The order of normals corresponds to the order of points. The order of points is undefined.

Parameters
points	vector of points, which are 4-float vectors
normals	vector of normals, which are 4-float vectors

◆ getNormals()

virtual void cv::kinfu::KinFu::getNormals ( InputArray points, OutputArray normals ) const  [pure virtual]
Python:
cv.kinfu.KinFu.getNormals(points[, normals]) -> normals

Calculates normals for given points.

Parameters
points	input vector of points, which are 4-float vectors
normals	output vector of corresponding normals, which are 4-float vectors

◆ getParams()

virtual const Params& cv::kinfu::KinFu::getParams ( ) const  [pure virtual]

Get current parameters.

◆ getPoints()

virtual void cv::kinfu::KinFu::getPoints ( OutputArray points ) const  [pure virtual]
Python:
cv.kinfu.KinFu.getPoints([, points]) -> points

Gets points of current 3d mesh.

The order of points is undefined.

Parameters
points	vector of points, which are 4-float vectors

◆ getPose()

virtual Affine3f cv::kinfu::KinFu::getPose ( ) const  [pure virtual]

Get current pose in voxel space.

◆ render() [1/2]

virtual void cv::kinfu::KinFu::render ( OutputArray image ) const  [pure virtual]
Python:
cv.kinfu.KinFu.render([, image]) -> image
cv.kinfu.KinFu.render(cameraPose[, image]) -> image

Renders a volume into an image.

Renders a 0-surface of TSDF using Phong shading into a CV_8UC4 Mat. Light pose is fixed in KinFu params.

Parameters
image	resulting image

◆ render() [2/2]

virtual void cv::kinfu::KinFu::render ( OutputArray image, const Matx44f & cameraPose ) const  [pure virtual]
Python:
cv.kinfu.KinFu.render([, image]) -> image
cv.kinfu.KinFu.render(cameraPose[, image]) -> image

Renders a volume into an image.

Renders a 0-surface of TSDF using Phong shading into a CV_8UC4 Mat. Light pose is fixed in KinFu params.

Parameters
image	resulting image
cameraPose	pose of the camera to render from. If empty, renders from the current pose, which is the last frame's camera pose.

◆ reset()

virtual void cv::kinfu::KinFu::reset ( )  [pure virtual]
Python:
cv.kinfu.KinFu.reset() -> None

Resets the algorithm.

Clears the current model and resets the pose.

◆ update()

virtual bool cv::kinfu::KinFu::update ( InputArray depth )  [pure virtual]
Python:
cv.kinfu.KinFu.update(depth) -> retval

Process next depth frame.

Integrates depth into the voxel space with respect to its ICP-calculated pose. The input image is converted to CV_32F internally if it has another type.

Parameters
depth	one-channel image whose size and depth scale are described in the algorithm's parameters
Returns
true if the new frame was successfully aligned with the current scene, false otherwise

The documentation for this class was generated from the following file: opencv2/rgbd/kinfu.hpp