OpenCV  4.5.2
Open Source Computer Vision
cv::large_kinfu::LargeKinfu Class Reference (abstract)

Large Scale Dense Depth Fusion implementation.

#include <opencv2/rgbd/large_kinfu.hpp>

Public Member Functions

virtual ~LargeKinfu ()=default
 
virtual void getCloud (OutputArray points, OutputArray normals) const =0
 
virtual void getNormals (InputArray points, OutputArray normals) const =0
 
virtual const Params & getParams () const =0
 
virtual void getPoints (OutputArray points) const =0
 
virtual const Affine3f getPose () const =0
 
virtual void render (OutputArray image, const Matx44f &cameraPose=Matx44f::eye()) const =0
 
virtual void reset ()=0
 
virtual bool update (InputArray depth)=0
 

Static Public Member Functions

static Ptr< LargeKinfu > create (const Ptr< Params > &_params)
 

Detailed Description

Large Scale Dense Depth Fusion implementation.

This class implements a 3D reconstruction algorithm for larger environments using a spatially hashed TSDF volume split into submaps. It also runs periodic pose-graph optimization to minimize drift in tracking over long sequences. The algorithm currently implements no relocalization or loop-closure module; a bag-of-words approach or the RGB-D relocalization described in Glocker et al., ISMAR 2013, may be added in the future.

It takes a sequence of depth images from a depth sensor (or any other depth source, such as a stereo matching algorithm or even a raymarching renderer). The output can be obtained as a vector of points with their normals, or can be Phong-rendered from a given camera pose.

The internal representation of the model is a spatially hashed voxel cube storing TSDF values, which represent the distance to the closest surface (for details, read the [119] article about TSDF). There is no interface to that representation yet.

For pose-graph optimization, a Submap abstraction is built over the Volume class. New submaps are added to the model when there is low visibility overlap between the current viewing frustum and the existing volume/model. Multiple submaps are tracked simultaneously, and a pose graph is created and optimized periodically.

LargeKinfu does not use any OpenCL acceleration yet. To enable or disable optimizations explicitly, use cv::setUseOptimized() or cv::ocl::setUseOpenCL().

This implementation is inspired by Kintinuous, InfiniTAM, and other state-of-the-art algorithms.

You need to set the OPENCV_ENABLE_NONFREE option in CMake to use this class.

Constructor & Destructor Documentation

◆ ~LargeKinfu()

virtual cv::large_kinfu::LargeKinfu::~LargeKinfu ( )
virtual default

Member Function Documentation

◆ create()

static Ptr<LargeKinfu> cv::large_kinfu::LargeKinfu::create ( const Ptr< Params > &  _params)
static
Python:
retval=cv.large_kinfu.LargeKinfu_create(_params)

◆ getCloud()

virtual void cv::large_kinfu::LargeKinfu::getCloud ( OutputArray  points,
OutputArray  normals 
) const
pure virtual
Python:
points, normals=cv.large_kinfu_LargeKinfu.getCloud([, points[, normals]])

◆ getNormals()

virtual void cv::large_kinfu::LargeKinfu::getNormals ( InputArray  points,
OutputArray  normals 
) const
pure virtual
Python:
normals=cv.large_kinfu_LargeKinfu.getNormals(points[, normals])

◆ getParams()

virtual const Params& cv::large_kinfu::LargeKinfu::getParams ( ) const
pure virtual

◆ getPoints()

virtual void cv::large_kinfu::LargeKinfu::getPoints ( OutputArray  points) const
pure virtual
Python:
points=cv.large_kinfu_LargeKinfu.getPoints([, points])

◆ getPose()

virtual const Affine3f cv::large_kinfu::LargeKinfu::getPose ( ) const
pure virtual

◆ render()

virtual void cv::large_kinfu::LargeKinfu::render ( OutputArray  image,
const Matx44f & cameraPose = Matx44f::eye() 
) const
pure virtual
Python:
image=cv.large_kinfu_LargeKinfu.render([, image[, cameraPose]])

◆ reset()

virtual void cv::large_kinfu::LargeKinfu::reset ( )
pure virtual
Python:
None=cv.large_kinfu_LargeKinfu.reset()

◆ update()

virtual bool cv::large_kinfu::LargeKinfu::update ( InputArray  depth)
pure virtual
Python:
retval=cv.large_kinfu_LargeKinfu.update(depth)

The documentation for this class was generated from the following file: opencv2/rgbd/large_kinfu.hpp