OpenCV  4.10.0-dev
Open Source Computer Vision
cv::large_kinfu::LargeKinfu Class Reference [abstract]

Large Scale Dense Depth Fusion implementation. More...

#include <opencv2/rgbd/large_kinfu.hpp>

Collaboration diagram for cv::large_kinfu::LargeKinfu:

Public Member Functions

virtual ~LargeKinfu ()=default
 
virtual void getCloud (OutputArray points, OutputArray normals) const =0
 
virtual void getNormals (InputArray points, OutputArray normals) const =0
 
virtual const Params & getParams () const =0
 
virtual void getPoints (OutputArray points) const =0
 
virtual Affine3f getPose () const =0
 
virtual void render (OutputArray image) const =0
 
virtual void render (OutputArray image, const Matx44f &cameraPose) const =0
 
virtual void reset ()=0
 
virtual bool update (InputArray depth)=0
 

Static Public Member Functions

static Ptr< LargeKinfu > create (const Ptr< Params > &_params)
 

Detailed Description

Large Scale Dense Depth Fusion implementation.

This class implements a 3D reconstruction algorithm for large environments using spatially hashed TSDF volume "submaps". It also runs a periodic posegraph optimization to minimize drift in tracking over long sequences. Currently, the algorithm does not implement a relocalization or loop closure module. A relocalization module, potentially a bag-of-words implementation or the RGB-D relocalization described in Glocker et al. (ISMAR 2013), may be added in the future.

It takes a sequence of depth images taken from a depth sensor (or any other source of depth images, such as a stereo matching algorithm or even a raymarching renderer). The output can be obtained as a vector of points and their normals, or it can be Phong-rendered from a given camera pose.
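A minimal usage sketch of this pipeline, built only on the methods documented below, is shown here; the depth source (a VideoCapture opened with CAP_OPENNI2) and the parameter factory large_kinfu::Params::defaultParams() are assumptions of this sketch and should be replaced with whatever matches your setup:

#include <opencv2/rgbd/large_kinfu.hpp>
#include <opencv2/videoio.hpp>
#include <opencv2/highgui.hpp>

using namespace cv;

int main()
{
    // Assumption: defaultParams() provides a ready-made parameter set;
    // adjust camera intrinsics and frame size to your sensor.
    Ptr<large_kinfu::Params> params = large_kinfu::Params::defaultParams();
    Ptr<large_kinfu::LargeKinfu> lkf = large_kinfu::LargeKinfu::create(params);

    // Assumption: an OpenNI2-compatible depth camera; any depth image source works.
    VideoCapture depthSource(CAP_OPENNI2);

    for (;;)
    {
        UMat depth;
        if (!depthSource.grab() || !depthSource.retrieve(depth, CAP_OPENNI_DEPTH_MAP))
            break;

        // update() integrates the new depth frame into the model; it returns
        // false when tracking fails, in which case reset() starts over.
        if (!lkf->update(depth))
            lkf->reset();

        // Phong-render the model from the current camera pose.
        UMat rendered;
        lkf->render(rendered);
        imshow("LargeKinfu", rendered);
        if (waitKey(1) == 27)
            break;
    }

    // Export the reconstructed surface as points and their normals.
    Mat points, normals;
    lkf->getCloud(points, normals);
    return 0;
}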

The internal representation of the model is a spatially hashed voxel cube that stores TSDF values, which represent the distance to the closest surface (for details, see the article [134] about TSDF). There is no interface to that representation yet.

For posegraph optimization, a Submap abstraction over the Volume class is created. New submaps are added to the model when there is low visibility overlap between the current viewing frustum and the existing volume/model. Multiple submaps are tracked simultaneously, and a posegraph is created and optimized periodically.

LargeKinfu does not use any OpenCL acceleration yet. To enable or disable it explicitly, use cv::setUseOptimized() or cv::ocl::setUseOpenCL().
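For example, these flags can be set explicitly before constructing a LargeKinfu instance (the values shown are arbitrary):

#include <opencv2/core.hpp>
#include <opencv2/core/ocl.hpp>

int main()
{
    cv::setUseOptimized(true);     // toggle optimized (SIMD/IPP) code paths
    cv::ocl::setUseOpenCL(false);  // force the CPU pipeline; pass true to allow OpenCL
    return 0;
}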

This implementation is inspired by Kintinuous, InfiniTAM, and other state-of-the-art algorithms.

You need to set the OPENCV_ENABLE_NONFREE option in CMake to use KinectFusion.

Constructor & Destructor Documentation

◆ ~LargeKinfu()

virtual cv::large_kinfu::LargeKinfu::~LargeKinfu ( )
virtual default

Member Function Documentation

◆ create()

static Ptr< LargeKinfu > cv::large_kinfu::LargeKinfu::create ( const Ptr< Params > &  _params)
static
Python:
cv.large_kinfu.LargeKinfu.create(_params) -> retval
cv.large_kinfu.LargeKinfu_create(_params) -> retval

◆ getCloud()

virtual void cv::large_kinfu::LargeKinfu::getCloud ( OutputArray points, OutputArray normals ) const
pure virtual
Python:
cv.large_kinfu.LargeKinfu.getCloud([, points[, normals]]) -> points, normals

◆ getNormals()

virtual void cv::large_kinfu::LargeKinfu::getNormals ( InputArray points, OutputArray normals ) const
pure virtual
Python:
cv.large_kinfu.LargeKinfu.getNormals(points[, normals]) -> normals

◆ getParams()

virtual const Params & cv::large_kinfu::LargeKinfu::getParams ( ) const
pure virtual

◆ getPoints()

virtual void cv::large_kinfu::LargeKinfu::getPoints ( OutputArray  points) const
pure virtual
Python:
cv.large_kinfu.LargeKinfu.getPoints([, points]) -> points
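A sketch of how getPoints() and getNormals() can be combined, assuming lkf is an already-created LargeKinfu instance:

#include <opencv2/rgbd/large_kinfu.hpp>

void exportCloud(const cv::Ptr<cv::large_kinfu::LargeKinfu>& lkf)
{
    cv::Mat points, normals;

    // Fetch the current surface points...
    lkf->getPoints(points);

    // ...then query the normals for exactly those points.
    lkf->getNormals(points, normals);

    // getCloud(points, normals) retrieves both in a single call instead.
}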

◆ getPose()

virtual Affine3f cv::large_kinfu::LargeKinfu::getPose ( ) const
pure virtual

◆ render() [1/2]

virtual void cv::large_kinfu::LargeKinfu::render ( OutputArray  image) const
pure virtual
Python:
cv.large_kinfu.LargeKinfu.render([, image]) -> image
cv.large_kinfu.LargeKinfu.render(cameraPose[, image]) -> image

◆ render() [2/2]

virtual void cv::large_kinfu::LargeKinfu::render ( OutputArray image, const Matx44f & cameraPose ) const
pure virtual
Python:
cv.large_kinfu.LargeKinfu.render([, image]) -> image
cv.large_kinfu.LargeKinfu.render(cameraPose[, image]) -> image
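A sketch of rendering from an arbitrary viewpoint, here derived from the current pose with a 0.5 m offset chosen purely for illustration; lkf is an already-created LargeKinfu instance:

#include <opencv2/rgbd/large_kinfu.hpp>
#include <opencv2/highgui.hpp>

void showExternalView(const cv::Ptr<cv::large_kinfu::LargeKinfu>& lkf)
{
    // Start from the current camera pose and shift it by an arbitrary 0.5 m.
    cv::Affine3f pose = lkf->getPose().translate(cv::Vec3f(0.f, 0.f, -0.5f));

    cv::UMat image;
    lkf->render(image, pose.matrix);   // Affine3f::matrix is a Matx44f
    cv::imshow("LargeKinfu (external view)", image);
    cv::waitKey(0);
}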

◆ reset()

virtual void cv::large_kinfu::LargeKinfu::reset ( )
pure virtual
Python:
cv.large_kinfu.LargeKinfu.reset() -> None

◆ update()

virtual bool cv::large_kinfu::LargeKinfu::update ( InputArray  depth)
pure virtual
Python:
cv.large_kinfu.LargeKinfu.update(depth) -> retval

The documentation for this class was generated from the following file:
opencv2/rgbd/large_kinfu.hpp