OpenCV  4.0.0-rc
Open Source Computer Vision
Custom Calibration Pattern for 3D reconstruction

Namespaces

 cv::omnidir::internal
 

Classes

class  cv::ccalib::CustomPattern
 
class  cv::multicalib::MultiCameraCalibration
 Class for multiple camera calibration that supports pinhole and omnidirectional camera models. For the omnidirectional camera model, please refer to omnidir.hpp in the ccalib module. It first calibrates each camera individually, then applies a bundle-adjustment-like optimization to refine the extrinsic parameters. So far, only the "random" pattern is supported for calibration; see randomPattern.hpp in the ccalib module for details. Images used for calibration should be named "cameraIdx-timestamp.*"; several images with the same timestamp show the same pattern being photographed, and cameraIdx should start from 0. More...
 
class  cv::randpattern::RandomPatternCornerFinder
 Class for finding feature points of a "random" pattern and their corresponding 3D points in world coordinates, which can be used in calibration. It is useful when the pattern is partly occluded or only part of the pattern can be observed in multiple-camera calibration. The pattern can be generated by the RandomPatternGenerator class described in this file. More...
 
class  cv::randpattern::RandomPatternGenerator
 

Macros

#define HEAD   -1
 
#define INVALID   -2
 

Enumerations

enum  {
  cv::omnidir::CALIB_USE_GUESS = 1,
  cv::omnidir::CALIB_FIX_SKEW = 2,
  cv::omnidir::CALIB_FIX_K1 = 4,
  cv::omnidir::CALIB_FIX_K2 = 8,
  cv::omnidir::CALIB_FIX_P1 = 16,
  cv::omnidir::CALIB_FIX_P2 = 32,
  cv::omnidir::CALIB_FIX_XI = 64,
  cv::omnidir::CALIB_FIX_GAMMA = 128,
  cv::omnidir::CALIB_FIX_CENTER = 256
}
 
enum  {
  cv::omnidir::RECTIFY_PERSPECTIVE = 1,
  cv::omnidir::RECTIFY_CYLINDRICAL = 2,
  cv::omnidir::RECTIFY_LONGLATI = 3,
  cv::omnidir::RECTIFY_STEREOGRAPHIC = 4
}
 
enum  {
  cv::omnidir::XYZRGB = 1,
  cv::omnidir::XYZ = 2
}
 

Functions

double cv::omnidir::calibrate (InputArrayOfArrays objectPoints, InputArrayOfArrays imagePoints, Size size, InputOutputArray K, InputOutputArray xi, InputOutputArray D, OutputArrayOfArrays rvecs, OutputArrayOfArrays tvecs, int flags, TermCriteria criteria, OutputArray idx=noArray())
 Perform omnidirectional camera calibration; the default depth of the outputs is CV_64F. More...
 
void cv::omnidir::initUndistortRectifyMap (InputArray K, InputArray D, InputArray xi, InputArray R, InputArray P, const cv::Size &size, int mltype, OutputArray map1, OutputArray map2, int flags)
 Computes undistortion and rectification maps for an omnidirectional camera image transform by a rotation R. It outputs two maps that are used for cv::remap(). If D is empty, zero distortion is used; if R or P is empty, identity matrices are used. More...
 
void cv::omnidir::projectPoints (InputArray objectPoints, OutputArray imagePoints, InputArray rvec, InputArray tvec, InputArray K, double xi, InputArray D, OutputArray jacobian=noArray())
 Projects points for omnidirectional camera using CMei's model. More...
 
double cv::omnidir::stereoCalibrate (InputOutputArrayOfArrays objectPoints, InputOutputArrayOfArrays imagePoints1, InputOutputArrayOfArrays imagePoints2, const Size &imageSize1, const Size &imageSize2, InputOutputArray K1, InputOutputArray xi1, InputOutputArray D1, InputOutputArray K2, InputOutputArray xi2, InputOutputArray D2, OutputArray rvec, OutputArray tvec, OutputArrayOfArrays rvecsL, OutputArrayOfArrays tvecsL, int flags, TermCriteria criteria, OutputArray idx=noArray())
 Stereo calibration for the omnidirectional camera model. It computes the intrinsic parameters of two cameras and the extrinsic parameters between the two cameras. The default depth of the outputs is CV_64F. More...
 
void cv::omnidir::stereoReconstruct (InputArray image1, InputArray image2, InputArray K1, InputArray D1, InputArray xi1, InputArray K2, InputArray D2, InputArray xi2, InputArray R, InputArray T, int flag, int numDisparities, int SADWindowSize, OutputArray disparity, OutputArray image1Rec, OutputArray image2Rec, const Size &newSize=Size(), InputArray Knew=cv::noArray(), OutputArray pointCloud=cv::noArray(), int pointType=XYZRGB)
 Stereo 3D reconstruction from a pair of images. More...
 
void cv::omnidir::stereoRectify (InputArray R, InputArray T, OutputArray R1, OutputArray R2)
 Stereo rectification for the omnidirectional camera model. It computes the rectification rotations for two cameras. More...
 
void cv::omnidir::undistortImage (InputArray distorted, OutputArray undistorted, InputArray K, InputArray D, InputArray xi, int flags, InputArray Knew=cv::noArray(), const Size &new_size=Size(), InputArray R=Mat::eye(3, 3, CV_64F))
 Undistort omnidirectional images to perspective images. More...
 
void cv::omnidir::undistortPoints (InputArray distorted, OutputArray undistorted, InputArray K, InputArray D, InputArray xi, InputArray R)
 Undistort 2D image points for omnidirectional camera using CMei's model. More...
 

Detailed Description

Macro Definition Documentation

§ HEAD

#define HEAD   -1

§ INVALID

#define INVALID   -2

Enumeration Type Documentation

§ anonymous enum

anonymous enum
Enumerator
CALIB_USE_GUESS 
Python: cv.omnidir.CALIB_USE_GUESS
CALIB_FIX_SKEW 
Python: cv.omnidir.CALIB_FIX_SKEW
CALIB_FIX_K1 
Python: cv.omnidir.CALIB_FIX_K1
CALIB_FIX_K2 
Python: cv.omnidir.CALIB_FIX_K2
CALIB_FIX_P1 
Python: cv.omnidir.CALIB_FIX_P1
CALIB_FIX_P2 
Python: cv.omnidir.CALIB_FIX_P2
CALIB_FIX_XI 
Python: cv.omnidir.CALIB_FIX_XI
CALIB_FIX_GAMMA 
Python: cv.omnidir.CALIB_FIX_GAMMA
CALIB_FIX_CENTER 
Python: cv.omnidir.CALIB_FIX_CENTER

§ anonymous enum

anonymous enum
Enumerator
RECTIFY_PERSPECTIVE 
Python: cv.omnidir.RECTIFY_PERSPECTIVE
RECTIFY_CYLINDRICAL 
Python: cv.omnidir.RECTIFY_CYLINDRICAL
RECTIFY_LONGLATI 
Python: cv.omnidir.RECTIFY_LONGLATI
RECTIFY_STEREOGRAPHIC 
Python: cv.omnidir.RECTIFY_STEREOGRAPHIC

§ anonymous enum

anonymous enum
Enumerator
XYZRGB 
Python: cv.omnidir.XYZRGB
XYZ 
Python: cv.omnidir.XYZ

Function Documentation

§ calibrate()

double cv::omnidir::calibrate ( InputArrayOfArrays  objectPoints,
InputArrayOfArrays  imagePoints,
Size  size,
InputOutputArray  K,
InputOutputArray  xi,
InputOutputArray  D,
OutputArrayOfArrays  rvecs,
OutputArrayOfArrays  tvecs,
int  flags,
TermCriteria  criteria,
OutputArray  idx = noArray() 
)
Python:
retval, K, xi, D, rvecs, tvecs, idx=cv.omnidir.calibrate(objectPoints, imagePoints, size, K, xi, D, flags, criteria[, rvecs[, tvecs[, idx]]])

Perform omnidirectional camera calibration; the default depth of the outputs is CV_64F.

Parameters
objectPoints: Vector of vectors of Vec3f object points in world (pattern) coordinates. It can also be a vector of Mat with size 1xN/Nx1 and type CV_32FC3. Data with depth CV_64F is also acceptable.
imagePoints: Vector of vectors of Vec2f image points corresponding to objectPoints. It must have the same size and type as objectPoints.
size: Image size of the calibration images.
K: Output calibrated camera matrix.
xi: Output parameter xi for CMei's model.
D: Output distortion parameters \((k_1, k_2, p_1, p_2)\).
rvecs: Output rotations for each calibration image.
tvecs: Output translations for each calibration image.
flags: The flags that control calibrate.
criteria: Termination criteria for optimization.
idx: Indices of images that pass initialization and are actually used in calibration, so the size of rvecs is idx.total().
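
A minimal usage sketch (not part of the reference): calibrating one omnidirectional camera once a pattern detector has filled objPoints and imgPoints. The header path assumes the opencv_contrib ccalib module; the image size, flags, termination criteria and acceptance threshold below are illustrative assumptions only.

#include <opencv2/core.hpp>
#include <opencv2/ccalib/omnidir.hpp>
#include <vector>

int main()
{
    // Assumed to be filled elsewhere: one Mat per calibration image,
    // 1xN CV_64FC3 pattern points and 1xN CV_64FC2 detected corners.
    std::vector<cv::Mat> objPoints, imgPoints;

    cv::Size imageSize(1280, 960);              // illustrative resolution
    cv::Mat K, xi, D, idx;
    std::vector<cv::Mat> rvecs, tvecs;
    cv::TermCriteria criteria(cv::TermCriteria::COUNT + cv::TermCriteria::EPS,
                              200, 1e-8);

    double rms = cv::omnidir::calibrate(objPoints, imgPoints, imageSize,
                                        K, xi, D, rvecs, tvecs,
                                        0 /* flags */, criteria, idx);
    // rms is the final reprojection error; idx lists the images that were kept.
    return rms < 1.0 ? 0 : 1;                   // arbitrary acceptance check
}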

§ initUndistortRectifyMap()

void cv::omnidir::initUndistortRectifyMap ( InputArray  K,
InputArray  D,
InputArray  xi,
InputArray  R,
InputArray  P,
const cv::Size & size,
int  mltype,
OutputArray  map1,
OutputArray  map2,
int  flags 
)
Python:
map1, map2=cv.omnidir.initUndistortRectifyMap(K, D, xi, R, P, size, mltype, flags[, map1[, map2]])

Computes undistortion and rectification maps for an omnidirectional camera image transform by a rotation R. It outputs two maps that are used for cv::remap(). If D is empty, zero distortion is used; if R or P is empty, identity matrices are used.

Parameters
K: Camera matrix \(K = \vecthreethree{f_x}{s}{c_x}{0}{f_y}{c_y}{0}{0}{1}\), with depth CV_32F or CV_64F.
D: Input vector of distortion coefficients \((k_1, k_2, p_1, p_2)\), with depth CV_32F or CV_64F.
xi: The parameter xi for CMei's model.
R: Rotation transform between the original and object space: 3x3 1-channel, or vector 3x1/1x3, with depth CV_32F or CV_64F.
P: New camera matrix (3x3) or new projection matrix (3x4).
size: Undistorted image size.
mltype: Type of the first output map; can be CV_32FC1 or CV_16SC2. See convertMaps() for details.
map1: The first output map.
map2: The second output map.
flags: Flags indicating the rectification type; RECTIFY_PERSPECTIVE, RECTIFY_CYLINDRICAL, RECTIFY_LONGLATI and RECTIFY_STEREOGRAPHIC are supported.
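
A hedged sketch of how the maps might be used together with cv::remap(). The new camera matrix Knew (focal lengths of a quarter of the image size, principal point at the image center) is only an illustrative choice, and K, D, xi are assumed to come from a previous cv::omnidir::calibrate() run.

#include <opencv2/ccalib/omnidir.hpp>
#include <opencv2/imgproc.hpp>

// Build perspective rectification maps, then remap the frame.
cv::Mat rectifyToPerspective(const cv::Mat& frame, const cv::Mat& K,
                             const cv::Mat& D, const cv::Mat& xi)
{
    cv::Matx33d Knew(frame.cols / 4.0, 0.0, frame.cols / 2.0,
                     0.0, frame.rows / 4.0, frame.rows / 2.0,
                     0.0, 0.0, 1.0);
    cv::Mat map1, map2, out;
    cv::omnidir::initUndistortRectifyMap(K, D, xi,
                                         cv::Mat::eye(3, 3, CV_64F),  // R
                                         Knew, frame.size(), CV_16SC2,
                                         map1, map2,
                                         cv::omnidir::RECTIFY_PERSPECTIVE);
    cv::remap(frame, out, map1, map2, cv::INTER_LINEAR);
    return out;
}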

§ projectPoints()

void cv::omnidir::projectPoints ( InputArray  objectPoints,
OutputArray  imagePoints,
InputArray  rvec,
InputArray  tvec,
InputArray  K,
double  xi,
InputArray  D,
OutputArray  jacobian = noArray() 
)
Python:
imagePoints, jacobian=cv.omnidir.projectPoints(objectPoints, rvec, tvec, K, xi, D[, imagePoints[, jacobian]])

Projects points for omnidirectional camera using CMei's model.

This module was accepted as a GSoC 2015 project for OpenCV, authored by Baisheng Lai, mentored by Bo Li.

Parameters
objectPoints: Object points in world coordinates, vector of Vec3f or Mat of 1xN/Nx1 3-channel of type CV_32F, where N is the number of points. CV_64F is also acceptable.
imagePoints: Output array of image points, vector of Vec2f or 1xN/Nx1 2-channel of type CV_32F. CV_64F is also acceptable.
rvec: Vector of rotation between world coordinates and camera coordinates, i.e., om.
tvec: Vector of translation between pattern coordinates and camera coordinates.
K: Camera matrix \(K = \vecthreethree{f_x}{s}{c_x}{0}{f_y}{c_y}{0}{0}{1}\).
D: Input vector of distortion coefficients \((k_1, k_2, p_1, p_2)\).
xi: The parameter xi for CMei's model.
jacobian: Optional output 2Nx16 Jacobian matrix of type CV_64F containing the derivatives of the image pixel points w.r.t. the parameters \(om, T, f_x, f_y, s, c_x, c_y, xi, k_1, k_2, p_1, p_2\). This matrix is used in calibration by optimization.

The function projects 3D object points from world coordinates to image pixels, parameterized by the intrinsic and extrinsic parameters. It can also optionally compute a by-product: the Jacobian matrix containing the derivatives of the image pixel points w.r.t. the intrinsic and extrinsic parameters.
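
As an illustration of the by-product mentioned above, the hypothetical helper below reprojects the pattern into one view and returns the per-point RMS reprojection error. All inputs are assumed to come from a previous calibration; the helper name is an assumption, not part of the API.

#include <opencv2/ccalib/omnidir.hpp>
#include <cmath>
#include <vector>

double reprojectionError(const std::vector<cv::Point3f>& objectPoints,
                         const std::vector<cv::Point2f>& detected,
                         const cv::Mat& rvec, const cv::Mat& tvec,
                         const cv::Mat& K, double xi, const cv::Mat& D)
{
    std::vector<cv::Point2f> projected;
    cv::omnidir::projectPoints(objectPoints, projected, rvec, tvec, K, xi, D);
    // L2 distance between detected and reprojected corners, averaged per point.
    return cv::norm(detected, projected, cv::NORM_L2) /
           std::sqrt(static_cast<double>(detected.size()));
}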

§ stereoCalibrate()

double cv::omnidir::stereoCalibrate ( InputOutputArrayOfArrays  objectPoints,
InputOutputArrayOfArrays  imagePoints1,
InputOutputArrayOfArrays  imagePoints2,
const Size & imageSize1,
const Size & imageSize2,
InputOutputArray  K1,
InputOutputArray  xi1,
InputOutputArray  D1,
InputOutputArray  K2,
InputOutputArray  xi2,
InputOutputArray  D2,
OutputArray  rvec,
OutputArray  tvec,
OutputArrayOfArrays  rvecsL,
OutputArrayOfArrays  tvecsL,
int  flags,
TermCriteria  criteria,
OutputArray  idx = noArray() 
)
Python:
retval, objectPoints, imagePoints1, imagePoints2, K1, xi1, D1, K2, xi2, D2, rvec, tvec, rvecsL, tvecsL, idx=cv.omnidir.stereoCalibrate(objectPoints, imagePoints1, imagePoints2, imageSize1, imageSize2, K1, xi1, D1, K2, xi2, D2, flags, criteria[, rvec[, tvec[, rvecsL[, tvecsL[, idx]]]]])

Stereo calibration for the omnidirectional camera model. It computes the intrinsic parameters of two cameras and the extrinsic parameters between the two cameras. The default depth of the outputs is CV_64F.

Parameters
objectPoints: Object points in world (pattern) coordinates. Its type is vector<vector<Vec3f> >. It can also be a vector of Mat with size 1xN/Nx1 and type CV_32FC3. Data with depth CV_64F is also acceptable.
imagePoints1: The corresponding image points of the first camera, with type vector<vector<Vec2f> >. It must have the same size and type as objectPoints.
imagePoints2: The corresponding image points of the second camera, with type vector<vector<Vec2f> >. It must have the same size and type as objectPoints.
imageSize1: Image size of the calibration images of the first camera.
imageSize2: Image size of the calibration images of the second camera.
K1: Output camera matrix for the first camera.
xi1: Output parameter xi of CMei's model for the first camera.
D1: Output distortion parameters \((k_1, k_2, p_1, p_2)\) for the first camera.
K2: Output camera matrix for the second camera.
xi2: Output parameter xi of CMei's model for the second camera.
D2: Output distortion parameters \((k_1, k_2, p_1, p_2)\) for the second camera.
rvec: Output rotation between the first and second camera.
tvec: Output translation between the first and second camera.
rvecsL: Output rotations for each image of the first camera.
tvecsL: Output translations for each image of the first camera.
flags: The flags that control stereoCalibrate.
criteria: Termination criteria for optimization.
idx: Indices of image pairs that pass initialization and are actually used in calibration, so the size of rvecsL is idx.total().
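
A minimal sketch of a stereo calibration call, assuming the three point vectors have been filled per image pair by a pattern detector; the wrapper function, its name, the flags and the criteria are illustrative assumptions.

#include <opencv2/ccalib/omnidir.hpp>
#include <vector>

double calibrateStereoRig(std::vector<cv::Mat>& objPoints,
                          std::vector<cv::Mat>& imgPoints1,
                          std::vector<cv::Mat>& imgPoints2,
                          const cv::Size& size1, const cv::Size& size2,
                          cv::Mat& rvec, cv::Mat& tvec)
{
    cv::Mat K1, xi1, D1, K2, xi2, D2, idx;
    std::vector<cv::Mat> rvecsL, tvecsL;
    cv::TermCriteria criteria(cv::TermCriteria::COUNT + cv::TermCriteria::EPS,
                              200, 1e-8);
    // rvec/tvec receive the pose of the second camera relative to the first;
    // the returned value is the final RMS reprojection error.
    return cv::omnidir::stereoCalibrate(objPoints, imgPoints1, imgPoints2,
                                        size1, size2,
                                        K1, xi1, D1, K2, xi2, D2,
                                        rvec, tvec, rvecsL, tvecsL,
                                        0 /* flags */, criteria, idx);
}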

§ stereoReconstruct()

void cv::omnidir::stereoReconstruct ( InputArray  image1,
InputArray  image2,
InputArray  K1,
InputArray  D1,
InputArray  xi1,
InputArray  K2,
InputArray  D2,
InputArray  xi2,
InputArray  R,
InputArray  T,
int  flag,
int  numDisparities,
int  SADWindowSize,
OutputArray  disparity,
OutputArray  image1Rec,
OutputArray  image2Rec,
const Size & newSize = Size(),
InputArray  Knew = cv::noArray(),
OutputArray  pointCloud = cv::noArray(),
int  pointType = XYZRGB 
)
Python:
disparity, image1Rec, image2Rec, pointCloud=cv.omnidir.stereoReconstruct(image1, image2, K1, D1, xi1, K2, D2, xi2, R, T, flag, numDisparities, SADWindowSize[, disparity[, image1Rec[, image2Rec[, newSize[, Knew[, pointCloud[, pointType]]]]]]])

Stereo 3D reconstruction from a pair of images.

Parameters
image1: The first input image.
image2: The second input image.
K1: Input camera matrix of the first camera.
D1: Input distortion parameters \((k_1, k_2, p_1, p_2)\) for the first camera.
xi1: Input parameter xi of CMei's model for the first camera.
K2: Input camera matrix of the second camera.
D2: Input distortion parameters \((k_1, k_2, p_1, p_2)\) for the second camera.
xi2: Input parameter xi of CMei's model for the second camera.
R: Rotation between the first and second camera.
T: Translation between the first and second camera.
flag: Flag of rectification type; RECTIFY_PERSPECTIVE or RECTIFY_LONGLATI.
numDisparities: The parameter 'numDisparities' in StereoSGBM; see StereoSGBM for details.
SADWindowSize: The parameter 'SADWindowSize' in StereoSGBM; see StereoSGBM for details.
disparity: Disparity map generated by stereo matching.
image1Rec: Rectified image of the first image.
image2Rec: Rectified image of the second image.
newSize: Image size of the rectified images; see omnidir::undistortImage.
Knew: New camera matrix of the rectified images; see omnidir::undistortImage.
pointCloud: Point cloud of the 3D reconstruction, with type CV_64FC3.
pointType: Point cloud type; it can be XYZRGB or XYZ.
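
An illustrative sketch of a full reconstruction call. The disparity range and block size are placeholder values, the default Knew is kept, and the calibration results are assumed to come from omnidir::stereoCalibrate().

#include <opencv2/ccalib/omnidir.hpp>

void reconstruct(const cv::Mat& img1, const cv::Mat& img2,
                 const cv::Mat& K1, const cv::Mat& D1, const cv::Mat& xi1,
                 const cv::Mat& K2, const cv::Mat& D2, const cv::Mat& xi2,
                 const cv::Mat& R, const cv::Mat& T,
                 cv::Mat& disparity, cv::Mat& pointCloud)
{
    cv::Mat rect1, rect2;
    cv::omnidir::stereoReconstruct(img1, img2, K1, D1, xi1, K2, D2, xi2, R, T,
                                   cv::omnidir::RECTIFY_LONGLATI,
                                   16 * 5 /* numDisparities */,
                                   7      /* SADWindowSize  */,
                                   disparity, rect1, rect2,
                                   img1.size(),    // newSize
                                   cv::noArray(),  // Knew (library default)
                                   pointCloud, cv::omnidir::XYZRGB);
}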

§ stereoRectify()

void cv::omnidir::stereoRectify ( InputArray  R,
InputArray  T,
OutputArray  R1,
OutputArray  R2 
)
Python:
R1, R2=cv.omnidir.stereoRectify(R, T[, R1[, R2]])

Stereo rectification for the omnidirectional camera model. It computes the rectification rotations for two cameras.

Parameters
R: Rotation between the first and second camera.
T: Translation between the first and second camera.
R1: Output 3x3 rotation matrix for the first camera.
R2: Output 3x3 rotation matrix for the second camera.
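
A short sketch showing how these rotations feed into rectification maps; R and T are assumed to come from omnidir::stereoCalibrate(), and the wrapper function is hypothetical.

#include <opencv2/ccalib/omnidir.hpp>

void rectificationRotations(const cv::Mat& R, const cv::Mat& T,
                            cv::Mat& R1, cv::Mat& R2)
{
    cv::omnidir::stereoRectify(R, T, R1, R2);
    // R1 (first camera) and R2 (second camera) can then be passed as the R
    // argument of cv::omnidir::initUndistortRectifyMap() for each camera.
}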

§ undistortImage()

void cv::omnidir::undistortImage ( InputArray  distorted,
OutputArray  undistorted,
InputArray  K,
InputArray  D,
InputArray  xi,
int  flags,
InputArray  Knew = cv::noArray(),
const Size & new_size = Size(),
InputArray  R = Mat::eye(3, 3, CV_64F) 
)
Python:
undistorted=cv.omnidir.undistortImage(distorted, K, D, xi, flags[, undistorted[, Knew[, new_size[, R]]]])

Undistort omnidirectional images to perspective images.

Parameters
distorted: The input omnidirectional image.
undistorted: The output undistorted image.
K: Camera matrix \(K = \vecthreethree{f_x}{s}{c_x}{0}{f_y}{c_y}{0}{0}{1}\).
D: Input vector of distortion coefficients \((k_1, k_2, p_1, p_2)\).
xi: The parameter xi for CMei's model.
flags: Flags indicating the rectification type; RECTIFY_PERSPECTIVE, RECTIFY_CYLINDRICAL, RECTIFY_LONGLATI and RECTIFY_STEREOGRAPHIC are supported.
Knew: Camera matrix of the distorted image. If it is not assigned, it is just K.
new_size: The new image size. By default, it is the size of distorted.
R: Rotation matrix between the input and output images. By default, it is the identity matrix.
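
A hedged sketch that unwraps a fisheye image into a longitude-latitude panorama. The Knew choice (focal lengths of new_size divided by pi, zero principal point) is only an illustrative assumption, as is the helper name.

#include <opencv2/ccalib/omnidir.hpp>

cv::Mat toLongLat(const cv::Mat& fisheye, const cv::Mat& K,
                  const cv::Mat& D, const cv::Mat& xi,
                  const cv::Size& newSize)
{
    // Spread roughly pi radians of longitude/latitude over the output image.
    cv::Matx33d Knew(newSize.width / CV_PI, 0.0, 0.0,
                     0.0, newSize.height / CV_PI, 0.0,
                     0.0, 0.0, 1.0);
    cv::Mat panorama;
    cv::omnidir::undistortImage(fisheye, panorama, K, D, xi,
                                cv::omnidir::RECTIFY_LONGLATI, Knew, newSize);
    return panorama;
}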

§ undistortPoints()

void cv::omnidir::undistortPoints ( InputArray  distorted,
OutputArray  undistorted,
InputArray  K,
InputArray  D,
InputArray  xi,
InputArray  R 
)
Python:
undistorted=cv.omnidir.undistortPoints(distorted, K, D, xi, R[, undistorted])

Undistort 2D image points for omnidirectional camera using CMei's model.

Parameters
distorted: Array of distorted image points, vector of Vec2f or 1xN/Nx1 2-channel Mat of type CV_32F; CV_64F depth is also acceptable.
K: Camera matrix \(K = \vecthreethree{f_x}{s}{c_x}{0}{f_y}{c_y}{0}{0}{1}\).
D: Distortion coefficients \((k_1, k_2, p_1, p_2)\).
xi: The parameter xi for CMei's model.
R: Rotation transform between the original and object space: 3x3 1-channel, or vector 3x1/1x3 1-channel, or 1x1 3-channel.
undistorted: Array of normalized object points, vector of Vec2f/Vec2d or 1xN/Nx1 2-channel Mat with the same depth as the distorted points.
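
A minimal sketch, assuming K, D and xi come from a prior calibrate() call and no additional rotation is applied; the helper name is hypothetical.

#include <opencv2/ccalib/omnidir.hpp>
#include <vector>

std::vector<cv::Point2f> normalizePoints(const std::vector<cv::Point2f>& pixels,
                                         const cv::Mat& K, const cv::Mat& D,
                                         const cv::Mat& xi)
{
    std::vector<cv::Point2f> normalized;
    // Identity R: keep the points in the original camera orientation.
    cv::omnidir::undistortPoints(pixels, normalized, K, D, xi,
                                 cv::Mat::eye(3, 3, CV_64F));
    return normalized;
}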