Package org.opencv.calib3d
Class Calib3d
 java.lang.Object

 org.opencv.calib3d.Calib3d

public class Calib3d extends java.lang.Object


Field Summary

Constructor Summary
Calib3d()

Method Summary
static double
calibrateCamera(java.util.List<Mat> objectPoints, java.util.List<Mat> imagePoints, Size imageSize, Mat cameraMatrix, Mat distCoeffs, java.util.List<Mat> rvecs, java.util.List<Mat> tvecs)
static double
calibrateCamera(java.util.List<Mat> objectPoints, java.util.List<Mat> imagePoints, Size imageSize, Mat cameraMatrix, Mat distCoeffs, java.util.List<Mat> rvecs, java.util.List<Mat> tvecs, int flags)
static double
calibrateCamera(java.util.List<Mat> objectPoints, java.util.List<Mat> imagePoints, Size imageSize, Mat cameraMatrix, Mat distCoeffs, java.util.List<Mat> rvecs, java.util.List<Mat> tvecs, int flags, TermCriteria criteria)
static double
calibrateCameraExtended(java.util.List<Mat> objectPoints, java.util.List<Mat> imagePoints, Size imageSize, Mat cameraMatrix, Mat distCoeffs, java.util.List<Mat> rvecs, java.util.List<Mat> tvecs, Mat stdDeviationsIntrinsics, Mat stdDeviationsExtrinsics, Mat perViewErrors)
Finds the camera intrinsic and extrinsic parameters from several views of a calibration pattern. (A usage sketch follows this method summary.)
static double
calibrateCameraExtended(java.util.List<Mat> objectPoints, java.util.List<Mat> imagePoints, Size imageSize, Mat cameraMatrix, Mat distCoeffs, java.util.List<Mat> rvecs, java.util.List<Mat> tvecs, Mat stdDeviationsIntrinsics, Mat stdDeviationsExtrinsics, Mat perViewErrors, int flags)
Finds the camera intrinsic and extrinsic parameters from several views of a calibration pattern.
static double
calibrateCameraExtended(java.util.List<Mat> objectPoints, java.util.List<Mat> imagePoints, Size imageSize, Mat cameraMatrix, Mat distCoeffs, java.util.List<Mat> rvecs, java.util.List<Mat> tvecs, Mat stdDeviationsIntrinsics, Mat stdDeviationsExtrinsics, Mat perViewErrors, int flags, TermCriteria criteria)
Finds the camera intrinsic and extrinsic parameters from several views of a calibration pattern.
static double
calibrateCameraRO(java.util.List<Mat> objectPoints, java.util.List<Mat> imagePoints, Size imageSize, int iFixedPoint, Mat cameraMatrix, Mat distCoeffs, java.util.List<Mat> rvecs, java.util.List<Mat> tvecs, Mat newObjPoints)
static double
calibrateCameraRO(java.util.List<Mat> objectPoints, java.util.List<Mat> imagePoints, Size imageSize, int iFixedPoint, Mat cameraMatrix, Mat distCoeffs, java.util.List<Mat> rvecs, java.util.List<Mat> tvecs, Mat newObjPoints, int flags)
static double
calibrateCameraRO(java.util.List<Mat> objectPoints, java.util.List<Mat> imagePoints, Size imageSize, int iFixedPoint, Mat cameraMatrix, Mat distCoeffs, java.util.List<Mat> rvecs, java.util.List<Mat> tvecs, Mat newObjPoints, int flags, TermCriteria criteria)
static double
calibrateCameraROExtended(java.util.List<Mat> objectPoints, java.util.List<Mat> imagePoints, Size imageSize, int iFixedPoint, Mat cameraMatrix, Mat distCoeffs, java.util.List<Mat> rvecs, java.util.List<Mat> tvecs, Mat newObjPoints, Mat stdDeviationsIntrinsics, Mat stdDeviationsExtrinsics, Mat stdDeviationsObjPoints, Mat perViewErrors)
Finds the camera intrinsic and extrinsic parameters from several views of a calibration pattern.
static double
calibrateCameraROExtended(java.util.List<Mat> objectPoints, java.util.List<Mat> imagePoints, Size imageSize, int iFixedPoint, Mat cameraMatrix, Mat distCoeffs, java.util.List<Mat> rvecs, java.util.List<Mat> tvecs, Mat newObjPoints, Mat stdDeviationsIntrinsics, Mat stdDeviationsExtrinsics, Mat stdDeviationsObjPoints, Mat perViewErrors, int flags)
Finds the camera intrinsic and extrinsic parameters from several views of a calibration pattern.
static double
calibrateCameraROExtended(java.util.List<Mat> objectPoints, java.util.List<Mat> imagePoints, Size imageSize, int iFixedPoint, Mat cameraMatrix, Mat distCoeffs, java.util.List<Mat> rvecs, java.util.List<Mat> tvecs, Mat newObjPoints, Mat stdDeviationsIntrinsics, Mat stdDeviationsExtrinsics, Mat stdDeviationsObjPoints, Mat perViewErrors, int flags, TermCriteria criteria)
Finds the camera intrinsic and extrinsic parameters from several views of a calibration pattern.
static void
calibrateHandEye(java.util.List<Mat> R_gripper2base, java.util.List<Mat> t_gripper2base, java.util.List<Mat> R_target2cam, java.util.List<Mat> t_target2cam, Mat R_cam2gripper, Mat t_cam2gripper)
Computes Hand-Eye calibration: \(_{}^{g}\textrm{T}_c\)
static void
calibrateHandEye(java.util.List<Mat> R_gripper2base, java.util.List<Mat> t_gripper2base, java.util.List<Mat> R_target2cam, java.util.List<Mat> t_target2cam, Mat R_cam2gripper, Mat t_cam2gripper, int method)
Computes Hand-Eye calibration: \(_{}^{g}\textrm{T}_c\)
static void
calibrateRobotWorldHandEye(java.util.List<Mat> R_world2cam, java.util.List<Mat> t_world2cam, java.util.List<Mat> R_base2gripper, java.util.List<Mat> t_base2gripper, Mat R_base2world, Mat t_base2world, Mat R_gripper2cam, Mat t_gripper2cam)
Computes Robot-World/Hand-Eye calibration: \(_{}^{w}\textrm{T}_b\) and \(_{}^{c}\textrm{T}_g\)
static void
calibrateRobotWorldHandEye(java.util.List<Mat> R_world2cam, java.util.List<Mat> t_world2cam, java.util.List<Mat> R_base2gripper, java.util.List<Mat> t_base2gripper, Mat R_base2world, Mat t_base2world, Mat R_gripper2cam, Mat t_gripper2cam, int method)
Computes Robot-World/Hand-Eye calibration: \(_{}^{w}\textrm{T}_b\) and \(_{}^{c}\textrm{T}_g\)
static void
calibrationMatrixValues(Mat cameraMatrix, Size imageSize, double apertureWidth, double apertureHeight, double[] fovx, double[] fovy, double[] focalLength, Point principalPoint, double[] aspectRatio)
Computes useful camera characteristics from the camera intrinsic matrix.
static boolean
checkChessboard(Mat img, Size size)
static void
composeRT(Mat rvec1, Mat tvec1, Mat rvec2, Mat tvec2, Mat rvec3, Mat tvec3)
Combines two rotation-and-shift transformations.
static void
composeRT(Mat rvec1, Mat tvec1, Mat rvec2, Mat tvec2, Mat rvec3, Mat tvec3, Mat dr3dr1)
Combines two rotation-and-shift transformations.
static void
composeRT(Mat rvec1, Mat tvec1, Mat rvec2, Mat tvec2, Mat rvec3, Mat tvec3, Mat dr3dr1, Mat dr3dt1)
Combines two rotation-and-shift transformations.
static void
composeRT(Mat rvec1, Mat tvec1, Mat rvec2, Mat tvec2, Mat rvec3, Mat tvec3, Mat dr3dr1, Mat dr3dt1, Mat dr3dr2)
Combines two rotation-and-shift transformations.
static void
composeRT(Mat rvec1, Mat tvec1, Mat rvec2, Mat tvec2, Mat rvec3, Mat tvec3, Mat dr3dr1, Mat dr3dt1, Mat dr3dr2, Mat dr3dt2)
Combines two rotation-and-shift transformations.
static void
composeRT(Mat rvec1, Mat tvec1, Mat rvec2, Mat tvec2, Mat rvec3, Mat tvec3, Mat dr3dr1, Mat dr3dt1, Mat dr3dr2, Mat dr3dt2, Mat dt3dr1)
Combines two rotation-and-shift transformations.
static void
composeRT(Mat rvec1, Mat tvec1, Mat rvec2, Mat tvec2, Mat rvec3, Mat tvec3, Mat dr3dr1, Mat dr3dt1, Mat dr3dr2, Mat dr3dt2, Mat dt3dr1, Mat dt3dt1)
Combines two rotation-and-shift transformations.
static void
composeRT(Mat rvec1, Mat tvec1, Mat rvec2, Mat tvec2, Mat rvec3, Mat tvec3, Mat dr3dr1, Mat dr3dt1, Mat dr3dr2, Mat dr3dt2, Mat dt3dr1, Mat dt3dt1, Mat dt3dr2)
Combines two rotation-and-shift transformations.
static void
composeRT(Mat rvec1, Mat tvec1, Mat rvec2, Mat tvec2, Mat rvec3, Mat tvec3, Mat dr3dr1, Mat dr3dt1, Mat dr3dr2, Mat dr3dt2, Mat dt3dr1, Mat dt3dt1, Mat dt3dr2, Mat dt3dt2)
Combines two rotation-and-shift transformations.
static void
computeCorrespondEpilines(Mat points, int whichImage, Mat F, Mat lines)
For points in an image of a stereo pair, computes the corresponding epilines in the other image.
static void
convertPointsFromHomogeneous(Mat src, Mat dst)
Converts points from homogeneous to Euclidean space.
static void
convertPointsToHomogeneous(Mat src, Mat dst)
Converts points from Euclidean to homogeneous space.
static void
correctMatches(Mat F, Mat points1, Mat points2, Mat newPoints1, Mat newPoints2)
Refines coordinates of corresponding points.
static void
decomposeEssentialMat(Mat E, Mat R1, Mat R2, Mat t)
Decomposes an essential matrix into possible rotations and a translation.
static int
decomposeHomographyMat(Mat H, Mat K, java.util.List<Mat> rotations, java.util.List<Mat> translations, java.util.List<Mat> normals)
Decomposes a homography matrix into rotation(s), translation(s) and plane normal(s).
static void
decomposeProjectionMatrix(Mat projMatrix, Mat cameraMatrix, Mat rotMatrix, Mat transVect)
Decomposes a projection matrix into a rotation matrix and a camera intrinsic matrix.
static void
decomposeProjectionMatrix(Mat projMatrix, Mat cameraMatrix, Mat rotMatrix, Mat transVect, Mat rotMatrixX)
Decomposes a projection matrix into a rotation matrix and a camera intrinsic matrix.
static void
decomposeProjectionMatrix(Mat projMatrix, Mat cameraMatrix, Mat rotMatrix, Mat transVect, Mat rotMatrixX, Mat rotMatrixY)
Decomposes a projection matrix into a rotation matrix and a camera intrinsic matrix.
static void
decomposeProjectionMatrix(Mat projMatrix, Mat cameraMatrix, Mat rotMatrix, Mat transVect, Mat rotMatrixX, Mat rotMatrixY, Mat rotMatrixZ)
Decomposes a projection matrix into a rotation matrix and a camera intrinsic matrix.
static void
decomposeProjectionMatrix(Mat projMatrix, Mat cameraMatrix, Mat rotMatrix, Mat transVect, Mat rotMatrixX, Mat rotMatrixY, Mat rotMatrixZ, Mat eulerAngles)
Decomposes a projection matrix into a rotation matrix and a camera intrinsic matrix.
static void
drawChessboardCorners(Mat image, Size patternSize, MatOfPoint2f corners, boolean patternWasFound)
Renders the detected chessboard corners.
static void
drawFrameAxes(Mat image, Mat cameraMatrix, Mat distCoeffs, Mat rvec, Mat tvec, float length)
Draws axes of the world/object coordinate system from pose estimation.
static void
drawFrameAxes(Mat image, Mat cameraMatrix, Mat distCoeffs, Mat rvec, Mat tvec, float length, int thickness)
Draws axes of the world/object coordinate system from pose estimation.
static Mat
estimateAffine2D(Mat from, Mat to)
Computes an optimal affine transformation between two 2D point sets.
static Mat
estimateAffine2D(Mat from, Mat to, Mat inliers)
Computes an optimal affine transformation between two 2D point sets.
static Mat
estimateAffine2D(Mat from, Mat to, Mat inliers, int method)
Computes an optimal affine transformation between two 2D point sets.
static Mat
estimateAffine2D(Mat from, Mat to, Mat inliers, int method, double ransacReprojThreshold)
Computes an optimal affine transformation between two 2D point sets.
static Mat
estimateAffine2D(Mat from, Mat to, Mat inliers, int method, double ransacReprojThreshold, long maxIters)
Computes an optimal affine transformation between two 2D point sets.
static Mat
estimateAffine2D(Mat from, Mat to, Mat inliers, int method, double ransacReprojThreshold, long maxIters, double confidence)
Computes an optimal affine transformation between two 2D point sets.
static Mat
estimateAffine2D(Mat from, Mat to, Mat inliers, int method, double ransacReprojThreshold, long maxIters, double confidence, long refineIters)
Computes an optimal affine transformation between two 2D point sets.
static Mat
estimateAffine2D(Mat pts1, Mat pts2, Mat inliers, UsacParams params)
static Mat
estimateAffine3D(Mat src, Mat dst)
Computes an optimal affine transformation between two 3D point sets.
static Mat
estimateAffine3D(Mat src, Mat dst, double[] scale)
Computes an optimal affine transformation between two 3D point sets.
static Mat
estimateAffine3D(Mat src, Mat dst, double[] scale, boolean force_rotation)
Computes an optimal affine transformation between two 3D point sets.
static int
estimateAffine3D(Mat src, Mat dst, Mat out, Mat inliers)
Computes an optimal affine transformation between two 3D point sets.
static int
estimateAffine3D(Mat src, Mat dst, Mat out, Mat inliers, double ransacThreshold)
Computes an optimal affine transformation between two 3D point sets.
static int
estimateAffine3D(Mat src, Mat dst, Mat out, Mat inliers, double ransacThreshold, double confidence)
Computes an optimal affine transformation between two 3D point sets.
static Mat
estimateAffinePartial2D(Mat from, Mat to)
Computes an optimal limited affine transformation with 4 degrees of freedom between two 2D point sets.
static Mat
estimateAffinePartial2D(Mat from, Mat to, Mat inliers)
Computes an optimal limited affine transformation with 4 degrees of freedom between two 2D point sets.
static Mat
estimateAffinePartial2D(Mat from, Mat to, Mat inliers, int method)
Computes an optimal limited affine transformation with 4 degrees of freedom between two 2D point sets.
static Mat
estimateAffinePartial2D(Mat from, Mat to, Mat inliers, int method, double ransacReprojThreshold)
Computes an optimal limited affine transformation with 4 degrees of freedom between two 2D point sets.
static Mat
estimateAffinePartial2D(Mat from, Mat to, Mat inliers, int method, double ransacReprojThreshold, long maxIters)
Computes an optimal limited affine transformation with 4 degrees of freedom between two 2D point sets.
static Mat
estimateAffinePartial2D(Mat from, Mat to, Mat inliers, int method, double ransacReprojThreshold, long maxIters, double confidence)
Computes an optimal limited affine transformation with 4 degrees of freedom between two 2D point sets.
static Mat
estimateAffinePartial2D(Mat from, Mat to, Mat inliers, int method, double ransacReprojThreshold, long maxIters, double confidence, long refineIters)
Computes an optimal limited affine transformation with 4 degrees of freedom between two 2D point sets.
static Scalar
estimateChessboardSharpness(Mat image, Size patternSize, Mat corners)
Estimates the sharpness of a detected chessboard.
static Scalar
estimateChessboardSharpness(Mat image, Size patternSize, Mat corners, float rise_distance)
Estimates the sharpness of a detected chessboard.
static Scalar
estimateChessboardSharpness(Mat image, Size patternSize, Mat corners, float rise_distance, boolean vertical)
Estimates the sharpness of a detected chessboard.
static Scalar
estimateChessboardSharpness(Mat image, Size patternSize, Mat corners, float rise_distance, boolean vertical, Mat sharpness)
Estimates the sharpness of a detected chessboard.
static int
estimateTranslation3D(Mat src, Mat dst, Mat out, Mat inliers)
Computes an optimal translation between two 3D point sets.
static int
estimateTranslation3D(Mat src, Mat dst, Mat out, Mat inliers, double ransacThreshold)
Computes an optimal translation between two 3D point sets.
static int
estimateTranslation3D(Mat src, Mat dst, Mat out, Mat inliers, double ransacThreshold, double confidence)
Computes an optimal translation between two 3D point sets.
static void
filterHomographyDecompByVisibleRefpoints(java.util.List<Mat> rotations, java.util.List<Mat> normals, Mat beforePoints, Mat afterPoints, Mat possibleSolutions)
Filters homography decompositions based on additional information.
static void
filterHomographyDecompByVisibleRefpoints(java.util.List<Mat> rotations, java.util.List<Mat> normals, Mat beforePoints, Mat afterPoints, Mat possibleSolutions, Mat pointsMask)
Filters homography decompositions based on additional information.
static void
filterSpeckles(Mat img, double newVal, int maxSpeckleSize, double maxDiff)
Filters off small noise blobs (speckles) in the disparity map.
static void
filterSpeckles(Mat img, double newVal, int maxSpeckleSize, double maxDiff, Mat buf)
Filters off small noise blobs (speckles) in the disparity map.
static boolean
find4QuadCornerSubpix(Mat img, Mat corners, Size region_size)
static boolean
findChessboardCorners(Mat image, Size patternSize, MatOfPoint2f corners)
Finds the positions of internal corners of the chessboard.
static boolean
findChessboardCorners(Mat image, Size patternSize, MatOfPoint2f corners, int flags)
Finds the positions of internal corners of the chessboard.
static boolean
findChessboardCornersSB(Mat image, Size patternSize, Mat corners)
static boolean
findChessboardCornersSB(Mat image, Size patternSize, Mat corners, int flags)
static boolean
findChessboardCornersSBWithMeta(Mat image, Size patternSize, Mat corners, int flags, Mat meta)
Finds the positions of internal corners of the chessboard using a sector-based approach.
static boolean
findCirclesGrid(Mat image, Size patternSize, Mat centers)
static boolean
findCirclesGrid(Mat image, Size patternSize, Mat centers, int flags)
static Mat
findEssentialMat(Mat points1, Mat points2)
static Mat
findEssentialMat(Mat points1, Mat points2, double focal)
static Mat
findEssentialMat(Mat points1, Mat points2, double focal, Point pp)
static Mat
findEssentialMat(Mat points1, Mat points2, double focal, Point pp, int method)
static Mat
findEssentialMat(Mat points1, Mat points2, double focal, Point pp, int method, double prob)
static Mat
findEssentialMat(Mat points1, Mat points2, double focal, Point pp, int method, double prob, double threshold)
static Mat
findEssentialMat(Mat points1, Mat points2, double focal, Point pp, int method, double prob, double threshold, int maxIters)
static Mat
findEssentialMat(Mat points1, Mat points2, double focal, Point pp, int method, double prob, double threshold, int maxIters, Mat mask)
static Mat
findEssentialMat(Mat points1, Mat points2, Mat cameraMatrix)
Calculates an essential matrix from the corresponding points in two images.
static Mat
findEssentialMat(Mat points1, Mat points2, Mat cameraMatrix, int method)
Calculates an essential matrix from the corresponding points in two images.
static Mat
findEssentialMat(Mat points1, Mat points2, Mat cameraMatrix, int method, double prob)
Calculates an essential matrix from the corresponding points in two images.
static Mat
findEssentialMat(Mat points1, Mat points2, Mat cameraMatrix, int method, double prob, double threshold)
Calculates an essential matrix from the corresponding points in two images.
static Mat
findEssentialMat(Mat points1, Mat points2, Mat cameraMatrix, int method, double prob, double threshold, int maxIters)
Calculates an essential matrix from the corresponding points in two images.
static Mat
findEssentialMat(Mat points1, Mat points2, Mat cameraMatrix, int method, double prob, double threshold, int maxIters, Mat mask)
Calculates an essential matrix from the corresponding points in two images.
static Mat
findEssentialMat(Mat points1, Mat points2, Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2)
Calculates an essential matrix from the corresponding points in two images from potentially two different cameras.
static Mat
findEssentialMat(Mat points1, Mat points2, Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, int method)
Calculates an essential matrix from the corresponding points in two images from potentially two different cameras.
static Mat
findEssentialMat(Mat points1, Mat points2, Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, int method, double prob)
Calculates an essential matrix from the corresponding points in two images from potentially two different cameras.
static Mat
findEssentialMat(Mat points1, Mat points2, Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, int method, double prob, double threshold)
Calculates an essential matrix from the corresponding points in two images from potentially two different cameras.
static Mat
findEssentialMat(Mat points1, Mat points2, Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, int method, double prob, double threshold, Mat mask)
Calculates an essential matrix from the corresponding points in two images from potentially two different cameras.
static Mat
findEssentialMat(Mat points1, Mat points2, Mat cameraMatrix1, Mat cameraMatrix2, Mat dist_coeff1, Mat dist_coeff2, Mat mask, UsacParams params)
static Mat
findFundamentalMat(MatOfPoint2f points1, MatOfPoint2f points2)
static Mat
findFundamentalMat(MatOfPoint2f points1, MatOfPoint2f points2, int method)
static Mat
findFundamentalMat(MatOfPoint2f points1, MatOfPoint2f points2, int method, double ransacReprojThreshold)
static Mat
findFundamentalMat(MatOfPoint2f points1, MatOfPoint2f points2, int method, double ransacReprojThreshold, double confidence)
static Mat
findFundamentalMat(MatOfPoint2f points1, MatOfPoint2f points2, int method, double ransacReprojThreshold, double confidence, int maxIters)
Calculates a fundamental matrix from the corresponding points in two images.
static Mat
findFundamentalMat(MatOfPoint2f points1, MatOfPoint2f points2, int method, double ransacReprojThreshold, double confidence, int maxIters, Mat mask)
Calculates a fundamental matrix from the corresponding points in two images.
static Mat
findFundamentalMat(MatOfPoint2f points1, MatOfPoint2f points2, int method, double ransacReprojThreshold, double confidence, Mat mask)
static Mat
findFundamentalMat(MatOfPoint2f points1, MatOfPoint2f points2, Mat mask, UsacParams params)
static Mat
findHomography(MatOfPoint2f srcPoints, MatOfPoint2f dstPoints)
Finds a perspective transformation between two planes.
static Mat
findHomography(MatOfPoint2f srcPoints, MatOfPoint2f dstPoints, int method)
Finds a perspective transformation between two planes.
static Mat
findHomography(MatOfPoint2f srcPoints, MatOfPoint2f dstPoints, int method, double ransacReprojThreshold)
Finds a perspective transformation between two planes.
static Mat
findHomography(MatOfPoint2f srcPoints, MatOfPoint2f dstPoints, int method, double ransacReprojThreshold, Mat mask)
Finds a perspective transformation between two planes.
static Mat
findHomography(MatOfPoint2f srcPoints, MatOfPoint2f dstPoints, int method, double ransacReprojThreshold, Mat mask, int maxIters)
Finds a perspective transformation between two planes.
static Mat
findHomography(MatOfPoint2f srcPoints, MatOfPoint2f dstPoints, int method, double ransacReprojThreshold, Mat mask, int maxIters, double confidence)
Finds a perspective transformation between two planes.
static Mat
findHomography(MatOfPoint2f srcPoints, MatOfPoint2f dstPoints, Mat mask, UsacParams params)
static double
fisheye_calibrate(java.util.List<Mat> objectPoints, java.util.List<Mat> imagePoints, Size image_size, Mat K, Mat D, java.util.List<Mat> rvecs, java.util.List<Mat> tvecs)
Performs camera calibration.
static double
fisheye_calibrate(java.util.List<Mat> objectPoints, java.util.List<Mat> imagePoints, Size image_size, Mat K, Mat D, java.util.List<Mat> rvecs, java.util.List<Mat> tvecs, int flags)
Performs camera calibration.
static double
fisheye_calibrate(java.util.List<Mat> objectPoints, java.util.List<Mat> imagePoints, Size image_size, Mat K, Mat D, java.util.List<Mat> rvecs, java.util.List<Mat> tvecs, int flags, TermCriteria criteria)
Performs camera calibration.
static void
fisheye_distortPoints(Mat undistorted, Mat distorted, Mat K, Mat D)
Distorts 2D points using fisheye model.
static void
fisheye_distortPoints(Mat undistorted, Mat distorted, Mat K, Mat D, double alpha)
Distorts 2D points using fisheye model.
static void
fisheye_estimateNewCameraMatrixForUndistortRectify(Mat K, Mat D, Size image_size, Mat R, Mat P)
Estimates new camera intrinsic matrix for undistortion or rectification.
static void
fisheye_estimateNewCameraMatrixForUndistortRectify(Mat K, Mat D, Size image_size, Mat R, Mat P, double balance)
Estimates new camera intrinsic matrix for undistortion or rectification.
static void
fisheye_estimateNewCameraMatrixForUndistortRectify(Mat K, Mat D, Size image_size, Mat R, Mat P, double balance, Size new_size)
Estimates new camera intrinsic matrix for undistortion or rectification.
static void
fisheye_estimateNewCameraMatrixForUndistortRectify(Mat K, Mat D, Size image_size, Mat R, Mat P, double balance, Size new_size, double fov_scale)
Estimates new camera intrinsic matrix for undistortion or rectification.
static void
fisheye_initUndistortRectifyMap(Mat K, Mat D, Mat R, Mat P, Size size, int m1type, Mat map1, Mat map2)
Computes undistortion and rectification maps for image transform by remap.
static void
fisheye_projectPoints(Mat objectPoints, Mat imagePoints, Mat rvec, Mat tvec, Mat K, Mat D)
static void
fisheye_projectPoints(Mat objectPoints, Mat imagePoints, Mat rvec, Mat tvec, Mat K, Mat D, double alpha)
static void
fisheye_projectPoints(Mat objectPoints, Mat imagePoints, Mat rvec, Mat tvec, Mat K, Mat D, double alpha, Mat jacobian)
static double
fisheye_stereoCalibrate(java.util.List<Mat> objectPoints, java.util.List<Mat> imagePoints1, java.util.List<Mat> imagePoints2, Mat K1, Mat D1, Mat K2, Mat D2, Size imageSize, Mat R, Mat T)
static double
fisheye_stereoCalibrate(java.util.List<Mat> objectPoints, java.util.List<Mat> imagePoints1, java.util.List<Mat> imagePoints2, Mat K1, Mat D1, Mat K2, Mat D2, Size imageSize, Mat R, Mat T, int flags)
static double
fisheye_stereoCalibrate(java.util.List<Mat> objectPoints, java.util.List<Mat> imagePoints1, java.util.List<Mat> imagePoints2, Mat K1, Mat D1, Mat K2, Mat D2, Size imageSize, Mat R, Mat T, int flags, TermCriteria criteria)
static double
fisheye_stereoCalibrate(java.util.List<Mat> objectPoints, java.util.List<Mat> imagePoints1, java.util.List<Mat> imagePoints2, Mat K1, Mat D1, Mat K2, Mat D2, Size imageSize, Mat R, Mat T, java.util.List<Mat> rvecs, java.util.List<Mat> tvecs)
Performs stereo calibration.
static double
fisheye_stereoCalibrate(java.util.List<Mat> objectPoints, java.util.List<Mat> imagePoints1, java.util.List<Mat> imagePoints2, Mat K1, Mat D1, Mat K2, Mat D2, Size imageSize, Mat R, Mat T, java.util.List<Mat> rvecs, java.util.List<Mat> tvecs, int flags)
Performs stereo calibration.
static double
fisheye_stereoCalibrate(java.util.List<Mat> objectPoints, java.util.List<Mat> imagePoints1, java.util.List<Mat> imagePoints2, Mat K1, Mat D1, Mat K2, Mat D2, Size imageSize, Mat R, Mat T, java.util.List<Mat> rvecs, java.util.List<Mat> tvecs, int flags, TermCriteria criteria)
Performs stereo calibration.
static void
fisheye_stereoRectify(Mat K1, Mat D1, Mat K2, Mat D2, Size imageSize, Mat R, Mat tvec, Mat R1, Mat R2, Mat P1, Mat P2, Mat Q, int flags)
Stereo rectification for fisheye camera model.
static void
fisheye_stereoRectify(Mat K1, Mat D1, Mat K2, Mat D2, Size imageSize, Mat R, Mat tvec, Mat R1, Mat R2, Mat P1, Mat P2, Mat Q, int flags, Size newImageSize)
Stereo rectification for fisheye camera model.
static void
fisheye_stereoRectify(Mat K1, Mat D1, Mat K2, Mat D2, Size imageSize, Mat R, Mat tvec, Mat R1, Mat R2, Mat P1, Mat P2, Mat Q, int flags, Size newImageSize, double balance)
Stereo rectification for fisheye camera model.
static void
fisheye_stereoRectify(Mat K1, Mat D1, Mat K2, Mat D2, Size imageSize, Mat R, Mat tvec, Mat R1, Mat R2, Mat P1, Mat P2, Mat Q, int flags, Size newImageSize, double balance, double fov_scale)
Stereo rectification for fisheye camera model.
static void
fisheye_undistortImage(Mat distorted, Mat undistorted, Mat K, Mat D)
Transforms an image to compensate for fisheye lens distortion.
static void
fisheye_undistortImage(Mat distorted, Mat undistorted, Mat K, Mat D, Mat Knew)
Transforms an image to compensate for fisheye lens distortion.
static void
fisheye_undistortImage(Mat distorted, Mat undistorted, Mat K, Mat D, Mat Knew, Size new_size)
Transforms an image to compensate for fisheye lens distortion.
static void
fisheye_undistortPoints(Mat distorted, Mat undistorted, Mat K, Mat D)
Undistorts 2D points using fisheye model.
static void
fisheye_undistortPoints(Mat distorted, Mat undistorted, Mat K, Mat D, Mat R)
Undistorts 2D points using fisheye model.
static void
fisheye_undistortPoints(Mat distorted, Mat undistorted, Mat K, Mat D, Mat R, Mat P)
Undistorts 2D points using fisheye model.
static void
fisheye_undistortPoints(Mat distorted, Mat undistorted, Mat K, Mat D, Mat R, Mat P, TermCriteria criteria)
Undistorts 2D points using fisheye model.
static Mat
getDefaultNewCameraMatrix(Mat cameraMatrix)
Returns the default new camera matrix.
static Mat
getDefaultNewCameraMatrix(Mat cameraMatrix, Size imgsize)
Returns the default new camera matrix.
static Mat
getDefaultNewCameraMatrix(Mat cameraMatrix, Size imgsize, boolean centerPrincipalPoint)
Returns the default new camera matrix.
static Mat
getOptimalNewCameraMatrix(Mat cameraMatrix, Mat distCoeffs, Size imageSize, double alpha)
Returns the new camera intrinsic matrix based on the free scaling parameter.
static Mat
getOptimalNewCameraMatrix(Mat cameraMatrix, Mat distCoeffs, Size imageSize, double alpha, Size newImgSize)
Returns the new camera intrinsic matrix based on the free scaling parameter.
static Mat
getOptimalNewCameraMatrix(Mat cameraMatrix, Mat distCoeffs, Size imageSize, double alpha, Size newImgSize, Rect validPixROI)
Returns the new camera intrinsic matrix based on the free scaling parameter.
static Mat
getOptimalNewCameraMatrix(Mat cameraMatrix, Mat distCoeffs, Size imageSize, double alpha, Size newImgSize, Rect validPixROI, boolean centerPrincipalPoint)
Returns the new camera intrinsic matrix based on the free scaling parameter.
static Rect
getValidDisparityROI(Rect roi1, Rect roi2, int minDisparity, int numberOfDisparities, int blockSize)
static Mat
initCameraMatrix2D(java.util.List<MatOfPoint3f> objectPoints, java.util.List<MatOfPoint2f> imagePoints, Size imageSize)
Finds an initial camera intrinsic matrix from 3D-2D point correspondences.
static Mat
initCameraMatrix2D(java.util.List<MatOfPoint3f> objectPoints, java.util.List<MatOfPoint2f> imagePoints, Size imageSize, double aspectRatio)
Finds an initial camera intrinsic matrix from 3D-2D point correspondences.
static void
initInverseRectificationMap(Mat cameraMatrix, Mat distCoeffs, Mat R, Mat newCameraMatrix, Size size, int m1type, Mat map1, Mat map2)
Computes the projection and inverse-rectification transformation map.
static void
initUndistortRectifyMap(Mat cameraMatrix, Mat distCoeffs, Mat R, Mat newCameraMatrix, Size size, int m1type, Mat map1, Mat map2)
Computes the undistortion and rectification transformation map.
static void
matMulDeriv(Mat A, Mat B, Mat dABdA, Mat dABdB)
Computes partial derivatives of the matrix product for each multiplied matrix.
static void
projectPoints(MatOfPoint3f objectPoints, Mat rvec, Mat tvec, Mat cameraMatrix, MatOfDouble distCoeffs, MatOfPoint2f imagePoints)
Projects 3D points to an image plane.
static void
projectPoints(MatOfPoint3f objectPoints, Mat rvec, Mat tvec, Mat cameraMatrix, MatOfDouble distCoeffs, MatOfPoint2f imagePoints, Mat jacobian)
Projects 3D points to an image plane.
static void
projectPoints(MatOfPoint3f objectPoints, Mat rvec, Mat tvec, Mat cameraMatrix, MatOfDouble distCoeffs, MatOfPoint2f imagePoints, Mat jacobian, double aspectRatio)
Projects 3D points to an image plane.
static int
recoverPose(Mat E, Mat points1, Mat points2, Mat R, Mat t)
static int
recoverPose(Mat E, Mat points1, Mat points2, Mat R, Mat t, double focal)
static int
recoverPose(Mat E, Mat points1, Mat points2, Mat R, Mat t, double focal, Point pp)
static int
recoverPose(Mat E, Mat points1, Mat points2, Mat R, Mat t, double focal, Point pp, Mat mask)
static int
recoverPose(Mat E, Mat points1, Mat points2, Mat cameraMatrix, Mat R, Mat t)
Recovers the relative camera rotation and the translation from an estimated essential matrix and the corresponding points in two images, using chirality check.
static int
recoverPose(Mat E, Mat points1, Mat points2, Mat cameraMatrix, Mat R, Mat t, double distanceThresh)
static int
recoverPose(Mat E, Mat points1, Mat points2, Mat cameraMatrix, Mat R, Mat t, double distanceThresh, Mat mask)
static int
recoverPose(Mat E, Mat points1, Mat points2, Mat cameraMatrix, Mat R, Mat t, double distanceThresh, Mat mask, Mat triangulatedPoints)
static int
recoverPose(Mat E, Mat points1, Mat points2, Mat cameraMatrix, Mat R, Mat t, Mat mask)
Recovers the relative camera rotation and the translation from an estimated essential matrix and the corresponding points in two images, using chirality check.
static int
recoverPose(Mat points1, Mat points2, Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, Mat E, Mat R, Mat t)
Recovers the relative camera rotation and the translation from corresponding points in two images from two different cameras, using chirality check.
static int
recoverPose(Mat points1, Mat points2, Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, Mat E, Mat R, Mat t, int method)
Recovers the relative camera rotation and the translation from corresponding points in two images from two different cameras, using chirality check.
static int
recoverPose(Mat points1, Mat points2, Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, Mat E, Mat R, Mat t, int method, double prob)
Recovers the relative camera rotation and the translation from corresponding points in two images from two different cameras, using chirality check.
static int
recoverPose(Mat points1, Mat points2, Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, Mat E, Mat R, Mat t, int method, double prob, double threshold)
Recovers the relative camera rotation and the translation from corresponding points in two images from two different cameras, using chirality check.
static int
recoverPose(Mat points1, Mat points2, Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, Mat E, Mat R, Mat t, int method, double prob, double threshold, Mat mask)
Recovers the relative camera rotation and the translation from corresponding points in two images from two different cameras, using chirality check.
static float
rectify3Collinear(Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, Mat cameraMatrix3, Mat distCoeffs3, java.util.List<Mat> imgpt1, java.util.List<Mat> imgpt3, Size imageSize, Mat R12, Mat T12, Mat R13, Mat T13, Mat R1, Mat R2, Mat R3, Mat P1, Mat P2, Mat P3, Mat Q, double alpha, Size newImgSize, Rect roi1, Rect roi2, int flags)
static void
reprojectImageTo3D(Mat disparity, Mat _3dImage, Mat Q)
Reprojects a disparity image to 3D space.
static void
reprojectImageTo3D(Mat disparity, Mat _3dImage, Mat Q, boolean handleMissingValues)
Reprojects a disparity image to 3D space.
static void
reprojectImageTo3D(Mat disparity, Mat _3dImage, Mat Q, boolean handleMissingValues, int ddepth)
Reprojects a disparity image to 3D space.
static void
Rodrigues(Mat src, Mat dst)
Converts a rotation matrix to a rotation vector or vice versa.
static void
Rodrigues(Mat src, Mat dst, Mat jacobian)
Converts a rotation matrix to a rotation vector or vice versa.
static double[]
RQDecomp3x3(Mat src, Mat mtxR, Mat mtxQ)
Computes an RQ decomposition of 3x3 matrices.
static double[]
RQDecomp3x3(Mat src, Mat mtxR, Mat mtxQ, Mat Qx)
Computes an RQ decomposition of 3x3 matrices.
static double[]
RQDecomp3x3(Mat src, Mat mtxR, Mat mtxQ, Mat Qx, Mat Qy)
Computes an RQ decomposition of 3x3 matrices.
static double[]
RQDecomp3x3(Mat src, Mat mtxR, Mat mtxQ, Mat Qx, Mat Qy, Mat Qz)
Computes an RQ decomposition of 3x3 matrices.
static double
sampsonDistance(Mat pt1, Mat pt2, Mat F)
Calculates the Sampson Distance between two points.
static int
solveP3P(Mat objectPoints, Mat imagePoints, Mat cameraMatrix, Mat distCoeffs, java.util.List<Mat> rvecs, java.util.List<Mat> tvecs, int flags)
Finds an object pose from 3 3D-2D point correspondences.
static boolean
solvePnP(MatOfPoint3f objectPoints, MatOfPoint2f imagePoints, Mat cameraMatrix, MatOfDouble distCoeffs, Mat rvec, Mat tvec)
Finds an object pose from 3D-2D point correspondences.
static boolean
solvePnP(MatOfPoint3f objectPoints, MatOfPoint2f imagePoints, Mat cameraMatrix, MatOfDouble distCoeffs, Mat rvec, Mat tvec, boolean useExtrinsicGuess)
Finds an object pose from 3D-2D point correspondences.
static boolean
solvePnP(MatOfPoint3f objectPoints, MatOfPoint2f imagePoints, Mat cameraMatrix, MatOfDouble distCoeffs, Mat rvec, Mat tvec, boolean useExtrinsicGuess, int flags)
Finds an object pose from 3D-2D point correspondences.
static int
solvePnPGeneric(Mat objectPoints, Mat imagePoints, Mat cameraMatrix, Mat distCoeffs, java.util.List<Mat> rvecs, java.util.List<Mat> tvecs)
Finds an object pose from 3D-2D point correspondences.
static int
solvePnPGeneric(Mat objectPoints, Mat imagePoints, Mat cameraMatrix, Mat distCoeffs, java.util.List<Mat> rvecs, java.util.List<Mat> tvecs, boolean useExtrinsicGuess)
Finds an object pose from 3D-2D point correspondences.
static int
solvePnPGeneric(Mat objectPoints, Mat imagePoints, Mat cameraMatrix, Mat distCoeffs, java.util.List<Mat> rvecs, java.util.List<Mat> tvecs, boolean useExtrinsicGuess, int flags)
Finds an object pose from 3D-2D point correspondences.
static int
solvePnPGeneric(Mat objectPoints, Mat imagePoints, Mat cameraMatrix, Mat distCoeffs, java.util.List<Mat> rvecs, java.util.List<Mat> tvecs, boolean useExtrinsicGuess, int flags, Mat rvec)
Finds an object pose from 3D-2D point correspondences.
static int
solvePnPGeneric(Mat objectPoints, Mat imagePoints, Mat cameraMatrix, Mat distCoeffs, java.util.List<Mat> rvecs, java.util.List<Mat> tvecs, boolean useExtrinsicGuess, int flags, Mat rvec, Mat tvec)
Finds an object pose from 3D-2D point correspondences.
static int
solvePnPGeneric(Mat objectPoints, Mat imagePoints, Mat cameraMatrix, Mat distCoeffs, java.util.List<Mat> rvecs, java.util.List<Mat> tvecs, boolean useExtrinsicGuess, int flags, Mat rvec, Mat tvec, Mat reprojectionError)
Finds an object pose from 3D-2D point correspondences.
static boolean
solvePnPRansac(MatOfPoint3f objectPoints, MatOfPoint2f imagePoints, Mat cameraMatrix, MatOfDouble distCoeffs, Mat rvec, Mat tvec)
Finds an object pose from 3D-2D point correspondences using the RANSAC scheme.
static boolean
solvePnPRansac(MatOfPoint3f objectPoints, MatOfPoint2f imagePoints, Mat cameraMatrix, MatOfDouble distCoeffs, Mat rvec, Mat tvec, boolean useExtrinsicGuess)
Finds an object pose from 3D-2D point correspondences using the RANSAC scheme.
static boolean
solvePnPRansac(MatOfPoint3f objectPoints, MatOfPoint2f imagePoints, Mat cameraMatrix, MatOfDouble distCoeffs, Mat rvec, Mat tvec, boolean useExtrinsicGuess, int iterationsCount)
Finds an object pose from 3D-2D point correspondences using the RANSAC scheme.
static boolean
solvePnPRansac(MatOfPoint3f objectPoints, MatOfPoint2f imagePoints, Mat cameraMatrix, MatOfDouble distCoeffs, Mat rvec, Mat tvec, boolean useExtrinsicGuess, int iterationsCount, float reprojectionError)
Finds an object pose from 3D-2D point correspondences using the RANSAC scheme.
static boolean
solvePnPRansac(MatOfPoint3f objectPoints, MatOfPoint2f imagePoints, Mat cameraMatrix, MatOfDouble distCoeffs, Mat rvec, Mat tvec, boolean useExtrinsicGuess, int iterationsCount, float reprojectionError, double confidence)
Finds an object pose from 3D-2D point correspondences using the RANSAC scheme.
static boolean
solvePnPRansac(MatOfPoint3f objectPoints, MatOfPoint2f imagePoints, Mat cameraMatrix, MatOfDouble distCoeffs, Mat rvec, Mat tvec, boolean useExtrinsicGuess, int iterationsCount, float reprojectionError, double confidence, Mat inliers)
Finds an object pose from 3D-2D point correspondences using the RANSAC scheme.
static boolean
solvePnPRansac(MatOfPoint3f objectPoints, MatOfPoint2f imagePoints, Mat cameraMatrix, MatOfDouble distCoeffs, Mat rvec, Mat tvec, boolean useExtrinsicGuess, int iterationsCount, float reprojectionError, double confidence, Mat inliers, int flags)
Finds an object pose from 3D-2D point correspondences using the RANSAC scheme.
static boolean
solvePnPRansac(MatOfPoint3f objectPoints, MatOfPoint2f imagePoints, Mat cameraMatrix, MatOfDouble distCoeffs, Mat rvec, Mat tvec, Mat inliers)
static boolean
solvePnPRansac(MatOfPoint3f objectPoints, MatOfPoint2f imagePoints, Mat cameraMatrix, MatOfDouble distCoeffs, Mat rvec, Mat tvec, Mat inliers, UsacParams params)
static void
solvePnPRefineLM(Mat objectPoints, Mat imagePoints, Mat cameraMatrix, Mat distCoeffs, Mat rvec, Mat tvec)
Refines a pose (the translation and the rotation that transform a 3D point expressed in the object coordinate frame to the camera coordinate frame) from 3D-2D point correspondences, starting from an initial solution.
static void
solvePnPRefineLM(Mat objectPoints, Mat imagePoints, Mat cameraMatrix, Mat distCoeffs, Mat rvec, Mat tvec, TermCriteria criteria)
Refines a pose (the translation and the rotation that transform a 3D point expressed in the object coordinate frame to the camera coordinate frame) from 3D-2D point correspondences, starting from an initial solution.
static void
solvePnPRefineVVS(Mat objectPoints, Mat imagePoints, Mat cameraMatrix, Mat distCoeffs, Mat rvec, Mat tvec)
Refines a pose (the translation and the rotation that transform a 3D point expressed in the object coordinate frame to the camera coordinate frame) from 3D-2D point correspondences, starting from an initial solution.
static void
solvePnPRefineVVS(Mat objectPoints, Mat imagePoints, Mat cameraMatrix, Mat distCoeffs, Mat rvec, Mat tvec, TermCriteria criteria)
Refines a pose (the translation and the rotation that transform a 3D point expressed in the object coordinate frame to the camera coordinate frame) from 3D-2D point correspondences, starting from an initial solution.
static void
solvePnPRefineVVS(Mat objectPoints, Mat imagePoints, Mat cameraMatrix, Mat distCoeffs, Mat rvec, Mat tvec, TermCriteria criteria, double VVSlambda)
Refines a pose (the translation and the rotation that transform a 3D point expressed in the object coordinate frame to the camera coordinate frame) from 3D-2D point correspondences, starting from an initial solution.
static double
stereoCalibrate(java.util.List<Mat> objectPoints, java.util.List<Mat> imagePoints1, java.util.List<Mat> imagePoints2, Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, Size imageSize, Mat R, Mat T, Mat E, Mat F)
static double
stereoCalibrate(java.util.List<Mat> objectPoints, java.util.List<Mat> imagePoints1, java.util.List<Mat> imagePoints2, Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, Size imageSize, Mat R, Mat T, Mat E, Mat F, int flags)
static double
stereoCalibrate(java.util.List<Mat> objectPoints, java.util.List<Mat> imagePoints1, java.util.List<Mat> imagePoints2, Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, Size imageSize, Mat R, Mat T, Mat E, Mat F, int flags, TermCriteria criteria)
static double
stereoCalibrate(java.util.List<Mat> objectPoints, java.util.List<Mat> imagePoints1, java.util.List<Mat> imagePoints2, Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, Size imageSize, Mat R, Mat T, Mat E, Mat F, Mat perViewErrors)
static double
stereoCalibrate(java.util.List<Mat> objectPoints, java.util.List<Mat> imagePoints1, java.util.List<Mat> imagePoints2, Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, Size imageSize, Mat R, Mat T, Mat E, Mat F, Mat perViewErrors, int flags)
static double
stereoCalibrate(java.util.List<Mat> objectPoints, java.util.List<Mat> imagePoints1, java.util.List<Mat> imagePoints2, Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, Size imageSize, Mat R, Mat T, Mat E, Mat F, Mat perViewErrors, int flags, TermCriteria criteria)
static double
stereoCalibrateExtended(java.util.List<Mat> objectPoints, java.util.List<Mat> imagePoints1, java.util.List<Mat> imagePoints2, Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, Size imageSize, Mat R, Mat T, Mat E, Mat F, java.util.List<Mat> rvecs, java.util.List<Mat> tvecs, Mat perViewErrors)
Calibrates a stereo camera setup.
static double
stereoCalibrateExtended(java.util.List<Mat> objectPoints, java.util.List<Mat> imagePoints1, java.util.List<Mat> imagePoints2, Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, Size imageSize, Mat R, Mat T, Mat E, Mat F, java.util.List<Mat> rvecs, java.util.List<Mat> tvecs, Mat perViewErrors, int flags)
Calibrates a stereo camera setup.
static double
stereoCalibrateExtended(java.util.List<Mat> objectPoints, java.util.List<Mat> imagePoints1, java.util.List<Mat> imagePoints2, Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, Size imageSize, Mat R, Mat T, Mat E, Mat F, java.util.List<Mat> rvecs, java.util.List<Mat> tvecs, Mat perViewErrors, int flags, TermCriteria criteria)
Calibrates a stereo camera setup.
static void
stereoRectify(Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, Size imageSize, Mat R, Mat T, Mat R1, Mat R2, Mat P1, Mat P2, Mat Q)
Computes rectification transforms for each head of a calibrated stereo camera.
static void
stereoRectify(Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, Size imageSize, Mat R, Mat T, Mat R1, Mat R2, Mat P1, Mat P2, Mat Q, int flags)
Computes rectification transforms for each head of a calibrated stereo camera.
static void
stereoRectify(Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, Size imageSize, Mat R, Mat T, Mat R1, Mat R2, Mat P1, Mat P2, Mat Q, int flags, double alpha)
Computes rectification transforms for each head of a calibrated stereo camera.
static void
stereoRectify(Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, Size imageSize, Mat R, Mat T, Mat R1, Mat R2, Mat P1, Mat P2, Mat Q, int flags, double alpha, Size newImageSize)
Computes rectification transforms for each head of a calibrated stereo camera.
static void
stereoRectify(Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, Size imageSize, Mat R, Mat T, Mat R1, Mat R2, Mat P1, Mat P2, Mat Q, int flags, double alpha, Size newImageSize, Rect validPixROI1)
Computes rectification transforms for each head of a calibrated stereo camera.
static void
stereoRectify(Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, Size imageSize, Mat R, Mat T, Mat R1, Mat R2, Mat P1, Mat P2, Mat Q, int flags, double alpha, Size newImageSize, Rect validPixROI1, Rect validPixROI2)
Computes rectification transforms for each head of a calibrated stereo camera.
static boolean
stereoRectifyUncalibrated(Mat points1, Mat points2, Mat F, Size imgSize, Mat H1, Mat H2)
Computes a rectification transform for an uncalibrated stereo camera.
static boolean
stereoRectifyUncalibrated(Mat points1, Mat points2, Mat F, Size imgSize, Mat H1, Mat H2, double threshold)
Computes a rectification transform for an uncalibrated stereo camera.
static void
triangulatePoints(Mat projMatr1, Mat projMatr2, Mat projPoints1, Mat projPoints2, Mat points4D)
This function reconstructs 3-dimensional points (in homogeneous coordinates) by using their observations with a stereo camera.
static void
undistort(Mat src, Mat dst, Mat cameraMatrix, Mat distCoeffs)
Transforms an image to compensate for lens distortion.
static void
undistort(Mat src, Mat dst, Mat cameraMatrix, Mat distCoeffs, Mat newCameraMatrix)
Transforms an image to compensate for lens distortion.
static void
undistortImagePoints(Mat src, Mat dst, Mat cameraMatrix, Mat distCoeffs)
Computes undistorted image point positions.
static void
undistortImagePoints(Mat src, Mat dst, Mat cameraMatrix, Mat distCoeffs, TermCriteria arg1)
Computes undistorted image point positions.
static void
undistortPoints(MatOfPoint2f src, MatOfPoint2f dst, Mat cameraMatrix, Mat distCoeffs)
Computes the ideal point coordinates from the observed point coordinates.
static void
undistortPoints(MatOfPoint2f src, MatOfPoint2f dst, Mat cameraMatrix, Mat distCoeffs, Mat R)
Computes the ideal point coordinates from the observed point coordinates.
static void
undistortPoints(MatOfPoint2f src, MatOfPoint2f dst, Mat cameraMatrix, Mat distCoeffs, Mat R, Mat P)
Computes the ideal point coordinates from the observed point coordinates.
static void
undistortPointsIter(Mat src, Mat dst, Mat cameraMatrix, Mat distCoeffs, Mat R, Mat P, TermCriteria criteria)
Note: The default version of undistortPoints does 5 iterations to compute undistorted points.
static void
validateDisparity(Mat disparity, Mat cost, int minDisparity, int numberOfDisparities)
static void
validateDisparity(Mat disparity, Mat cost, int minDisparity, int numberOfDisparities, int disp12MaxDisp)
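
The following minimal sketch ties several of the methods above together: chessboard detection with findChessboardCorners, intrinsic calibration with calibrateCamera, and correction of a view with undistort. The image file names, the 9x6 board geometry, and the class name CalibrationSketch are illustrative assumptions, not part of this API.

import java.util.ArrayList;
import java.util.List;
import org.opencv.calib3d.Calib3d;
import org.opencv.core.*;
import org.opencv.imgcodecs.Imgcodecs;

public class CalibrationSketch {
    public static void main(String[] args) {
        Size patternSize = new Size(9, 6);           // inner corners of the assumed board
        List<Mat> objectPoints = new ArrayList<>();
        List<Mat> imagePoints = new ArrayList<>();

        // One set of 3D board coordinates on the Z = 0 plane, reused for every view.
        MatOfPoint3f objp = new MatOfPoint3f();
        List<Point3> corners3d = new ArrayList<>();
        for (int i = 0; i < 6; i++)
            for (int j = 0; j < 9; j++)
                corners3d.add(new Point3(j, i, 0));
        objp.fromList(corners3d);

        Size imageSize = new Size();
        for (int k = 0; k < 10; k++) {               // assumed file names view0.png ... view9.png
            Mat img = Imgcodecs.imread("view" + k + ".png", Imgcodecs.IMREAD_GRAYSCALE);
            imageSize = img.size();
            MatOfPoint2f corners = new MatOfPoint2f();
            if (Calib3d.findChessboardCorners(img, patternSize, corners)) {
                imagePoints.add(corners);
                objectPoints.add(objp);
            }
        }

        Mat cameraMatrix = new Mat();
        Mat distCoeffs = new Mat();
        List<Mat> rvecs = new ArrayList<>();
        List<Mat> tvecs = new ArrayList<>();
        double rms = Calib3d.calibrateCamera(objectPoints, imagePoints, imageSize,
                cameraMatrix, distCoeffs, rvecs, tvecs);
        System.out.println("RMS reprojection error: " + rms);

        // Undistort one of the views with the recovered intrinsics.
        Mat src = Imgcodecs.imread("view0.png");
        Mat dst = new Mat();
        Calib3d.undistort(src, dst, cameraMatrix, distCoeffs);
    }
}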



Field Detail

CV_ITERATIVE
public static final int CV_ITERATIVE
 See Also:
 Constant Field Values

CV_EPNP
public static final int CV_EPNP
 See Also:
 Constant Field Values

CV_P3P
public static final int CV_P3P
 See Also:
 Constant Field Values

CV_DLS
public static final int CV_DLS
 See Also:
 Constant Field Values

CvLevMarq_DONE
public static final int CvLevMarq_DONE
 See Also:
 Constant Field Values

CvLevMarq_STARTED
public static final int CvLevMarq_STARTED
 See Also:
 Constant Field Values

CvLevMarq_CALC_J
public static final int CvLevMarq_CALC_J
 See Also:
 Constant Field Values

CvLevMarq_CHECK_ERR
public static final int CvLevMarq_CHECK_ERR
 See Also:
 Constant Field Values

LMEDS
public static final int LMEDS
 See Also:
 Constant Field Values

RANSAC
public static final int RANSAC
 See Also:
 Constant Field Values
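
A short sketch of how the RANSAC constant above is typically passed as the method argument of findHomography. The point data and the class name HomographySketch are made up for illustration; the output mask marks which input pairs were kept as inliers.

import org.opencv.calib3d.Calib3d;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint2f;
import org.opencv.core.Point;

public class HomographySketch {
    public static void main(String[] args) {
        MatOfPoint2f srcPoints = new MatOfPoint2f(
                new Point(0, 0), new Point(100, 0), new Point(100, 100),
                new Point(0, 100), new Point(50, 50));
        MatOfPoint2f dstPoints = new MatOfPoint2f(
                new Point(10, 10), new Point(110, 12), new Point(108, 112),
                new Point(8, 110), new Point(60, 61));

        Mat mask = new Mat();    // per input point: 1 = inlier, 0 = outlier
        Mat H = Calib3d.findHomography(srcPoints, dstPoints, Calib3d.RANSAC, 3.0, mask);
        System.out.println("H = " + H.dump());
    }
}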

RHO
public static final int RHO
 See Also:
 Constant Field Values

USAC_DEFAULT
public static final int USAC_DEFAULT
 See Also:
 Constant Field Values

USAC_PARALLEL
public static final int USAC_PARALLEL
 See Also:
 Constant Field Values

USAC_FM_8PTS
public static final int USAC_FM_8PTS
 See Also:
 Constant Field Values

USAC_FAST
public static final int USAC_FAST
 See Also:
 Constant Field Values

USAC_ACCURATE
public static final int USAC_ACCURATE
 See Also:
 Constant Field Values

USAC_PROSAC
public static final int USAC_PROSAC
 See Also:
 Constant Field Values

USAC_MAGSAC
public static final int USAC_MAGSAC
 See Also:
 Constant Field Values

CALIB_CB_ADAPTIVE_THRESH
public static final int CALIB_CB_ADAPTIVE_THRESH
 See Also:
 Constant Field Values

CALIB_CB_NORMALIZE_IMAGE
public static final int CALIB_CB_NORMALIZE_IMAGE
 See Also:
 Constant Field Values

CALIB_CB_FILTER_QUADS
public static final int CALIB_CB_FILTER_QUADS
 See Also:
 Constant Field Values

CALIB_CB_FAST_CHECK
public static final int CALIB_CB_FAST_CHECK
 See Also:
 Constant Field Values

CALIB_CB_EXHAUSTIVE
public static final int CALIB_CB_EXHAUSTIVE
 See Also:
 Constant Field Values

CALIB_CB_ACCURACY
public static final int CALIB_CB_ACCURACY
 See Also:
 Constant Field Values

CALIB_CB_LARGER
public static final int CALIB_CB_LARGER
 See Also:
 Constant Field Values

CALIB_CB_MARKER
public static final int CALIB_CB_MARKER
 See Also:
 Constant Field Values

CALIB_CB_SYMMETRIC_GRID
public static final int CALIB_CB_SYMMETRIC_GRID
 See Also:
 Constant Field Values

CALIB_CB_ASYMMETRIC_GRID
public static final int CALIB_CB_ASYMMETRIC_GRID
 See Also:
 Constant Field Values

CALIB_CB_CLUSTERING
public static final int CALIB_CB_CLUSTERING
 See Also:
 Constant Field Values

CALIB_NINTRINSIC
public static final int CALIB_NINTRINSIC
 See Also:
 Constant Field Values

CALIB_USE_INTRINSIC_GUESS
public static final int CALIB_USE_INTRINSIC_GUESS
 See Also:
 Constant Field Values

CALIB_FIX_ASPECT_RATIO
public static final int CALIB_FIX_ASPECT_RATIO
 See Also:
 Constant Field Values

CALIB_FIX_PRINCIPAL_POINT
public static final int CALIB_FIX_PRINCIPAL_POINT
 See Also:
 Constant Field Values

CALIB_ZERO_TANGENT_DIST
public static final int CALIB_ZERO_TANGENT_DIST
 See Also:
 Constant Field Values

CALIB_FIX_FOCAL_LENGTH
public static final int CALIB_FIX_FOCAL_LENGTH
 See Also:
 Constant Field Values

CALIB_FIX_K1
public static final int CALIB_FIX_K1
 See Also:
 Constant Field Values

CALIB_FIX_K2
public static final int CALIB_FIX_K2
 See Also:
 Constant Field Values

CALIB_FIX_K3
public static final int CALIB_FIX_K3
 See Also:
 Constant Field Values

CALIB_FIX_K4
public static final int CALIB_FIX_K4
 See Also:
 Constant Field Values

CALIB_FIX_K5
public static final int CALIB_FIX_K5
 See Also:
 Constant Field Values

CALIB_FIX_K6
public static final int CALIB_FIX_K6
 See Also:
 Constant Field Values

CALIB_RATIONAL_MODEL
public static final int CALIB_RATIONAL_MODEL
 See Also:
 Constant Field Values

CALIB_THIN_PRISM_MODEL
public static final int CALIB_THIN_PRISM_MODEL
 See Also:
 Constant Field Values

CALIB_FIX_S1_S2_S3_S4
public static final int CALIB_FIX_S1_S2_S3_S4
 See Also:
 Constant Field Values

CALIB_TILTED_MODEL
public static final int CALIB_TILTED_MODEL
 See Also:
 Constant Field Values

CALIB_FIX_TAUX_TAUY
public static final int CALIB_FIX_TAUX_TAUY
 See Also:
 Constant Field Values

CALIB_USE_QR
public static final int CALIB_USE_QR
 See Also:
 Constant Field Values

CALIB_FIX_TANGENT_DIST
public static final int CALIB_FIX_TANGENT_DIST
 See Also:
 Constant Field Values

CALIB_FIX_INTRINSIC
public static final int CALIB_FIX_INTRINSIC
 See Also:
 Constant Field Values

CALIB_SAME_FOCAL_LENGTH
public static final int CALIB_SAME_FOCAL_LENGTH
 See Also:
 Constant Field Values

CALIB_ZERO_DISPARITY
public static final int CALIB_ZERO_DISPARITY
 See Also:
 Constant Field Values

CALIB_USE_LU
public static final int CALIB_USE_LU
 See Also:
 Constant Field Values

CALIB_USE_EXTRINSIC_GUESS
public static final int CALIB_USE_EXTRINSIC_GUESS
 See Also:
 Constant Field Values
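
The CALIB_* values above are bit flags and are meant to be combined with bitwise OR. A minimal sketch, assuming correspondences gathered as in the calibration sketch earlier; here tangential distortion is forced to zero and the k3 coefficient is held fixed:

import java.util.List;
import org.opencv.calib3d.Calib3d;
import org.opencv.core.Mat;
import org.opencv.core.Size;

public class CalibFlagsSketch {
    // Assumed inputs: objectPoints/imagePoints/imageSize from a prior detection step.
    static double calibrateWithFixedModel(List<Mat> objectPoints, List<Mat> imagePoints,
                                          Size imageSize, Mat cameraMatrix, Mat distCoeffs,
                                          List<Mat> rvecs, List<Mat> tvecs) {
        // Combine flags with bitwise OR.
        int flags = Calib3d.CALIB_ZERO_TANGENT_DIST | Calib3d.CALIB_FIX_K3;
        return Calib3d.calibrateCamera(objectPoints, imagePoints, imageSize,
                cameraMatrix, distCoeffs, rvecs, tvecs, flags);
    }
}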

FM_7POINT
public static final int FM_7POINT
 See Also:
 Constant Field Values

FM_8POINT
public static final int FM_8POINT
 See Also:
 Constant Field Values

FM_LMEDS
public static final int FM_LMEDS
 See Also:
 Constant Field Values

FM_RANSAC
public static final int FM_RANSAC
 See Also:
 Constant Field Values

fisheye_CALIB_USE_INTRINSIC_GUESS
public static final int fisheye_CALIB_USE_INTRINSIC_GUESS
 See Also:
 Constant Field Values

fisheye_CALIB_RECOMPUTE_EXTRINSIC
public static final int fisheye_CALIB_RECOMPUTE_EXTRINSIC
 See Also:
 Constant Field Values

fisheye_CALIB_CHECK_COND
public static final int fisheye_CALIB_CHECK_COND
 See Also:
 Constant Field Values

fisheye_CALIB_FIX_SKEW
public static final int fisheye_CALIB_FIX_SKEW
 See Also:
 Constant Field Values

fisheye_CALIB_FIX_K1
public static final int fisheye_CALIB_FIX_K1
 See Also:
 Constant Field Values

fisheye_CALIB_FIX_K2
public static final int fisheye_CALIB_FIX_K2
 See Also:
 Constant Field Values

fisheye_CALIB_FIX_K3
public static final int fisheye_CALIB_FIX_K3
 See Also:
 Constant Field Values

fisheye_CALIB_FIX_K4
public static final int fisheye_CALIB_FIX_K4
 See Also:
 Constant Field Values

fisheye_CALIB_FIX_INTRINSIC
public static final int fisheye_CALIB_FIX_INTRINSIC
 See Also:
 Constant Field Values

fisheye_CALIB_FIX_PRINCIPAL_POINT
public static final int fisheye_CALIB_FIX_PRINCIPAL_POINT
 See Also:
 Constant Field Values

fisheye_CALIB_ZERO_DISPARITY
public static final int fisheye_CALIB_ZERO_DISPARITY
 See Also:
 Constant Field Values

fisheye_CALIB_FIX_FOCAL_LENGTH
public static final int fisheye_CALIB_FIX_FOCAL_LENGTH
 See Also:
 Constant Field Values

CirclesGridFinderParameters_SYMMETRIC_GRID
public static final int CirclesGridFinderParameters_SYMMETRIC_GRID
 See Also:
 Constant Field Values

CirclesGridFinderParameters_ASYMMETRIC_GRID
public static final int CirclesGridFinderParameters_ASYMMETRIC_GRID
 See Also:
 Constant Field Values

CALIB_HAND_EYE_TSAI
public static final int CALIB_HAND_EYE_TSAI
 See Also:
 Constant Field Values

CALIB_HAND_EYE_PARK
public static final int CALIB_HAND_EYE_PARK
 See Also:
 Constant Field Values

CALIB_HAND_EYE_HORAUD
public static final int CALIB_HAND_EYE_HORAUD
 See Also:
 Constant Field Values

CALIB_HAND_EYE_ANDREFF
public static final int CALIB_HAND_EYE_ANDREFF
 See Also:
 Constant Field Values

CALIB_HAND_EYE_DANIILIDIS
public static final int CALIB_HAND_EYE_DANIILIDIS
 See Also:
 Constant Field Values
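
A sketch of passing one of the CALIB_HAND_EYE_* constants above as the method argument of calibrateHandEye. The pose lists (3x3 rotation Mats and 3x1 translation Mats, one per robot station) are assumed to come from the robot controller and a calibration-target detector:

import java.util.List;
import org.opencv.calib3d.Calib3d;
import org.opencv.core.Mat;

public class HandEyeSketch {
    // Assumed inputs: one gripper-to-base pose and one target-to-camera pose per station.
    static void solve(List<Mat> R_gripper2base, List<Mat> t_gripper2base,
                      List<Mat> R_target2cam, List<Mat> t_target2cam) {
        Mat R_cam2gripper = new Mat();
        Mat t_cam2gripper = new Mat();
        Calib3d.calibrateHandEye(R_gripper2base, t_gripper2base,
                R_target2cam, t_target2cam,
                R_cam2gripper, t_cam2gripper,
                Calib3d.CALIB_HAND_EYE_TSAI);
    }
}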

LOCAL_OPTIM_NULL
public static final int LOCAL_OPTIM_NULL
 See Also:
 Constant Field Values

LOCAL_OPTIM_INNER_LO
public static final int LOCAL_OPTIM_INNER_LO
 See Also:
 Constant Field Values

LOCAL_OPTIM_INNER_AND_ITER_LO
public static final int LOCAL_OPTIM_INNER_AND_ITER_LO
 See Also:
 Constant Field Values

LOCAL_OPTIM_GC
public static final int LOCAL_OPTIM_GC
 See Also:
 Constant Field Values

LOCAL_OPTIM_SIGMA
public static final int LOCAL_OPTIM_SIGMA
 See Also:
 Constant Field Values

NEIGH_FLANN_KNN
public static final int NEIGH_FLANN_KNN
 See Also:
 Constant Field Values

NEIGH_GRID
public static final int NEIGH_GRID
 See Also:
 Constant Field Values

NEIGH_FLANN_RADIUS
public static final int NEIGH_FLANN_RADIUS
 See Also:
 Constant Field Values

NONE_POLISHER
public static final int NONE_POLISHER
 See Also:
 Constant Field Values

LSQ_POLISHER
public static final int LSQ_POLISHER
 See Also:
 Constant Field Values

MAGSAC
public static final int MAGSAC
 See Also:
 Constant Field Values

COV_POLISHER
public static final int COV_POLISHER
 See Also:
 Constant Field Values

CALIB_ROBOT_WORLD_HAND_EYE_SHAH
public static final int CALIB_ROBOT_WORLD_HAND_EYE_SHAH
 See Also:
 Constant Field Values

CALIB_ROBOT_WORLD_HAND_EYE_LI
public static final int CALIB_ROBOT_WORLD_HAND_EYE_LI
 See Also:
 Constant Field Values

SAMPLING_UNIFORM
public static final int SAMPLING_UNIFORM
 See Also:
 Constant Field Values

SAMPLING_PROGRESSIVE_NAPSAC
public static final int SAMPLING_PROGRESSIVE_NAPSAC
 See Also:
 Constant Field Values

SAMPLING_NAPSAC
public static final int SAMPLING_NAPSAC
 See Also:
 Constant Field Values

SAMPLING_PROSAC
public static final int SAMPLING_PROSAC
 See Also:
 Constant Field Values

SCORE_METHOD_RANSAC
public static final int SCORE_METHOD_RANSAC
 See Also:
 Constant Field Values

SCORE_METHOD_MSAC
public static final int SCORE_METHOD_MSAC
 See Also:
 Constant Field Values

SCORE_METHOD_MAGSAC
public static final int SCORE_METHOD_MAGSAC
 See Also:
 Constant Field Values

SCORE_METHOD_LMEDS
public static final int SCORE_METHOD_LMEDS
 See Also:
 Constant Field Values

SOLVEPNP_ITERATIVE
public static final int SOLVEPNP_ITERATIVE
 See Also:
 Constant Field Values

SOLVEPNP_EPNP
public static final int SOLVEPNP_EPNP
 See Also:
 Constant Field Values

SOLVEPNP_P3P
public static final int SOLVEPNP_P3P
 See Also:
 Constant Field Values

SOLVEPNP_DLS
public static final int SOLVEPNP_DLS
 See Also:
 Constant Field Values

SOLVEPNP_UPNP
public static final int SOLVEPNP_UPNP
 See Also:
 Constant Field Values

SOLVEPNP_AP3P
public static final int SOLVEPNP_AP3P
 See Also:
 Constant Field Values

SOLVEPNP_IPPE
public static final int SOLVEPNP_IPPE
 See Also:
 Constant Field Values

SOLVEPNP_IPPE_SQUARE
public static final int SOLVEPNP_IPPE_SQUARE
 See Also:
 Constant Field Values

SOLVEPNP_SQPNP
public static final int SOLVEPNP_SQPNP
 See Also:
 Constant Field Values

SOLVEPNP_MAX_COUNT
public static final int SOLVEPNP_MAX_COUNT
 See Also:
 Constant Field Values

PROJ_SPHERICAL_ORTHO
public static final int PROJ_SPHERICAL_ORTHO
 See Also:
 Constant Field Values

PROJ_SPHERICAL_EQRECT
public static final int PROJ_SPHERICAL_EQRECT
 See Also:
 Constant Field Values


Method Detail

Rodrigues
public static void Rodrigues(Mat src, Mat dst, Mat jacobian)
Converts a rotation matrix to a rotation vector or vice versa. Parameters:
src
 Input rotation vector (3x1 or 1x3) or rotation matrix (3x3).dst
 Output rotation matrix (3x3) or rotation vector (3x1 or 1x3), respectively.jacobian
 Optional output Jacobian matrix, 3x9 or 9x3, which is a matrix of partial derivatives of the output array components with respect to the input array components. \(\begin{array}{l} \theta \leftarrow norm(r) \\ r \leftarrow r/ \theta \\ R = \cos(\theta) I + (1 - \cos\theta) r r^T + \sin(\theta) \vecthreethree{0}{-r_z}{r_y}{r_z}{0}{-r_x}{-r_y}{r_x}{0} \end{array}\) The inverse transformation can also be done easily, since \(\sin ( \theta ) \vecthreethree{0}{-r_z}{r_y}{r_z}{0}{-r_x}{-r_y}{r_x}{0} = \frac{R - R^T}{2}\) A rotation vector is a convenient and most compact representation of a rotation matrix (since any rotation matrix has just 3 degrees of freedom). The representation is used in the global 3D geometry optimization procedures like REF: calibrateCamera, REF: stereoCalibrate, or REF: solvePnP . Note: More information about the computation of the derivative of a 3D rotation matrix with respect to its exponential coordinate can be found in: A Compact Formula for the Derivative of a 3D Rotation in Exponential Coordinates, Guillermo Gallego, Anthony J. Yezzi CITE: Gallego2014ACF
 A tutorial on SE(3) transformation parameterizations and on-manifold optimization, Jose-Luis Blanco CITE: blanco2010tutorial
 Lie Groups for 2D and 3D Transformation, Ethan Eade CITE: Eade17
 A micro Lie theory for state estimation in robotics, Joan Solà, Jérémie Deray, Dinesh Atchuthan CITE: Sol2018AML
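For illustration, a minimal sketch (hypothetical values; assumes org.opencv.core.Mat, org.opencv.core.CvType, and org.opencv.calib3d.Calib3d are imported and the OpenCV native library is loaded) that round-trips a rotation vector through Rodrigues:

 // 90-degree rotation about the z-axis as a rotation vector
 Mat rvec = new Mat(3, 1, CvType.CV_64F);
 rvec.put(0, 0, 0.0, 0.0, Math.PI / 2);
 Mat R = new Mat();
 Calib3d.Rodrigues(rvec, R);        // rotation vector -> 3x3 rotation matrix
 Mat rvecBack = new Mat();
 Calib3d.Rodrigues(R, rvecBack);    // 3x3 rotation matrix -> rotation vector again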

Rodrigues
public static void Rodrigues(Mat src, Mat dst)
Converts a rotation matrix to a rotation vector or vice versa. Parameters:
src
 Input rotation vector (3x1 or 1x3) or rotation matrix (3x3).dst
 Output rotation matrix (3x3) or rotation vector (3x1 or 1x3), respectively. \(\begin{array}{l} \theta \leftarrow norm(r) \\ r \leftarrow r/ \theta \\ R = \cos(\theta) I + (1 - \cos\theta) r r^T + \sin(\theta) \vecthreethree{0}{-r_z}{r_y}{r_z}{0}{-r_x}{-r_y}{r_x}{0} \end{array}\) The inverse transformation can also be done easily, since \(\sin ( \theta ) \vecthreethree{0}{-r_z}{r_y}{r_z}{0}{-r_x}{-r_y}{r_x}{0} = \frac{R - R^T}{2}\) A rotation vector is a convenient and most compact representation of a rotation matrix (since any rotation matrix has just 3 degrees of freedom). The representation is used in the global 3D geometry optimization procedures like REF: calibrateCamera, REF: stereoCalibrate, or REF: solvePnP . Note: More information about the computation of the derivative of a 3D rotation matrix with respect to its exponential coordinate can be found in: A Compact Formula for the Derivative of a 3D Rotation in Exponential Coordinates, Guillermo Gallego, Anthony J. Yezzi CITE: Gallego2014ACF
 A tutorial on SE(3) transformation parameterizations and on-manifold optimization, Jose-Luis Blanco CITE: blanco2010tutorial
 Lie Groups for 2D and 3D Transformation, Ethan Eade CITE: Eade17
 A micro Lie theory for state estimation in robotics, Joan Solà, Jérémie Deray, Dinesh Atchuthan CITE: Sol2018AML

findHomography
public static Mat findHomography(MatOfPoint2f srcPoints, MatOfPoint2f dstPoints, int method, double ransacReprojThreshold, Mat mask, int maxIters, double confidence)
Finds a perspective transformation between two planes. Parameters:
srcPoints
 Coordinates of the points in the original plane, a matrix of the type CV_32FC2 or vector<Point2f> .dstPoints
 Coordinates of the points in the target plane, a matrix of the type CV_32FC2 or a vector<Point2f> .method
 Method used to compute a homography matrix. The following methods are possible: 0 - a regular method using all the points, i.e., the least squares method
 REF: RANSAC - RANSAC-based robust method
 REF: LMEDS - Least-Median robust method
 REF: RHO - PROSAC-based robust method
ransacReprojThreshold
 Maximum allowed reprojection error to treat a point pair as an inlier (used in the RANSAC and RHO methods only). That is, if \(\| \texttt{dstPoints} _i - \texttt{convertPointsHomogeneous} ( \texttt{H} \cdot \texttt{srcPoints} _i) \|_2 > \texttt{ransacReprojThreshold}\) then the point \(i\) is considered as an outlier. If srcPoints and dstPoints are measured in pixels, it usually makes sense to set this parameter somewhere in the range of 1 to 10.mask
 Optional output mask set by a robust method ( RANSAC or LMeDS ). Note that the input mask values are ignored.maxIters
 The maximum number of RANSAC iterations.confidence
 Confidence level, between 0 and 1. The function finds and returns the perspective transformation \(H\) between the source and the destination planes: \(s_i \vecthree{x'_i}{y'_i}{1} \sim H \vecthree{x_i}{y_i}{1}\) so that the back-projection error \(\sum _i \left ( x'_i - \frac{h_{11} x_i + h_{12} y_i + h_{13}}{h_{31} x_i + h_{32} y_i + h_{33}} \right )^2 + \left ( y'_i - \frac{h_{21} x_i + h_{22} y_i + h_{23}}{h_{31} x_i + h_{32} y_i + h_{33}} \right )^2\) is minimized. If the parameter method is set to the default value 0, the function uses all the point pairs to compute an initial homography estimate with a simple least-squares scheme. However, if not all of the point pairs ( \(srcPoints_i\), \(dstPoints_i\) ) fit the rigid perspective transformation (that is, there are some outliers), this initial estimate will be poor. In this case, you can use one of the three robust methods. The methods RANSAC, LMeDS and RHO try many different random subsets of the corresponding point pairs (of four pairs each, collinear pairs are discarded), estimate the homography matrix using this subset and a simple least-squares algorithm, and then compute the quality/goodness of the computed homography (which is the number of inliers for RANSAC or the least median reprojection error for LMeDS). The best subset is then used to produce the initial estimate of the homography matrix and the mask of inliers/outliers. Regardless of the method, robust or not, the computed homography matrix is refined further (using inliers only in case of a robust method) with the Levenberg-Marquardt method to reduce the reprojection error even more. The methods RANSAC and RHO can handle practically any ratio of outliers but need a threshold to distinguish inliers from outliers. The method LMeDS does not need any threshold but it works correctly only when there are more than 50% of inliers. Finally, if there are no outliers and the noise is rather small, use the default method (method=0). The function is used to find initial intrinsic and extrinsic matrices. The homography matrix is determined up to a scale. Thus, it is normalized so that \(h_{33}=1\). Note that whenever an \(H\) matrix cannot be estimated, an empty one will be returned. SEE: getAffineTransform, estimateAffine2D, estimateAffinePartial2D, getPerspectiveTransform, warpPerspective, perspectiveTransform Returns:
 automatically generated
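A minimal sketch (hypothetical point coordinates; additionally assumes org.opencv.core.MatOfPoint2f and org.opencv.core.Point are imported) that estimates a homography with RANSAC and checks for failure:

 MatOfPoint2f src = new MatOfPoint2f(
     new Point(0, 0), new Point(100, 0), new Point(100, 100), new Point(0, 100));
 MatOfPoint2f dst = new MatOfPoint2f(
     new Point(10, 10), new Point(110, 12), new Point(108, 112), new Point(8, 110));
 Mat mask = new Mat();   // inlier/outlier mask filled by the robust method
 Mat H = Calib3d.findHomography(src, dst, Calib3d.RANSAC, 3.0, mask, 2000, 0.995);
 if (H.empty()) {
     // estimation failed; no homography could be computed
 }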

findHomography
public static Mat findHomography(MatOfPoint2f srcPoints, MatOfPoint2f dstPoints, int method, double ransacReprojThreshold, Mat mask, int maxIters)
Finds a perspective transformation between two planes. Parameters:
srcPoints
 Coordinates of the points in the original plane, a matrix of the type CV_32FC2 or vector<Point2f> .dstPoints
 Coordinates of the points in the target plane, a matrix of the type CV_32FC2 or a vector<Point2f> .method
 Method used to compute a homography matrix. The following methods are possible: 0 - a regular method using all the points, i.e., the least squares method
 REF: RANSAC - RANSAC-based robust method
 REF: LMEDS - Least-Median robust method
 REF: RHO - PROSAC-based robust method
ransacReprojThreshold
 Maximum allowed reprojection error to treat a point pair as an inlier (used in the RANSAC and RHO methods only). That is, if \(\| \texttt{dstPoints} _i - \texttt{convertPointsHomogeneous} ( \texttt{H} \cdot \texttt{srcPoints} _i) \|_2 > \texttt{ransacReprojThreshold}\) then the point \(i\) is considered as an outlier. If srcPoints and dstPoints are measured in pixels, it usually makes sense to set this parameter somewhere in the range of 1 to 10.mask
 Optional output mask set by a robust method ( RANSAC or LMeDS ). Note that the input mask values are ignored.maxIters
 The maximum number of RANSAC iterations. The function finds and returns the perspective transformation \(H\) between the source and the destination planes: \(s_i \vecthree{x'_i}{y'_i}{1} \sim H \vecthree{x_i}{y_i}{1}\) so that the back-projection error \(\sum _i \left ( x'_i - \frac{h_{11} x_i + h_{12} y_i + h_{13}}{h_{31} x_i + h_{32} y_i + h_{33}} \right )^2 + \left ( y'_i - \frac{h_{21} x_i + h_{22} y_i + h_{23}}{h_{31} x_i + h_{32} y_i + h_{33}} \right )^2\) is minimized. If the parameter method is set to the default value 0, the function uses all the point pairs to compute an initial homography estimate with a simple least-squares scheme. However, if not all of the point pairs ( \(srcPoints_i\), \(dstPoints_i\) ) fit the rigid perspective transformation (that is, there are some outliers), this initial estimate will be poor. In this case, you can use one of the three robust methods. The methods RANSAC, LMeDS and RHO try many different random subsets of the corresponding point pairs (of four pairs each, collinear pairs are discarded), estimate the homography matrix using this subset and a simple least-squares algorithm, and then compute the quality/goodness of the computed homography (which is the number of inliers for RANSAC or the least median reprojection error for LMeDS). The best subset is then used to produce the initial estimate of the homography matrix and the mask of inliers/outliers. Regardless of the method, robust or not, the computed homography matrix is refined further (using inliers only in case of a robust method) with the Levenberg-Marquardt method to reduce the reprojection error even more. The methods RANSAC and RHO can handle practically any ratio of outliers but need a threshold to distinguish inliers from outliers. The method LMeDS does not need any threshold but it works correctly only when there are more than 50% of inliers. Finally, if there are no outliers and the noise is rather small, use the default method (method=0). The function is used to find initial intrinsic and extrinsic matrices. The homography matrix is determined up to a scale. Thus, it is normalized so that \(h_{33}=1\). Note that whenever an \(H\) matrix cannot be estimated, an empty one will be returned. SEE: getAffineTransform, estimateAffine2D, estimateAffinePartial2D, getPerspectiveTransform, warpPerspective, perspectiveTransform Returns:
 automatically generated

findHomography
public static Mat findHomography(MatOfPoint2f srcPoints, MatOfPoint2f dstPoints, int method, double ransacReprojThreshold, Mat mask)
Finds a perspective transformation between two planes. Parameters:
srcPoints
 Coordinates of the points in the original plane, a matrix of the type CV_32FC2 or vector<Point2f> .dstPoints
 Coordinates of the points in the target plane, a matrix of the type CV_32FC2 or a vector<Point2f> .method
 Method used to compute a homography matrix. The following methods are possible: 0 - a regular method using all the points, i.e., the least squares method
 REF: RANSAC - RANSAC-based robust method
 REF: LMEDS - Least-Median robust method
 REF: RHO - PROSAC-based robust method
ransacReprojThreshold
 Maximum allowed reprojection error to treat a point pair as an inlier (used in the RANSAC and RHO methods only). That is, if \(\| \texttt{dstPoints} _i - \texttt{convertPointsHomogeneous} ( \texttt{H} \cdot \texttt{srcPoints} _i) \|_2 > \texttt{ransacReprojThreshold}\) then the point \(i\) is considered as an outlier. If srcPoints and dstPoints are measured in pixels, it usually makes sense to set this parameter somewhere in the range of 1 to 10.mask
 Optional output mask set by a robust method ( RANSAC or LMeDS ). Note that the input mask values are ignored. The function finds and returns the perspective transformation \(H\) between the source and the destination planes: \(s_i \vecthree{x'_i}{y'_i}{1} \sim H \vecthree{x_i}{y_i}{1}\) so that the back-projection error \(\sum _i \left ( x'_i - \frac{h_{11} x_i + h_{12} y_i + h_{13}}{h_{31} x_i + h_{32} y_i + h_{33}} \right )^2 + \left ( y'_i - \frac{h_{21} x_i + h_{22} y_i + h_{23}}{h_{31} x_i + h_{32} y_i + h_{33}} \right )^2\) is minimized. If the parameter method is set to the default value 0, the function uses all the point pairs to compute an initial homography estimate with a simple least-squares scheme. However, if not all of the point pairs ( \(srcPoints_i\), \(dstPoints_i\) ) fit the rigid perspective transformation (that is, there are some outliers), this initial estimate will be poor. In this case, you can use one of the three robust methods. The methods RANSAC, LMeDS and RHO try many different random subsets of the corresponding point pairs (of four pairs each, collinear pairs are discarded), estimate the homography matrix using this subset and a simple least-squares algorithm, and then compute the quality/goodness of the computed homography (which is the number of inliers for RANSAC or the least median reprojection error for LMeDS). The best subset is then used to produce the initial estimate of the homography matrix and the mask of inliers/outliers. Regardless of the method, robust or not, the computed homography matrix is refined further (using inliers only in case of a robust method) with the Levenberg-Marquardt method to reduce the reprojection error even more. The methods RANSAC and RHO can handle practically any ratio of outliers but need a threshold to distinguish inliers from outliers. The method LMeDS does not need any threshold but it works correctly only when there are more than 50% of inliers. Finally, if there are no outliers and the noise is rather small, use the default method (method=0). The function is used to find initial intrinsic and extrinsic matrices. The homography matrix is determined up to a scale. Thus, it is normalized so that \(h_{33}=1\). Note that whenever an \(H\) matrix cannot be estimated, an empty one will be returned. SEE: getAffineTransform, estimateAffine2D, estimateAffinePartial2D, getPerspectiveTransform, warpPerspective, perspectiveTransform Returns:
 automatically generated

findHomography
public static Mat findHomography(MatOfPoint2f srcPoints, MatOfPoint2f dstPoints, int method, double ransacReprojThreshold)
Finds a perspective transformation between two planes. Parameters:
srcPoints
 Coordinates of the points in the original plane, a matrix of the type CV_32FC2 or vector<Point2f> .dstPoints
 Coordinates of the points in the target plane, a matrix of the type CV_32FC2 or a vector<Point2f> .method
 Method used to compute a homography matrix. The following methods are possible: 0 - a regular method using all the points, i.e., the least squares method
 REF: RANSAC - RANSAC-based robust method
 REF: LMEDS - Least-Median robust method
 REF: RHO - PROSAC-based robust method
ransacReprojThreshold
 Maximum allowed reprojection error to treat a point pair as an inlier (used in the RANSAC and RHO methods only). That is, if \(\| \texttt{dstPoints} _i - \texttt{convertPointsHomogeneous} ( \texttt{H} \cdot \texttt{srcPoints} _i) \|_2 > \texttt{ransacReprojThreshold}\) then the point \(i\) is considered as an outlier. If srcPoints and dstPoints are measured in pixels, it usually makes sense to set this parameter somewhere in the range of 1 to 10. The function finds and returns the perspective transformation \(H\) between the source and the destination planes: \(s_i \vecthree{x'_i}{y'_i}{1} \sim H \vecthree{x_i}{y_i}{1}\) so that the back-projection error \(\sum _i \left ( x'_i - \frac{h_{11} x_i + h_{12} y_i + h_{13}}{h_{31} x_i + h_{32} y_i + h_{33}} \right )^2 + \left ( y'_i - \frac{h_{21} x_i + h_{22} y_i + h_{23}}{h_{31} x_i + h_{32} y_i + h_{33}} \right )^2\) is minimized. If the parameter method is set to the default value 0, the function uses all the point pairs to compute an initial homography estimate with a simple least-squares scheme. However, if not all of the point pairs ( \(srcPoints_i\), \(dstPoints_i\) ) fit the rigid perspective transformation (that is, there are some outliers), this initial estimate will be poor. In this case, you can use one of the three robust methods. The methods RANSAC, LMeDS and RHO try many different random subsets of the corresponding point pairs (of four pairs each, collinear pairs are discarded), estimate the homography matrix using this subset and a simple least-squares algorithm, and then compute the quality/goodness of the computed homography (which is the number of inliers for RANSAC or the least median reprojection error for LMeDS). The best subset is then used to produce the initial estimate of the homography matrix and the mask of inliers/outliers. Regardless of the method, robust or not, the computed homography matrix is refined further (using inliers only in case of a robust method) with the Levenberg-Marquardt method to reduce the reprojection error even more. The methods RANSAC and RHO can handle practically any ratio of outliers but need a threshold to distinguish inliers from outliers. The method LMeDS does not need any threshold but it works correctly only when there are more than 50% of inliers. Finally, if there are no outliers and the noise is rather small, use the default method (method=0). The function is used to find initial intrinsic and extrinsic matrices. The homography matrix is determined up to a scale. Thus, it is normalized so that \(h_{33}=1\). Note that whenever an \(H\) matrix cannot be estimated, an empty one will be returned. SEE: getAffineTransform, estimateAffine2D, estimateAffinePartial2D, getPerspectiveTransform, warpPerspective, perspectiveTransform Returns:
 automatically generated

findHomography
public static Mat findHomography(MatOfPoint2f srcPoints, MatOfPoint2f dstPoints, int method)
Finds a perspective transformation between two planes. Parameters:
srcPoints
 Coordinates of the points in the original plane, a matrix of the type CV_32FC2 or vector<Point2f> .dstPoints
 Coordinates of the points in the target plane, a matrix of the type CV_32FC2 or a vector<Point2f> .method
 Method used to compute a homography matrix. The following methods are possible: 0 - a regular method using all the points, i.e., the least squares method
 REF: RANSAC - RANSAC-based robust method
 REF: LMEDS - Least-Median robust method
 REF: RHO - PROSAC-based robust method
 Returns:
 automatically generated

findHomography
public static Mat findHomography(MatOfPoint2f srcPoints, MatOfPoint2f dstPoints)
Finds a perspective transformation between two planes. Parameters:
srcPoints
 Coordinates of the points in the original plane, a matrix of the type CV_32FC2 or vector<Point2f> .dstPoints
 Coordinates of the points in the target plane, a matrix of the type CV_32FC2 or a vector<Point2f> .
 Returns:
 automatically generated

findHomography
public static Mat findHomography(MatOfPoint2f srcPoints, MatOfPoint2f dstPoints, Mat mask, UsacParams params)

RQDecomp3x3
public static double[] RQDecomp3x3(Mat src, Mat mtxR, Mat mtxQ, Mat Qx, Mat Qy, Mat Qz)
Computes an RQ decomposition of 3x3 matrices. Parameters:
src
 3x3 input matrix.mtxR
 Output 3x3 upper-triangular matrix.mtxQ
 Output 3x3 orthogonal matrix.Qx
 Optional output 3x3 rotation matrix around x-axis.Qy
 Optional output 3x3 rotation matrix around y-axis.Qz
 Optional output 3x3 rotation matrix around z-axis. The function computes an RQ decomposition using the given rotations. This function is used in #decomposeProjectionMatrix to decompose the left 3x3 submatrix of a projection matrix into a camera and a rotation matrix. It optionally returns three rotation matrices, one for each axis, and the three Euler angles in degrees (as the return value) that could be used in OpenGL. Note, there is always more than one sequence of rotations about the three principal axes that results in the same orientation of an object, e.g. see CITE: Slabaugh . The returned three rotation matrices and corresponding three Euler angles are only one of the possible solutions. Returns:
 automatically generated
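A minimal sketch (an identity matrix is used as a hypothetical stand-in input; same imports as above):

 Mat M = Mat.eye(3, 3, CvType.CV_64F);  // stand-in for the left 3x3 of a projection matrix
 Mat mtxR = new Mat();
 Mat mtxQ = new Mat();
 double[] eulerAnglesDeg = Calib3d.RQDecomp3x3(M, mtxR, mtxQ);  // Euler angles in degrees
 // M = mtxR * mtxQ, with mtxR upper-triangular and mtxQ orthogonal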

RQDecomp3x3
public static double[] RQDecomp3x3(Mat src, Mat mtxR, Mat mtxQ, Mat Qx, Mat Qy)
Computes an RQ decomposition of 3x3 matrices. Parameters:
src
 3x3 input matrix.mtxR
 Output 3x3 upper-triangular matrix.mtxQ
 Output 3x3 orthogonal matrix.Qx
 Optional output 3x3 rotation matrix around x-axis.Qy
 Optional output 3x3 rotation matrix around y-axis. The function computes an RQ decomposition using the given rotations. This function is used in #decomposeProjectionMatrix to decompose the left 3x3 submatrix of a projection matrix into a camera and a rotation matrix. It optionally returns three rotation matrices, one for each axis, and the three Euler angles in degrees (as the return value) that could be used in OpenGL. Note, there is always more than one sequence of rotations about the three principal axes that results in the same orientation of an object, e.g. see CITE: Slabaugh . The returned three rotation matrices and corresponding three Euler angles are only one of the possible solutions. Returns:
 automatically generated

RQDecomp3x3
public static double[] RQDecomp3x3(Mat src, Mat mtxR, Mat mtxQ, Mat Qx)
Computes an RQ decomposition of 3x3 matrices. Parameters:
src
 3x3 input matrix.mtxR
 Output 3x3 upper-triangular matrix.mtxQ
 Output 3x3 orthogonal matrix.Qx
 Optional output 3x3 rotation matrix around x-axis. The function computes an RQ decomposition using the given rotations. This function is used in #decomposeProjectionMatrix to decompose the left 3x3 submatrix of a projection matrix into a camera and a rotation matrix. It optionally returns three rotation matrices, one for each axis, and the three Euler angles in degrees (as the return value) that could be used in OpenGL. Note, there is always more than one sequence of rotations about the three principal axes that results in the same orientation of an object, e.g. see CITE: Slabaugh . The returned three rotation matrices and corresponding three Euler angles are only one of the possible solutions. Returns:
 automatically generated

RQDecomp3x3
public static double[] RQDecomp3x3(Mat src, Mat mtxR, Mat mtxQ)
Computes an RQ decomposition of 3x3 matrices. Parameters:
src
 3x3 input matrix.mtxR
 Output 3x3 upper-triangular matrix.mtxQ
 Output 3x3 orthogonal matrix. The function computes an RQ decomposition using the given rotations. This function is used in #decomposeProjectionMatrix to decompose the left 3x3 submatrix of a projection matrix into a camera and a rotation matrix. It optionally returns three rotation matrices, one for each axis, and the three Euler angles in degrees (as the return value) that could be used in OpenGL. Note, there is always more than one sequence of rotations about the three principal axes that results in the same orientation of an object, e.g. see CITE: Slabaugh . The returned three rotation matrices and corresponding three Euler angles are only one of the possible solutions. Returns:
 automatically generated

decomposeProjectionMatrix
public static void decomposeProjectionMatrix(Mat projMatrix, Mat cameraMatrix, Mat rotMatrix, Mat transVect, Mat rotMatrixX, Mat rotMatrixY, Mat rotMatrixZ, Mat eulerAngles)
Decomposes a projection matrix into a rotation matrix and a camera intrinsic matrix. Parameters:
projMatrix
 3x4 input projection matrix P.cameraMatrix
 Output 3x3 camera intrinsic matrix \(\cameramatrix{A}\).rotMatrix
 Output 3x3 external rotation matrix R.transVect
 Output 4x1 translation vector T.rotMatrixX
 Optional 3x3 rotation matrix around x-axis.rotMatrixY
 Optional 3x3 rotation matrix around y-axis.rotMatrixZ
 Optional 3x3 rotation matrix around z-axis.eulerAngles
 Optional three-element vector containing three Euler angles of rotation in degrees. The function computes a decomposition of a projection matrix into a calibration and a rotation matrix and the position of a camera. It optionally returns three rotation matrices, one for each axis, and three Euler angles that could be used in OpenGL. Note, there is always more than one sequence of rotations about the three principal axes that results in the same orientation of an object, e.g. see CITE: Slabaugh . The returned three rotation matrices and corresponding three Euler angles are only one of the possible solutions. The function is based on #RQDecomp3x3 .
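A minimal sketch with a hypothetical projection matrix (intrinsics fx = fy = 800, cx = 320, cy = 240, identity pose; same imports as above):

 Mat P = new Mat(3, 4, CvType.CV_64F);  // projection matrix P = K [R|t]
 P.put(0, 0,
     800, 0, 320, 0,
     0, 800, 240, 0,
     0,   0,   1, 0);
 Mat K = new Mat(), R = new Mat(), t = new Mat();
 Calib3d.decomposeProjectionMatrix(P, K, R, t);
 // K: 3x3 intrinsic matrix, R: 3x3 rotation, t: 4x1 homogeneous translation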

decomposeProjectionMatrix
public static void decomposeProjectionMatrix(Mat projMatrix, Mat cameraMatrix, Mat rotMatrix, Mat transVect, Mat rotMatrixX, Mat rotMatrixY, Mat rotMatrixZ)
Decomposes a projection matrix into a rotation matrix and a camera intrinsic matrix. Parameters:
projMatrix
 3x4 input projection matrix P.cameraMatrix
 Output 3x3 camera intrinsic matrix \(\cameramatrix{A}\).rotMatrix
 Output 3x3 external rotation matrix R.transVect
 Output 4x1 translation vector T.rotMatrixX
 Optional 3x3 rotation matrix around x-axis.rotMatrixY
 Optional 3x3 rotation matrix around y-axis.rotMatrixZ
 Optional 3x3 rotation matrix around z-axis. The function computes a decomposition of a projection matrix into a calibration and a rotation matrix and the position of a camera. It optionally returns three rotation matrices, one for each axis, and three Euler angles that could be used in OpenGL. Note, there is always more than one sequence of rotations about the three principal axes that results in the same orientation of an object, e.g. see CITE: Slabaugh . The returned three rotation matrices and corresponding three Euler angles are only one of the possible solutions. The function is based on #RQDecomp3x3 .

decomposeProjectionMatrix
public static void decomposeProjectionMatrix(Mat projMatrix, Mat cameraMatrix, Mat rotMatrix, Mat transVect, Mat rotMatrixX, Mat rotMatrixY)
Decomposes a projection matrix into a rotation matrix and a camera intrinsic matrix. Parameters:
projMatrix
 3x4 input projection matrix P.cameraMatrix
 Output 3x3 camera intrinsic matrix \(\cameramatrix{A}\).rotMatrix
 Output 3x3 external rotation matrix R.transVect
 Output 4x1 translation vector T.rotMatrixX
 Optional 3x3 rotation matrix around x-axis.rotMatrixY
 Optional 3x3 rotation matrix around y-axis. The function computes a decomposition of a projection matrix into a calibration and a rotation matrix and the position of a camera. It optionally returns three rotation matrices, one for each axis, and three Euler angles that could be used in OpenGL. Note, there is always more than one sequence of rotations about the three principal axes that results in the same orientation of an object, e.g. see CITE: Slabaugh . The returned three rotation matrices and corresponding three Euler angles are only one of the possible solutions. The function is based on #RQDecomp3x3 .

decomposeProjectionMatrix
public static void decomposeProjectionMatrix(Mat projMatrix, Mat cameraMatrix, Mat rotMatrix, Mat transVect, Mat rotMatrixX)
Decomposes a projection matrix into a rotation matrix and a camera intrinsic matrix. Parameters:
projMatrix
 3x4 input projection matrix P.cameraMatrix
 Output 3x3 camera intrinsic matrix \(\cameramatrix{A}\).rotMatrix
 Output 3x3 external rotation matrix R.transVect
 Output 4x1 translation vector T.rotMatrixX
 Optional 3x3 rotation matrix around x-axis. The function computes a decomposition of a projection matrix into a calibration and a rotation matrix and the position of a camera. It optionally returns three rotation matrices, one for each axis, and three Euler angles that could be used in OpenGL. Note, there is always more than one sequence of rotations about the three principal axes that results in the same orientation of an object, e.g. see CITE: Slabaugh . The returned three rotation matrices and corresponding three Euler angles are only one of the possible solutions. The function is based on #RQDecomp3x3 .

decomposeProjectionMatrix
public static void decomposeProjectionMatrix(Mat projMatrix, Mat cameraMatrix, Mat rotMatrix, Mat transVect)
Decomposes a projection matrix into a rotation matrix and a camera intrinsic matrix. Parameters:
projMatrix
 3x4 input projection matrix P.cameraMatrix
 Output 3x3 camera intrinsic matrix \(\cameramatrix{A}\).rotMatrix
 Output 3x3 external rotation matrix R.transVect
 Output 4x1 translation vector T. The function computes a decomposition of a projection matrix into a calibration and a rotation matrix and the position of a camera. It optionally returns three rotation matrices, one for each axis, and three Euler angles that could be used in OpenGL. Note, there is always more than one sequence of rotations about the three principal axes that results in the same orientation of an object, e.g. see CITE: Slabaugh . The returned three rotation matrices and corresponding three Euler angles are only one of the possible solutions. The function is based on #RQDecomp3x3 .

matMulDeriv
public static void matMulDeriv(Mat A, Mat B, Mat dABdA, Mat dABdB)
Computes partial derivatives of the matrix product for each multiplied matrix. Parameters:
A
 First multiplied matrix.B
 Second multiplied matrix.dABdA
 First output derivative matrix d(A\*B)/dA of size \(\texttt{A.rows*B.cols} \times \texttt{A.rows*A.cols}\) .dABdB
 Second output derivative matrix d(A\*B)/dB of size \(\texttt{A.rows*B.cols} \times \texttt{B.rows*B.cols}\) . The function computes partial derivatives of the elements of the matrix product \(A*B\) with regard to the elements of each of the two input matrices. The function is used to compute the Jacobian matrices in #stereoCalibrate but can also be used in any other similar optimization function.
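A minimal sketch with small hypothetical matrices (same imports as above):

 Mat A = new Mat(2, 3, CvType.CV_64F);
 A.put(0, 0, 1, 2, 3, 4, 5, 6);
 Mat B = new Mat(3, 2, CvType.CV_64F);
 B.put(0, 0, 7, 8, 9, 10, 11, 12);
 Mat dABdA = new Mat(), dABdB = new Mat();
 Calib3d.matMulDeriv(A, B, dABdA, dABdB);
 // dABdA is 4x6 (A.rows*B.cols x A.rows*A.cols); dABdB is 4x6 (A.rows*B.cols x B.rows*B.cols)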

composeRT
public static void composeRT(Mat rvec1, Mat tvec1, Mat rvec2, Mat tvec2, Mat rvec3, Mat tvec3, Mat dr3dr1, Mat dr3dt1, Mat dr3dr2, Mat dr3dt2, Mat dt3dr1, Mat dt3dt1, Mat dt3dr2, Mat dt3dt2)
Combines two rotation-and-shift transformations. Parameters:
rvec1
 First rotation vector.tvec1
 First translation vector.rvec2
 Second rotation vector.tvec2
 Second translation vector.rvec3
 Output rotation vector of the superposition.tvec3
 Output translation vector of the superposition.dr3dr1
 Optional output derivative of rvec3 with regard to rvec1dr3dt1
 Optional output derivative of rvec3 with regard to tvec1dr3dr2
 Optional output derivative of rvec3 with regard to rvec2dr3dt2
 Optional output derivative of rvec3 with regard to tvec2dt3dr1
 Optional output derivative of tvec3 with regard to rvec1dt3dt1
 Optional output derivative of tvec3 with regard to tvec1dt3dr2
 Optional output derivative of tvec3 with regard to rvec2dt3dt2
 Optional output derivative of tvec3 with regard to tvec2 The functions compute: \(\begin{array}{l} \texttt{rvec3} = \mathrm{rodrigues} ^{-1} \left ( \mathrm{rodrigues} ( \texttt{rvec2} ) \cdot \mathrm{rodrigues} ( \texttt{rvec1} ) \right ) \\ \texttt{tvec3} = \mathrm{rodrigues} ( \texttt{rvec2} ) \cdot \texttt{tvec1} + \texttt{tvec2} \end{array} ,\) where \(\mathrm{rodrigues}\) denotes a rotation vector to a rotation matrix transformation, and \(\mathrm{rodrigues}^{-1}\) denotes the inverse transformation. See #Rodrigues for details. Also, the functions can compute the derivatives of the output vectors with regards to the input vectors (see #matMulDeriv ). The functions are used inside #stereoCalibrate but can also be used in your own code where Levenberg-Marquardt or another gradient-based solver is used to optimize a function that contains a matrix multiplication.
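A minimal sketch (two hypothetical 45-degree rotations about the z-axis composed into one 90-degree rotation; same imports as above):

 Mat rvec1 = new Mat(3, 1, CvType.CV_64F), tvec1 = new Mat(3, 1, CvType.CV_64F);
 rvec1.put(0, 0, 0, 0, Math.PI / 4);
 tvec1.put(0, 0, 1, 0, 0);
 Mat rvec2 = new Mat(3, 1, CvType.CV_64F), tvec2 = new Mat(3, 1, CvType.CV_64F);
 rvec2.put(0, 0, 0, 0, Math.PI / 4);
 tvec2.put(0, 0, 0, 1, 0);
 Mat rvec3 = new Mat(), tvec3 = new Mat();
 Calib3d.composeRT(rvec1, tvec1, rvec2, tvec2, rvec3, tvec3);
 // rvec3 is approximately [0, 0, pi/2]; tvec3 = rodrigues(rvec2)*tvec1 + tvec2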

composeRT
public static void composeRT(Mat rvec1, Mat tvec1, Mat rvec2, Mat tvec2, Mat rvec3, Mat tvec3, Mat dr3dr1, Mat dr3dt1, Mat dr3dr2, Mat dr3dt2, Mat dt3dr1, Mat dt3dt1, Mat dt3dr2)
Combines two rotation-and-shift transformations. Parameters:
rvec1
 First rotation vector.tvec1
 First translation vector.rvec2
 Second rotation vector.tvec2
 Second translation vector.rvec3
 Output rotation vector of the superposition.tvec3
 Output translation vector of the superposition.dr3dr1
 Optional output derivative of rvec3 with regard to rvec1dr3dt1
 Optional output derivative of rvec3 with regard to tvec1dr3dr2
 Optional output derivative of rvec3 with regard to rvec2dr3dt2
 Optional output derivative of rvec3 with regard to tvec2dt3dr1
 Optional output derivative of tvec3 with regard to rvec1dt3dt1
 Optional output derivative of tvec3 with regard to tvec1dt3dr2
 Optional output derivative of tvec3 with regard to rvec2 The functions compute: \(\begin{array}{l} \texttt{rvec3} = \mathrm{rodrigues} ^{-1} \left ( \mathrm{rodrigues} ( \texttt{rvec2} ) \cdot \mathrm{rodrigues} ( \texttt{rvec1} ) \right ) \\ \texttt{tvec3} = \mathrm{rodrigues} ( \texttt{rvec2} ) \cdot \texttt{tvec1} + \texttt{tvec2} \end{array} ,\) where \(\mathrm{rodrigues}\) denotes a rotation vector to a rotation matrix transformation, and \(\mathrm{rodrigues}^{-1}\) denotes the inverse transformation. See #Rodrigues for details. Also, the functions can compute the derivatives of the output vectors with regards to the input vectors (see #matMulDeriv ). The functions are used inside #stereoCalibrate but can also be used in your own code where Levenberg-Marquardt or another gradient-based solver is used to optimize a function that contains a matrix multiplication.

composeRT
public static void composeRT(Mat rvec1, Mat tvec1, Mat rvec2, Mat tvec2, Mat rvec3, Mat tvec3, Mat dr3dr1, Mat dr3dt1, Mat dr3dr2, Mat dr3dt2, Mat dt3dr1, Mat dt3dt1)
Combines two rotation-and-shift transformations. Parameters:
rvec1
 First rotation vector.tvec1
 First translation vector.rvec2
 Second rotation vector.tvec2
 Second translation vector.rvec3
 Output rotation vector of the superposition.tvec3
 Output translation vector of the superposition.dr3dr1
 Optional output derivative of rvec3 with regard to rvec1dr3dt1
 Optional output derivative of rvec3 with regard to tvec1dr3dr2
 Optional output derivative of rvec3 with regard to rvec2dr3dt2
 Optional output derivative of rvec3 with regard to tvec2dt3dr1
 Optional output derivative of tvec3 with regard to rvec1dt3dt1
 Optional output derivative of tvec3 with regard to tvec1 The functions compute: \(\begin{array}{l} \texttt{rvec3} = \mathrm{rodrigues} ^{-1} \left ( \mathrm{rodrigues} ( \texttt{rvec2} ) \cdot \mathrm{rodrigues} ( \texttt{rvec1} ) \right ) \\ \texttt{tvec3} = \mathrm{rodrigues} ( \texttt{rvec2} ) \cdot \texttt{tvec1} + \texttt{tvec2} \end{array} ,\) where \(\mathrm{rodrigues}\) denotes a rotation vector to a rotation matrix transformation, and \(\mathrm{rodrigues}^{-1}\) denotes the inverse transformation. See #Rodrigues for details. Also, the functions can compute the derivatives of the output vectors with regards to the input vectors (see #matMulDeriv ). The functions are used inside #stereoCalibrate but can also be used in your own code where Levenberg-Marquardt or another gradient-based solver is used to optimize a function that contains a matrix multiplication.

composeRT
public static void composeRT(Mat rvec1, Mat tvec1, Mat rvec2, Mat tvec2, Mat rvec3, Mat tvec3, Mat dr3dr1, Mat dr3dt1, Mat dr3dr2, Mat dr3dt2, Mat dt3dr1)
Combines two rotation-and-shift transformations. Parameters:
rvec1
 First rotation vector.tvec1
 First translation vector.rvec2
 Second rotation vector.tvec2
 Second translation vector.rvec3
 Output rotation vector of the superposition.tvec3
 Output translation vector of the superposition.dr3dr1
 Optional output derivative of rvec3 with regard to rvec1dr3dt1
 Optional output derivative of rvec3 with regard to tvec1dr3dr2
 Optional output derivative of rvec3 with regard to rvec2dr3dt2
 Optional output derivative of rvec3 with regard to tvec2dt3dr1
 Optional output derivative of tvec3 with regard to rvec1 The functions compute: \(\begin{array}{l} \texttt{rvec3} = \mathrm{rodrigues} ^{-1} \left ( \mathrm{rodrigues} ( \texttt{rvec2} ) \cdot \mathrm{rodrigues} ( \texttt{rvec1} ) \right ) \\ \texttt{tvec3} = \mathrm{rodrigues} ( \texttt{rvec2} ) \cdot \texttt{tvec1} + \texttt{tvec2} \end{array} ,\) where \(\mathrm{rodrigues}\) denotes a rotation vector to a rotation matrix transformation, and \(\mathrm{rodrigues}^{-1}\) denotes the inverse transformation. See #Rodrigues for details. Also, the functions can compute the derivatives of the output vectors with regards to the input vectors (see #matMulDeriv ). The functions are used inside #stereoCalibrate but can also be used in your own code where Levenberg-Marquardt or another gradient-based solver is used to optimize a function that contains a matrix multiplication.

composeRT
public static void composeRT(Mat rvec1, Mat tvec1, Mat rvec2, Mat tvec2, Mat rvec3, Mat tvec3, Mat dr3dr1, Mat dr3dt1, Mat dr3dr2, Mat dr3dt2)
Combines two rotation-and-shift transformations. Parameters:
rvec1
 First rotation vector.tvec1
 First translation vector.rvec2
 Second rotation vector.tvec2
 Second translation vector.rvec3
 Output rotation vector of the superposition.tvec3
 Output translation vector of the superposition.dr3dr1
 Optional output derivative of rvec3 with regard to rvec1dr3dt1
 Optional output derivative of rvec3 with regard to tvec1dr3dr2
 Optional output derivative of rvec3 with regard to rvec2dr3dt2
 Optional output derivative of rvec3 with regard to tvec2 The functions compute: \(\begin{array}{l} \texttt{rvec3} = \mathrm{rodrigues} ^{-1} \left ( \mathrm{rodrigues} ( \texttt{rvec2} ) \cdot \mathrm{rodrigues} ( \texttt{rvec1} ) \right ) \\ \texttt{tvec3} = \mathrm{rodrigues} ( \texttt{rvec2} ) \cdot \texttt{tvec1} + \texttt{tvec2} \end{array} ,\) where \(\mathrm{rodrigues}\) denotes a rotation vector to a rotation matrix transformation, and \(\mathrm{rodrigues}^{-1}\) denotes the inverse transformation. See #Rodrigues for details. Also, the functions can compute the derivatives of the output vectors with regards to the input vectors (see #matMulDeriv ). The functions are used inside #stereoCalibrate but can also be used in your own code where Levenberg-Marquardt or another gradient-based solver is used to optimize a function that contains a matrix multiplication.

composeRT
public static void composeRT(Mat rvec1, Mat tvec1, Mat rvec2, Mat tvec2, Mat rvec3, Mat tvec3, Mat dr3dr1, Mat dr3dt1, Mat dr3dr2)
Combines two rotation-and-shift transformations. Parameters:
rvec1
 First rotation vector.tvec1
 First translation vector.rvec2
 Second rotation vector.tvec2
 Second translation vector.rvec3
 Output rotation vector of the superposition.tvec3
 Output translation vector of the superposition.dr3dr1
 Optional output derivative of rvec3 with regard to rvec1dr3dt1
 Optional output derivative of rvec3 with regard to tvec1dr3dr2
 Optional output derivative of rvec3 with regard to rvec2 The functions compute: \(\begin{array}{l} \texttt{rvec3} = \mathrm{rodrigues} ^{-1} \left ( \mathrm{rodrigues} ( \texttt{rvec2} ) \cdot \mathrm{rodrigues} ( \texttt{rvec1} ) \right ) \\ \texttt{tvec3} = \mathrm{rodrigues} ( \texttt{rvec2} ) \cdot \texttt{tvec1} + \texttt{tvec2} \end{array} ,\) where \(\mathrm{rodrigues}\) denotes a rotation vector to a rotation matrix transformation, and \(\mathrm{rodrigues}^{-1}\) denotes the inverse transformation. See #Rodrigues for details. Also, the functions can compute the derivatives of the output vectors with regards to the input vectors (see #matMulDeriv ). The functions are used inside #stereoCalibrate but can also be used in your own code where Levenberg-Marquardt or another gradient-based solver is used to optimize a function that contains a matrix multiplication.

composeRT
public static void composeRT(Mat rvec1, Mat tvec1, Mat rvec2, Mat tvec2, Mat rvec3, Mat tvec3, Mat dr3dr1, Mat dr3dt1)
Combines two rotation-and-shift transformations. Parameters:
rvec1
 First rotation vector.tvec1
 First translation vector.rvec2
 Second rotation vector.tvec2
 Second translation vector.rvec3
 Output rotation vector of the superposition.tvec3
 Output translation vector of the superposition.dr3dr1
 Optional output derivative of rvec3 with regard to rvec1dr3dt1
 Optional output derivative of rvec3 with regard to tvec1 The functions compute: \(\begin{array}{l} \texttt{rvec3} = \mathrm{rodrigues} ^{-1} \left ( \mathrm{rodrigues} ( \texttt{rvec2} ) \cdot \mathrm{rodrigues} ( \texttt{rvec1} ) \right ) \\ \texttt{tvec3} = \mathrm{rodrigues} ( \texttt{rvec2} ) \cdot \texttt{tvec1} + \texttt{tvec2} \end{array} ,\) where \(\mathrm{rodrigues}\) denotes a rotation vector to a rotation matrix transformation, and \(\mathrm{rodrigues}^{-1}\) denotes the inverse transformation. See #Rodrigues for details. Also, the functions can compute the derivatives of the output vectors with regards to the input vectors (see #matMulDeriv ). The functions are used inside #stereoCalibrate but can also be used in your own code where Levenberg-Marquardt or another gradient-based solver is used to optimize a function that contains a matrix multiplication.

composeRT
public static void composeRT(Mat rvec1, Mat tvec1, Mat rvec2, Mat tvec2, Mat rvec3, Mat tvec3, Mat dr3dr1)
Combines two rotation-and-shift transformations. Parameters:
rvec1
 First rotation vector.tvec1
 First translation vector.rvec2
 Second rotation vector.tvec2
 Second translation vector.rvec3
 Output rotation vector of the superposition.tvec3
 Output translation vector of the superposition.dr3dr1
 Optional output derivative of rvec3 with regard to rvec1 The functions compute: \(\begin{array}{l} \texttt{rvec3} = \mathrm{rodrigues} ^{-1} \left ( \mathrm{rodrigues} ( \texttt{rvec2} ) \cdot \mathrm{rodrigues} ( \texttt{rvec1} ) \right ) \\ \texttt{tvec3} = \mathrm{rodrigues} ( \texttt{rvec2} ) \cdot \texttt{tvec1} + \texttt{tvec2} \end{array} ,\) where \(\mathrm{rodrigues}\) denotes a rotation vector to a rotation matrix transformation, and \(\mathrm{rodrigues}^{-1}\) denotes the inverse transformation. See #Rodrigues for details. Also, the functions can compute the derivatives of the output vectors with regards to the input vectors (see #matMulDeriv ). The functions are used inside #stereoCalibrate but can also be used in your own code where Levenberg-Marquardt or another gradient-based solver is used to optimize a function that contains a matrix multiplication.

composeRT
public static void composeRT(Mat rvec1, Mat tvec1, Mat rvec2, Mat tvec2, Mat rvec3, Mat tvec3)
Combines two rotation-and-shift transformations. Parameters:
rvec1
 First rotation vector.tvec1
 First translation vector.rvec2
 Second rotation vector.tvec2
 Second translation vector.rvec3
 Output rotation vector of the superposition.tvec3
 Output translation vector of the superposition. The functions compute: \(\begin{array}{l} \texttt{rvec3} = \mathrm{rodrigues} ^{-1} \left ( \mathrm{rodrigues} ( \texttt{rvec2} ) \cdot \mathrm{rodrigues} ( \texttt{rvec1} ) \right ) \\ \texttt{tvec3} = \mathrm{rodrigues} ( \texttt{rvec2} ) \cdot \texttt{tvec1} + \texttt{tvec2} \end{array} ,\) where \(\mathrm{rodrigues}\) denotes a rotation vector to a rotation matrix transformation, and \(\mathrm{rodrigues}^{-1}\) denotes the inverse transformation. See #Rodrigues for details. Also, the functions can compute the derivatives of the output vectors with regards to the input vectors (see #matMulDeriv ). The functions are used inside #stereoCalibrate but can also be used in your own code where Levenberg-Marquardt or another gradient-based solver is used to optimize a function that contains a matrix multiplication.

projectPoints
public static void projectPoints(MatOfPoint3f objectPoints, Mat rvec, Mat tvec, Mat cameraMatrix, MatOfDouble distCoeffs, MatOfPoint2f imagePoints, Mat jacobian, double aspectRatio)
Projects 3D points to an image plane. Parameters:
objectPoints
 Array of object points expressed wrt. the world coordinate frame. A 3xN/Nx3 1-channel or 1xN/Nx1 3-channel (or vector<Point3f> ), where N is the number of points in the view.rvec
 The rotation vector (REF: Rodrigues) that, together with tvec, performs a change of basis from world to camera coordinate system, see REF: calibrateCamera for details.tvec
 The translation vector, see parameter description above.cameraMatrix
 Camera intrinsic matrix \(\cameramatrix{A}\) .distCoeffs
 Input vector of distortion coefficients \(\distcoeffs\) . If the vector is empty, the zero distortion coefficients are assumed.imagePoints
 Output array of image points, 1xN/Nx1 2-channel, or vector<Point2f> .jacobian
 Optional output 2Nx(10+<numDistCoeffs>) jacobian matrix of derivatives of image points with respect to components of the rotation vector, translation vector, focal lengths, coordinates of the principal point and the distortion coefficients. In the old interface different components of the jacobian are returned via different output parameters.aspectRatio
 Optional "fixed aspect ratio" parameter. If the parameter is not 0, the function assumes that the aspect ratio (\(f_x / f_y\)) is fixed and correspondingly adjusts the jacobian matrix. The function computes the 2D projections of 3D points to the image plane, given intrinsic and extrinsic camera parameters. Optionally, the function computes Jacobian matrices of partial derivatives of image point coordinates (as functions of all the input parameters) with respect to the particular parameters, intrinsic and/or extrinsic. The Jacobians are used during the global optimization in REF: calibrateCamera, REF: solvePnP, and REF: stereoCalibrate. The function itself can also be used to compute a reprojection error, given the current intrinsic and extrinsic parameters. Note: By setting rvec = tvec = \([0, 0, 0]\), or by setting cameraMatrix to a 3x3 identity matrix, or by passing zero distortion coefficients, one can get various useful special cases of the function. This means, one can compute the distorted coordinates for a sparse set of points or apply a perspective transformation (and also compute the derivatives) in the ideal zero-distortion setup.
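A minimal sketch (hypothetical pinhole intrinsics, zero distortion, points placed 5 units in front of the camera; additionally assumes org.opencv.core.MatOfPoint3f, org.opencv.core.MatOfDouble, and org.opencv.core.Point3 are imported):

 MatOfPoint3f objectPoints = new MatOfPoint3f(
     new Point3(0, 0, 0), new Point3(1, 0, 0), new Point3(0, 1, 0));
 Mat rvec = Mat.zeros(3, 1, CvType.CV_64F);   // identity rotation
 Mat tvec = Mat.zeros(3, 1, CvType.CV_64F);
 tvec.put(2, 0, 5.0);                         // translate 5 units along z
 Mat cameraMatrix = Mat.eye(3, 3, CvType.CV_64F);
 cameraMatrix.put(0, 0, 800.0);  // fx
 cameraMatrix.put(1, 1, 800.0);  // fy
 cameraMatrix.put(0, 2, 320.0);  // cx
 cameraMatrix.put(1, 2, 240.0);  // cy
 MatOfDouble distCoeffs = new MatOfDouble();  // empty => zero distortion
 MatOfPoint2f imagePoints = new MatOfPoint2f();
 Calib3d.projectPoints(objectPoints, rvec, tvec, cameraMatrix, distCoeffs, imagePoints);
 // the world origin projects to the principal point (320, 240)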

projectPoints
public static void projectPoints(MatOfPoint3f objectPoints, Mat rvec, Mat tvec, Mat cameraMatrix, MatOfDouble distCoeffs, MatOfPoint2f imagePoints, Mat jacobian)
Projects 3D points to an image plane. Parameters:
objectPoints
 Array of object points expressed wrt. the world coordinate frame. A 3xN/Nx3 1-channel or 1xN/Nx1 3-channel (or vector<Point3f> ), where N is the number of points in the view.rvec
 The rotation vector (REF: Rodrigues) that, together with tvec, performs a change of basis from world to camera coordinate system, see REF: calibrateCamera for details.tvec
 The translation vector, see parameter description above.cameraMatrix
 Camera intrinsic matrix \(\cameramatrix{A}\) .distCoeffs
 Input vector of distortion coefficients \(\distcoeffs\) . If the vector is empty, the zero distortion coefficients are assumed.imagePoints
 Output array of image points, 1xN/Nx1 2-channel, or vector<Point2f> .jacobian
 Optional output 2Nx(10+<numDistCoeffs>) jacobian matrix of derivatives of image points with respect to components of the rotation vector, translation vector, focal lengths, coordinates of the principal point and the distortion coefficients. In the old interface different components of the jacobian are returned via different output parameters. The function computes the 2D projections of 3D points to the image plane, given intrinsic and extrinsic camera parameters. Optionally, the function computes Jacobian matrices of partial derivatives of image point coordinates (as functions of all the input parameters) with respect to the particular parameters, intrinsic and/or extrinsic. The Jacobians are used during the global optimization in REF: calibrateCamera, REF: solvePnP, and REF: stereoCalibrate. The function itself can also be used to compute a reprojection error, given the current intrinsic and extrinsic parameters. Note: By setting rvec = tvec = \([0, 0, 0]\), or by setting cameraMatrix to a 3x3 identity matrix, or by passing zero distortion coefficients, one can get various useful special cases of the function. This means, one can compute the distorted coordinates for a sparse set of points or apply a perspective transformation (and also compute the derivatives) in the ideal zero-distortion setup.

projectPoints
public static void projectPoints(MatOfPoint3f objectPoints, Mat rvec, Mat tvec, Mat cameraMatrix, MatOfDouble distCoeffs, MatOfPoint2f imagePoints)
Projects 3D points to an image plane. Parameters:
objectPoints
 Array of object points expressed wrt. the world coordinate frame. A 3xN/Nx3 1-channel or 1xN/Nx1 3-channel (or vector<Point3f> ), where N is the number of points in the view.rvec
 The rotation vector (REF: Rodrigues) that, together with tvec, performs a change of basis from world to camera coordinate system, see REF: calibrateCamera for details.tvec
 The translation vector, see parameter description above.cameraMatrix
 Camera intrinsic matrix \(\cameramatrix{A}\) .distCoeffs
 Input vector of distortion coefficients \(\distcoeffs\) . If the vector is empty, the zero distortion coefficients are assumed.imagePoints
 Output array of image points, 1xN/Nx1 2-channel, or vector<Point2f> . The function computes the 2D projections of 3D points to the image plane, given intrinsic and extrinsic camera parameters. Optionally, the function computes Jacobian matrices of partial derivatives of image point coordinates (as functions of all the input parameters) with respect to the particular parameters, intrinsic and/or extrinsic. The Jacobians are used during the global optimization in REF: calibrateCamera, REF: solvePnP, and REF: stereoCalibrate. The function itself can also be used to compute a reprojection error, given the current intrinsic and extrinsic parameters. Note: By setting rvec = tvec = \([0, 0, 0]\), or by setting cameraMatrix to a 3x3 identity matrix, or by passing zero distortion coefficients, one can get various useful special cases of the function. This means, one can compute the distorted coordinates for a sparse set of points or apply a perspective transformation (and also compute the derivatives) in the ideal zero-distortion setup.

solvePnP
public static boolean solvePnP(MatOfPoint3f objectPoints, MatOfPoint2f imagePoints, Mat cameraMatrix, MatOfDouble distCoeffs, Mat rvec, Mat tvec, boolean useExtrinsicGuess, int flags)
Finds an object pose from 3D-2D point correspondences. SEE: REF: calib3d_solvePnP This function returns the rotation and the translation vectors that transform a 3D point expressed in the object coordinate frame to the camera coordinate frame, using different methods: P3P methods (REF: SOLVEPNP_P3P, REF: SOLVEPNP_AP3P): need 4 input points to return a unique solution.
 REF: SOLVEPNP_IPPE Input points must be >= 4 and object points must be coplanar.

REF: SOLVEPNP_IPPE_SQUARE Special case suitable for marker pose estimation.
Number of input points must be 4. Object points must be defined in the following order:
 point 0: [-squareLength / 2, squareLength / 2, 0]
 point 1: [ squareLength / 2, squareLength / 2, 0]
 point 2: [ squareLength / 2, -squareLength / 2, 0]
 point 3: [-squareLength / 2, -squareLength / 2, 0]
 for all the other flags, number of input points must be >= 4 and object points can be in any configuration.
 Parameters:
objectPoints
 Array of object points in the object coordinate space, Nx3 1channel or 1xN/Nx1 3channel, where N is the number of points. vector<Point3d> can be also passed here.imagePoints
 Array of corresponding image points, Nx2 1channel or 1xN/Nx1 2channel, where N is the number of points. vector<Point2d> can be also passed here.cameraMatrix
 Input camera intrinsic matrix \(\cameramatrix{A}\) .distCoeffs
 Input vector of distortion coefficients \(\distcoeffs\). If the vector is NULL/empty, the zero distortion coefficients are assumed.rvec
 Output rotation vector (see REF: Rodrigues ) that, together with tvec, brings points from the model coordinate system to the camera coordinate system.tvec
 Output translation vector.useExtrinsicGuess
 Parameter used for #SOLVEPNP_ITERATIVE. If true (1), the function uses the provided rvec and tvec values as initial approximations of the rotation and translation vectors, respectively, and further optimizes them.flags
 Method for solving a PnP problem: see REF: calib3d_solvePnP_flags More information about Perspective-n-Point is described in REF: calib3d_solvePnP Note: An example of how to use solvePnP for planar augmented reality can be found at opencv_source_code/samples/python/plane_ar.py

If you are using Python:
 Numpy array slices won't work as input because solvePnP requires contiguous arrays (enforced by the assertion using cv::Mat::checkVector() around line 55 of modules/calib3d/src/solvepnp.cpp version 2.4.9)
 The P3P algorithm requires image points to be in an array of shape (N,1,2) due to its calling of #undistortPoints (around line 75 of modules/calib3d/src/solvepnp.cpp version 2.4.9) which requires 2channel information.
 Thus, given some data D = np.array(...) where D.shape = (N,M), in order to use a subset of it as, e.g., imagePoints, one must effectively copy it into a new array: imagePoints = np.ascontiguousarray(D[:,:2]).reshape((N,1,2))
 The methods REF: SOLVEPNP_DLS and REF: SOLVEPNP_UPNP cannot be used as the current implementations are unstable and sometimes give completely wrong results. If you pass one of these two flags, REF: SOLVEPNP_EPNP method will be used instead.
 The minimum number of points is 4 in the general case. In the case of REF: SOLVEPNP_P3P and REF: SOLVEPNP_AP3P methods, it is required to use exactly 4 points (the first 3 points are used to estimate all the solutions of the P3P problem, the last one is used to retain the best solution that minimizes the reprojection error).

With REF: SOLVEPNP_ITERATIVE method and
useExtrinsicGuess=true
, the minimum number of points is 3 (3 points are sufficient to compute a pose but there are up to 4 solutions). The initial solution should be close to the global solution to converge.  With REF: SOLVEPNP_IPPE input points must be >= 4 and object points must be coplanar.

With REF: SOLVEPNP_IPPE_SQUARE this is a special case suitable for marker pose estimation.
Number of input points must be 4. Object points must be defined in the following order:
 point 0: [-squareLength / 2, squareLength / 2, 0]
 point 1: [ squareLength / 2, squareLength / 2, 0]
 point 2: [ squareLength / 2, -squareLength / 2, 0]
 point 3: [-squareLength / 2, -squareLength / 2, 0]
 With REF: SOLVEPNP_SQPNP input points must be >= 3
 Returns:
 automatically generated
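A minimal sketch (hypothetical planar square target, hypothetical intrinsics, zero distortion assumed; same imports as above):

 MatOfPoint3f objectPoints = new MatOfPoint3f(
     new Point3(-1, -1, 0), new Point3(1, -1, 0),
     new Point3(1, 1, 0), new Point3(-1, 1, 0));
 MatOfPoint2f imagePoints = new MatOfPoint2f(
     new Point(250, 170), new Point(390, 170),
     new Point(390, 310), new Point(250, 310));
 Mat cameraMatrix = Mat.eye(3, 3, CvType.CV_64F);
 cameraMatrix.put(0, 0, 800.0);  // fx
 cameraMatrix.put(1, 1, 800.0);  // fy
 cameraMatrix.put(0, 2, 320.0);  // cx
 cameraMatrix.put(1, 2, 240.0);  // cy
 MatOfDouble distCoeffs = new MatOfDouble();  // empty => zero distortion
 Mat rvec = new Mat(), tvec = new Mat();
 boolean ok = Calib3d.solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs,
         rvec, tvec, false, Calib3d.SOLVEPNP_ITERATIVE);
 // on success, rvec/tvec bring object-frame points into the camera frame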

solvePnP
public static boolean solvePnP(MatOfPoint3f objectPoints, MatOfPoint2f imagePoints, Mat cameraMatrix, MatOfDouble distCoeffs, Mat rvec, Mat tvec, boolean useExtrinsicGuess)
Finds an object pose from 3D-2D point correspondences. SEE: REF: calib3d_solvePnP This function returns the rotation and the translation vectors that transform a 3D point expressed in the object coordinate frame to the camera coordinate frame, using different methods: P3P methods (REF: SOLVEPNP_P3P, REF: SOLVEPNP_AP3P): need 4 input points to return a unique solution.
 REF: SOLVEPNP_IPPE Input points must be >= 4 and object points must be coplanar.

REF: SOLVEPNP_IPPE_SQUARE Special case suitable for marker pose estimation.
Number of input points must be 4. Object points must be defined in the following order:
 point 0: [-squareLength / 2, squareLength / 2, 0]
 point 1: [ squareLength / 2, squareLength / 2, 0]
 point 2: [ squareLength / 2, -squareLength / 2, 0]
 point 3: [-squareLength / 2, -squareLength / 2, 0]
 for all the other flags, number of input points must be >= 4 and object points can be in any configuration.
 Parameters:
objectPoints
 Array of object points in the object coordinate space, Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points. vector<Point3d> can also be passed here.
imagePoints
 Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel, where N is the number of points. vector<Point2d> can also be passed here.
cameraMatrix
 Input camera intrinsic matrix \(\cameramatrix{A}\).
distCoeffs
 Input vector of distortion coefficients \(\distcoeffs\). If the vector is NULL/empty, zero distortion coefficients are assumed.
rvec
 Output rotation vector (see REF: Rodrigues) that, together with tvec, brings points from the model coordinate system to the camera coordinate system.
tvec
 Output translation vector.
useExtrinsicGuess
 Parameter used for #SOLVEPNP_ITERATIVE. If true (1), the function uses the provided rvec and tvec values as initial approximations of the rotation and translation vectors, respectively, and further optimizes them. More information about the Perspective-n-Point problem is described in REF: calib3d_solvePnP. Note: An example of how to use solvePnP for planar augmented reality can be found at opencv_source_code/samples/python/plane_ar.py

If you are using Python:
 Numpy array slices won't work as input because solvePnP requires contiguous arrays (enforced by the assertion using cv::Mat::checkVector() around line 55 of modules/calib3d/src/solvepnp.cpp version 2.4.9)
 The P3P algorithm requires image points to be in an array of shape (N,1,2) due to its calling of #undistortPoints (around line 75 of modules/calib3d/src/solvepnp.cpp version 2.4.9) which requires 2-channel information.
 Thus, given some data D = np.array(...) where D.shape = (N,M), in order to use a subset of it as, e.g., imagePoints, one must effectively copy it into a new array: imagePoints = np.ascontiguousarray(D[:,:2]).reshape((N,1,2))
 The methods REF: SOLVEPNP_DLS and REF: SOLVEPNP_UPNP cannot be used as the current implementations are unstable and sometimes give completely wrong results. If you pass one of these two flags, REF: SOLVEPNP_EPNP method will be used instead.
 The minimum number of points is 4 in the general case. In the case of REF: SOLVEPNP_P3P and REF: SOLVEPNP_AP3P methods, it is required to use exactly 4 points (the first 3 points are used to estimate all the solutions of the P3P problem, the last one is used to retain the best solution that minimizes the reprojection error).

With the REF: SOLVEPNP_ITERATIVE method and useExtrinsicGuess=true, the minimum number of points is 3 (3 points are sufficient to compute a pose, but there are up to 4 solutions). The initial solution should be close to the global solution to converge. With REF: SOLVEPNP_IPPE, input points must be >= 4 and object points must be coplanar.

With REF: SOLVEPNP_IPPE_SQUARE this is a special case suitable for marker pose estimation.
Number of input points must be 4. Object points must be defined in the following order:
 point 0: [-squareLength / 2, squareLength / 2, 0]
 point 1: [ squareLength / 2, squareLength / 2, 0]
 point 2: [ squareLength / 2, -squareLength / 2, 0]
 point 3: [-squareLength / 2, -squareLength / 2, 0]
 With REF: SOLVEPNP_SQPNP, input points must be >= 3.
 Returns:
 automatically generated

solvePnP
public static boolean solvePnP(MatOfPoint3f objectPoints, MatOfPoint2f imagePoints, Mat cameraMatrix, MatOfDouble distCoeffs, Mat rvec, Mat tvec)
Finds an object pose from 3D-2D point correspondences. SEE: REF: calib3d_solvePnP This function returns the rotation and the translation vectors that transform a 3D point expressed in the object coordinate frame to the camera coordinate frame, using different methods: P3P methods (REF: SOLVEPNP_P3P, REF: SOLVEPNP_AP3P): need 4 input points to return a unique solution.
 REF: SOLVEPNP_IPPE Input points must be >= 4 and object points must be coplanar.

REF: SOLVEPNP_IPPE_SQUARE Special case suitable for marker pose estimation.
Number of input points must be 4. Object points must be defined in the following order:
 point 0: [-squareLength / 2, squareLength / 2, 0]
 point 1: [ squareLength / 2, squareLength / 2, 0]
 point 2: [ squareLength / 2, -squareLength / 2, 0]
 point 3: [-squareLength / 2, -squareLength / 2, 0]
 for all the other flags, number of input points must be >= 4 and object points can be in any configuration.
 Parameters:
objectPoints
 Array of object points in the object coordinate space, Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points. vector<Point3d> can also be passed here.
imagePoints
 Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel, where N is the number of points. vector<Point2d> can also be passed here.
cameraMatrix
 Input camera intrinsic matrix \(\cameramatrix{A}\).
distCoeffs
 Input vector of distortion coefficients \(\distcoeffs\). If the vector is NULL/empty, zero distortion coefficients are assumed.
rvec
 Output rotation vector (see REF: Rodrigues) that, together with tvec, brings points from the model coordinate system to the camera coordinate system.
tvec
 Output translation vector. More information about the Perspective-n-Point problem is described in REF: calib3d_solvePnP. Note: An example of how to use solvePnP for planar augmented reality can be found at opencv_source_code/samples/python/plane_ar.py

If you are using Python:
 Numpy array slices won't work as input because solvePnP requires contiguous arrays (enforced by the assertion using cv::Mat::checkVector() around line 55 of modules/calib3d/src/solvepnp.cpp version 2.4.9)
 The P3P algorithm requires image points to be in an array of shape (N,1,2) due to its calling of #undistortPoints (around line 75 of modules/calib3d/src/solvepnp.cpp version 2.4.9) which requires 2-channel information.
 Thus, given some data D = np.array(...) where D.shape = (N,M), in order to use a subset of it as, e.g., imagePoints, one must effectively copy it into a new array: imagePoints = np.ascontiguousarray(D[:,:2]).reshape((N,1,2))
 The methods REF: SOLVEPNP_DLS and REF: SOLVEPNP_UPNP cannot be used as the current implementations are unstable and sometimes give completely wrong results. If you pass one of these two flags, REF: SOLVEPNP_EPNP method will be used instead.
 The minimum number of points is 4 in the general case. In the case of REF: SOLVEPNP_P3P and REF: SOLVEPNP_AP3P methods, it is required to use exactly 4 points (the first 3 points are used to estimate all the solutions of the P3P problem, the last one is used to retain the best solution that minimizes the reprojection error).

With the REF: SOLVEPNP_ITERATIVE method and useExtrinsicGuess=true, the minimum number of points is 3 (3 points are sufficient to compute a pose, but there are up to 4 solutions). The initial solution should be close to the global solution to converge. With REF: SOLVEPNP_IPPE, input points must be >= 4 and object points must be coplanar.

With REF: SOLVEPNP_IPPE_SQUARE this is a special case suitable for marker pose estimation.
Number of input points must be 4. Object points must be defined in the following order:
 point 0: [-squareLength / 2, squareLength / 2, 0]
 point 1: [ squareLength / 2, squareLength / 2, 0]
 point 2: [ squareLength / 2, -squareLength / 2, 0]
 point 3: [-squareLength / 2, -squareLength / 2, 0]
 With REF: SOLVEPNP_SQPNP, input points must be >= 3.
 Returns:
 automatically generated
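As a sketch of the simplest call, the following fragment assumes objectPoints, imagePoints, cameraMatrix, and distCoeffs have already been built as in the marker example above, and lets the default method (REF: SOLVEPNP_ITERATIVE) estimate the pose; variable names are illustrative.

// Minimal call: defaults to SOLVEPNP_ITERATIVE with useExtrinsicGuess=false.
Mat rvec = new Mat();
Mat tvec = new Mat();
boolean found = Calib3d.solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs, rvec, tvec);
if (found) {
    Mat R = new Mat();
    Calib3d.Rodrigues(rvec, R); // 3x3 rotation matrix from the rotation vector
}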

solvePnPRansac
public static boolean solvePnPRansac(MatOfPoint3f objectPoints, MatOfPoint2f imagePoints, Mat cameraMatrix, MatOfDouble distCoeffs, Mat rvec, Mat tvec, boolean useExtrinsicGuess, int iterationsCount, float reprojectionError, double confidence, Mat inliers, int flags)
Finds an object pose from 3D-2D point correspondences using the RANSAC scheme. SEE: REF: calib3d_solvePnP Parameters:
objectPoints
 Array of object points in the object coordinate space, Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points. vector<Point3d> can also be passed here.
imagePoints
 Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel, where N is the number of points. vector<Point2d> can also be passed here.
cameraMatrix
 Input camera intrinsic matrix \(\cameramatrix{A}\).
distCoeffs
 Input vector of distortion coefficients \(\distcoeffs\). If the vector is NULL/empty, zero distortion coefficients are assumed.
rvec
 Output rotation vector (see REF: Rodrigues) that, together with tvec, brings points from the model coordinate system to the camera coordinate system.
tvec
 Output translation vector.
useExtrinsicGuess
 Parameter used for REF: SOLVEPNP_ITERATIVE. If true (1), the function uses the provided rvec and tvec values as initial approximations of the rotation and translation vectors, respectively, and further optimizes them.
iterationsCount
 Number of iterations.
reprojectionError
 Inlier threshold value used by the RANSAC procedure. The parameter value is the maximum allowed distance between the observed and computed point projections to consider it an inlier.
confidence
 The probability that the algorithm produces a useful result.
inliers
 Output vector that contains indices of inliers in objectPoints and imagePoints.
flags
 Method for solving a PnP problem (see REF: solvePnP). The function estimates an object pose given a set of object points, their corresponding image projections, as well as the camera intrinsic matrix and the distortion coefficients. This function finds such a pose that minimizes reprojection error, that is, the sum of squared distances between the observed projections imagePoints and the projected (using REF: projectPoints) objectPoints. The use of RANSAC makes the function resistant to outliers. Note: An example of how to use solvePnPRansac for object detection can be found at opencv_source_code/samples/cpp/tutorial_code/calib3d/real_time_pose_estimation/

The default method used to estimate the camera pose for the Minimal Sample Sets step
is #SOLVEPNP_EPNP. Exceptions are:
 if you choose #SOLVEPNP_P3P or #SOLVEPNP_AP3P, these methods will be used.
 if the number of input points is equal to 4, #SOLVEPNP_P3P is used.
 The method used to estimate the camera pose using all the inliers is defined by the flags parameter unless it is equal to #SOLVEPNP_P3P or #SOLVEPNP_AP3P. In this case, the method #SOLVEPNP_EPNP will be used instead.
 Returns:
 automatically generated
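A hedged Java sketch of this overload follows; the correspondences and camera intrinsics are assumed to exist already (for instance, built as in the solvePnP example above), and the iteration count, threshold, and confidence are illustrative values.

// RANSAC pose estimation; tolerant to outlier correspondences.
Mat rvec = new Mat();
Mat tvec = new Mat();
Mat inliers = new Mat(); // receives row indices of the inlier correspondences
boolean found = Calib3d.solvePnPRansac(objectPoints, imagePoints, cameraMatrix, distCoeffs,
        rvec, tvec,
        false,            // useExtrinsicGuess
        300,              // iterationsCount (illustrative)
        8.0f,             // reprojectionError threshold in pixels (illustrative)
        0.99,             // confidence (illustrative)
        inliers,
        Calib3d.SOLVEPNP_EPNP); // method used on the full inlier set
System.out.println("inliers: " + inliers.rows() + " of " + objectPoints.rows());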

solvePnPRansac
public static boolean solvePnPRansac(MatOfPoint3f objectPoints, MatOfPoint2f imagePoints, Mat cameraMatrix, MatOfDouble distCoeffs, Mat rvec, Mat tvec, boolean useExtrinsicGuess, int iterationsCount, float reprojectionError, double confidence, Mat inliers)
Finds an object pose from 3D-2D point correspondences using the RANSAC scheme. SEE: REF: calib3d_solvePnP Parameters:
objectPoints
 Array of object points in the object coordinate space, Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points. vector<Point3d> can also be passed here.
imagePoints
 Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel, where N is the number of points. vector<Point2d> can also be passed here.
cameraMatrix
 Input camera intrinsic matrix \(\cameramatrix{A}\).
distCoeffs
 Input vector of distortion coefficients \(\distcoeffs\). If the vector is NULL/empty, zero distortion coefficients are assumed.
rvec
 Output rotation vector (see REF: Rodrigues) that, together with tvec, brings points from the model coordinate system to the camera coordinate system.
tvec
 Output translation vector.
useExtrinsicGuess
 Parameter used for REF: SOLVEPNP_ITERATIVE. If true (1), the function uses the provided rvec and tvec values as initial approximations of the rotation and translation vectors, respectively, and further optimizes them.
iterationsCount
 Number of iterations.
reprojectionError
 Inlier threshold value used by the RANSAC procedure. The parameter value is the maximum allowed distance between the observed and computed point projections to consider it an inlier.
confidence
 The probability that the algorithm produces a useful result.
inliers
 Output vector that contains indices of inliers in objectPoints and imagePoints. The function estimates an object pose given a set of object points, their corresponding image projections, as well as the camera intrinsic matrix and the distortion coefficients. This function finds such a pose that minimizes reprojection error, that is, the sum of squared distances between the observed projections imagePoints and the projected (using REF: projectPoints) objectPoints. The use of RANSAC makes the function resistant to outliers. Note: An example of how to use solvePnPRansac for object detection can be found at opencv_source_code/samples/cpp/tutorial_code/calib3d/real_time_pose_estimation/

The default method used to estimate the camera pose for the Minimal Sample Sets step
is #SOLVEPNP_EPNP. Exceptions are:
 if you choose #SOLVEPNP_P3P or #SOLVEPNP_AP3P, these methods will be used.
 if the number of input points is equal to 4, #SOLVEPNP_P3P is used.
 The method used to estimate the camera pose using all the inliers is defined by the flags parameter unless it is equal to #SOLVEPNP_P3P or #SOLVEPNP_AP3P. In this case, the method #SOLVEPNP_EPNP will be used instead.
 Returns:
 automatically generated

solvePnPRansac
public static boolean solvePnPRansac(MatOfPoint3f objectPoints, MatOfPoint2f imagePoints, Mat cameraMatrix, MatOfDouble distCoeffs, Mat rvec, Mat tvec, boolean useExtrinsicGuess, int iterationsCount, float reprojectionError, double confidence)
Finds an object pose from 3D-2D point correspondences using the RANSAC scheme. SEE: REF: calib3d_solvePnP Parameters:
objectPoints
 Array of object points in the object coordinate space, Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points. vector<Point3d> can also be passed here.
imagePoints
 Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel, where N is the number of points. vector<Point2d> can also be passed here.
cameraMatrix
 Input camera intrinsic matrix \(\cameramatrix{A}\).
distCoeffs
 Input vector of distortion coefficients \(\distcoeffs\). If the vector is NULL/empty, zero distortion coefficients are assumed.
rvec
 Output rotation vector (see REF: Rodrigues) that, together with tvec, brings points from the model coordinate system to the camera coordinate system.
tvec
 Output translation vector.
useExtrinsicGuess
 Parameter used for REF: SOLVEPNP_ITERATIVE. If true (1), the function uses the provided rvec and tvec values as initial approximations of the rotation and translation vectors, respectively, and further optimizes them.
iterationsCount
 Number of iterations.
reprojectionError
 Inlier threshold value used by the RANSAC procedure. The parameter value is the maximum allowed distance between the observed and computed point projections to consider it an inlier.
confidence
 The probability that the algorithm produces a useful result. The function estimates an object pose given a set of object points, their corresponding image projections, as well as the camera intrinsic matrix and the distortion coefficients. This function finds such a pose that minimizes reprojection error, that is, the sum of squared distances between the observed projections imagePoints and the projected (using REF: projectPoints) objectPoints. The use of RANSAC makes the function resistant to outliers. Note: An example of how to use solvePnPRansac for object detection can be found at opencv_source_code/samples/cpp/tutorial_code/calib3d/real_time_pose_estimation/

The default method used to estimate the camera pose for the Minimal Sample Sets step
is #SOLVEPNP_EPNP. Exceptions are:
 if you choose #SOLVEPNP_P3P or #SOLVEPNP_AP3P, these methods will be used.
 if the number of input points is equal to 4, #SOLVEPNP_P3P is used.
 The method used to estimate the camera pose using all the inliers is defined by the flags parameter unless it is equal to #SOLVEPNP_P3P or #SOLVEPNP_AP3P. In this case, the method #SOLVEPNP_EPNP will be used instead.
 Returns:
 automatically generated

solvePnPRansac
public static boolean solvePnPRansac(MatOfPoint3f objectPoints, MatOfPoint2f imagePoints, Mat cameraMatrix, MatOfDouble distCoeffs, Mat rvec, Mat tvec, boolean useExtrinsicGuess, int iterationsCount, float reprojectionError)
Finds an object pose from 3D-2D point correspondences using the RANSAC scheme. SEE: REF: calib3d_solvePnP Parameters:
objectPoints
 Array of object points in the object coordinate space, Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points. vector<Point3d> can also be passed here.
imagePoints
 Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel, where N is the number of points. vector<Point2d> can also be passed here.
cameraMatrix
 Input camera intrinsic matrix \(\cameramatrix{A}\).
distCoeffs
 Input vector of distortion coefficients \(\distcoeffs\). If the vector is NULL/empty, zero distortion coefficients are assumed.
rvec
 Output rotation vector (see REF: Rodrigues) that, together with tvec, brings points from the model coordinate system to the camera coordinate system.
tvec
 Output translation vector.
useExtrinsicGuess
 Parameter used for REF: SOLVEPNP_ITERATIVE. If true (1), the function uses the provided rvec and tvec values as initial approximations of the rotation and translation vectors, respectively, and further optimizes them.
iterationsCount
 Number of iterations.
reprojectionError
 Inlier threshold value used by the RANSAC procedure. The parameter value is the maximum allowed distance between the observed and computed point projections to consider it an inlier. The function estimates an object pose given a set of object points, their corresponding image projections, as well as the camera intrinsic matrix and the distortion coefficients. This function finds such a pose that minimizes reprojection error, that is, the sum of squared distances between the observed projections imagePoints and the projected (using REF: projectPoints) objectPoints. The use of RANSAC makes the function resistant to outliers. Note: An example of how to use solvePnPRansac for object detection can be found at opencv_source_code/samples/cpp/tutorial_code/calib3d/real_time_pose_estimation/

The default method used to estimate the camera pose for the Minimal Sample Sets step
is #SOLVEPNP_EPNP. Exceptions are:
 if you choose #SOLVEPNP_P3P or #SOLVEPNP_AP3P, these methods will be used.
 if the number of input points is equal to 4, #SOLVEPNP_P3P is used.
 The method used to estimate the camera pose using all the inliers is defined by the flags parameter unless it is equal to #SOLVEPNP_P3P or #SOLVEPNP_AP3P. In this case, the method #SOLVEPNP_EPNP will be used instead.
 Returns:
 automatically generated

solvePnPRansac
public static boolean solvePnPRansac(MatOfPoint3f objectPoints, MatOfPoint2f imagePoints, Mat cameraMatrix, MatOfDouble distCoeffs, Mat rvec, Mat tvec, boolean useExtrinsicGuess, int iterationsCount)
Finds an object pose from 3D-2D point correspondences using the RANSAC scheme. SEE: REF: calib3d_solvePnP Parameters:
objectPoints
 Array of object points in the object coordinate space, Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points. vector<Point3d> can also be passed here.
imagePoints
 Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel, where N is the number of points. vector<Point2d> can also be passed here.
cameraMatrix
 Input camera intrinsic matrix \(\cameramatrix{A}\).
distCoeffs
 Input vector of distortion coefficients \(\distcoeffs\). If the vector is NULL/empty, zero distortion coefficients are assumed.
rvec
 Output rotation vector (see REF: Rodrigues) that, together with tvec, brings points from the model coordinate system to the camera coordinate system.
tvec
 Output translation vector.
useExtrinsicGuess
 Parameter used for REF: SOLVEPNP_ITERATIVE. If true (1), the function uses the provided rvec and tvec values as initial approximations of the rotation and translation vectors, respectively, and further optimizes them.
iterationsCount
 Number of iterations. The function estimates an object pose given a set of object points, their corresponding image projections, as well as the camera intrinsic matrix and the distortion coefficients. This function finds such a pose that minimizes reprojection error, that is, the sum of squared distances between the observed projections imagePoints and the projected (using REF: projectPoints) objectPoints. The use of RANSAC makes the function resistant to outliers. Note: An example of how to use solvePnPRansac for object detection can be found at opencv_source_code/samples/cpp/tutorial_code/calib3d/real_time_pose_estimation/

The default method used to estimate the camera pose for the Minimal Sample Sets step
is #SOLVEPNP_EPNP. Exceptions are:
 if you choose #SOLVEPNP_P3P or #SOLVEPNP_AP3P, these methods will be used.
 if the number of input points is equal to 4, #SOLVEPNP_P3P is used.
 The method used to estimate the camera pose using all the inliers is defined by the flags parameter unless it is equal to #SOLVEPNP_P3P or #SOLVEPNP_AP3P. In this case, the method #SOLVEPNP_EPNP will be used instead.
 Returns:
 automatically generated

solvePnPRansac
public static boolean solvePnPRansac(MatOfPoint3f objectPoints, MatOfPoint2f imagePoints, Mat cameraMatrix, MatOfDouble distCoeffs, Mat rvec, Mat tvec, boolean useExtrinsicGuess)
Finds an object pose from 3D-2D point correspondences using the RANSAC scheme. SEE: REF: calib3d_solvePnP Parameters:
objectPoints
 Array of object points in the object coordinate space, Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points. vector<Point3d> can also be passed here.
imagePoints
 Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel, where N is the number of points. vector<Point2d> can also be passed here.
cameraMatrix
 Input camera intrinsic matrix \(\cameramatrix{A}\).
distCoeffs
 Input vector of distortion coefficients \(\distcoeffs\). If the vector is NULL/empty, zero distortion coefficients are assumed.
rvec
 Output rotation vector (see REF: Rodrigues) that, together with tvec, brings points from the model coordinate system to the camera coordinate system.
tvec
 Output translation vector.
useExtrinsicGuess
 Parameter used for REF: SOLVEPNP_ITERATIVE. If true (1), the function uses the provided rvec and tvec values as initial approximations of the rotation and translation vectors, respectively, and further optimizes them. The function estimates an object pose given a set of object points, their corresponding image projections, as well as the camera intrinsic matrix and the distortion coefficients. This function finds such a pose that minimizes reprojection error, that is, the sum of squared distances between the observed projections imagePoints and the projected (using REF: projectPoints) objectPoints. The use of RANSAC makes the function resistant to outliers. Note: An example of how to use solvePnPRansac for object detection can be found at opencv_source_code/samples/cpp/tutorial_code/calib3d/real_time_pose_estimation/

The default method used to estimate the camera pose for the Minimal Sample Sets step
is #SOLVEPNP_EPNP. Exceptions are:
 if you choose #SOLVEPNP_P3P or #SOLVEPNP_AP3P, these methods will be used.
 if the number of input points is equal to 4, #SOLVEPNP_P3P is used.
 The method used to estimate the camera pose using all the inliers is defined by the flags parameter unless it is equal to #SOLVEPNP_P3P or #SOLVEPNP_AP3P. In this case, the method #SOLVEPNP_EPNP will be used instead.
 Returns:
 automatically generated

solvePnPRansac
public static boolean solvePnPRansac(MatOfPoint3f objectPoints, MatOfPoint2f imagePoints, Mat cameraMatrix, MatOfDouble distCoeffs, Mat rvec, Mat tvec)
Finds an object pose from 3D-2D point correspondences using the RANSAC scheme. SEE: REF: calib3d_solvePnP Parameters:
objectPoints
 Array of object points in the object coordinate space, Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points. vector<Point3d> can also be passed here.
imagePoints
 Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel, where N is the number of points. vector<Point2d> can also be passed here.
cameraMatrix
 Input camera intrinsic matrix \(\cameramatrix{A}\).
distCoeffs
 Input vector of distortion coefficients \(\distcoeffs\). If the vector is NULL/empty, zero distortion coefficients are assumed.
rvec
 Output rotation vector (see REF: Rodrigues) that, together with tvec, brings points from the model coordinate system to the camera coordinate system.
tvec
 Output translation vector. The function estimates an object pose given a set of object points, their corresponding image projections, as well as the camera intrinsic matrix and the distortion coefficients. This function finds such a pose that minimizes reprojection error, that is, the sum of squared distances between the observed projections imagePoints and the projected (using REF: projectPoints) objectPoints. The use of RANSAC makes the function resistant to outliers. Note: An example of how to use solvePnPRansac for object detection can be found at opencv_source_code/samples/cpp/tutorial_code/calib3d/real_time_pose_estimation/

The default method used to estimate the camera pose for the Minimal Sample Sets step
is #SOLVEPNP_EPNP. Exceptions are:
 if you choose #SOLVEPNP_P3P or #SOLVEPNP_AP3P, these methods will be used.
 if the number of input points is equal to 4, #SOLVEPNP_P3P is used.
 The method used to estimate the camera pose using all the inliers is defined by the flags parameter unless it is equal to #SOLVEPNP_P3P or #SOLVEPNP_AP3P. In this case, the method #SOLVEPNP_EPNP will be used instead.
 Returns:
 automatically generated

solvePnPRansac
public static boolean solvePnPRansac(MatOfPoint3f objectPoints, MatOfPoint2f imagePoints, Mat cameraMatrix, MatOfDouble distCoeffs, Mat rvec, Mat tvec, Mat inliers, UsacParams params)
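This overload is undocumented above, so the following is only a hedged sketch of driving RANSAC through a UsacParams object (org.opencv.calib3d.UsacParams); the get_/set_ accessor names are assumed to follow the bindings' generated naming, and all values are illustrative.

UsacParams params = new UsacParams();      // defaults, then tune a few fields
params.set_threshold(8.0);                 // inlier reprojection threshold (illustrative)
params.set_confidence(0.99);               // illustrative
params.set_maxIterations(5000);            // illustrative
Mat rvec = new Mat(), tvec = new Mat(), inliers = new Mat();
boolean found = Calib3d.solvePnPRansac(objectPoints, imagePoints, cameraMatrix, distCoeffs,
        rvec, tvec, inliers, params);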

solvePnPRansac
public static boolean solvePnPRansac(MatOfPoint3f objectPoints, MatOfPoint2f imagePoints, Mat cameraMatrix, MatOfDouble distCoeffs, Mat rvec, Mat tvec, Mat inliers)

solveP3P
public static int solveP3P(Mat objectPoints, Mat imagePoints, Mat cameraMatrix, Mat distCoeffs, java.util.List<Mat> rvecs, java.util.List<Mat> tvecs, int flags)
Finds an object pose from 3 3D-2D point correspondences. SEE: REF: calib3d_solvePnP Parameters:
objectPoints
 Array of object points in the object coordinate space, 3x3 1-channel or 1x3/3x1 3-channel. vector<Point3f> can also be passed here.
imagePoints
 Array of corresponding image points, 3x2 1-channel or 1x3/3x1 2-channel. vector<Point2f> can also be passed here.
cameraMatrix
 Input camera intrinsic matrix \(\cameramatrix{A}\).
distCoeffs
 Input vector of distortion coefficients \(\distcoeffs\). If the vector is NULL/empty, zero distortion coefficients are assumed.
rvecs
 Output rotation vectors (see REF: Rodrigues) that, together with tvecs, bring points from the model coordinate system to the camera coordinate system. A P3P problem has up to 4 solutions.
tvecs
 Output translation vectors.
flags
 Method for solving a P3P problem:
 REF: SOLVEPNP_P3P Method is based on the paper of X.S. Gao, X.R. Hou, J. Tang, H.F. Chang "Complete Solution Classification for the Perspective-Three-Point Problem" (CITE: gao2003complete).
 REF: SOLVEPNP_AP3P Method is based on the paper of T. Ke and S. Roumeliotis. "An Efficient Algebraic Solution to the Perspective-Three-Point Problem" (CITE: Ke17).
 Returns:
 automatically generated
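A minimal Java sketch of solveP3P follows; the three correspondences are placeholder values, cameraMatrix and distCoeffs are assumed to exist as in the earlier examples, and each returned rvec/tvec pair is one candidate solution.

import java.util.ArrayList;
import java.util.List;

// Exactly 3 correspondences; P3P returns up to 4 candidate poses.
MatOfPoint3f objectPoints = new MatOfPoint3f(
        new Point3(0, 0, 0), new Point3(0.1, 0, 0), new Point3(0, 0.1, 0)); // placeholders
MatOfPoint2f imagePoints = new MatOfPoint2f(
        new Point(320, 240), new Point(400, 238), new Point(322, 160));     // placeholders
List<Mat> rvecs = new ArrayList<>();
List<Mat> tvecs = new ArrayList<>();
int nSolutions = Calib3d.solveP3P(objectPoints, imagePoints, cameraMatrix, distCoeffs,
        rvecs, tvecs, Calib3d.SOLVEPNP_AP3P);
// Candidates are typically disambiguated with a 4th point or by reprojection error.
System.out.println(nSolutions + " candidate pose(s)");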

solvePnPRefineLM
public static void solvePnPRefineLM(Mat objectPoints, Mat imagePoints, Mat cameraMatrix, Mat distCoeffs, Mat rvec, Mat tvec, TermCriteria criteria)
Refine a pose (the translation and the rotation that transform a 3D point expressed in the object coordinate frame to the camera coordinate frame) from 3D-2D point correspondences, starting from an initial solution. SEE: REF: calib3d_solvePnP Parameters:
objectPoints
 Array of object points in the object coordinate space, Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points. vector<Point3d> can also be passed here.
imagePoints
 Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel, where N is the number of points. vector<Point2d> can also be passed here.
cameraMatrix
 Input camera intrinsic matrix \(\cameramatrix{A}\).
distCoeffs
 Input vector of distortion coefficients \(\distcoeffs\). If the vector is NULL/empty, zero distortion coefficients are assumed.
rvec
 Input/Output rotation vector (see REF: Rodrigues) that, together with tvec, brings points from the model coordinate system to the camera coordinate system. Input values are used as an initial solution.
tvec
 Input/Output translation vector. Input values are used as an initial solution.
criteria
 Criteria when to stop the Levenberg-Marquardt iterative algorithm. The function refines the object pose given at least 3 object points, their corresponding image projections, an initial solution for the rotation and translation vectors, as well as the camera intrinsic matrix and the distortion coefficients. The function minimizes the projection error with respect to the rotation and the translation vectors, according to a Levenberg-Marquardt iterative minimization CITE: Madsen04 CITE: Eade13 process.
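A hedged sketch of refining an existing pose with explicit termination criteria; rvec and tvec are assumed to hold an initial estimate (e.g., from the solvePnPRansac example above), and the criteria values are illustrative.

import org.opencv.core.TermCriteria;

// Stop after 20 iterations or when the update falls below 1e-6 (illustrative values).
TermCriteria criteria = new TermCriteria(TermCriteria.COUNT + TermCriteria.EPS, 20, 1e-6);
// rvec/tvec are refined in place, starting from their current values.
Calib3d.solvePnPRefineLM(objectPoints, imagePoints, cameraMatrix, distCoeffs, rvec, tvec, criteria);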

solvePnPRefineLM
public static void solvePnPRefineLM(Mat objectPoints, Mat imagePoints, Mat cameraMatrix, Mat distCoeffs, Mat rvec, Mat tvec)
Refine a pose (the translation and the rotation that transform a 3D point expressed in the object coordinate frame to the camera coordinate frame) from 3D-2D point correspondences, starting from an initial solution. SEE: REF: calib3d_solvePnP Parameters:
objectPoints
 Array of object points in the object coordinate space, Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points. vector<Point3d> can also be passed here.
imagePoints
 Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel, where N is the number of points. vector<Point2d> can also be passed here.
cameraMatrix
 Input camera intrinsic matrix \(\cameramatrix{A}\).
distCoeffs
 Input vector of distortion coefficients \(\distcoeffs\). If the vector is NULL/empty, zero distortion coefficients are assumed.
rvec
 Input/Output rotation vector (see REF: Rodrigues) that, together with tvec, brings points from the model coordinate system to the camera coordinate system. Input values are used as an initial solution.
tvec
 Input/Output translation vector. Input values are used as an initial solution. The function refines the object pose given at least 3 object points, their corresponding image projections, an initial solution for the rotation and translation vectors, as well as the camera intrinsic matrix and the distortion coefficients. The function minimizes the projection error with respect to the rotation and the translation vectors, according to a Levenberg-Marquardt iterative minimization CITE: Madsen04 CITE: Eade13 process.

solvePnPRefineVVS
public static void solvePnPRefineVVS(Mat objectPoints, Mat imagePoints, Mat cameraMatrix, Mat distCoeffs, Mat rvec, Mat tvec, TermCriteria criteria, double VVSlambda)
Refine a pose (the translation and the rotation that transform a 3D point expressed in the object coordinate frame to the camera coordinate frame) from 3D-2D point correspondences, starting from an initial solution. SEE: REF: calib3d_solvePnP Parameters:
objectPoints
 Array of object points in the object coordinate space, Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points. vector<Point3d> can also be passed here.
imagePoints
 Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel, where N is the number of points. vector<Point2d> can also be passed here.
cameraMatrix
 Input camera intrinsic matrix \(\cameramatrix{A}\).
distCoeffs
 Input vector of distortion coefficients \(\distcoeffs\). If the vector is NULL/empty, zero distortion coefficients are assumed.
rvec
 Input/Output rotation vector (see REF: Rodrigues) that, together with tvec, brings points from the model coordinate system to the camera coordinate system. Input values are used as an initial solution.
tvec
 Input/Output translation vector. Input values are used as an initial solution.
criteria
 Criteria when to stop the Levenberg-Marquardt iterative algorithm.
VVSlambda
 Gain for the virtual visual servoing control law, equivalent to the \(\alpha\) gain in the damped Gauss-Newton formulation. The function refines the object pose given at least 3 object points, their corresponding image projections, an initial solution for the rotation and translation vectors, as well as the camera intrinsic matrix and the distortion coefficients. The function minimizes the projection error with respect to the rotation and the translation vectors, using a virtual visual servoing (VVS) CITE: Chaumette06 CITE: Marchand16 scheme.
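A sketch along the same lines for the VVS refinement; inputs are assumed as in the solvePnPRefineLM example above, and the gain and criteria values are illustrative.

// Virtual visual servoing refinement; VVSlambda is the control-law gain (illustrative value).
TermCriteria criteria = new TermCriteria(TermCriteria.COUNT + TermCriteria.EPS, 20, 1e-6);
Calib3d.solvePnPRefineVVS(objectPoints, imagePoints, cameraMatrix, distCoeffs,
        rvec, tvec, criteria, 1.0 /* VVSlambda */);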

solvePnPRefineVVS
public static void solvePnPRefineVVS(Mat objectPoints, Mat imagePoints, Mat cameraMatrix, Mat distCoeffs, Mat rvec, Mat tvec, TermCriteria criteria)
Refine a pose (the translation and the rotation that transform a 3D point expressed in the object coordinate frame to the camera coordinate frame) from 3D-2D point correspondences, starting from an initial solution. SEE: REF: calib3d_solvePnP Parameters:
objectPoints
 Array of object points in the object coordinate space, Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points. vector<Point3d> can also be passed here.
imagePoints
 Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel, where N is the number of points. vector<Point2d> can also be passed here.
cameraMatrix
 Input camera intrinsic matrix \(\cameramatrix{A}\).
distCoeffs
 Input vector of distortion coefficients \(\distcoeffs\). If the vector is NULL/empty, zero distortion coefficients are assumed.
rvec
 Input/Output rotation vector (see REF: Rodrigues) that, together with tvec, brings points from the model coordinate system to the camera coordinate system. Input values are used as an initial solution.
tvec
 Input/Output translation vector. Input values are used as an initial solution.
criteria
 Criteria when to stop the Levenberg-Marquardt iterative algorithm. The function refines the object pose given at least 3 object points, their corresponding image projections, an initial solution for the rotation and translation vectors, as well as the camera intrinsic matrix and the distortion coefficients. The function minimizes the projection error with respect to the rotation and the translation vectors, using a virtual visual servoing (VVS) CITE: Chaumette06 CITE: Marchand16 scheme.

solvePnPRefineVVS
public static void solvePnPRefineVVS(Mat objectPoints, Mat imagePoints, Mat cameraMatrix, Mat distCoeffs, Mat rvec, Mat tvec)
Refine a pose (the translation and the rotation that transform a 3D point expressed in the object coordinate frame to the camera coordinate frame) from 3D-2D point correspondences, starting from an initial solution. SEE: REF: calib3d_solvePnP Parameters:
objectPoints
 Array of object points in the object coordinate space, Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points. vector<Point3d> can also be passed here.
imagePoints
 Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel, where N is the number of points. vector<Point2d> can also be passed here.
cameraMatrix
 Input camera intrinsic matrix \(\cameramatrix{A}\).
distCoeffs
 Input vector of distortion coefficients \(\distcoeffs\). If the vector is NULL/empty, zero distortion coefficients are assumed.
rvec
 Input/Output rotation vector (see REF: Rodrigues) that, together with tvec, brings points from the model coordinate system to the camera coordinate system. Input values are used as an initial solution.
tvec
 Input/Output translation vector. Input values are used as an initial solution. The function refines the object pose given at least 3 object points, their corresponding image projections, an initial solution for the rotation and translation vectors, as well as the camera intrinsic matrix and the distortion coefficients. The function minimizes the projection error with respect to the rotation and the translation vectors, using a virtual visual servoing (VVS) CITE: Chaumette06 CITE: Marchand16 scheme.

solvePnPGeneric
public static int solvePnPGeneric(Mat objectPoints, Mat imagePoints, Mat cameraMatrix, Mat distCoeffs, java.util.List<Mat> rvecs, java.util.List<Mat> tvecs, boolean useExtrinsicGuess, int flags, Mat rvec, Mat tvec, Mat reprojectionError)
Finds an object pose from 3D-2D point correspondences. SEE: REF: calib3d_solvePnP This function returns a list of all the possible solutions (a solution is a <rotation vector, translation vector> couple), depending on the number of input points and the chosen method: P3P methods (REF: SOLVEPNP_P3P, REF: SOLVEPNP_AP3P): 3 or 4 input points. Number of returned solutions can be between 0 and 4 with 3 input points.
 REF: SOLVEPNP_IPPE Input points must be >= 4 and object points must be coplanar. Returns 2 solutions.

REF: SOLVEPNP_IPPE_SQUARE Special case suitable for marker pose estimation.
Number of input points must be 4 and 2 solutions are returned. Object points must be defined in the following order:
 point 0: [-squareLength / 2, squareLength / 2, 0]
 point 1: [ squareLength / 2, squareLength / 2, 0]
 point 2: [ squareLength / 2, -squareLength / 2, 0]
 point 3: [-squareLength / 2, -squareLength / 2, 0]
 for all the other flags, number of input points must be >= 4 and object points can be in any configuration. Only 1 solution is returned.
 Parameters:
objectPoints
 Array of object points in the object coordinate space, Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points. vector<Point3d> can also be passed here.
imagePoints
 Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel, where N is the number of points. vector<Point2d> can also be passed here.
cameraMatrix
 Input camera intrinsic matrix \(\cameramatrix{A}\).
distCoeffs
 Input vector of distortion coefficients \(\distcoeffs\). If the vector is NULL/empty, zero distortion coefficients are assumed.
rvecs
 Vector of output rotation vectors (see REF: Rodrigues) that, together with tvecs, bring points from the model coordinate system to the camera coordinate system.
tvecs
 Vector of output translation vectors.
useExtrinsicGuess
 Parameter used for #SOLVEPNP_ITERATIVE. If true (1), the function uses the provided rvec and tvec values as initial approximations of the rotation and translation vectors, respectively, and further optimizes them.
flags
 Method for solving a PnP problem: see REF: calib3d_solvePnP_flags
rvec
 Rotation vector used to initialize an iterative PnP refinement algorithm, when flag is REF: SOLVEPNP_ITERATIVE and useExtrinsicGuess is set to true.
tvec
 Translation vector used to initialize an iterative PnP refinement algorithm, when flag is REF: SOLVEPNP_ITERATIVE and useExtrinsicGuess is set to true.
reprojectionError
 Optional vector of reprojection error, that is the RMS error (\( \text{RMSE} = \sqrt{\frac{\sum_{i}^{N} \left( \hat{y}_i - y_i \right)^2}{N}} \)) between the input image points and the 3D object points projected with the estimated pose. More information is described in REF: calib3d_solvePnP. Note: An example of how to use solvePnP for planar augmented reality can be found at opencv_source_code/samples/python/plane_ar.py

If you are using Python:
 Numpy array slices won't work as input because solvePnP requires contiguous arrays (enforced by the assertion using cv::Mat::checkVector() around line 55 of modules/calib3d/src/solvepnp.cpp version 2.4.9)
 The P3P algorithm requires image points to be in an array of shape (N,1,2) due to its calling of #undistortPoints (around line 75 of modules/calib3d/src/solvepnp.cpp version 2.4.9) which requires 2-channel information.
 Thus, given some data D = np.array(...) where D.shape = (N,M), in order to use a subset of it as, e.g., imagePoints, one must effectively copy it into a new array: imagePoints = np.ascontiguousarray(D[:,:2]).reshape((N,1,2))
 The methods REF: SOLVEPNP_DLS and REF: SOLVEPNP_UPNP cannot be used as the current implementations are unstable and sometimes give completely wrong results. If you pass one of these two flags, REF: SOLVEPNP_EPNP method will be used instead.
 The minimum number of points is 4 in the general case. In the case of REF: SOLVEPNP_P3P and REF: SOLVEPNP_AP3P methods, it is required to use exactly 4 points (the first 3 points are used to estimate all the solutions of the P3P problem, the last one is used to retain the best solution that minimizes the reprojection error).

With the REF: SOLVEPNP_ITERATIVE method and useExtrinsicGuess=true, the minimum number of points is 3 (3 points are sufficient to compute a pose, but there are up to 4 solutions). The initial solution should be close to the global solution to converge. With REF: SOLVEPNP_IPPE, input points must be >= 4 and object points must be coplanar.

With REF: SOLVEPNP_IPPE_SQUARE this is a special case suitable for marker pose estimation.
Number of input points must be 4. Object points must be defined in the following order:
 point 0: [-squareLength / 2, squareLength / 2, 0]
 point 1: [ squareLength / 2, squareLength / 2, 0]
 point 2: [ squareLength / 2, -squareLength / 2, 0]
 point 3: [-squareLength / 2, -squareLength / 2, 0]
 Returns:
 automatically generated
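A hedged Java sketch of collecting every candidate solution with this overload; the inputs are assumed as in the earlier examples (and must be coplanar for REF: SOLVEPNP_IPPE), and picking the lowest-error solution is one reasonable way to use the output.

List<Mat> rvecs = new ArrayList<>();
List<Mat> tvecs = new ArrayList<>();
Mat reprojErrors = new Mat(); // per-solution RMS reprojection error
int n = Calib3d.solvePnPGeneric(objectPoints, imagePoints, cameraMatrix, distCoeffs,
        rvecs, tvecs,
        false,                      // useExtrinsicGuess
        Calib3d.SOLVEPNP_IPPE,      // planar case: returns 2 solutions
        new Mat(), new Mat(),       // rvec/tvec initial guesses (unused here)
        reprojErrors);
// Keep the solution with the smallest reprojection error.
int best = 0;
for (int i = 1; i < n; i++) {
    if (reprojErrors.get(i, 0)[0] < reprojErrors.get(best, 0)[0]) best = i;
}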

solvePnPGeneric
public static int solvePnPGeneric(Mat objectPoints, Mat imagePoints, Mat cameraMatrix, Mat distCoeffs, java.util.List<Mat> rvecs, java.util.List<Mat> tvecs, boolean useExtrinsicGuess, int flags, Mat rvec, Mat tvec)
Finds an object pose from 3D-2D point correspondences. SEE: REF: calib3d_solvePnP This function returns a list of all the possible solutions (a solution is a <rotation vector, translation vector> couple), depending on the number of input points and the chosen method: P3P methods (REF: SOLVEPNP_P3P, REF: SOLVEPNP_AP3P): 3 or 4 input points. Number of returned solutions can be between 0 and 4 with 3 input points.
 REF: SOLVEPNP_IPPE Input points must be >= 4 and object points must be coplanar. Returns 2 solutions.

REF: SOLVEPNP_IPPE_SQUARE Special case suitable for marker pose estimation.
Number of input points must be 4 and 2 solutions are returned. Object points must be defined in the following order:
 point 0: [-squareLength / 2, squareLength / 2, 0]
 point 1: [ squareLength / 2, squareLength / 2, 0]
 point 2: [ squareLength / 2, -squareLength / 2, 0]
 point 3: [-squareLength / 2, -squareLength / 2, 0]
 for all the other flags, number of input points must be >= 4 and object points can be in any configuration. Only 1 solution is returned.
 Parameters:
objectPoints
 Array of object points in the object coordinate space, Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points. vector<Point3d> can also be passed here.
imagePoints
 Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel, where N is the number of points. vector<Point2d> can also be passed here.
cameraMatrix
 Input camera intrinsic matrix \(\cameramatrix{A}\).
distCoeffs
 Input vector of distortion coefficients \(\distcoeffs\). If the vector is NULL/empty, zero distortion coefficients are assumed.
rvecs
 Vector of output rotation vectors (see REF: Rodrigues) that, together with tvecs, bring points from the model coordinate system to the camera coordinate system.
tvecs
 Vector of output translation vectors.
useExtrinsicGuess
 Parameter used for #SOLVEPNP_ITERATIVE. If true (1), the function uses the provided rvec and tvec values as initial approximations of the rotation and translation vectors, respectively, and further optimizes them.
flags
 Method for solving a PnP problem: see REF: calib3d_solvePnP_flags
rvec
 Rotation vector used to initialize an iterative PnP refinement algorithm, when flag is REF: SOLVEPNP_ITERATIVE and useExtrinsicGuess is set to true.
tvec
 Translation vector used to initialize an iterative PnP refinement algorithm, when flag is REF: SOLVEPNP_ITERATIVE and useExtrinsicGuess is set to true. More information is described in REF: calib3d_solvePnP. Note: An example of how to use solvePnP for planar augmented reality can be found at opencv_source_code/samples/python/plane_ar.py

If you are using Python:
 Numpy array slices won't work as input because solvePnP requires contiguous arrays (enforced by the assertion using cv::Mat::checkVector() around line 55 of modules/calib3d/src/solvepnp.cpp version 2.4.9)
 The P3P algorithm requires image points to be in an array of shape (N,1,2) due to its calling of #undistortPoints (around line 75 of modules/calib3d/src/solvepnp.cpp version 2.4.9) which requires 2-channel information.
 Thus, given some data D = np.array(...) where D.shape = (N,M), in order to use a subset of it as, e.g., imagePoints, one must effectively copy it into a new array: imagePoints = np.ascontiguousarray(D[:,:2]).reshape((N,1,2))
 The methods REF: SOLVEPNP_DLS and REF: SOLVEPNP_UPNP cannot be used as the current implementations are unstable and sometimes give completely wrong results. If you pass one of these two flags, REF: SOLVEPNP_EPNP method will be used instead.
 The minimum number of points is 4 in the general case. In the case of REF: SOLVEPNP_P3P and REF: SOLVEPNP_AP3P methods, it is required to use exactly 4 points (the first 3 points are used to estimate all the solutions of the P3P problem, the last one is used to retain the best solution that minimizes the reprojection error).

With the REF: SOLVEPNP_ITERATIVE method and useExtrinsicGuess=true, the minimum number of points is 3 (3 points are sufficient to compute a pose, but there are up to 4 solutions). The initial solution should be close to the global solution to converge. With REF: SOLVEPNP_IPPE, input points must be >= 4 and object points must be coplanar.

With REF: SOLVEPNP_IPPE_SQUARE this is a special case suitable for marker pose estimation.
Number of input points must be 4. Object points must be defined in the following order:
 point 0: [-squareLength / 2, squareLength / 2, 0]
 point 1: [ squareLength / 2, squareLength / 2, 0]
 point 2: [ squareLength / 2, -squareLength / 2, 0]
 point 3: [-squareLength / 2, -squareLength / 2, 0]
 Returns:
 automatically generated

solvePnPGeneric
public static int solvePnPGeneric(Mat objectPoints, Mat imagePoints, Mat cameraMatrix, Mat distCoeffs, java.util.List<Mat> rvecs, java.util.List<Mat> tvecs, boolean useExtrinsicGuess, int flags, Mat rvec)
Finds an object pose from 3D-2D point correspondences. SEE: REF: calib3d_solvePnP This function returns a list of all the possible solutions (a solution is a <rotation vector, translation vector> couple), depending on the number of input points and the chosen method: P3P methods (REF: SOLVEPNP_P3P, REF: SOLVEPNP_AP3P): 3 or 4 input points. Number of returned solutions can be between 0 and 4 with 3 input points.
 REF: SOLVEPNP_IPPE Input points must be >= 4 and object points must be coplanar. Returns 2 solutions.

REF: SOLVEPNP_IPPE_SQUARE Special case suitable for marker pose estimation.
Number of input points must be 4 and 2 solutions are returned. Object points must be defined in the following order:
 point 0: [-squareLength / 2, squareLength / 2, 0]
 point 1: [ squareLength / 2, squareLength / 2, 0]
 point 2: [ squareLength / 2, -squareLength / 2, 0]
 point 3: [-squareLength / 2, -squareLength / 2, 0]
 for all the other flags, number of input points must be >= 4 and object points can be in any configuration. Only 1 solution is returned.
 Parameters:
objectPoints
 Array of object points in the object coordinate space, Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points. vector<Point3d> can also be passed here.
imagePoints
 Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel, where N is the number of points. vector<Point2d> can also be passed here.
cameraMatrix
 Input camera intrinsic matrix \(\cameramatrix{A}\).
distCoeffs
 Input vector of distortion coefficients \(\distcoeffs\). If the vector is NULL/empty, zero distortion coefficients are assumed.
rvecs
 Vector of output rotation vectors (see REF: Rodrigues) that, together with tvecs, bring points from the model coordinate system to the camera coordinate system.
tvecs
 Vector of output translation vectors.
useExtrinsicGuess
 Parameter used for #SOLVEPNP_ITERATIVE. If true (1), the function uses the provided rvec and tvec values as initial approximations of the rotation and translation vectors, respectively, and further optimizes them.
flags
 Method for solving a PnP problem: see REF: calib3d_solvePnP_flags
rvec
 Rotation vector used to initialize an iterative PnP refinement algorithm, when flag is REF: SOLVEPNP_ITERATIVE and useExtrinsicGuess is set to true. More information is described in REF: calib3d_solvePnP. Note: An example of how to use solvePnP for planar augmented reality can be found at opencv_source_code/samples/python/plane_ar.py

If you are using Python:
 Numpy array slices won't work as input because solvePnP requires contiguous arrays (enforced by the assertion using cv::Mat::checkVector() around line 55 of modules/calib3d/src/solvepnp.cpp version 2.4.9)
 The P3P algorithm requires image points to be in an array of shape (N,1,2) due to its calling of #undistortPoints (around line 75 of modules/calib3d/src/solvepnp.cpp version 2.4.9) which requires 2-channel information.
 Thus, given some data D = np.array(...) where D.shape = (N,M), in order to use a subset of it as, e.g., imagePoints, one must effectively copy it into a new array: imagePoints = np.ascontiguousarray(D[:,:2]).reshape((N,1,2))
 The methods REF: SOLVEPNP_DLS and REF: SOLVEPNP_UPNP cannot be used as the current implementations are unstable and sometimes give completely wrong results. If you pass one of these two flags, REF: SOLVEPNP_EPNP method will be used instead.
 The minimum number of points is 4 in the general case. In the case of REF: SOLVEPNP_P3P and REF: SOLVEPNP_AP3P methods, it is required to use exactly 4 points (the first 3 points are used to estimate all the solutions of the P3P problem, the last one is used to retain the best solution that minimizes the reprojection error).

With the REF: SOLVEPNP_ITERATIVE method and useExtrinsicGuess=true, the minimum number of points is 3 (3 points are sufficient to compute a pose, but there are up to 4 solutions). The initial solution should be close to the global solution to converge. With REF: SOLVEPNP_IPPE, input points must be >= 4 and object points must be coplanar.

With REF: SOLVEPNP_IPPE_SQUARE this is a special case suitable for marker pose estimation.
Number of input points must be 4. Object points must be defined in the following order:
 point 0: [-squareLength / 2, squareLength / 2, 0]
 point 1: [ squareLength / 2, squareLength / 2, 0]
 point 2: [ squareLength / 2, -squareLength / 2, 0]
 point 3: [-squareLength / 2, -squareLength / 2, 0]
 Returns:
 automatically generated

solvePnPGeneric
public static int solvePnPGeneric(Mat objectPoints, Mat imagePoints, Mat cameraMatrix, Mat distCoeffs, java.util.List<Mat> rvecs, java.util.List<Mat> tvecs, boolean useExtrinsicGuess, int flags)
Finds an object pose from 3D-2D point correspondences. SEE: REF: calib3d_solvePnP This function returns a list of all the possible solutions (a solution is a <rotation vector, translation vector> couple), depending on the number of input points and the chosen method: P3P methods (REF: SOLVEPNP_P3P, REF: SOLVEPNP_AP3P): 3 or 4 input points. Number of returned solutions can be between 0 and 4 with 3 input points.
 REF: SOLVEPNP_IPPE Input points must be >= 4 and object points must be coplanar. Returns 2 solutions.

REF: SOLVEPNP_IPPE_SQUARE Special case suitable for marker pose estimation.
Number of input points must be 4 and 2 solutions are returned. Object points must be defined in the following order:
 point 0: [-squareLength / 2, squareLength / 2, 0]
 point 1: [ squareLength / 2, squareLength / 2, 0]
 point 2: [ squareLength / 2, -squareLength / 2, 0]
 point 3: [-squareLength / 2, -squareLength / 2, 0]
 for all the other flags, number of input points must be >= 4 and object points can be in any configuration. Only 1 solution is returned.
 Parameters:
objectPoints
 Array of object points in the object coordinate space, Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points. vector<Point3d> can also be passed here.
imagePoints
 Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel, where N is the number of points. vector<Point2d> can also be passed here.
cameraMatrix
 Input camera intrinsic matrix \(\cameramatrix{A}\).
distCoeffs
 Input vector of distortion coefficients \(\distcoeffs\). If the vector is NULL/empty, zero distortion coefficients are assumed.
rvecs
 Vector of output rotation vectors (see REF: Rodrigues) that, together with tvecs, bring points from the model coordinate system to the camera coordinate system.
tvecs
 Vector of output translation vectors.
useExtrinsicGuess
 Parameter used for #SOLVEPNP_ITERATIVE. If true (1), the function uses the provided rvec and tvec values as initial approximations of the rotation and translation vectors, respectively, and further optimizes them.
flags
 Method for solving a PnP problem: see REF: calib3d_solvePnP_flags. More information is described in REF: calib3d_solvePnP. Note: An example of how to use solvePnP for planar augmented reality can be found at opencv_source_code/samples/python/plane_ar.py

If you are using Python:
 Numpy array slices won't work as input because solvePnP requires contiguous arrays (enforced by the assertion using cv::Mat::checkVector() around line 55 of modules/calib3d/src/solvepnp.cpp version 2.4.9)
 The P3P algorithm requires image points to be in an array of shape (N,1,2) due to its calling of #undistortPoints (around line 75 of modules/calib3d/src/solvepnp.cpp version 2.4.9) which requires 2-channel information.
 Thus, given some data D = np.array(...) where D.shape = (N,M), in order to use a subset of it as, e.g., imagePoints, one must effectively copy it into a new array: imagePoints = np.ascontiguousarray(D[:,:2]).reshape((N,1,2))
 The methods REF: SOLVEPNP_DLS and REF: SOLVEPNP_UPNP cannot be used as the current implementations are unstable and sometimes give completely wrong results. If you pass one of these two flags, REF: SOLVEPNP_EPNP method will be used instead.
 The minimum number of points is 4 in the general case. In the case of REF: SOLVEPNP_P3P and REF: SOLVEPNP_AP3P methods, it is required to use exactly 4 points (the first 3 points are used to estimate all the solutions of the P3P problem, the last one is used to retain the best solution that minimizes the reprojection error).

With REF: SOLVEPNP_ITERATIVE method and
useExtrinsicGuess=true
, the minimum number of points is 3 (3 points are sufficient to compute a pose but there are up to 4 solutions). The initial solution should be close to the global solution to converge.  With REF: SOLVEPNP_IPPE input points must be >= 4 and object points must be coplanar.

With REF: SOLVEPNP_IPPE_SQUARE this is a special case suitable for marker pose estimation.
Number of input points must be 4. Object points must be defined in the following order:
 point 0: [-squareLength / 2, squareLength / 2, 0]
 point 1: [ squareLength / 2, squareLength / 2, 0]
 point 2: [ squareLength / 2, -squareLength / 2, 0]
 point 3: [-squareLength / 2, -squareLength / 2, 0]
 Returns:
 automatically generated
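As an illustration, here is a minimal Java sketch of estimating a square marker's pose with REF: SOLVEPNP_IPPE_SQUARE. The marker size, intrinsics, and pixel coordinates are made-up placeholders; org.opencv.core and java.util imports and a loaded native library are assumed.

 double s = 0.05; // assumed squareLength in meters
 // Object points in exactly the order required by SOLVEPNP_IPPE_SQUARE
 MatOfPoint3f objectPoints = new MatOfPoint3f(
         new Point3(-s / 2,  s / 2, 0),
         new Point3( s / 2,  s / 2, 0),
         new Point3( s / 2, -s / 2, 0),
         new Point3(-s / 2, -s / 2, 0));
 // Hypothetical detected marker corners, in the same order
 MatOfPoint2f imagePoints = new MatOfPoint2f(
         new Point(310, 220), new Point(410, 225),
         new Point(405, 320), new Point(305, 315));
 // Placeholder intrinsics; use values from a real calibration
 Mat cameraMatrix = Mat.eye(3, 3, CvType.CV_64F);
 cameraMatrix.put(0, 0, 800); cameraMatrix.put(1, 1, 800);
 cameraMatrix.put(0, 2, 320); cameraMatrix.put(1, 2, 240);
 Mat distCoeffs = new Mat(); // empty: zero distortion assumed
 List<Mat> rvecs = new ArrayList<>();
 List<Mat> tvecs = new ArrayList<>();
 int n = Calib3d.solvePnPGeneric(objectPoints, imagePoints, cameraMatrix, distCoeffs,
         rvecs, tvecs, false, Calib3d.SOLVEPNP_IPPE_SQUARE);
 // n should be 2 here; pick the candidate pose with the smaller reprojection error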

solvePnPGeneric
public static int solvePnPGeneric(Mat objectPoints, Mat imagePoints, Mat cameraMatrix, Mat distCoeffs, java.util.List<Mat> rvecs, java.util.List<Mat> tvecs, boolean useExtrinsicGuess)
Finds an object pose from 3D-2D point correspondences. SEE: REF: calib3d_solvePnP This function returns a list of all the possible solutions (a solution is a <rotation vector, translation vector> couple), depending on the number of input points and the chosen method: P3P methods (REF: SOLVEPNP_P3P, REF: SOLVEPNP_AP3P): 3 or 4 input points. Number of returned solutions can be between 0 and 4 with 3 input points.
 REF: SOLVEPNP_IPPE Input points must be >= 4 and object points must be coplanar. Returns 2 solutions.

REF: SOLVEPNP_IPPE_SQUARE Special case suitable for marker pose estimation.
Number of input points must be 4 and 2 solutions are returned. Object points must be defined in the following order:
 point 0: [-squareLength / 2, squareLength / 2, 0]
 point 1: [ squareLength / 2, squareLength / 2, 0]
 point 2: [ squareLength / 2, -squareLength / 2, 0]
 point 3: [-squareLength / 2, -squareLength / 2, 0]
 for all the other flags, number of input points must be >= 4 and object points can be in any configuration. Only 1 solution is returned.
 Parameters:
objectPoints
 Array of object points in the object coordinate space, Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points. vector<Point3d> can be also passed here.imagePoints
 Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel, where N is the number of points. vector<Point2d> can be also passed here.cameraMatrix
 Input camera intrinsic matrix \(\cameramatrix{A}\) .distCoeffs
 Input vector of distortion coefficients \(\distcoeffs\). If the vector is NULL/empty, the zero distortion coefficients are assumed.rvecs
 Vector of output rotation vectors (see REF: Rodrigues ) that, together with tvecs, brings points from the model coordinate system to the camera coordinate system.tvecs
 Vector of output translation vectors.useExtrinsicGuess
 Parameter used for #SOLVEPNP_ITERATIVE. If true (1), the function uses the provided rvec and tvec values as initial approximations of the rotation and translation vectors, respectively, and further optimizes them. The reprojection error, \( \text{RMSE} = \sqrt{\frac{\sum_{i}^{N} \left ( \hat{y_i} - y_i \right )^2}{N}} \), is measured between the input image points and the 3D object points projected with the estimated pose. More information is described in REF: calib3d_solvePnP Note: An example of how to use solvePnP for planar augmented reality can be found at opencv_source_code/samples/python/plane_ar.py

If you are using Python:
 Numpy array slices won't work as input because solvePnP requires contiguous arrays (enforced by the assertion using cv::Mat::checkVector() around line 55 of modules/calib3d/src/solvepnp.cpp version 2.4.9)
 The P3P algorithm requires image points to be in an array of shape (N,1,2) due to its calling of #undistortPoints (around line 75 of modules/calib3d/src/solvepnp.cpp version 2.4.9) which requires 2-channel information.
 Thus, given some data D = np.array(...) where D.shape = (N,M), in order to use a subset of it as, e.g., imagePoints, one must effectively copy it into a new array: imagePoints = np.ascontiguousarray(D[:,:2]).reshape((N,1,2))
 The methods REF: SOLVEPNP_DLS and REF: SOLVEPNP_UPNP cannot be used as the current implementations are unstable and sometimes give completely wrong results. If you pass one of these two flags, REF: SOLVEPNP_EPNP method will be used instead.
 The minimum number of points is 4 in the general case. In the case of REF: SOLVEPNP_P3P and REF: SOLVEPNP_AP3P methods, it is required to use exactly 4 points (the first 3 points are used to estimate all the solutions of the P3P problem, the last one is used to retain the best solution that minimizes the reprojection error).

With REF: SOLVEPNP_ITERATIVE method and
useExtrinsicGuess=true
, the minimum number of points is 3 (3 points are sufficient to compute a pose but there are up to 4 solutions). The initial solution should be close to the global solution to converge.  With REF: SOLVEPNP_IPPE input points must be >= 4 and object points must be coplanar.

With REF: SOLVEPNP_IPPE_SQUARE this is a special case suitable for marker pose estimation.
Number of input points must be 4. Object points must be defined in the following order:
 point 0: [-squareLength / 2, squareLength / 2, 0]
 point 1: [ squareLength / 2, squareLength / 2, 0]
 point 2: [ squareLength / 2, -squareLength / 2, 0]
 point 3: [-squareLength / 2, -squareLength / 2, 0]
 Returns:
 automatically generated

solvePnPGeneric
public static int solvePnPGeneric(Mat objectPoints, Mat imagePoints, Mat cameraMatrix, Mat distCoeffs, java.util.List<Mat> rvecs, java.util.List<Mat> tvecs)
Finds an object pose from 3D-2D point correspondences. SEE: REF: calib3d_solvePnP This function returns a list of all the possible solutions (a solution is a <rotation vector, translation vector> couple), depending on the number of input points and the chosen method: P3P methods (REF: SOLVEPNP_P3P, REF: SOLVEPNP_AP3P): 3 or 4 input points. Number of returned solutions can be between 0 and 4 with 3 input points.
 REF: SOLVEPNP_IPPE Input points must be >= 4 and object points must be coplanar. Returns 2 solutions.

REF: SOLVEPNP_IPPE_SQUARE Special case suitable for marker pose estimation.
Number of input points must be 4 and 2 solutions are returned. Object points must be defined in the following order:
 point 0: [-squareLength / 2, squareLength / 2, 0]
 point 1: [ squareLength / 2, squareLength / 2, 0]
 point 2: [ squareLength / 2, -squareLength / 2, 0]
 point 3: [-squareLength / 2, -squareLength / 2, 0]
 for all the other flags, number of input points must be >= 4 and object points can be in any configuration. Only 1 solution is returned.
 Parameters:
objectPoints
 Array of object points in the object coordinate space, Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points. vector<Point3d> can be also passed here.imagePoints
 Array of corresponding image points, Nx2 1-channel or 1xN/Nx1 2-channel, where N is the number of points. vector<Point2d> can be also passed here.cameraMatrix
 Input camera intrinsic matrix \(\cameramatrix{A}\) .distCoeffs
 Input vector of distortion coefficients \(\distcoeffs\). If the vector is NULL/empty, the zero distortion coefficients are assumed.rvecs
 Vector of output rotation vectors (see REF: Rodrigues ) that, together with tvecs, brings points from the model coordinate system to the camera coordinate system.tvecs
 Vector of output translation vectors. The reprojection error, \( \text{RMSE} = \sqrt{\frac{\sum_{i}^{N} \left ( \hat{y_i} - y_i \right )^2}{N}} \), is measured between the input image points and the 3D object points projected with the estimated pose. More information is described in REF: calib3d_solvePnP Note: An example of how to use solvePnP for planar augmented reality can be found at opencv_source_code/samples/python/plane_ar.py

If you are using Python:
 Numpy array slices won't work as input because solvePnP requires contiguous arrays (enforced by the assertion using cv::Mat::checkVector() around line 55 of modules/calib3d/src/solvepnp.cpp version 2.4.9)
 The P3P algorithm requires image points to be in an array of shape (N,1,2) due to its calling of #undistortPoints (around line 75 of modules/calib3d/src/solvepnp.cpp version 2.4.9) which requires 2-channel information.
 Thus, given some data D = np.array(...) where D.shape = (N,M), in order to use a subset of it as, e.g., imagePoints, one must effectively copy it into a new array: imagePoints = np.ascontiguousarray(D[:,:2]).reshape((N,1,2))
 The methods REF: SOLVEPNP_DLS and REF: SOLVEPNP_UPNP cannot be used as the current implementations are unstable and sometimes give completely wrong results. If you pass one of these two flags, REF: SOLVEPNP_EPNP method will be used instead.
 The minimum number of points is 4 in the general case. In the case of REF: SOLVEPNP_P3P and REF: SOLVEPNP_AP3P methods, it is required to use exactly 4 points (the first 3 points are used to estimate all the solutions of the P3P problem, the last one is used to retain the best solution that minimizes the reprojection error).

With REF: SOLVEPNP_ITERATIVE method and
useExtrinsicGuess=true
, the minimum number of points is 3 (3 points are sufficient to compute a pose but there are up to 4 solutions). The initial solution should be close to the global solution to converge.  With REF: SOLVEPNP_IPPE input points must be >= 4 and object points must be coplanar.

With REF: SOLVEPNP_IPPE_SQUARE this is a special case suitable for marker pose estimation.
Number of input points must be 4. Object points must be defined in the following order:
 point 0: [-squareLength / 2, squareLength / 2, 0]
 point 1: [ squareLength / 2, squareLength / 2, 0]
 point 2: [ squareLength / 2, -squareLength / 2, 0]
 point 3: [-squareLength / 2, -squareLength / 2, 0]
 Returns:
 automatically generated

initCameraMatrix2D
public static Mat initCameraMatrix2D(java.util.List<MatOfPoint3f> objectPoints, java.util.List<MatOfPoint2f> imagePoints, Size imageSize, double aspectRatio)
Finds an initial camera intrinsic matrix from 3D-2D point correspondences. Parameters:
objectPoints
 Vector of vectors of the calibration pattern points in the calibration pattern coordinate space. In the old interface all the per-view vectors are concatenated. See #calibrateCamera for details.imagePoints
 Vector of vectors of the projections of the calibration pattern points. In the old interface all the per-view vectors are concatenated.imageSize
 Image size in pixels used to initialize the principal point.aspectRatio
 If it is zero or negative, both \(f_x\) and \(f_y\) are estimated independently. Otherwise, \(f_x = f_y \cdot \texttt{aspectRatio}\) . The function estimates and returns an initial camera intrinsic matrix for the camera calibration process. Currently, the function only supports planar calibration patterns, which are patterns where each object point has z-coordinate = 0. Returns:
 automatically generated
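For orientation, a brief Java sketch of seeding a calibration with this function; the per-view lists and the 640x480 image size are assumed to come from a detection step such as #findChessboardCorners.

 // objectPoints: List<MatOfPoint3f>, imagePoints: List<MatOfPoint2f>, one entry per view
 Mat K = Calib3d.initCameraMatrix2D(objectPoints, imagePoints,
         new Size(640, 480), 0); // aspectRatio 0: estimate fx and fy independently
 System.out.println("Initial camera matrix:\n" + K.dump());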

initCameraMatrix2D
public static Mat initCameraMatrix2D(java.util.List<MatOfPoint3f> objectPoints, java.util.List<MatOfPoint2f> imagePoints, Size imageSize)
Finds an initial camera intrinsic matrix from 3D-2D point correspondences. Parameters:
objectPoints
 Vector of vectors of the calibration pattern points in the calibration pattern coordinate space. In the old interface all the per-view vectors are concatenated. See #calibrateCamera for details.imagePoints
 Vector of vectors of the projections of the calibration pattern points. In the old interface all the per-view vectors are concatenated.imageSize
 Image size in pixels used to initialize the principal point. The function estimates and returns an initial camera intrinsic matrix for the camera calibration process. Currently, the function only supports planar calibration patterns, which are patterns where each object point has z-coordinate = 0. Returns:
 automatically generated

findChessboardCorners
public static boolean findChessboardCorners(Mat image, Size patternSize, MatOfPoint2f corners, int flags)
Finds the positions of internal corners of the chessboard. Parameters:
image
 Source chessboard view. It must be an 8-bit grayscale or color image.patternSize
 Number of inner corners per a chessboard row and column ( patternSize = cv::Size(points_per_row,points_per_column) = cv::Size(columns,rows) ).corners
 Output array of detected corners.flags
 Various operation flags that can be zero or a combination of the following values: REF: CALIB_CB_ADAPTIVE_THRESH Use adaptive thresholding to convert the image to black and white, rather than a fixed threshold level (computed from the average image brightness).
 REF: CALIB_CB_NORMALIZE_IMAGE Normalize the image gamma with #equalizeHist before applying fixed or adaptive thresholding.
 REF: CALIB_CB_FILTER_QUADS Use additional criteria (like contour area, perimeter, square-like shape) to filter out false quads extracted at the contour retrieval stage.
 REF: CALIB_CB_FAST_CHECK Run a fast check on the image that looks for chessboard corners, and shortcut the call if none is found. This can drastically speed up the call in the degenerate condition when no chessboard is observed.
Size patternsize(8,6); //interior number of corners
Mat gray = ....; //source image
vector<Point2f> corners; //this will be filled by the detected corners
//CALIB_CB_FAST_CHECK saves a lot of time on images
//that do not contain any chessboard corners
bool patternfound = findChessboardCorners(gray, patternsize, corners,
        CALIB_CB_ADAPTIVE_THRESH + CALIB_CB_NORMALIZE_IMAGE + CALIB_CB_FAST_CHECK);
if(patternfound)
    cornerSubPix(gray, corners, Size(11, 11), Size(-1, -1),
            TermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 30, 0.1));
drawChessboardCorners(img, patternsize, Mat(corners), patternfound);
Note: The function requires white space (like a square-thick border, the wider the better) around the board to make the detection more robust in various environments. Otherwise, if there is no border and the background is dark, the outer black squares cannot be segmented properly and so the square grouping and ordering algorithm fails. Use gen_pattern.py (REF: tutorial_camera_calibration_pattern) to create a checkerboard. Returns:
 automatically generated
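The same flow as the C++ snippet above, sketched in Java; the input file name is hypothetical, and the Imgcodecs and Imgproc classes from org.opencv are assumed to be imported.

 Mat gray = Imgcodecs.imread("board.png", Imgcodecs.IMREAD_GRAYSCALE); // hypothetical input
 Size patternsize = new Size(8, 6); // interior number of corners
 MatOfPoint2f corners = new MatOfPoint2f(); // filled by the detected corners
 boolean patternfound = Calib3d.findChessboardCorners(gray, patternsize, corners,
         Calib3d.CALIB_CB_ADAPTIVE_THRESH + Calib3d.CALIB_CB_NORMALIZE_IMAGE
         + Calib3d.CALIB_CB_FAST_CHECK);
 if (patternfound)
     Imgproc.cornerSubPix(gray, corners, new Size(11, 11), new Size(-1, -1),
             new TermCriteria(TermCriteria.EPS + TermCriteria.MAX_ITER, 30, 0.1));
 Mat color = new Mat();
 Imgproc.cvtColor(gray, color, Imgproc.COLOR_GRAY2BGR); // drawing needs an 8-bit color image
 Calib3d.drawChessboardCorners(color, patternsize, corners, patternfound);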

findChessboardCorners
public static boolean findChessboardCorners(Mat image, Size patternSize, MatOfPoint2f corners)
Finds the positions of internal corners of the chessboard. Parameters:
image
 Source chessboard view. It must be an 8-bit grayscale or color image.patternSize
 Number of inner corners per a chessboard row and column ( patternSize = cv::Size(points_per_row,points_per_column) = cv::Size(columns,rows) ).corners
 Output array of detected corners. When no flags are given, the default combination CALIB_CB_ADAPTIVE_THRESH + CALIB_CB_NORMALIZE_IMAGE is used: REF: CALIB_CB_ADAPTIVE_THRESH Use adaptive thresholding to convert the image to black and white, rather than a fixed threshold level (computed from the average image brightness).
 REF: CALIB_CB_NORMALIZE_IMAGE Normalize the image gamma with #equalizeHist before applying fixed or adaptive thresholding.
 REF: CALIB_CB_FILTER_QUADS Use additional criteria (like contour area, perimeter, square-like shape) to filter out false quads extracted at the contour retrieval stage.
 REF: CALIB_CB_FAST_CHECK Run a fast check on the image that looks for chessboard corners, and shortcut the call if none is found. This can drastically speed up the call in the degenerate condition when no chessboard is observed.
Size patternsize(8,6); //interior number of corners
Mat gray = ....; //source image
vector<Point2f> corners; //this will be filled by the detected corners
//CALIB_CB_FAST_CHECK saves a lot of time on images
//that do not contain any chessboard corners
bool patternfound = findChessboardCorners(gray, patternsize, corners,
        CALIB_CB_ADAPTIVE_THRESH + CALIB_CB_NORMALIZE_IMAGE + CALIB_CB_FAST_CHECK);
if(patternfound)
    cornerSubPix(gray, corners, Size(11, 11), Size(-1, -1),
            TermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 30, 0.1));
drawChessboardCorners(img, patternsize, Mat(corners), patternfound);
Note: The function requires white space (like a square-thick border, the wider the better) around the board to make the detection more robust in various environments. Otherwise, if there is no border and the background is dark, the outer black squares cannot be segmented properly and so the square grouping and ordering algorithm fails. Use gen_pattern.py (REF: tutorial_camera_calibration_pattern) to create a checkerboard. Returns:
 automatically generated

findChessboardCornersSBWithMeta
public static boolean findChessboardCornersSBWithMeta(Mat image, Size patternSize, Mat corners, int flags, Mat meta)
Finds the positions of internal corners of the chessboard using a sector-based approach. Parameters:
image
 Source chessboard view. It must be an 8bit grayscale or color image.patternSize
 Number of inner corners per a chessboard row and column ( patternSize = cv::Size(points_per_row,points_per_column) = cv::Size(columns,rows) ).corners
 Output array of detected corners.flags
 Various operation flags that can be zero or a combination of the following values: REF: CALIB_CB_NORMALIZE_IMAGE Normalize the image gamma with equalizeHist before detection.
 REF: CALIB_CB_EXHAUSTIVE Run an exhaustive search to improve detection rate.
 REF: CALIB_CB_ACCURACY Up-sample input image to improve sub-pixel accuracy due to aliasing effects.
 REF: CALIB_CB_LARGER The detected pattern is allowed to be larger than patternSize (see description).
 REF: CALIB_CB_MARKER The detected pattern must have a marker (see description). This should be used if an accurate camera calibration is required.
meta
 Optional output array of detected corners (CV_8UC1 and size = cv::Size(columns,rows)). Each entry stands for one corner of the pattern and can have one of the following values: 0 = no meta data attached
 1 = left-top corner of a black cell
 2 = left-top corner of a white cell
 3 = left-top corner of a black cell with a white marker dot
 4 = left-top corner of a white cell with a black marker dot (pattern origin in case of markers otherwise first corner)
 Returns:
 automatically generated
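A hedged Java sketch of a high-accuracy detection pass, reusing gray and patternsize from the earlier snippet:

 Mat corners = new Mat();
 Mat meta = new Mat(); // per-corner classification, CV_8UC1, as described above
 boolean found = Calib3d.findChessboardCornersSBWithMeta(gray, patternsize, corners,
         Calib3d.CALIB_CB_NORMALIZE_IMAGE + Calib3d.CALIB_CB_EXHAUSTIVE
         + Calib3d.CALIB_CB_ACCURACY, meta);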

findChessboardCornersSB
public static boolean findChessboardCornersSB(Mat image, Size patternSize, Mat corners, int flags)

findChessboardCornersSB
public static boolean findChessboardCornersSB(Mat image, Size patternSize, Mat corners)

estimateChessboardSharpness
public static Scalar estimateChessboardSharpness(Mat image, Size patternSize, Mat corners, float rise_distance, boolean vertical, Mat sharpness)
Estimates the sharpness of a detected chessboard. Image sharpness, as well as brightness, are critical parameters for accurate camera calibration. To assess these parameters and filter out problematic calibration images, this method calculates edge profiles by traveling from black to white chessboard cell centers. Based on this, the number of pixels required to transition from black to white is calculated. This width of the transition area is a good indication of how sharply the chessboard is imaged and should be below ~3.0 pixels. Parameters:
image
 Gray image used to find chessboard cornerspatternSize
 Size of a found chessboard patterncorners
 Corners found by #findChessboardCornersSBrise_distance
 Rise distance 0.8 means 10% ... 90% of the final signal strengthvertical
 By default edge responses for horizontal lines are calculatedsharpness
 Optional output array with a sharpness value for calculated edge responses (see description). The optional sharpness array is of type CV_32FC1 and has one row per calculated profile with the following five entries:
 0 = x coordinate of the underlying edge in the image
 1 = y coordinate of the underlying edge in the image
 2 = width of the transition area (sharpness)
 3 = signal strength in the black cell (min brightness)
 4 = signal strength in the white cell (max brightness)
 Returns:
 Scalar(average sharpness, average min brightness, average max brightness,0)
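For example, a short Java sketch that screens a view before adding it to a calibration set, reusing gray, patternsize, and the corners found by #findChessboardCornersSB; the 3.0 px threshold follows the description above.

 Mat sharpnessProfiles = new Mat(); // optional per-edge profiles, CV_32FC1
 Scalar stats = Calib3d.estimateChessboardSharpness(gray, patternsize, corners,
         0.8f, false, sharpnessProfiles);
 double avgSharpness = stats.val[0];
 if (avgSharpness > 3.0)
     System.out.println("View too blurry for calibration: " + avgSharpness + " px");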

estimateChessboardSharpness
public static Scalar estimateChessboardSharpness(Mat image, Size patternSize, Mat corners, float rise_distance, boolean vertical)
Estimates the sharpness of a detected chessboard. Image sharpness, as well as brightness, are critical parameters for accurate camera calibration. To assess these parameters and filter out problematic calibration images, this method calculates edge profiles by traveling from black to white chessboard cell centers. Based on this, the number of pixels required to transition from black to white is calculated. This width of the transition area is a good indication of how sharply the chessboard is imaged and should be below ~3.0 pixels. Parameters:
image
 Gray image used to find chessboard cornerspatternSize
 Size of a found chessboard patterncorners
 Corners found by #findChessboardCornersSBrise_distance
 Rise distance 0.8 means 10% ... 90% of the final signal strengthvertical
 By default edge responses for horizontal lines are calculated. The optional sharpness array is of type CV_32FC1 and has one row per calculated profile with the following five entries:
 0 = x coordinate of the underlying edge in the image
 1 = y coordinate of the underlying edge in the image
 2 = width of the transition area (sharpness)
 3 = signal strength in the black cell (min brightness)
 4 = signal strength in the white cell (max brightness)
 Returns:
 Scalar(average sharpness, average min brightness, average max brightness,0)

estimateChessboardSharpness
public static Scalar estimateChessboardSharpness(Mat image, Size patternSize, Mat corners, float rise_distance)
Estimates the sharpness of a detected chessboard. Image sharpness, as well as brightness, are critical parameters for accurate camera calibration. To assess these parameters and filter out problematic calibration images, this method calculates edge profiles by traveling from black to white chessboard cell centers. Based on this, the number of pixels required to transition from black to white is calculated. This width of the transition area is a good indication of how sharply the chessboard is imaged and should be below ~3.0 pixels. Parameters:
image
 Gray image used to find chessboard cornerspatternSize
 Size of a found chessboard patterncorners
 Corners found by #findChessboardCornersSBrise_distance
 Rise distance 0.8 means 10% ... 90% of the final signal strength. The optional sharpness array is of type CV_32FC1 and has one row per calculated profile with the following five entries:
 0 = x coordinate of the underlying edge in the image
 1 = y coordinate of the underlying edge in the image
 2 = width of the transition area (sharpness)
 3 = signal strength in the black cell (min brightness)
 4 = signal strength in the white cell (max brightness)
 Returns:
 Scalar(average sharpness, average min brightness, average max brightness,0)

estimateChessboardSharpness
public static Scalar estimateChessboardSharpness(Mat image, Size patternSize, Mat corners)
Estimates the sharpness of a detected chessboard. Image sharpness, as well as brightness, are critical parameters for accurate camera calibration. To assess these parameters and filter out problematic calibration images, this method calculates edge profiles by traveling from black to white chessboard cell centers. Based on this, the number of pixels required to transition from black to white is calculated. This width of the transition area is a good indication of how sharply the chessboard is imaged and should be below ~3.0 pixels. Parameters:
image
 Gray image used to find chessboard cornerspatternSize
 Size of a found chessboard patterncorners
 Corners found by #findChessboardCornersSB. The optional sharpness array is of type CV_32FC1 and has one row per calculated profile with the following five entries:
 0 = x coordinate of the underlying edge in the image
 1 = y coordinate of the underlying edge in the image
 2 = width of the transition area (sharpness)
 3 = signal strength in the black cell (min brightness)
 4 = signal strength in the white cell (max brightness)
 Returns:
 Scalar(average sharpness, average min brightness, average max brightness,0)

find4QuadCornerSubpix
public static boolean find4QuadCornerSubpix(Mat img, Mat corners, Size region_size)

drawChessboardCorners
public static void drawChessboardCorners(Mat image, Size patternSize, MatOfPoint2f corners, boolean patternWasFound)
Renders the detected chessboard corners. Parameters:
image
 Destination image. It must be an 8-bit color image.patternSize
 Number of inner corners per a chessboard row and column (patternSize = cv::Size(points_per_row,points_per_column)).corners
 Array of detected corners, the output of #findChessboardCorners.patternWasFound
 Parameter indicating whether the complete board was found or not. The return value of #findChessboardCorners should be passed here. The function draws individual chessboard corners detected either as red circles if the board was not found, or as colored corners connected with lines if the board was found.

drawFrameAxes
public static void drawFrameAxes(Mat image, Mat cameraMatrix, Mat distCoeffs, Mat rvec, Mat tvec, float length, int thickness)
Draw axes of the world/object coordinate system from pose estimation. SEE: solvePnP Parameters:
image
 Input/output image. It must have 1 or 3 channels. The number of channels is not altered.cameraMatrix
 Input 3x3 floatingpoint matrix of camera intrinsic parameters. \(\cameramatrix{A}\)distCoeffs
 Input vector of distortion coefficients \(\distcoeffs\). If the vector is empty, the zero distortion coefficients are assumed.rvec
 Rotation vector (see REF: Rodrigues ) that, together with tvec, brings points from the model coordinate system to the camera coordinate system.tvec
 Translation vector.length
 Length of the painted axes in the same unit as tvec (usually in meters).thickness
 Line thickness of the painted axes. This function draws the axes of the world/object coordinate system with respect to the camera frame. OX is drawn in red, OY in green and OZ in blue.
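A short Java sketch, assuming objectPoints (MatOfPoint3f), imagePoints (MatOfPoint2f), cameraMatrix, and image as in the earlier snippets; the 0.05 m axis length is an arbitrary choice.

 MatOfDouble dist = new MatOfDouble(); // empty: zero distortion assumed
 Mat rvec = new Mat(), tvec = new Mat();
 Calib3d.solvePnP(objectPoints, imagePoints, cameraMatrix, dist, rvec, tvec);
 // Draw a 5 cm coordinate frame at the estimated pose: OX red, OY green, OZ blue
 Calib3d.drawFrameAxes(image, cameraMatrix, dist, rvec, tvec, 0.05f, 2);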

drawFrameAxes
public static void drawFrameAxes(Mat image, Mat cameraMatrix, Mat distCoeffs, Mat rvec, Mat tvec, float length)
Draw axes of the world/object coordinate system from pose estimation. SEE: solvePnP Parameters:
image
 Input/output image. It must have 1 or 3 channels. The number of channels is not altered.cameraMatrix
 Input 3x3 floatingpoint matrix of camera intrinsic parameters. \(\cameramatrix{A}\)distCoeffs
 Input vector of distortion coefficients \(\distcoeffs\). If the vector is empty, the zero distortion coefficients are assumed.rvec
 Rotation vector (see REF: Rodrigues ) that, together with tvec, brings points from the model coordinate system to the camera coordinate system.tvec
 Translation vector.length
 Length of the painted axes in the same unit as tvec (usually in meters). This function draws the axes of the world/object coordinate system with respect to the camera frame. OX is drawn in red, OY in green and OZ in blue.

findCirclesGrid
public static boolean findCirclesGrid(Mat image, Size patternSize, Mat centers, int flags)

calibrateCameraExtended
public static double calibrateCameraExtended(java.util.List<Mat> objectPoints, java.util.List<Mat> imagePoints, Size imageSize, Mat cameraMatrix, Mat distCoeffs, java.util.List<Mat> rvecs, java.util.List<Mat> tvecs, Mat stdDeviationsIntrinsics, Mat stdDeviationsExtrinsics, Mat perViewErrors, int flags, TermCriteria criteria)
Finds the camera intrinsic and extrinsic parameters from several views of a calibration pattern. Parameters:
objectPoints
 In the new interface it is a vector of vectors of calibration pattern points in the calibration pattern coordinate space (e.g. std::vector<std::vector<cv::Vec3f>>). The outer vector contains as many elements as the number of pattern views. If the same calibration pattern is shown in each view and it is fully visible, all the vectors will be the same. However, it is possible to use partially occluded patterns or even different patterns in different views; then, the vectors will be different. Although the points are 3D, they all lie in the calibration pattern's XY coordinate plane (thus 0 in the Z-coordinate) if the used calibration pattern is a planar rig. In the old interface all the vectors of object points from different views are concatenated together.imagePoints
 In the new interface it is a vector of vectors of the projections of calibration pattern points (e.g. std::vector<std::vector<cv::Vec2f>>). imagePoints.size() and objectPoints.size(), and imagePoints[i].size() and objectPoints[i].size() for each i, must be equal, respectively. In the old interface all the vectors of object points from different views are concatenated together.imageSize
 Size of the image used only to initialize the camera intrinsic matrix.cameraMatrix
 Input/output 3x3 floatingpoint camera intrinsic matrix \(\cameramatrix{A}\) . If REF: CALIB_USE_INTRINSIC_GUESS and/or REF: CALIB_FIX_ASPECT_RATIO, REF: CALIB_FIX_PRINCIPAL_POINT or REF: CALIB_FIX_FOCAL_LENGTH are specified, some or all of fx, fy, cx, cy must be initialized before calling the function.distCoeffs
 Input/output vector of distortion coefficients \(\distcoeffs\).rvecs
 Output vector of rotation vectors (REF: Rodrigues ) estimated for each pattern view (e.g. std::vector<cv::Mat>). That is, each i-th rotation vector together with the corresponding i-th translation vector (see the next output parameter description) brings the calibration pattern from the object coordinate space (in which object points are specified) to the camera coordinate space. In more technical terms, the tuple of the i-th rotation and translation vector performs a change of basis from object coordinate space to camera coordinate space. Due to its duality, this tuple is equivalent to the position of the calibration pattern with respect to the camera coordinate space.tvecs
 Output vector of translation vectors estimated for each pattern view, see parameter description above.stdDeviationsIntrinsics
 Output vector of standard deviations estimated for intrinsic parameters. Order of deviations values: \((f_x, f_y, c_x, c_y, k_1, k_2, p_1, p_2, k_3, k_4, k_5, k_6 , s_1, s_2, s_3, s_4, \tau_x, \tau_y)\) If one of the parameters is not estimated, its deviation is equal to zero.stdDeviationsExtrinsics
 Output vector of standard deviations estimated for extrinsic parameters. Order of deviations values: \((R_0, T_0, \dotsc , R_{M - 1}, T_{M - 1})\) where M is the number of pattern views. \(R_i, T_i\) are concatenated 1x3 vectors.perViewErrors
 Output vector of the RMS reprojection error estimated for each pattern view.flags
 Different flags that may be zero or a combination of the following values: REF: CALIB_USE_INTRINSIC_GUESS cameraMatrix contains valid initial values of fx, fy, cx, cy that are optimized further. Otherwise, (cx, cy) is initially set to the image center ( imageSize is used), and focal distances are computed in a leastsquares fashion. Note, that if intrinsic parameters are known, there is no need to use this function just to estimate extrinsic parameters. Use REF: solvePnP instead.
 REF: CALIB_FIX_PRINCIPAL_POINT The principal point is not changed during the global optimization. It stays at the center or at a different location specified when REF: CALIB_USE_INTRINSIC_GUESS is set too.
 REF: CALIB_FIX_ASPECT_RATIO The functions consider only fy as a free parameter. The ratio fx/fy stays the same as in the input cameraMatrix . When REF: CALIB_USE_INTRINSIC_GUESS is not set, the actual input values of fx and fy are ignored, only their ratio is computed and used further.
 REF: CALIB_ZERO_TANGENT_DIST Tangential distortion coefficients \((p_1, p_2)\) are set to zeros and stay zero.
 REF: CALIB_FIX_FOCAL_LENGTH The focal length is not changed during the global optimization if REF: CALIB_USE_INTRINSIC_GUESS is set.
 REF: CALIB_FIX_K1,..., REF: CALIB_FIX_K6 The corresponding radial distortion coefficient is not changed during the optimization. If REF: CALIB_USE_INTRINSIC_GUESS is set, the coefficient from the supplied distCoeffs matrix is used. Otherwise, it is set to 0.
 REF: CALIB_RATIONAL_MODEL Coefficients k4, k5, and k6 are enabled. To provide the backward compatibility, this extra flag should be explicitly specified to make the calibration function use the rational model and return 8 coefficients or more.
 REF: CALIB_THIN_PRISM_MODEL Coefficients s1, s2, s3 and s4 are enabled. To provide the backward compatibility, this extra flag should be explicitly specified to make the calibration function use the thin prism model and return 12 coefficients or more.
 REF: CALIB_FIX_S1_S2_S3_S4 The thin prism distortion coefficients are not changed during the optimization. If REF: CALIB_USE_INTRINSIC_GUESS is set, the coefficient from the supplied distCoeffs matrix is used. Otherwise, it is set to 0.
 REF: CALIB_TILTED_MODEL Coefficients tauX and tauY are enabled. To provide the backward compatibility, this extra flag should be explicitly specified to make the calibration function use the tilted sensor model and return 14 coefficients.
 REF: CALIB_FIX_TAUX_TAUY The coefficients of the tilted sensor model are not changed during the optimization. If REF: CALIB_USE_INTRINSIC_GUESS is set, the coefficient from the supplied distCoeffs matrix is used. Otherwise, it is set to 0.
criteria
 Termination criteria for the iterative optimization algorithm. Returns:
 the overall RMS reprojection error.
The function estimates the intrinsic camera parameters and extrinsic parameters for each of the
views. The algorithm is based on CITE: Zhang2000 and CITE: BouguetMCT . The coordinates of 3D object
points and their corresponding 2D projections in each view must be specified. That may be achieved
by using an object with known geometry and easily detectable feature points. Such an object is
called a calibration rig or calibration pattern, and OpenCV has built-in support for a chessboard as
a calibration rig (see REF: findChessboardCorners). Currently, initialization of intrinsic
parameters (when REF: CALIB_USE_INTRINSIC_GUESS is not set) is only implemented for planar calibration
patterns (where Z-coordinates of the object points must be all zeros). 3D calibration rigs can also
be used as long as initial cameraMatrix is provided.
The algorithm performs the following steps:
 Compute the initial intrinsic parameters (the option only available for planar calibration patterns) or read them from the input parameters. The distortion coefficients are all set to zeros initially unless some of CALIB_FIX_K? are specified.
 Estimate the initial camera pose as if the intrinsic parameters have been already known. This is done using REF: solvePnP .
 Run the global Levenberg-Marquardt optimization algorithm to minimize the reprojection error, that is, the total sum of squared distances between the observed feature points imagePoints and the projected (using the current estimates for camera parameters and the poses) object points objectPoints. See REF: projectPoints for details.
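A condensed Java sketch of the whole procedure; the 8x6 board with 25 mm squares and the 640x480 image size are assumptions, and imagePoints is a List<Mat> already filled with one MatOfPoint2f of detected corners per view.

 // The same planar grid (Z = 0) is replicated once per captured view
 MatOfPoint3f grid = new MatOfPoint3f();
 List<Point3> pts = new ArrayList<>();
 for (int r = 0; r < 6; r++)
     for (int c = 0; c < 8; c++)
         pts.add(new Point3(c * 0.025, r * 0.025, 0));
 grid.fromList(pts);
 List<Mat> objectPoints = new ArrayList<>();
 for (int v = 0; v < imagePoints.size(); v++)
     objectPoints.add(grid);
 Mat cameraMatrix = new Mat(), distCoeffs = new Mat();
 List<Mat> rvecs = new ArrayList<>(), tvecs = new ArrayList<>();
 Mat stdIntrinsics = new Mat(), stdExtrinsics = new Mat(), perViewErrors = new Mat();
 double rms = Calib3d.calibrateCameraExtended(objectPoints, imagePoints,
         new Size(640, 480), cameraMatrix, distCoeffs, rvecs, tvecs,
         stdIntrinsics, stdExtrinsics, perViewErrors);
 // Per-view errors well above the overall RMS flag views worth re-capturing
 System.out.println("Overall RMS reprojection error: " + rms);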

calibrateCameraExtended
public static double calibrateCameraExtended(java.util.List<Mat> objectPoints, java.util.List<Mat> imagePoints, Size imageSize, Mat cameraMatrix, Mat distCoeffs, java.util.List<Mat> rvecs, java.util.List<Mat> tvecs, Mat stdDeviationsIntrinsics, Mat stdDeviationsExtrinsics, Mat perViewErrors, int flags)
Finds the camera intrinsic and extrinsic parameters from several views of a calibration pattern. Parameters:
objectPoints
 In the new interface it is a vector of vectors of calibration pattern points in the calibration pattern coordinate space (e.g. std::vector<std::vector<cv::Vec3f>>). The outer vector contains as many elements as the number of pattern views. If the same calibration pattern is shown in each view and it is fully visible, all the vectors will be the same. However, it is possible to use partially occluded patterns or even different patterns in different views; then, the vectors will be different. Although the points are 3D, they all lie in the calibration pattern's XY coordinate plane (thus 0 in the Z-coordinate) if the used calibration pattern is a planar rig. In the old interface all the vectors of object points from different views are concatenated together.imagePoints
 In the new interface it is a vector of vectors of the projections of calibration pattern points (e.g. std::vector<std::vector<cv::Vec2f>>). imagePoints.size() and objectPoints.size(), and imagePoints[i].size() and objectPoints[i].size() for each i, must be equal, respectively. In the old interface all the vectors of object points from different views are concatenated together.imageSize
 Size of the image used only to initialize the camera intrinsic matrix.cameraMatrix
 Input/output 3x3 floatingpoint camera intrinsic matrix \(\cameramatrix{A}\) . If REF: CALIB_USE_INTRINSIC_GUESS and/or REF: CALIB_FIX_ASPECT_RATIO, REF: CALIB_FIX_PRINCIPAL_POINT or REF: CALIB_FIX_FOCAL_LENGTH are specified, some or all of fx, fy, cx, cy must be initialized before calling the function.distCoeffs
 Input/output vector of distortion coefficients \(\distcoeffs\).rvecs
 Output vector of rotation vectors (REF: Rodrigues ) estimated for each pattern view (e.g. std::vector<cv::Mat>). That is, each i-th rotation vector together with the corresponding i-th translation vector (see the next output parameter description) brings the calibration pattern from the object coordinate space (in which object points are specified) to the camera coordinate space. In more technical terms, the tuple of the i-th rotation and translation vector performs a change of basis from object coordinate space to camera coordinate space. Due to its duality, this tuple is equivalent to the position of the calibration pattern with respect to the camera coordinate space.tvecs
 Output vector of translation vectors estimated for each pattern view, see parameter description above.stdDeviationsIntrinsics
 Output vector of standard deviations estimated for intrinsic parameters. Order of deviations values: \((f_x, f_y, c_x, c_y, k_1, k_2, p_1, p_2, k_3, k_4, k_5, k_6 , s_1, s_2, s_3, s_4, \tau_x, \tau_y)\) If one of the parameters is not estimated, its deviation is equal to zero.stdDeviationsExtrinsics
 Output vector of standard deviations estimated for extrinsic parameters. Order of deviations values: \((R_0, T_0, \dotsc , R_{M - 1}, T_{M - 1})\) where M is the number of pattern views. \(R_i, T_i\) are concatenated 1x3 vectors.perViewErrors
 Output vector of the RMS reprojection error estimated for each pattern view.flags
 Different flags that may be zero or a combination of the following values: REF: CALIB_USE_INTRINSIC_GUESS cameraMatrix contains valid initial values of fx, fy, cx, cy that are optimized further. Otherwise, (cx, cy) is initially set to the image center ( imageSize is used), and focal distances are computed in a leastsquares fashion. Note, that if intrinsic parameters are known, there is no need to use this function just to estimate extrinsic parameters. Use REF: solvePnP instead.
 REF: CALIB_FIX_PRINCIPAL_POINT The principal point is not changed during the global optimization. It stays at the center or at a different location specified when REF: CALIB_USE_INTRINSIC_GUESS is set too.
 REF: CALIB_FIX_ASPECT_RATIO The functions consider only fy as a free parameter. The ratio fx/fy stays the same as in the input cameraMatrix . When REF: CALIB_USE_INTRINSIC_GUESS is not set, the actual input values of fx and fy are ignored, only their ratio is computed and used further.
 REF: CALIB_ZERO_TANGENT_DIST Tangential distortion coefficients \((p_1, p_2)\) are set to zeros and stay zero.
 REF: CALIB_FIX_FOCAL_LENGTH The focal length is not changed during the global optimization if REF: CALIB_USE_INTRINSIC_GUESS is set.
 REF: CALIB_FIX_K1,..., REF: CALIB_FIX_K6 The corresponding radial distortion coefficient is not changed during the optimization. If REF: CALIB_USE_INTRINSIC_GUESS is set, the coefficient from the supplied distCoeffs matrix is used. Otherwise, it is set to 0.
 REF: CALIB_RATIONAL_MODEL Coefficients k4, k5, and k6 are enabled. To provide the backward compatibility, this extra flag should be explicitly specified to make the calibration function use the rational model and return 8 coefficients or more.
 REF: CALIB_THIN_PRISM_MODEL Coefficients s1, s2, s3 and s4 are enabled. To provide the backward compatibility, this extra flag should be explicitly specified to make the calibration function use the thin prism model and return 12 coefficients or more.
 REF: CALIB_FIX_S1_S2_S3_S4 The thin prism distortion coefficients are not changed during the optimization. If REF: CALIB_USE_INTRINSIC_GUESS is set, the coefficient from the supplied distCoeffs matrix is used. Otherwise, it is set to 0.
 REF: CALIB_TILTED_MODEL Coefficients tauX and tauY are enabled. To provide the backward compatibility, this extra flag should be explicitly specified to make the calibration function use the tilted sensor model and return 14 coefficients.
 REF: CALIB_FIX_TAUX_TAUY The coefficients of the tilted sensor model are not changed during the optimization. If REF: CALIB_USE_INTRINSIC_GUESS is set, the coefficient from the supplied distCoeffs matrix is used. Otherwise, it is set to 0.
 Returns:
 the overall RMS reprojection error.
The function estimates the intrinsic camera parameters and extrinsic parameters for each of the
views. The algorithm is based on CITE: Zhang2000 and CITE: BouguetMCT . The coordinates of 3D object
points and their corresponding 2D projections in each view must be specified. That may be achieved
by using an object with known geometry and easily detectable feature points. Such an object is
called a calibration rig or calibration pattern, and OpenCV has built-in support for a chessboard as
a calibration rig (see REF: findChessboardCorners). Currently, initialization of intrinsic
parameters (when REF: CALIB_USE_INTRINSIC_GUESS is not set) is only implemented for planar calibration
patterns (where Z-coordinates of the object points must be all zeros). 3D calibration rigs can also
be used as long as initial cameraMatrix is provided.
The algorithm performs the following steps:
 Compute the initial intrinsic parameters (the option only available for planar calibration patterns) or read them from the input parameters. The distortion coefficients are all set to zeros initially unless some of CALIB_FIX_K? are specified.
 Estimate the initial camera pose as if the intrinsic parameters have been already known. This is done using REF: solvePnP .
 Run the global Levenberg-Marquardt optimization algorithm to minimize the reprojection error, that is, the total sum of squared distances between the observed feature points imagePoints and the projected (using the current estimates for camera parameters and the poses) object points objectPoints. See REF: projectPoints for details.

calibrateCameraExtended
public static double calibrateCameraExtended(java.util.List<Mat> objectPoints, java.util.List<Mat> imagePoints, Size imageSize, Mat cameraMatrix, Mat distCoeffs, java.util.List<Mat> rvecs, java.util.List<Mat> tvecs, Mat stdDeviationsIntrinsics, Mat stdDeviationsExtrinsics, Mat perViewErrors)
Finds the camera intrinsic and extrinsic parameters from several views of a calibration pattern. Parameters:
objectPoints
 In the new interface it is a vector of vectors of calibration pattern points in the calibration pattern coordinate space (e.g. std::vector<std::vector<cv::Vec3f>>). The outer vector contains as many elements as the number of pattern views. If the same calibration pattern is shown in each view and it is fully visible, all the vectors will be the same. However, it is possible to use partially occluded patterns or even different patterns in different views; then, the vectors will be different. Although the points are 3D, they all lie in the calibration pattern's XY coordinate plane (thus 0 in the Z-coordinate) if the used calibration pattern is a planar rig. In the old interface all the vectors of object points from different views are concatenated together.imagePoints
 In the new interface it is a vector of vectors of the projections of calibration pattern points (e.g. std::vector<std::vector<cv::Vec2f>>). imagePoints.size() and objectPoints.size(), and imagePoints[i].size() and objectPoints[i].size() for each i, must be equal, respectively. In the old interface all the vectors of object points from different views are concatenated together.imageSize
 Size of the image used only to initialize the camera intrinsic matrix.cameraMatrix
 Input/output 3x3 floatingpoint camera intrinsic matrix \(\cameramatrix{A}\) . If REF: CALIB_USE_INTRINSIC_GUESS and/or REF: CALIB_FIX_ASPECT_RATIO, REF: CALIB_FIX_PRINCIPAL_POINT or REF: CALIB_FIX_FOCAL_LENGTH are specified, some or all of fx, fy, cx, cy must be initialized before calling the function.distCoeffs
 Input/output vector of distortion coefficients \(\distcoeffs\).rvecs
 Output vector of rotation vectors (REF: Rodrigues ) estimated for each pattern view (e.g. std::vector<cv::Mat>). That is, each i-th rotation vector together with the corresponding i-th translation vector (see the next output parameter description) brings the calibration pattern from the object coordinate space (in which object points are specified) to the camera coordinate space. In more technical terms, the tuple of the i-th rotation and translation vector performs a change of basis from object coordinate space to camera coordinate space. Due to its duality, this tuple is equivalent to the position of the calibration pattern with respect to the camera coordinate space.tvecs
 Output vector of translation vectors estimated for each pattern view, see parameter description above.stdDeviationsIntrinsics
 Output vector of standard deviations estimated for intrinsic parameters. Order of deviations values: \((f_x, f_y, c_x, c_y, k_1, k_2, p_1, p_2, k_3, k_4, k_5, k_6 , s_1, s_2, s_3, s_4, \tau_x, \tau_y)\) If one of the parameters is not estimated, its deviation is equal to zero.stdDeviationsExtrinsics
 Output vector of standard deviations estimated for extrinsic parameters. Order of deviations values: \((R_0, T_0, \dotsc , R_{M - 1}, T_{M - 1})\) where M is the number of pattern views. \(R_i, T_i\) are concatenated 1x3 vectors.perViewErrors
 Output vector of the RMS reprojection error estimated for each pattern view. REF: CALIB_USE_INTRINSIC_GUESS cameraMatrix contains valid initial values of fx, fy, cx, cy that are optimized further. Otherwise, (cx, cy) is initially set to the image center ( imageSize is used), and focal distances are computed in a leastsquares fashion. Note, that if intrinsic parameters are known, there is no need to use this function just to estimate extrinsic parameters. Use REF: solvePnP instead.
 REF: CALIB_FIX_PRINCIPAL_POINT The principal point is not changed during the global optimization. It stays at the center or at a different location specified when REF: CALIB_USE_INTRINSIC_GUESS is set too.
 REF: CALIB_FIX_ASPECT_RATIO The functions consider only fy as a free parameter. The ratio fx/fy stays the same as in the input cameraMatrix . When REF: CALIB_USE_INTRINSIC_GUESS is not set, the actual input values of fx and fy are ignored, only their ratio is computed and used further.
 REF: CALIB_ZERO_TANGENT_DIST Tangential distortion coefficients \((p_1, p_2)\) are set to zeros and stay zero.
 REF: CALIB_FIX_FOCAL_LENGTH The focal length is not changed during the global optimization if REF: CALIB_USE_INTRINSIC_GUESS is set.
 REF: CALIB_FIX_K1,..., REF: CALIB_FIX_K6 The corresponding radial distortion coefficient is not changed during the optimization. If REF: CALIB_USE_INTRINSIC_GUESS is set, the coefficient from the supplied distCoeffs matrix is used. Otherwise, it is set to 0.
 REF: CALIB_RATIONAL_MODEL Coefficients k4, k5, and k6 are enabled. To provide the backward compatibility, this extra flag should be explicitly specified to make the calibration function use the rational model and return 8 coefficients or more.
 REF: CALIB_THIN_PRISM_MODEL Coefficients s1, s2, s3 and s4 are enabled. To provide the backward compatibility, this extra flag should be explicitly specified to make the calibration function use the thin prism model and return 12 coefficients or more.
 REF: CALIB_FIX_S1_S2_S3_S4 The thin prism distortion coefficients are not changed during the optimization. If REF: CALIB_USE_INTRINSIC_GUESS is set, the coefficient from the supplied distCoeffs matrix is used. Otherwise, it is set to 0.
 REF: CALIB_TILTED_MODEL Coefficients tauX and tauY are enabled. To provide the backward compatibility, this extra flag should be explicitly specified to make the calibration function use the tilted sensor model and return 14 coefficients.
 REF: CALIB_FIX_TAUX_TAUY The coefficients of the tilted sensor model are not changed during the optimization. If REF: CALIB_USE_INTRINSIC_GUESS is set, the coefficient from the supplied distCoeffs matrix is used. Otherwise, it is set to 0.
 Returns:
 the overall RMS reprojection error.
The function estimates the intrinsic camera parameters and extrinsic parameters for each of the
views. The algorithm is based on CITE: Zhang2000 and CITE: BouguetMCT . The coordinates of 3D object
points and their corresponding 2D projections in each view must be specified. That may be achieved
by using an object with known geometry and easily detectable feature points. Such an object is
called a calibration rig or calibration pattern, and OpenCV has built-in support for a chessboard as
a calibration rig (see REF: findChessboardCorners). Currently, initialization of intrinsic
parameters (when REF: CALIB_USE_INTRINSIC_GUESS is not set) is only implemented for planar calibration
patterns (where Z-coordinates of the object points must be all zeros). 3D calibration rigs can also
be used as long as initial cameraMatrix is provided.
The algorithm performs the following steps:
 Compute the initial intrinsic parameters (the option only available for planar calibration patterns) or read them from the input parameters. The distortion coefficients are all set to zeros initially unless some of CALIB_FIX_K? are specified.
 Estimate the initial camera pose as if the intrinsic parameters have been already known. This is done using REF: solvePnP .
 Run the global Levenberg-Marquardt optimization algorithm to minimize the reprojection error, that is, the total sum of squared distances between the observed feature points imagePoints and the projected (using the current estimates for camera parameters and the poses) object points objectPoints. See REF: projectPoints for details.

calibrateCamera
public static double calibrateCamera(java.util.List<Mat> objectPoints, java.util.List<Mat> imagePoints, Size imageSize, Mat cameraMatrix, Mat distCoeffs, java.util.List<Mat> rvecs, java.util.List<Mat> tvecs, int flags, TermCriteria criteria)

calibrateCamera
public static double calibrateCamera(java.util.List<Mat> objectPoints, java.util.List<Mat> imagePoints, Size imageSize, Mat cameraMatrix, Mat distCoeffs, java.util.List<Mat> rvecs, java.util.List<Mat> tvecs, int flags)

calibrateCamera
public static double calibrateCamera(java.util.List<Mat> objectPoints, java.util.List<Mat> imagePoints, Size imageSize, Mat cameraMatrix, Mat distCoeffs, java.util.List<Mat> rvecs, java.util.List<Mat> tvecs)

calibrateCameraROExtended
public static double calibrateCameraROExtended(java.util.List<Mat> objectPoints, java.util.List<Mat> imagePoints, Size imageSize, int iFixedPoint, Mat cameraMatrix, Mat distCoeffs, java.util.List<Mat> rvecs, java.util.List<Mat> tvecs, Mat newObjPoints, Mat stdDeviationsIntrinsics, Mat stdDeviationsExtrinsics, Mat stdDeviationsObjPoints, Mat perViewErrors, int flags, TermCriteria criteria)
Finds the camera intrinsic and extrinsic parameters from several views of a calibration pattern. This function is an extension of #calibrateCamera with the object-releasing method proposed in CITE: strobl2011iccv. In many common cases with inaccurate, unmeasured, roughly planar targets (calibration plates), this method can dramatically improve the precision of the estimated camera parameters. Both the object-releasing method and the standard method are supported by this function. Use the parameter iFixedPoint for method selection. In the internal implementation, #calibrateCamera is a wrapper for this function. Parameters:
objectPoints
 Vector of vectors of calibration pattern points in the calibration pattern coordinate space. See #calibrateCamera for details. If the object-releasing method is to be used, the identical calibration board must be used in each view, it must be fully visible, all objectPoints[i] must be the same, and all points should be roughly close to a plane. The calibration target has to be rigid, or at least static if the camera (rather than the calibration target) is shifted for grabbing images.imagePoints
 Vector of vectors of the projections of calibration pattern points. See #calibrateCamera for details.imageSize
 Size of the image used only to initialize the intrinsic camera matrix.iFixedPoint
 The index of the 3D object point in objectPoints[0] to be fixed. It also acts as a switch for calibration method selection. If the object-releasing method is to be used, pass the parameter in the range [1, objectPoints[0].size()-2]; otherwise a value out of this range selects the standard calibration method. Usually the top-right corner point of the calibration board grid is recommended to be fixed when the object-releasing method is utilized. According to \cite strobl2011iccv, two other points are also fixed. In this implementation, objectPoints[0].front and objectPoints[0].back.z are used. With the object-releasing method, accurate rvecs, tvecs and newObjPoints are only possible if the coordinates of these three fixed points are accurate enough.cameraMatrix
 Output 3x3 floatingpoint camera matrix. See #calibrateCamera for details.distCoeffs
 Output vector of distortion coefficients. See #calibrateCamera for details.rvecs
 Output vector of rotation vectors estimated for each pattern view. See #calibrateCamera for details.tvecs
 Output vector of translation vectors estimated for each pattern view.newObjPoints
 The updated output vector of calibration pattern points. The coordinates might be scaled based on the three fixed points. The returned coordinates are accurate only if the above-mentioned three fixed points are accurate. If not needed, noArray() can be passed in. This parameter is ignored with the standard calibration method.stdDeviationsIntrinsics
 Output vector of standard deviations estimated for intrinsic parameters. See #calibrateCamera for details.stdDeviationsExtrinsics
 Output vector of standard deviations estimated for extrinsic parameters. See #calibrateCamera for details.stdDeviationsObjPoints
 Output vector of standard deviations estimated for refined coordinates of calibration pattern points. It has the same size and order as objectPoints[0] vector. This parameter is ignored with the standard calibration method.perViewErrors
 Output vector of the RMS reprojection error estimated for each pattern view.flags
 Different flags that may be zero or a combination of some predefined values. See #calibrateCamera for details. If the object-releasing method is used, the calibration time may be much longer. CALIB_USE_QR or CALIB_USE_LU could be used for faster calibration, at the cost of results that are potentially less precise and less stable in some rare cases.criteria
 Termination criteria for the iterative optimization algorithm. Returns:
 the overall RMS reprojection error. The function estimates the intrinsic camera parameters and extrinsic parameters for each of the views. The algorithm is based on CITE: Zhang2000, CITE: BouguetMCT and CITE: strobl2011iccv. See #calibrateCamera for other detailed explanations. SEE: calibrateCamera, findChessboardCorners, solvePnP, initCameraMatrix2D, stereoCalibrate, undistort
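A hedged follow-up to the Java sketch under #calibrateCameraExtended above, switching to the object-releasing method; fixing index boardSize.width - 1 (here 7 for the assumed 8x6 grid) follows the recommendation to fix the top-right corner point.

 int iFixedPoint = 8 - 1; // last point of the first grid row
 Mat newObjPoints = new Mat(), stdObjPoints = new Mat();
 double rms = Calib3d.calibrateCameraROExtended(objectPoints, imagePoints,
         new Size(640, 480), iFixedPoint, cameraMatrix, distCoeffs, rvecs, tvecs,
         newObjPoints, stdIntrinsics, stdExtrinsics, stdObjPoints, perViewErrors);
 // newObjPoints now holds the refined board geometry implied by the images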

calibrateCameraROExtended
public static double calibrateCameraROExtended(java.util.List<Mat> objectPoints, java.util.List<Mat> imagePoints, Size imageSize, int iFixedPoint, Mat cameraMatrix, Mat distCoeffs, java.util.List<Mat> rvecs, java.util.List<Mat> tvecs, Mat newObjPoints, Mat stdDeviationsIntrinsics, Mat stdDeviationsExtrinsics, Mat stdDeviationsObjPoints, Mat perViewErrors, int flags)
Finds the camera intrinsic and extrinsic parameters from several views of a calibration pattern. This function is an extension of #calibrateCamera with the object-releasing method proposed in CITE: strobl2011iccv. In many common cases with inaccurate, unmeasured, roughly planar targets (calibration plates), this method can dramatically improve the precision of the estimated camera parameters. Both the object-releasing method and the standard method are supported by this function. Use the parameter iFixedPoint for method selection. In the internal implementation, #calibrateCamera is a wrapper for this function. Parameters:
objectPoints
 Vector of vectors of calibration pattern points in the calibration pattern coordinate space. See #calibrateCamera for details. If the method of releasing object is to be used, the identical calibration board must be used in each view, it must be fully visible, all objectPoints[i] must be the same, and all points should be roughly close to a plane. The calibration target has to be rigid, or at least static if the camera (rather than the calibration target) is shifted for grabbing images.imagePoints
 Vector of vectors of the projections of calibration pattern points. See #calibrateCamera for details.imageSize
 Size of the image used only to initialize the intrinsic camera matrix.iFixedPoint
 The index of the 3D object point in objectPoints[0] to be fixed. It also acts as a switch for calibration method selection. If the object-releasing method is to be used, pass a value in the range [1, objectPoints[0].size()-2]; any value outside this range selects the standard calibration method. Usually the top-right corner point of the calibration board grid is recommended to be fixed when the object-releasing method is utilized. According to CITE: strobl2011iccv, two other points are also fixed. In this implementation, objectPoints[0].front and objectPoints[0].back.z are used. With the object-releasing method, accurate rvecs, tvecs and newObjPoints are only possible if the coordinates of these three fixed points are accurate enough.cameraMatrix
 Output 3x3 floating-point camera matrix. See #calibrateCamera for details.distCoeffs
 Output vector of distortion coefficients. See #calibrateCamera for details.rvecs
 Output vector of rotation vectors estimated for each pattern view. See #calibrateCamera for details.tvecs
 Output vector of translation vectors estimated for each pattern view.newObjPoints
 The updated output vector of calibration pattern points. The coordinates might be scaled based on three fixed points. The returned coordinates are accurate only if the above-mentioned three fixed points are accurate. If not needed, noArray() can be passed in. This parameter is ignored with the standard calibration method.stdDeviationsIntrinsics
 Output vector of standard deviations estimated for intrinsic parameters. See #calibrateCamera for details.stdDeviationsExtrinsics
 Output vector of standard deviations estimated for extrinsic parameters. See #calibrateCamera for details.stdDeviationsObjPoints
 Output vector of standard deviations estimated for refined coordinates of calibration pattern points. It has the same size and order as the objectPoints[0] vector. This parameter is ignored with the standard calibration method.perViewErrors
 Output vector of the RMS reprojection error estimated for each pattern view.flags
 Different flags that may be zero or a combination of some predefined values. See #calibrateCamera for details. If the method of releasing object is used, the calibration time may be much longer. CALIB_USE_QR or CALIB_USE_LU could be used for faster calibration, at the cost of a potentially less precise and less stable solution in some rare cases. Returns:
 the overall RMS reprojection error. The function estimates the intrinsic camera parameters and extrinsic parameters for each of the views. The algorithm is based on CITE: Zhang2000, CITE: BouguetMCT and CITE: strobl2011iccv. See #calibrateCamera for other detailed explanations. SEE: calibrateCamera, findChessboardCorners, solvePnP, initCameraMatrix2D, stereoCalibrate, undistort

calibrateCameraROExtended
public static double calibrateCameraROExtended(java.util.List<Mat> objectPoints, java.util.List<Mat> imagePoints, Size imageSize, int iFixedPoint, Mat cameraMatrix, Mat distCoeffs, java.util.List<Mat> rvecs, java.util.List<Mat> tvecs, Mat newObjPoints, Mat stdDeviationsIntrinsics, Mat stdDeviationsExtrinsics, Mat stdDeviationsObjPoints, Mat perViewErrors)
Finds the camera intrinsic and extrinsic parameters from several views of a calibration pattern. This function is an extension of #calibrateCamera with the method of releasing object, which was proposed in CITE: strobl2011iccv. In many common cases with inaccurate, unmeasured, roughly planar targets (calibration plates), this method can dramatically improve the precision of the estimated camera parameters. Both the object-releasing method and the standard method are supported by this function. Use the parameter iFixedPoint for method selection. In the internal implementation, #calibrateCamera is a wrapper for this function. Parameters:
objectPoints
 Vector of vectors of calibration pattern points in the calibration pattern coordinate space. See #calibrateCamera for details. If the method of releasing object is to be used, the identical calibration board must be used in each view, it must be fully visible, all objectPoints[i] must be the same, and all points should be roughly close to a plane. The calibration target has to be rigid, or at least static if the camera (rather than the calibration target) is shifted for grabbing images.imagePoints
 Vector of vectors of the projections of calibration pattern points. See #calibrateCamera for details.imageSize
 Size of the image used only to initialize the intrinsic camera matrix.iFixedPoint
 The index of the 3D object point in objectPoints[0] to be fixed. It also acts as a switch for calibration method selection. If the object-releasing method is to be used, pass a value in the range [1, objectPoints[0].size()-2]; any value outside this range selects the standard calibration method. Usually the top-right corner point of the calibration board grid is recommended to be fixed when the object-releasing method is utilized. According to CITE: strobl2011iccv, two other points are also fixed. In this implementation, objectPoints[0].front and objectPoints[0].back.z are used. With the object-releasing method, accurate rvecs, tvecs and newObjPoints are only possible if the coordinates of these three fixed points are accurate enough.cameraMatrix
 Output 3x3 floating-point camera matrix. See #calibrateCamera for details.distCoeffs
 Output vector of distortion coefficients. See #calibrateCamera for details.rvecs
 Output vector of rotation vectors estimated for each pattern view. See #calibrateCamera for details.tvecs
 Output vector of translation vectors estimated for each pattern view.newObjPoints
 The updated output vector of calibration pattern points. The coordinates might be scaled based on three fixed points. The returned coordinates are accurate only if the above-mentioned three fixed points are accurate. If not needed, noArray() can be passed in. This parameter is ignored with the standard calibration method.stdDeviationsIntrinsics
 Output vector of standard deviations estimated for intrinsic parameters. See #calibrateCamera for details.stdDeviationsExtrinsics
 Output vector of standard deviations estimated for extrinsic parameters. See #calibrateCamera for details.stdDeviationsObjPoints
 Output vector of standard deviations estimated for refined coordinates of calibration pattern points. It has the same size and order as the objectPoints[0] vector. This parameter is ignored with the standard calibration method.perViewErrors
 Output vector of the RMS reprojection error estimated for each pattern view. Returns:
 the overall RMS reprojection error. The function estimates the intrinsic camera parameters and extrinsic parameters for each of the views. The algorithm is based on CITE: Zhang2000, CITE: BouguetMCT and CITE: strobl2011iccv. See #calibrateCamera for other detailed explanations. SEE: calibrateCamera, findChessboardCorners, solvePnP, initCameraMatrix2D, stereoCalibrate, undistort

calibrateCameraRO
public static double calibrateCameraRO(java.util.List<Mat> objectPoints, java.util.List<Mat> imagePoints, Size imageSize, int iFixedPoint, Mat cameraMatrix, Mat distCoeffs, java.util.List<Mat> rvecs, java.util.List<Mat> tvecs, Mat newObjPoints, int flags, TermCriteria criteria)

calibrateCameraRO
public static double calibrateCameraRO(java.util.List<Mat> objectPoints, java.util.List<Mat> imagePoints, Size imageSize, int iFixedPoint, Mat cameraMatrix, Mat distCoeffs, java.util.List<Mat> rvecs, java.util.List<Mat> tvecs, Mat newObjPoints, int flags)

calibrateCameraRO
public static double calibrateCameraRO(java.util.List<Mat> objectPoints, java.util.List<Mat> imagePoints, Size imageSize, int iFixedPoint, Mat cameraMatrix, Mat distCoeffs, java.util.List<Mat> rvecs, java.util.List<Mat> tvecs, Mat newObjPoints)
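The calibrateCameraRO overloads above are the plain variants without the standard-deviation outputs. A sketch reusing the hypothetical inputs from the calibrateCameraROExtended example above:

Mat newObjPoints = new Mat();
double rms = Calib3d.calibrateCameraRO(objectPoints, imagePoints, imageSize,
        iFixedPoint, cameraMatrix, distCoeffs, rvecs, tvecs, newObjPoints);
// An iFixedPoint outside [1, objectPoints.get(0).total() - 2] (e.g. -1)
// selects the standard calibration method and leaves newObjPoints unused.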

calibrationMatrixValues
public static void calibrationMatrixValues(Mat cameraMatrix, Size imageSize, double apertureWidth, double apertureHeight, double[] fovx, double[] fovy, double[] focalLength, Point principalPoint, double[] aspectRatio)
Computes useful camera characteristics from the camera intrinsic matrix. Parameters:
cameraMatrix
 Input camera intrinsic matrix that can be estimated by #calibrateCamera or #stereoCalibrate .imageSize
 Input image size in pixels.apertureWidth
 Physical width in mm of the sensor.apertureHeight
 Physical height in mm of the sensor.fovx
 Output field of view in degrees along the horizontal sensor axis.fovy
 Output field of view in degrees along the vertical sensor axis.focalLength
 Focal length of the lens in mm.principalPoint
 Principal point in mm.aspectRatio
 \(f_y/f_x\) The function computes various useful camera characteristics from the previously estimated camera matrix. Note: Do keep in mind that the unity measure 'mm' stands for whatever unit of measure one chooses for the chessboard pitch (it can thus be any value).
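A short sketch of querying these characteristics; the 6.4 mm x 4.8 mm sensor dimensions are illustrative assumptions, and the Java binding returns the scalar outputs through one-element arrays:

import org.opencv.calib3d.Calib3d;
import org.opencv.core.Point;

double[] fovx = new double[1], fovy = new double[1];
double[] focalLength = new double[1], aspectRatio = new double[1];
Point principalPoint = new Point();
Calib3d.calibrationMatrixValues(cameraMatrix, imageSize,
        6.4, 4.8,   // assumed physical sensor width/height in mm
        fovx, fovy, focalLength, principalPoint, aspectRatio);
System.out.printf("FOV %.1f x %.1f deg, focal length %.2f mm%n",
        fovx[0], fovy[0], focalLength[0]);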

stereoCalibrateExtended
public static double stereoCalibrateExtended(java.util.List<Mat> objectPoints, java.util.List<Mat> imagePoints1, java.util.List<Mat> imagePoints2, Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, Size imageSize, Mat R, Mat T, Mat E, Mat F, java.util.List<Mat> rvecs, java.util.List<Mat> tvecs, Mat perViewErrors, int flags, TermCriteria criteria)
Calibrates a stereo camera setup. This function finds the intrinsic parameters for each of the two cameras and the extrinsic parameters between the two cameras. Parameters:
objectPoints
 Vector of vectors of the calibration pattern points. The same structure as in REF: calibrateCamera. For each pattern view, both cameras need to see the same object points. Therefore, objectPoints.size(), imagePoints1.size(), and imagePoints2.size() need to be equal, and objectPoints[i].size(), imagePoints1[i].size(), and imagePoints2[i].size() need to be equal for each i.imagePoints1
 Vector of vectors of the projections of the calibration pattern points, observed by the first camera. The same structure as in REF: calibrateCamera.imagePoints2
 Vector of vectors of the projections of the calibration pattern points, observed by the second camera. The same structure as in REF: calibrateCamera.cameraMatrix1
 Input/output camera intrinsic matrix for the first camera, the same as in REF: calibrateCamera. Furthermore, for the stereo case, additional flags may be used, see below.distCoeffs1
 Input/output vector of distortion coefficients, the same as in REF: calibrateCamera.cameraMatrix2
 Input/output camera intrinsic matrix for the second camera. See description for cameraMatrix1.distCoeffs2
 Input/output lens distortion coefficients for the second camera. See description for distCoeffs1.imageSize
 Size of the image used only to initialize the camera intrinsic matrices.R
 Output rotation matrix. Together with the translation vector T, this matrix brings points given in the first camera's coordinate system to points in the second camera's coordinate system. In more technical terms, the tuple of R and T performs a change of basis from the first camera's coordinate system to the second camera's coordinate system. Due to its duality, this tuple is equivalent to the position of the first camera with respect to the second camera coordinate system.T
 Output translation vector, see description above.E
 Output essential matrix.F
 Output fundamental matrix.rvecs
 Output vector of rotation vectors ( REF: Rodrigues ) estimated for each pattern view in the coordinate system of the first camera of the stereo pair (e.g. std::vector<cv::Mat>). More in detail, each i-th rotation vector together with the corresponding i-th translation vector (see the next output parameter description) brings the calibration pattern from the object coordinate space (in which object points are specified) to the camera coordinate space of the first camera of the stereo pair. In more technical terms, the tuple of the i-th rotation and translation vector performs a change of basis from object coordinate space to camera coordinate space of the first camera of the stereo pair.tvecs
 Output vector of translation vectors estimated for each pattern view, see parameter description of previous output parameter ( rvecs ).perViewErrors
 Output vector of the RMS reprojection error estimated for each pattern view.flags
 Different flags that may be zero or a combination of the following values: REF: CALIB_FIX_INTRINSIC Fix cameraMatrix? and distCoeffs? so that only R, T, E, and F matrices are estimated.
 REF: CALIB_USE_INTRINSIC_GUESS Optimize some or all of the intrinsic parameters according to the specified flags. Initial values are provided by the user.
 REF: CALIB_USE_EXTRINSIC_GUESS R and T contain valid initial values that are optimized further. Otherwise R and T are initialized to the median value of the pattern views (each dimension separately).
 REF: CALIB_FIX_PRINCIPAL_POINT Fix the principal points during the optimization.
 REF: CALIB_FIX_FOCAL_LENGTH Fix \(f^{(j)}_x\) and \(f^{(j)}_y\) .
 REF: CALIB_FIX_ASPECT_RATIO Optimize \(f^{(j)}_y\) . Fix the ratio \(f^{(j)}_x/f^{(j)}_y\) .
 REF: CALIB_SAME_FOCAL_LENGTH Enforce \(f^{(0)}_x=f^{(1)}_x\) and \(f^{(0)}_y=f^{(1)}_y\) .
 REF: CALIB_ZERO_TANGENT_DIST Set tangential distortion coefficients for each camera to zeros and fix them there.
 REF: CALIB_FIX_K1,..., REF: CALIB_FIX_K6 Do not change the corresponding radial distortion coefficient during the optimization. If REF: CALIB_USE_INTRINSIC_GUESS is set, the coefficient from the supplied distCoeffs matrix is used. Otherwise, it is set to 0.
 REF: CALIB_RATIONAL_MODEL Enable coefficients k4, k5, and k6. To provide the backward compatibility, this extra flag should be explicitly specified to make the calibration function use the rational model and return 8 coefficients. If the flag is not set, the function computes and returns only 5 distortion coefficients.
 REF: CALIB_THIN_PRISM_MODEL Coefficients s1, s2, s3 and s4 are enabled. To provide the backward compatibility, this extra flag should be explicitly specified to make the calibration function use the thin prism model and return 12 coefficients. If the flag is not set, the function computes and returns only 5 distortion coefficients.
 REF: CALIB_FIX_S1_S2_S3_S4 The thin prism distortion coefficients are not changed during the optimization. If REF: CALIB_USE_INTRINSIC_GUESS is set, the coefficient from the supplied distCoeffs matrix is used. Otherwise, it is set to 0.
 REF: CALIB_TILTED_MODEL Coefficients tauX and tauY are enabled. To provide the backward compatibility, this extra flag should be explicitly specified to make the calibration function use the tilted sensor model and return 14 coefficients. If the flag is not set, the function computes and returns only 5 distortion coefficients.
 REF: CALIB_FIX_TAUX_TAUY The coefficients of the tilted sensor model are not changed during the optimization. If REF: CALIB_USE_INTRINSIC_GUESS is set, the coefficient from the supplied distCoeffs matrix is used. Otherwise, it is set to 0.
criteria
 Termination criteria for the iterative optimization algorithm. The function estimates the transformation between two cameras making a stereo pair. If one computes the poses of an object relative to the first camera and to the second camera, ( \(R_1\),\(T_1\) ) and (\(R_2\),\(T_2\)), respectively, for a stereo camera where the relative position and orientation between the two cameras are fixed, then those poses definitely relate to each other. This means that, if the relative position and orientation (\(R\),\(T\)) of the two cameras is known, it is possible to compute (\(R_2\),\(T_2\)) when (\(R_1\),\(T_1\)) is given. This is what the described function does. It computes (\(R\),\(T\)) such that: \(R_2=R R_1\) \(T_2=R T_1 + T.\) Therefore, one can compute the coordinate representation of a 3D point for the second camera's coordinate system when given the point's coordinate representation in the first camera's coordinate system: \(\begin{bmatrix} X_2 \\ Y_2 \\ Z_2 \\ 1 \end{bmatrix} = \begin{bmatrix} R & T \\ 0 & 1 \end{bmatrix} \begin{bmatrix} X_1 \\ Y_1 \\ Z_1 \\ 1 \end{bmatrix}.\) Optionally, it computes the essential matrix E: \(E = \begin{bmatrix} 0 & -T_2 & T_1 \\ T_2 & 0 & -T_0 \\ -T_1 & T_0 & 0 \end{bmatrix} R\) where \(T_i\) are components of the translation vector \(T\) : \(T=[T_0, T_1, T_2]^T\) . And the function can also compute the fundamental matrix F: \(F = cameraMatrix2^{-T} \cdot E \cdot cameraMatrix1^{-1}\) Besides the stereo-related information, the function can also perform a full calibration of each of the two cameras. However, due to the high dimensionality of the parameter space and noise in the input data, the function can diverge from the correct solution. If the intrinsic parameters can be estimated with high accuracy for each of the cameras individually (for example, using #calibrateCamera ), you are recommended to do so and then pass the REF: CALIB_FIX_INTRINSIC flag to the function along with the computed intrinsic parameters. Otherwise, if all the parameters are estimated at once, it makes sense to restrict some parameters, for example, pass REF: CALIB_SAME_FOCAL_LENGTH and REF: CALIB_ZERO_TANGENT_DIST flags, which is usually a reasonable assumption. Similarly to #calibrateCamera, the function minimizes the total reprojection error for all the points in all the available views from both cameras. The function returns the final value of the reprojection error. Returns:
 automatically generated
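A sketch of the recommended two-stage workflow: estimate each camera's intrinsics individually first (e.g. with calibrateCamera), then hold them fixed while estimating the stereo extrinsics. All input Mats and lists are assumed to be prepared as described above:

import java.util.ArrayList;
import java.util.List;
import org.opencv.calib3d.Calib3d;
import org.opencv.core.Mat;
import org.opencv.core.TermCriteria;

Mat R = new Mat(), T = new Mat(), E = new Mat(), F = new Mat();
List<Mat> rvecs = new ArrayList<>(), tvecs = new ArrayList<>();
Mat perViewErrors = new Mat();
TermCriteria criteria = new TermCriteria(
        TermCriteria.COUNT + TermCriteria.EPS, 100, 1e-5);
double rms = Calib3d.stereoCalibrateExtended(
        objectPoints, imagePoints1, imagePoints2,
        cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2,
        imageSize, R, T, E, F, rvecs, tvecs, perViewErrors,
        Calib3d.CALIB_FIX_INTRINSIC, criteria);
// A row of perViewErrors much larger than the overall RMS usually points
// to bad corner detection in that particular view.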

stereoCalibrateExtended
public static double stereoCalibrateExtended(java.util.List<Mat> objectPoints, java.util.List<Mat> imagePoints1, java.util.List<Mat> imagePoints2, Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, Size imageSize, Mat R, Mat T, Mat E, Mat F, java.util.List<Mat> rvecs, java.util.List<Mat> tvecs, Mat perViewErrors, int flags)
Calibrates a stereo camera setup. This function finds the intrinsic parameters for each of the two cameras and the extrinsic parameters between the two cameras. Parameters:
objectPoints
 Vector of vectors of the calibration pattern points. The same structure as in REF: calibrateCamera. For each pattern view, both cameras need to see the same object points. Therefore, objectPoints.size(), imagePoints1.size(), and imagePoints2.size() need to be equal, and objectPoints[i].size(), imagePoints1[i].size(), and imagePoints2[i].size() need to be equal for each i.imagePoints1
 Vector of vectors of the projections of the calibration pattern points, observed by the first camera. The same structure as in REF: calibrateCamera.imagePoints2
 Vector of vectors of the projections of the calibration pattern points, observed by the second camera. The same structure as in REF: calibrateCamera.cameraMatrix1
 Input/output camera intrinsic matrix for the first camera, the same as in REF: calibrateCamera. Furthermore, for the stereo case, additional flags may be used, see below.distCoeffs1
 Input/output vector of distortion coefficients, the same as in REF: calibrateCamera.cameraMatrix2
 Input/output camera intrinsic matrix for the second camera. See description for cameraMatrix1.distCoeffs2
 Input/output lens distortion coefficients for the second camera. See description for distCoeffs1.imageSize
 Size of the image used only to initialize the camera intrinsic matrices.R
 Output rotation matrix. Together with the translation vector T, this matrix brings points given in the first camera's coordinate system to points in the second camera's coordinate system. In more technical terms, the tuple of R and T performs a change of basis from the first camera's coordinate system to the second camera's coordinate system. Due to its duality, this tuple is equivalent to the position of the first camera with respect to the second camera coordinate system.T
 Output translation vector, see description above.E
 Output essential matrix.F
 Output fundamental matrix.rvecs
 Output vector of rotation vectors ( REF: Rodrigues ) estimated for each pattern view in the coordinate system of the first camera of the stereo pair (e.g. std::vector<cv::Mat>). More in detail, each i-th rotation vector together with the corresponding i-th translation vector (see the next output parameter description) brings the calibration pattern from the object coordinate space (in which object points are specified) to the camera coordinate space of the first camera of the stereo pair. In more technical terms, the tuple of the i-th rotation and translation vector performs a change of basis from object coordinate space to camera coordinate space of the first camera of the stereo pair.tvecs
 Output vector of translation vectors estimated for each pattern view, see parameter description of previous output parameter ( rvecs ).perViewErrors
 Output vector of the RMS reprojection error estimated for each pattern view.flags
 Different flags that may be zero or a combination of the following values: REF: CALIB_FIX_INTRINSIC Fix cameraMatrix? and distCoeffs? so that only R, T, E, and F matrices are estimated.
 REF: CALIB_USE_INTRINSIC_GUESS Optimize some or all of the intrinsic parameters according to the specified flags. Initial values are provided by the user.
 REF: CALIB_USE_EXTRINSIC_GUESS R and T contain valid initial values that are optimized further. Otherwise R and T are initialized to the median value of the pattern views (each dimension separately).
 REF: CALIB_FIX_PRINCIPAL_POINT Fix the principal points during the optimization.
 REF: CALIB_FIX_FOCAL_LENGTH Fix \(f^{(j)}_x\) and \(f^{(j)}_y\) .
 REF: CALIB_FIX_ASPECT_RATIO Optimize \(f^{(j)}_y\) . Fix the ratio \(f^{(j)}_x/f^{(j)}_y\) .
 REF: CALIB_SAME_FOCAL_LENGTH Enforce \(f^{(0)}_x=f^{(1)}_x\) and \(f^{(0)}_y=f^{(1)}_y\) .
 REF: CALIB_ZERO_TANGENT_DIST Set tangential distortion coefficients for each camera to zeros and fix them there.
 REF: CALIB_FIX_K1,..., REF: CALIB_FIX_K6 Do not change the corresponding radial distortion coefficient during the optimization. If REF: CALIB_USE_INTRINSIC_GUESS is set, the coefficient from the supplied distCoeffs matrix is used. Otherwise, it is set to 0.
 REF: CALIB_RATIONAL_MODEL Enable coefficients k4, k5, and k6. To provide the backward compatibility, this extra flag should be explicitly specified to make the calibration function use the rational model and return 8 coefficients. If the flag is not set, the function computes and returns only 5 distortion coefficients.
 REF: CALIB_THIN_PRISM_MODEL Coefficients s1, s2, s3 and s4 are enabled. To provide the backward compatibility, this extra flag should be explicitly specified to make the calibration function use the thin prism model and return 12 coefficients. If the flag is not set, the function computes and returns only 5 distortion coefficients.
 REF: CALIB_FIX_S1_S2_S3_S4 The thin prism distortion coefficients are not changed during the optimization. If REF: CALIB_USE_INTRINSIC_GUESS is set, the coefficient from the supplied distCoeffs matrix is used. Otherwise, it is set to 0.
 REF: CALIB_TILTED_MODEL Coefficients tauX and tauY are enabled. To provide the backward compatibility, this extra flag should be explicitly specified to make the calibration function use the tilted sensor model and return 14 coefficients. If the flag is not set, the function computes and returns only 5 distortion coefficients.
 REF: CALIB_FIX_TAUX_TAUY The coefficients of the tilted sensor model are not changed during the optimization. If REF: CALIB_USE_INTRINSIC_GUESS is set, the coefficient from the supplied distCoeffs matrix is used. Otherwise, it is set to 0.
 Returns:
 automatically generated

stereoCalibrateExtended
public static double stereoCalibrateExtended(java.util.List<Mat> objectPoints, java.util.List<Mat> imagePoints1, java.util.List<Mat> imagePoints2, Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, Size imageSize, Mat R, Mat T, Mat E, Mat F, java.util.List<Mat> rvecs, java.util.List<Mat> tvecs, Mat perViewErrors)
Calibrates a stereo camera setup. This function finds the intrinsic parameters for each of the two cameras and the extrinsic parameters between the two cameras. Parameters:
objectPoints
 Vector of vectors of the calibration pattern points. The same structure as in REF: calibrateCamera. For each pattern view, both cameras need to see the same object points. Therefore, objectPoints.size(), imagePoints1.size(), and imagePoints2.size() need to be equal, and objectPoints[i].size(), imagePoints1[i].size(), and imagePoints2[i].size() need to be equal for each i.imagePoints1
 Vector of vectors of the projections of the calibration pattern points, observed by the first camera. The same structure as in REF: calibrateCamera.imagePoints2
 Vector of vectors of the projections of the calibration pattern points, observed by the second camera. The same structure as in REF: calibrateCamera.cameraMatrix1
 Input/output camera intrinsic matrix for the first camera, the same as in REF: calibrateCamera. Furthermore, for the stereo case, additional flags may be used, see below.distCoeffs1
 Input/output vector of distortion coefficients, the same as in REF: calibrateCamera.cameraMatrix2
 Input/output camera intrinsic matrix for the second camera. See description for cameraMatrix1.distCoeffs2
 Input/output lens distortion coefficients for the second camera. See description for distCoeffs1.imageSize
 Size of the image used only to initialize the camera intrinsic matrices.R
 Output rotation matrix. Together with the translation vector T, this matrix brings points given in the first camera's coordinate system to points in the second camera's coordinate system. In more technical terms, the tuple of R and T performs a change of basis from the first camera's coordinate system to the second camera's coordinate system. Due to its duality, this tuple is equivalent to the position of the first camera with respect to the second camera coordinate system.T
 Output translation vector, see description above.E
 Output essential matrix.F
 Output fundamental matrix.rvecs
 Output vector of rotation vectors ( REF: Rodrigues ) estimated for each pattern view in the coordinate system of the first camera of the stereo pair (e.g. std::vector<cv::Mat>). More in detail, each i-th rotation vector together with the corresponding i-th translation vector (see the next output parameter description) brings the calibration pattern from the object coordinate space (in which object points are specified) to the camera coordinate space of the first camera of the stereo pair. In more technical terms, the tuple of the i-th rotation and translation vector performs a change of basis from object coordinate space to camera coordinate space of the first camera of the stereo pair.tvecs
 Output vector of translation vectors estimated for each pattern view, see parameter description of previous output parameter ( rvecs ).perViewErrors
 Output vector of the RMS reprojection error estimated for each pattern view.
 Returns:
 automatically generated

stereoCalibrate
public static double stereoCalibrate(java.util.List<Mat> objectPoints, java.util.List<Mat> imagePoints1, java.util.List<Mat> imagePoints2, Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, Size imageSize, Mat R, Mat T, Mat E, Mat F, int flags, TermCriteria criteria)

stereoCalibrate
public static double stereoCalibrate(java.util.List<Mat> objectPoints, java.util.List<Mat> imagePoints1, java.util.List<Mat> imagePoints2, Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, Size imageSize, Mat R, Mat T, Mat E, Mat F, int flags)

stereoCalibrate
public static double stereoCalibrate(java.util.List<Mat> objectPoints, java.util.List<Mat> imagePoints1, java.util.List<Mat> imagePoints2, Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, Size imageSize, Mat R, Mat T, Mat E, Mat F)

stereoCalibrate
public static double stereoCalibrate(java.util.List<Mat> objectPoints, java.util.List<Mat> imagePoints1, java.util.List<Mat> imagePoints2, Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, Size imageSize, Mat R, Mat T, Mat E, Mat F, Mat perViewErrors, int flags, TermCriteria criteria)

stereoCalibrate
public static double stereoCalibrate(java.util.List<Mat> objectPoints, java.util.List<Mat> imagePoints1, java.util.List<Mat> imagePoints2, Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, Size imageSize, Mat R, Mat T, Mat E, Mat F, Mat perViewErrors, int flags)

stereoCalibrate
public static double stereoCalibrate(java.util.List<Mat> objectPoints, java.util.List<Mat> imagePoints1, java.util.List<Mat> imagePoints2, Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, Size imageSize, Mat R, Mat T, Mat E, Mat F, Mat perViewErrors)
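The stereoCalibrate overloads above are convenience forms of stereoCalibrateExtended without the per-view outputs. A sketch reusing the hypothetical inputs from the stereoCalibrateExtended example above:

double rms = Calib3d.stereoCalibrate(
        objectPoints, imagePoints1, imagePoints2,
        cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2,
        imageSize, R, T, E, F, Calib3d.CALIB_FIX_INTRINSIC);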

stereoRectify
public static void stereoRectify(Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, Size imageSize, Mat R, Mat T, Mat R1, Mat R2, Mat P1, Mat P2, Mat Q, int flags, double alpha, Size newImageSize, Rect validPixROI1, Rect validPixROI2)
Computes rectification transforms for each head of a calibrated stereo camera. Parameters:
cameraMatrix1
 First camera intrinsic matrix.distCoeffs1
 First camera distortion parameters.cameraMatrix2
 Second camera intrinsic matrix.distCoeffs2
 Second camera distortion parameters.imageSize
 Size of the image used for stereo calibration.R
 Rotation matrix from the coordinate system of the first camera to the second camera, see REF: stereoCalibrate.T
 Translation vector from the coordinate system of the first camera to the second camera, see REF: stereoCalibrate.R1
 Output 3x3 rectification transform (rotation matrix) for the first camera. This matrix brings points given in the unrectified first camera's coordinate system to points in the rectified first camera's coordinate system. In more technical terms, it performs a change of basis from the unrectified first camera's coordinate system to the rectified first camera's coordinate system.R2
 Output 3x3 rectification transform (rotation matrix) for the second camera. This matrix brings points given in the unrectified second camera's coordinate system to points in the rectified second camera's coordinate system. In more technical terms, it performs a change of basis from the unrectified second camera's coordinate system to the rectified second camera's coordinate system.P1
 Output 3x4 projection matrix in the new (rectified) coordinate systems for the first camera, i.e. it projects points given in the rectified first camera coordinate system into the rectified first camera's image.P2
 Output 3x4 projection matrix in the new (rectified) coordinate systems for the second camera, i.e. it projects points given in the rectified first camera coordinate system into the rectified second camera's image.Q
 Output \(4 \times 4\) disparity-to-depth mapping matrix (see REF: reprojectImageTo3D).flags
 Operation flags that may be zero or REF: CALIB_ZERO_DISPARITY . If the flag is set, the function makes the principal points of each camera have the same pixel coordinates in the rectified views. And if the flag is not set, the function may still shift the images in the horizontal or vertical direction (depending on the orientation of epipolar lines) to maximize the useful image area.alpha
 Free scaling parameter. If it is -1 or absent, the function performs the default scaling. Otherwise, the parameter should be between 0 and 1. alpha=0 means that the rectified images are zoomed and shifted so that only valid pixels are visible (no black areas after rectification). alpha=1 means that the rectified image is decimated and shifted so that all the pixels from the original images from the cameras are retained in the rectified images (no source image pixels are lost). Any intermediate value yields an intermediate result between those two extreme cases.newImageSize
 New image resolution after rectification. The same size should be passed to #initUndistortRectifyMap (see the stereo_calib.cpp sample in OpenCV samples directory). When (0,0) is passed (default), it is set to the original imageSize . Setting it to a larger value can help you preserve details in the original image, especially when there is a big radial distortion.validPixROI1
 Optional output rectangles inside the rectified images where all the pixels are valid. If alpha=0 , the ROIs cover the whole images. Otherwise, they are likely to be smaller (see the picture below).validPixROI2
 Optional output rectangles inside the rectified images where all the pixels are valid. If alpha=0 , the ROIs cover the whole images. Otherwise, they are likely to be smaller (see the picture below). The function computes the rotation matrices for each camera that (virtually) make both camera image planes the same plane. Consequently, this makes all the epipolar lines parallel and thus simplifies the dense stereo correspondence problem. The function takes the matrices computed by #stereoCalibrate as input. As output, it provides two rotation matrices and also two projection matrices in the new coordinates. The function distinguishes the following two cases: Horizontal stereo: the first and the second camera views are shifted relative to each other mainly along the x-axis (with a possible small vertical shift). In the rectified images, the corresponding epipolar lines in the left and right cameras are horizontal and have the same y-coordinate. P1 and P2 look like: \(\texttt{P1} = \begin{bmatrix} f & 0 & cx_1 & 0 \\ 0 & f & cy & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}\) \(\texttt{P2} = \begin{bmatrix} f & 0 & cx_2 & T_x \cdot f \\ 0 & f & cy & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix},\) where \(T_x\) is a horizontal shift between the cameras and \(cx_1=cx_2\) if REF: CALIB_ZERO_DISPARITY is set.
 Vertical stereo: the first and the second camera views are shifted relative to each other mainly in the vertical direction (and probably a bit in the horizontal direction too). The epipolar lines in the rectified images are vertical and have the same x-coordinate. P1 and P2 look like: \(\texttt{P1} = \begin{bmatrix} f & 0 & cx & 0 \\ 0 & f & cy_1 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}\) \(\texttt{P2} = \begin{bmatrix} f & 0 & cx & 0 \\ 0 & f & cy_2 & T_y \cdot f \\ 0 & 0 & 1 & 0 \end{bmatrix},\) where \(T_y\) is a vertical shift between the cameras and \(cy_1=cy_2\) if REF: CALIB_ZERO_DISPARITY is set.
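A sketch chaining stereoRectify into rectification maps for the first camera. It assumes R and T come from stereoCalibrate and leftImage is a captured frame; note that in the OpenCV 4.x Java bindings initUndistortRectifyMap is exposed on Calib3d while remap lives in Imgproc:

import org.opencv.calib3d.Calib3d;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.Rect;
import org.opencv.imgproc.Imgproc;

Mat R1 = new Mat(), R2 = new Mat(), P1 = new Mat(), P2 = new Mat(), Q = new Mat();
Rect roi1 = new Rect(), roi2 = new Rect();
Calib3d.stereoRectify(cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2,
        imageSize, R, T, R1, R2, P1, P2, Q,
        Calib3d.CALIB_ZERO_DISPARITY, 0.0 /* alpha: keep only valid pixels */,
        imageSize, roi1, roi2);
// Build the undistort+rectify maps for the first camera and warp a frame:
Mat map1x = new Mat(), map1y = new Mat();
Calib3d.initUndistortRectifyMap(cameraMatrix1, distCoeffs1, R1, P1,
        imageSize, CvType.CV_32FC1, map1x, map1y);
Mat leftImage = new Mat();   // assume a captured frame is loaded here
Mat leftRectified = new Mat();
Imgproc.remap(leftImage, leftRectified, map1x, map1y, Imgproc.INTER_LINEAR);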

stereoRectify
public static void stereoRectify(Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, Size imageSize, Mat R, Mat T, Mat R1, Mat R2, Mat P1, Mat P2, Mat Q, int flags, double alpha, Size newImageSize, Rect validPixROI1)
Computes rectification transforms for each head of a calibrated stereo camera. Parameters:
cameraMatrix1
 First camera intrinsic matrix.distCoeffs1
 First camera distortion parameters.cameraMatrix2
 Second camera intrinsic matrix.distCoeffs2
 Second camera distortion parameters.imageSize
 Size of the image used for stereo calibration.R
 Rotation matrix from the coordinate system of the first camera to the second camera, see REF: stereoCalibrate.T
 Translation vector from the coordinate system of the first camera to the second camera, see REF: stereoCalibrate.R1
 Output 3x3 rectification transform (rotation matrix) for the first camera. This matrix brings points given in the unrectified first camera's coordinate system to points in the rectified first camera's coordinate system. In more technical terms, it performs a change of basis from the unrectified first camera's coordinate system to the rectified first camera's coordinate system.R2
 Output 3x3 rectification transform (rotation matrix) for the second camera. This matrix brings points given in the unrectified second camera's coordinate system to points in the rectified second camera's coordinate system. In more technical terms, it performs a change of basis from the unrectified second camera's coordinate system to the rectified second camera's coordinate system.P1
 Output 3x4 projection matrix in the new (rectified) coordinate systems for the first camera, i.e. it projects points given in the rectified first camera coordinate system into the rectified first camera's image.P2
 Output 3x4 projection matrix in the new (rectified) coordinate systems for the second camera, i.e. it projects points given in the rectified first camera coordinate system into the rectified second camera's image.Q
 Output \(4 \times 4\) disparity-to-depth mapping matrix (see REF: reprojectImageTo3D).flags
 Operation flags that may be zero or REF: CALIB_ZERO_DISPARITY . If the flag is set, the function makes the principal points of each camera have the same pixel coordinates in the rectified views. And if the flag is not set, the function may still shift the images in the horizontal or vertical direction (depending on the orientation of epipolar lines) to maximize the useful image area.alpha
 Free scaling parameter. If it is -1 or absent, the function performs the default scaling. Otherwise, the parameter should be between 0 and 1. alpha=0 means that the rectified images are zoomed and shifted so that only valid pixels are visible (no black areas after rectification). alpha=1 means that the rectified image is decimated and shifted so that all the pixels from the original images from the cameras are retained in the rectified images (no source image pixels are lost). Any intermediate value yields an intermediate result between those two extreme cases.newImageSize
 New image resolution after rectification. The same size should be passed to #initUndistortRectifyMap (see the stereo_calib.cpp sample in OpenCV samples directory). When (0,0) is passed (default), it is set to the original imageSize . Setting it to a larger value can help you preserve details in the original image, especially when there is a big radial distortion.validPixROI1
 Optional output rectangles inside the rectified images where all the pixels are valid. If alpha=0 , the ROIs cover the whole images. Otherwise, they are likely to be smaller (see the picture below). The function computes the rotation matrices for each camera that (virtually) make both camera image planes the same plane. Consequently, this makes all the epipolar lines parallel and thus simplifies the dense stereo correspondence problem. The function takes the matrices computed by #stereoCalibrate as input. As output, it provides two rotation matrices and also two projection matrices in the new coordinates. The function distinguishes the following two cases: Horizontal stereo: the first and the second camera views are shifted relative to each other mainly along the x-axis (with a possible small vertical shift). In the rectified images, the corresponding epipolar lines in the left and right cameras are horizontal and have the same y-coordinate. P1 and P2 look like: \(\texttt{P1} = \begin{bmatrix} f & 0 & cx_1 & 0 \\ 0 & f & cy & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}\) \(\texttt{P2} = \begin{bmatrix} f & 0 & cx_2 & T_x \cdot f \\ 0 & f & cy & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix},\) where \(T_x\) is a horizontal shift between the cameras and \(cx_1=cx_2\) if REF: CALIB_ZERO_DISPARITY is set.
 Vertical stereo: the first and the second camera views are shifted relative to each other mainly in the vertical direction (and probably a bit in the horizontal direction too). The epipolar lines in the rectified images are vertical and have the same x-coordinate. P1 and P2 look like: \(\texttt{P1} = \begin{bmatrix} f & 0 & cx & 0 \\ 0 & f & cy_1 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}\) \(\texttt{P2} = \begin{bmatrix} f & 0 & cx & 0 \\ 0 & f & cy_2 & T_y \cdot f \\ 0 & 0 & 1 & 0 \end{bmatrix},\) where \(T_y\) is a vertical shift between the cameras and \(cy_1=cy_2\) if REF: CALIB_ZERO_DISPARITY is set.

stereoRectify
public static void stereoRectify(Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, Size imageSize, Mat R, Mat T, Mat R1, Mat R2, Mat P1, Mat P2, Mat Q, int flags, double alpha, Size newImageSize)
Computes rectification transforms for each head of a calibrated stereo camera. Parameters:
cameraMatrix1
 First camera intrinsic matrix.distCoeffs1
 First camera distortion parameters.cameraMatrix2
 Second camera intrinsic matrix.distCoeffs2
 Second camera distortion parameters.imageSize
 Size of the image used for stereo calibration.R
 Rotation matrix from the coordinate system of the first camera to the second camera, see REF: stereoCalibrate.T
 Translation vector from the coordinate system of the first camera to the second camera, see REF: stereoCalibrate.R1
 Output 3x3 rectification transform (rotation matrix) for the first camera. This matrix brings points given in the unrectified first camera's coordinate system to points in the rectified first camera's coordinate system. In more technical terms, it performs a change of basis from the unrectified first camera's coordinate system to the rectified first camera's coordinate system.R2
 Output 3x3 rectification transform (rotation matrix) for the second camera. This matrix brings points given in the unrectified second camera's coordinate system to points in the rectified second camera's coordinate system. In more technical terms, it performs a change of basis from the unrectified second camera's coordinate system to the rectified second camera's coordinate system.P1
 Output 3x4 projection matrix in the new (rectified) coordinate systems for the first camera, i.e. it projects points given in the rectified first camera coordinate system into the rectified first camera's image.P2
 Output 3x4 projection matrix in the new (rectified) coordinate systems for the second camera, i.e. it projects points given in the rectified first camera coordinate system into the rectified second camera's image.Q
 Output \(4 \times 4\) disparity-to-depth mapping matrix (see REF: reprojectImageTo3D).flags
 Operation flags that may be zero or REF: CALIB_ZERO_DISPARITY . If the flag is set, the function makes the principal points of each camera have the same pixel coordinates in the rectified views. And if the flag is not set, the function may still shift the images in the horizontal or vertical direction (depending on the orientation of epipolar lines) to maximize the useful image area.alpha
 Free scaling parameter. If it is -1 or absent, the function performs the default scaling. Otherwise, the parameter should be between 0 and 1. alpha=0 means that the rectified images are zoomed and shifted so that only valid pixels are visible (no black areas after rectification). alpha=1 means that the rectified image is decimated and shifted so that all the pixels from the original images from the cameras are retained in the rectified images (no source image pixels are lost). Any intermediate value yields an intermediate result between those two extreme cases.newImageSize
 New image resolution after rectification. The same size should be passed to #initUndistortRectifyMap (see the stereo_calib.cpp sample in OpenCV samples directory). When (0,0) is passed (default), it is set to the original imageSize . Setting it to a larger value can help you preserve details in the original image, especially when there is a big radial distortion. The function computes the rotation matrices for each camera that (virtually) make both camera image planes the same plane. Consequently, this makes all the epipolar lines parallel and thus simplifies the dense stereo correspondence problem. The function takes the matrices computed by #stereoCalibrate as input. As output, it provides two rotation matrices and also two projection matrices in the new coordinates. The function distinguishes the following two cases: Horizontal stereo: the first and the second camera views are shifted relative to each other mainly along the x-axis (with a possible small vertical shift). In the rectified images, the corresponding epipolar lines in the left and right cameras are horizontal and have the same y-coordinate. P1 and P2 look like: \(\texttt{P1} = \begin{bmatrix} f & 0 & cx_1 & 0 \\ 0 & f & cy & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}\) \(\texttt{P2} = \begin{bmatrix} f & 0 & cx_2 & T_x \cdot f \\ 0 & f & cy & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix},\) where \(T_x\) is a horizontal shift between the cameras and \(cx_1=cx_2\) if REF: CALIB_ZERO_DISPARITY is set.
 Vertical stereo: the first and the second camera views are shifted relative to each other mainly in the vertical direction (and probably a bit in the horizontal direction too). The epipolar lines in the rectified images are vertical and have the same x-coordinate. P1 and P2 look like: \(\texttt{P1} = \begin{bmatrix} f & 0 & cx & 0 \\ 0 & f & cy_1 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}\) \(\texttt{P2} = \begin{bmatrix} f & 0 & cx & 0 \\ 0 & f & cy_2 & T_y \cdot f \\ 0 & 0 & 1 & 0 \end{bmatrix},\) where \(T_y\) is a vertical shift between the cameras and \(cy_1=cy_2\) if REF: CALIB_ZERO_DISPARITY is set.

stereoRectify
public static void stereoRectify(Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, Size imageSize, Mat R, Mat T, Mat R1, Mat R2, Mat P1, Mat P2, Mat Q, int flags, double alpha)
Computes rectification transforms for each head of a calibrated stereo camera. Parameters:
cameraMatrix1
 First camera intrinsic matrix.distCoeffs1
 First camera distortion parameters.cameraMatrix2
 Second camera intrinsic matrix.distCoeffs2
 Second camera distortion parameters.imageSize
 Size of the image used for stereo calibration.R
 Rotation matrix from the coordinate system of the first camera to the second camera, see REF: stereoCalibrate.T
 Translation vector from the coordinate system of the first camera to the second camera, see REF: stereoCalibrate.R1
 Output 3x3 rectification transform (rotation matrix) for the first camera. This matrix brings points given in the unrectified first camera's coordinate system to points in the rectified first camera's coordinate system. In more technical terms, it performs a change of basis from the unrectified first camera's coordinate system to the rectified first camera's coordinate system.R2
 Output 3x3 rectification transform (rotation matrix) for the second camera. This matrix brings points given in the unrectified second camera's coordinate system to points in the rectified second camera's coordinate system. In more technical terms, it performs a change of basis from the unrectified second camera's coordinate system to the rectified second camera's coordinate system.P1
 Output 3x4 projection matrix in the new (rectified) coordinate systems for the first camera, i.e. it projects points given in the rectified first camera coordinate system into the rectified first camera's image.P2
 Output 3x4 projection matrix in the new (rectified) coordinate systems for the second camera, i.e. it projects points given in the rectified first camera coordinate system into the rectified second camera's image.Q
 Output \(4 \times 4\) disparity-to-depth mapping matrix (see REF: reprojectImageTo3D).flags
 Operation flags that may be zero or REF: CALIB_ZERO_DISPARITY . If the flag is set, the function makes the principal points of each camera have the same pixel coordinates in the rectified views. And if the flag is not set, the function may still shift the images in the horizontal or vertical direction (depending on the orientation of epipolar lines) to maximize the useful image area.alpha
 Free scaling parameter. If it is -1 or absent, the function performs the default scaling. Otherwise, the parameter should be between 0 and 1. alpha=0 means that the rectified images are zoomed and shifted so that only valid pixels are visible (no black areas after rectification). alpha=1 means that the rectified image is decimated and shifted so that all the pixels from the original images from the cameras are retained in the rectified images (no source image pixels are lost). Any intermediate value yields an intermediate result between those two extreme cases. The function computes the rotation matrices for each camera that (virtually) make both camera image planes the same plane. Consequently, this makes all the epipolar lines parallel and thus simplifies the dense stereo correspondence problem. The function takes the matrices computed by #stereoCalibrate as input. As output, it provides two rotation matrices and also two projection matrices in the new coordinates. The function distinguishes the following two cases: Horizontal stereo: the first and the second camera views are shifted relative to each other mainly along the x-axis (with a possible small vertical shift). In the rectified images, the corresponding epipolar lines in the left and right cameras are horizontal and have the same y-coordinate. P1 and P2 look like: \(\texttt{P1} = \begin{bmatrix} f & 0 & cx_1 & 0 \\ 0 & f & cy & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}\) \(\texttt{P2} = \begin{bmatrix} f & 0 & cx_2 & T_x \cdot f \\ 0 & f & cy & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix},\) where \(T_x\) is a horizontal shift between the cameras and \(cx_1=cx_2\) if REF: CALIB_ZERO_DISPARITY is set.
 Vertical stereo: the first and the second camera views are shifted relative to each other mainly in the vertical direction (and probably a bit in the horizontal direction too). The epipolar lines in the rectified images are vertical and have the same x-coordinate. P1 and P2 look like: \(\texttt{P1} = \begin{bmatrix} f & 0 & cx & 0 \\ 0 & f & cy_1 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}\) \(\texttt{P2} = \begin{bmatrix} f & 0 & cx & 0 \\ 0 & f & cy_2 & T_y \cdot f \\ 0 & 0 & 1 & 0 \end{bmatrix},\) where \(T_y\) is a vertical shift between the cameras and \(cy_1=cy_2\) if REF: CALIB_ZERO_DISPARITY is set.

stereoRectify
public static void stereoRectify(Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, Size imageSize, Mat R, Mat T, Mat R1, Mat R2, Mat P1, Mat P2, Mat Q, int flags)
Computes rectification transforms for each head of a calibrated stereo camera. Parameters:
cameraMatrix1
 First camera intrinsic matrix.distCoeffs1
 First camera distortion parameters.cameraMatrix2
 Second camera intrinsic matrix.distCoeffs2
 Second camera distortion parameters.imageSize
 Size of the image used for stereo calibration.R
 Rotation matrix from the coordinate system of the first camera to the second camera, see REF: stereoCalibrate.T
 Translation vector from the coordinate system of the first camera to the second camera, see REF: stereoCalibrate.R1
 Output 3x3 rectification transform (rotation matrix) for the first camera. This matrix brings points given in the unrectified first camera's coordinate system to points in the rectified first camera's coordinate system. In more technical terms, it performs a change of basis from the unrectified first camera's coordinate system to the rectified first camera's coordinate system.R2
 Output 3x3 rectification transform (rotation matrix) for the second camera. This matrix brings points given in the unrectified second camera's coordinate system to points in the rectified second camera's coordinate system. In more technical terms, it performs a change of basis from the unrectified second camera's coordinate system to the rectified second camera's coordinate system.P1
 Output 3x4 projection matrix in the new (rectified) coordinate systems for the first camera, i.e. it projects points given in the rectified first camera coordinate system into the rectified first camera's image.P2
 Output 3x4 projection matrix in the new (rectified) coordinate systems for the second camera, i.e. it projects points given in the rectified first camera coordinate system into the rectified second camera's image.Q
 Output \(4 \times 4\) disparity-to-depth mapping matrix (see REF: reprojectImageTo3D).flags
 Operation flags that may be zero or REF: CALIB_ZERO_DISPARITY . If the flag is set, the function makes the principal points of each camera have the same pixel coordinates in the rectified views. And if the flag is not set, the function may still shift the images in the horizontal or vertical direction (depending on the orientation of epipolar lines) to maximize the useful image area. The function computes the rotation matrices for each camera that (virtually) make both camera image planes the same plane. Consequently, this makes all the epipolar lines parallel and thus simplifies the dense stereo correspondence problem. The function takes the matrices computed by #stereoCalibrate as input. As output, it provides two rotation matrices and also two projection matrices in the new coordinates. The function distinguishes the following two cases: Horizontal stereo: the first and the second camera views are shifted relative to each other mainly along the x-axis (with a possible small vertical shift). In the rectified images, the corresponding epipolar lines in the left and right cameras are horizontal and have the same y-coordinate. P1 and P2 look like: \(\texttt{P1} = \begin{bmatrix} f & 0 & cx_1 & 0 \\ 0 & f & cy & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}\) \(\texttt{P2} = \begin{bmatrix} f & 0 & cx_2 & T_x \cdot f \\ 0 & f & cy & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix},\) where \(T_x\) is a horizontal shift between the cameras and \(cx_1=cx_2\) if REF: CALIB_ZERO_DISPARITY is set.
 Vertical stereo: the first and the second camera views are shifted relative to each other mainly in the vertical direction (and probably a bit in the horizontal direction too). The epipolar lines in the rectified images are vertical and have the same x-coordinate. P1 and P2 look like: \(\texttt{P1} = \begin{bmatrix} f & 0 & cx & 0 \\ 0 & f & cy_1 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}\) \(\texttt{P2} = \begin{bmatrix} f & 0 & cx & 0 \\ 0 & f & cy_2 & T_y \cdot f \\ 0 & 0 & 1 & 0 \end{bmatrix},\) where \(T_y\) is a vertical shift between the cameras and \(cy_1=cy_2\) if REF: CALIB_ZERO_DISPARITY is set.

stereoRectify
public static void stereoRectify(Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, Size imageSize, Mat R, Mat T, Mat R1, Mat R2, Mat P1, Mat P2, Mat Q)
Computes rectification transforms for each head of a calibrated stereo camera. Parameters:
cameraMatrix1
 First camera intrinsic matrix.distCoeffs1
 First camera distortion parameters.cameraMatrix2
 Second camera intrinsic matrix.distCoeffs2
 Second camera distortion parameters.imageSize
 Size of the image used for stereo calibration.R
 Rotation matrix from the coordinate system of the first camera to the second camera, see REF: stereoCalibrate.T
 Translation vector from the coordinate system of the first camera to the second camera, see REF: stereoCalibrate.R1
 Output 3x3 rectification transform (rotation matrix) for the first camera. This matrix brings points given in the unrectified first camera's coordinate system to points in the rectified first camera's coordinate system. In more technical terms, it performs a change of basis from the unrectified first camera's coordinate system to the rectified first camera's coordinate system.R2
 Output 3x3 rectification transform (rotation matrix) for the second camera. This matrix brings points given in the unrectified second camera's coordinate system to points in the rectified second camera's coordinate system. In more technical terms, it performs a change of basis from the unrectified second camera's coordinate system to the rectified second camera's coordinate system.P1
 Output 3x4 projection matrix in the new (rectified) coordinate systems for the first camera, i.e. it projects points given in the rectified first camera coordinate system into the rectified first camera's image.P2
 Output 3x4 projection matrix in the new (rectified) coordinate systems for the second camera, i.e. it projects points given in the rectified first camera coordinate system into the rectified second camera's image.Q
 Output \(4 \times 4\) disparity-to-depth mapping matrix (see REF: reprojectImageTo3D). In this overload the remaining parameters take their defaults: the function makes the principal points of each camera have the same pixel coordinates in the rectified views, and it may still shift the images in the horizontal or vertical direction (depending on the orientation of epipolar lines) to maximize the useful image area. For the free scaling parameter of the full overload, alpha=0 means that the rectified images are zoomed and shifted so that only valid pixels are visible (no black areas after rectification), alpha=1 means that the rectified image is decimated and shifted so that all the pixels from the original images from the cameras are retained in the rectified images (no source image pixels are lost), and any intermediate value yields an intermediate result between those two extreme cases. The new image resolution after rectification should be passed to #initUndistortRectifyMap (see the stereo_calib.cpp sample in the OpenCV samples directory); when (0,0) is passed (default), it is set to the original imageSize. Setting it to a larger value can help you preserve details in the original image, especially when there is a big radial distortion. The optional output rectangles inside the rectified images outline the regions where all the pixels are valid; if alpha=0, the ROIs cover the whole images, otherwise they are likely to be smaller (see the picture below). The function computes the rotation matrices for each camera that (virtually) make both camera image planes the same plane. Consequently, this makes all the epipolar lines parallel and thus simplifies the dense stereo correspondence problem. The function takes the matrices computed by #stereoCalibrate as input. As output, it provides two rotation matrices and also two projection matrices in the new coordinates. The function distinguishes the following two cases: Horizontal stereo: the first and the second camera views are shifted relative to each other mainly along the x-axis (with a possible small vertical shift). In the rectified images, the corresponding epipolar lines in the left and right cameras are horizontal and have the same y-coordinate. P1 and P2 look like: \(\texttt{P1} = \begin{bmatrix} f & 0 & cx_1 & 0 \\ 0 & f & cy & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}\) \(\texttt{P2} = \begin{bmatrix} f & 0 & cx_2 & T_x \cdot f \\ 0 & f & cy & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}\) where \(T_x\) is a horizontal shift between the cameras and \(cx_1 = cx_2\) if REF: CALIB_ZERO_DISPARITY is set.
 Vertical stereo: the first and the second camera views are shifted relative to each other mainly in the vertical direction (and probably a bit in the horizontal direction too). The epipolar lines in the rectified images are vertical and have the same x-coordinate. P1 and P2 look like: \(\texttt{P1} = \begin{bmatrix} f & 0 & cx & 0 \\ 0 & f & cy_1 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}\) \(\texttt{P2} = \begin{bmatrix} f & 0 & cx & 0 \\ 0 & f & cy_2 & T_y \cdot f \\ 0 & 0 & 1 & 0 \end{bmatrix}\) where \(T_y\) is a vertical shift between the cameras and \(cy_1 = cy_2\) if REF: CALIB_ZERO_DISPARITY is set.
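 A brief usage sketch (not part of the generated documentation; variable names are illustrative): assuming cameraMatrix1/2, distCoeffs1/2, R, T and imageSize come from a previous stereoCalibrate call, the rectification setup typically looks like this:

 // Output matrices are allocated by the caller and filled in by stereoRectify.
 Mat R1 = new Mat(), R2 = new Mat(), P1 = new Mat(), P2 = new Mat(), Q = new Mat();
 Calib3d.stereoRectify(cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2,
                       imageSize, R, T, R1, R2, P1, P2, Q);
 // R1/P1 and R2/P2 are then typically passed to initUndistortRectifyMap for each
 // head to build the lookup tables for remap.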

stereoRectifyUncalibrated
public static boolean stereoRectifyUncalibrated(Mat points1, Mat points2, Mat F, Size imgSize, Mat H1, Mat H2, double threshold)
Computes a rectification transform for an uncalibrated stereo camera. Parameters:
points1
 Array of feature points in the first image.points2
 The corresponding points in the second image. The same formats as in #findFundamentalMat are supported.F
 Input fundamental matrix. It can be computed from the same set of point pairs using #findFundamentalMat .imgSize
 Size of the image.H1
 Output rectification homography matrix for the first image.H2
 Output rectification homography matrix for the second image.threshold
 Optional threshold used to filter out the outliers. If the parameter is greater than zero, all the point pairs that do not comply with the epipolar geometry (that is, the points for which \(\texttt{points2[i]}^T \cdot \texttt{F} \cdot \texttt{points1[i]}>\texttt{threshold}\)) are rejected prior to computing the homographies. Otherwise, all the points are considered inliers. The function computes the rectification transformations without knowing intrinsic parameters of the cameras and their relative position in space, which explains the suffix "uncalibrated". Another related difference from #stereoRectify is that the function outputs not the rectification transformations in the object (3D) space, but the planar perspective transformations encoded by the homography matrices H1 and H2. The function implements the algorithm CITE: Hartley99 . Note: While the algorithm does not need to know the intrinsic parameters of the cameras, it heavily depends on the epipolar geometry. Therefore, if the camera lenses have a significant distortion, it would be better to correct it before computing the fundamental matrix and calling this function. For example, distortion coefficients can be estimated for each head of a stereo camera separately by using #calibrateCamera . Then, the images can be corrected using #undistort , or just the point coordinates can be corrected with #undistortPoints . Returns:
 automatically generated
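 A brief sketch (not part of the generated documentation), assuming points1 and points2 hold matched feature points and F was computed by #findFundamentalMat from the same pairs:

 Mat H1 = new Mat(), H2 = new Mat(); // output rectification homographies
 boolean ok = Calib3d.stereoRectifyUncalibrated(points1, points2, F, imgSize, H1, H2, 5.0);
 // On success, warping the two images with H1 and H2 (e.g. Imgproc.warpPerspective)
 // yields a rectified pair.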

stereoRectifyUncalibrated
public static boolean stereoRectifyUncalibrated(Mat points1, Mat points2, Mat F, Size imgSize, Mat H1, Mat H2)
Computes a rectification transform for an uncalibrated stereo camera. Parameters:
points1
 Array of feature points in the first image.points2
 The corresponding points in the second image. The same formats as in #findFundamentalMat are supported.F
 Input fundamental matrix. It can be computed from the same set of point pairs using #findFundamentalMat .imgSize
 Size of the image.H1
 Output rectification homography matrix for the first image.H2
 Output rectification homography matrix for the second image. If the (default) outlier threshold is greater than zero, all the point pairs that do not comply with the epipolar geometry (that is, the points for which \(\texttt{points2[i]}^T \cdot \texttt{F} \cdot \texttt{points1[i]}>\texttt{threshold}\)) are rejected prior to computing the homographies. Otherwise, all the points are considered inliers. The function computes the rectification transformations without knowing intrinsic parameters of the cameras and their relative position in space, which explains the suffix "uncalibrated". Another related difference from #stereoRectify is that the function outputs not the rectification transformations in the object (3D) space, but the planar perspective transformations encoded by the homography matrices H1 and H2. The function implements the algorithm CITE: Hartley99 . Note: While the algorithm does not need to know the intrinsic parameters of the cameras, it heavily depends on the epipolar geometry. Therefore, if the camera lenses have a significant distortion, it would be better to correct it before computing the fundamental matrix and calling this function. For example, distortion coefficients can be estimated for each head of a stereo camera separately by using #calibrateCamera . Then, the images can be corrected using #undistort , or just the point coordinates can be corrected with #undistortPoints . Returns:
 automatically generated

rectify3Collinear
public static float rectify3Collinear(Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, Mat cameraMatrix3, Mat distCoeffs3, java.util.List<Mat> imgpt1, java.util.List<Mat> imgpt3, Size imageSize, Mat R12, Mat T12, Mat R13, Mat T13, Mat R1, Mat R2, Mat R3, Mat P1, Mat P2, Mat P3, Mat Q, double alpha, Size newImgSize, Rect roi1, Rect roi2, int flags)

getOptimalNewCameraMatrix
public static Mat getOptimalNewCameraMatrix(Mat cameraMatrix, Mat distCoeffs, Size imageSize, double alpha, Size newImgSize, Rect validPixROI, boolean centerPrincipalPoint)
Returns the new camera intrinsic matrix based on the free scaling parameter. Parameters:
cameraMatrix
 Input camera intrinsic matrix.distCoeffs
 Input vector of distortion coefficients \(\distcoeffs\). If the vector is NULL/empty, the zero distortion coefficients are assumed.imageSize
 Original image size.alpha
 Free scaling parameter between 0 (when all the pixels in the undistorted image are valid) and 1 (when all the source image pixels are retained in the undistorted image). See #stereoRectify for details.newImgSize
 Image size after rectification. By default, it is set to imageSize .validPixROI
 Optional output rectangle that outlines the all-good-pixels region in the undistorted image. See roi1, roi2 description in #stereoRectify .centerPrincipalPoint
 Optional flag that indicates whether in the new camera intrinsic matrix the principal point should be at the image center or not. By default, the principal point is chosen to best fit a subset of the source image (determined by alpha) to the corrected image. Returns:
 new_camera_matrix Output new camera intrinsic matrix. The function computes and returns the optimal new camera intrinsic matrix based on the free scaling parameter. By varying this parameter, you may retrieve only sensible pixels (alpha=0), keep all the original image pixels if there is valuable information in the corners (alpha=1), or get something in between. When alpha>0, the undistorted result is likely to have some black pixels corresponding to "virtual" pixels outside of the captured distorted image. The original camera intrinsic matrix, distortion coefficients, the computed new camera intrinsic matrix, and newImageSize should be passed to #initUndistortRectifyMap to produce the maps for #remap .
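 A brief sketch (not part of the generated documentation) of the usual undistortion setup; variable names are illustrative:

 Rect roi = new Rect(); // receives the all-good-pixels region
 Mat newK = Calib3d.getOptimalNewCameraMatrix(cameraMatrix, distCoeffs, imageSize,
         0.0, imageSize, roi, false);
 // With alpha=0 the undistorted image contains only valid pixels; with alpha=1
 // all source pixels are retained and black border regions may appear.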

getOptimalNewCameraMatrix
public static Mat getOptimalNewCameraMatrix(Mat cameraMatrix, Mat distCoeffs, Size imageSize, double alpha, Size newImgSize, Rect validPixROI)
Returns the new camera intrinsic matrix based on the free scaling parameter. Parameters:
cameraMatrix
 Input camera intrinsic matrix.distCoeffs
 Input vector of distortion coefficients \(\distcoeffs\). If the vector is NULL/empty, the zero distortion coefficients are assumed.imageSize
 Original image size.alpha
 Free scaling parameter between 0 (when all the pixels in the undistorted image are valid) and 1 (when all the source image pixels are retained in the undistorted image). See #stereoRectify for details.newImgSize
 Image size after rectification. By default, it is set to imageSize .validPixROI
 Optional output rectangle that outlines the all-good-pixels region in the undistorted image. See roi1, roi2 description in #stereoRectify . By default, the principal point is chosen to best fit a subset of the source image (determined by alpha) to the corrected image. Returns:
 new_camera_matrix Output new camera intrinsic matrix. The function computes and returns the optimal new camera intrinsic matrix based on the free scaling parameter. By varying this parameter, you may retrieve only sensible pixels (alpha=0), keep all the original image pixels if there is valuable information in the corners (alpha=1), or get something in between. When alpha>0, the undistorted result is likely to have some black pixels corresponding to "virtual" pixels outside of the captured distorted image. The original camera intrinsic matrix, distortion coefficients, the computed new camera intrinsic matrix, and newImageSize should be passed to #initUndistortRectifyMap to produce the maps for #remap .

getOptimalNewCameraMatrix
public static Mat getOptimalNewCameraMatrix(Mat cameraMatrix, Mat distCoeffs, Size imageSize, double alpha, Size newImgSize)
Returns the new camera intrinsic matrix based on the free scaling parameter. Parameters:
cameraMatrix
 Input camera intrinsic matrix.distCoeffs
 Input vector of distortion coefficients \(\distcoeffs\). If the vector is NULL/empty, the zero distortion coefficients are assumed.imageSize
 Original image size.alpha
 Free scaling parameter between 0 (when all the pixels in the undistorted image are valid) and 1 (when all the source image pixels are retained in the undistorted image). See #stereoRectify for details.newImgSize
 Image size after rectification. By default, it is set to imageSize. The valid-pixels ROI and the principal-point placement behave as in the full overload (see roi1, roi2 description in #stereoRectify); by default, the principal point is chosen to best fit a subset of the source image (determined by alpha) to the corrected image. Returns:
 new_camera_matrix Output new camera intrinsic matrix. The function computes and returns the optimal new camera intrinsic matrix based on the free scaling parameter. By varying this parameter, you may retrieve only sensible pixels (alpha=0), keep all the original image pixels if there is valuable information in the corners (alpha=1), or get something in between. When alpha>0, the undistorted result is likely to have some black pixels corresponding to "virtual" pixels outside of the captured distorted image. The original camera intrinsic matrix, distortion coefficients, the computed new camera intrinsic matrix, and newImageSize should be passed to #initUndistortRectifyMap to produce the maps for #remap .

getOptimalNewCameraMatrix
public static Mat getOptimalNewCameraMatrix(Mat cameraMatrix, Mat distCoeffs, Size imageSize, double alpha)
Returns the new camera intrinsic matrix based on the free scaling parameter. Parameters:
cameraMatrix
 Input camera intrinsic matrix.distCoeffs
 Input vector of distortion coefficients \(\distcoeffs\). If the vector is NULL/empty, the zero distortion coefficients are assumed.imageSize
 Original image size.alpha
 Free scaling parameter between 0 (when all the pixels in the undistorted image are valid) and 1 (when all the source image pixels are retained in the undistorted image). See #stereoRectify for details. The valid-pixels ROI and the principal-point placement behave as in the full overload; by default, the principal point is chosen to best fit a subset of the source image (determined by alpha) to the corrected image. Returns:
 new_camera_matrix Output new camera intrinsic matrix. The function computes and returns the optimal new camera intrinsic matrix based on the free scaling parameter. By varying this parameter, you may retrieve only sensible pixels (alpha=0), keep all the original image pixels if there is valuable information in the corners (alpha=1), or get something in between. When alpha>0, the undistorted result is likely to have some black pixels corresponding to "virtual" pixels outside of the captured distorted image. The original camera intrinsic matrix, distortion coefficients, the computed new camera intrinsic matrix, and newImageSize should be passed to #initUndistortRectifyMap to produce the maps for #remap .

calibrateHandEye
public static void calibrateHandEye(java.util.List<Mat> R_gripper2base, java.util.List<Mat> t_gripper2base, java.util.List<Mat> R_target2cam, java.util.List<Mat> t_target2cam, Mat R_cam2gripper, Mat t_cam2gripper, int method)
Computes Hand-Eye calibration: \(_{}^{g}\textrm{T}_c\) Parameters:
R_gripper2base
 Rotation part extracted from the homogeneous matrix that transforms a point expressed in the gripper frame to the robot base frame (\(_{}^{b}\textrm{T}_g\)). This is a vector (vector<Mat>) that contains the rotation, (3x3) rotation matrices or (3x1) rotation vectors, for all the transformations from gripper frame to robot base frame.t_gripper2base
 Translation part extracted from the homogeneous matrix that transforms a point expressed in the gripper frame to the robot base frame (\(_{}^{b}\textrm{T}_g\)). This is a vector (vector<Mat>) that contains the (3x1) translation vectors for all the transformations from gripper frame to robot base frame.R_target2cam
 Rotation part extracted from the homogeneous matrix that transforms a point expressed in the target frame to the camera frame (\(_{}^{c}\textrm{T}_t\)). This is a vector (vector<Mat>) that contains the rotation, (3x3) rotation matrices or (3x1) rotation vectors, for all the transformations from calibration target frame to camera frame.t_target2cam
 Translation part extracted from the homogeneous matrix that transforms a point expressed in the target frame to the camera frame (\(_{}^{c}\textrm{T}_t\)). This is a vector (vector<Mat>) that contains the (3x1) translation vectors for all the transformations from calibration target frame to camera frame.R_cam2gripper
 Estimated (3x3) rotation part extracted from the homogeneous matrix that transforms a point expressed in the camera frame to the gripper frame (\(_{}^{g}\textrm{T}_c\)).t_cam2gripper
 Estimated (3x1) translation part extracted from the homogeneous matrix that transforms a point expressed in the camera frame to the gripper frame (\(_{}^{g}\textrm{T}_c\)).method
 One of the implemented Hand-Eye calibration methods, see cv::HandEyeCalibrationMethod. The function performs the Hand-Eye calibration using various methods. One approach consists in estimating the rotation and then the translation (separable solutions), with the following implemented methods: R. Tsai, R. Lenz A New Technique for Fully Autonomous and Efficient 3D Robotics Hand/Eye Calibration \cite Tsai89
 F. Park, B. Martin Robot Sensor Calibration: Solving AX = XB on the Euclidean Group \cite Park94
 R. Horaud, F. Dornaika Hand-Eye Calibration \cite Horaud95
 Another approach consists in estimating simultaneously the rotation and the translation (simultaneous solutions), with the following implemented methods: N. Andreff, R. Horaud, B. Espiau On-line Hand-Eye Calibration \cite Andreff99
 K. Daniilidis Hand-Eye Calibration Using Dual Quaternions \cite Daniilidis98
 The calibration procedure is the following: a static calibration pattern is used to estimate the transformation between the target frame and the camera frame
 the robot gripper is moved in order to acquire several poses
 for each pose, the homogeneous transformation between the gripper frame and the robot base frame is recorded using for instance the robot kinematics \( \begin{bmatrix} X_b\\ Y_b\\ Z_b\\ 1 \end{bmatrix} = \begin{bmatrix} _{}^{b}\textrm{R}_g & _{}^{b}\textrm{t}_g \\ 0_{1 \times 3} & 1 \end{bmatrix} \begin{bmatrix} X_g\\ Y_g\\ Z_g\\ 1 \end{bmatrix} \)
 for each pose, the homogeneous transformation between the calibration target frame and the camera frame is recorded using for instance a pose estimation method (PnP) from 2D-3D point correspondences \( \begin{bmatrix} X_c\\ Y_c\\ Z_c\\ 1 \end{bmatrix} = \begin{bmatrix} _{}^{c}\textrm{R}_t & _{}^{c}\textrm{t}_t \\ 0_{1 \times 3} & 1 \end{bmatrix} \begin{bmatrix} X_t\\ Y_t\\ Z_t\\ 1 \end{bmatrix} \)
 for an eye-in-hand configuration, the problem is known as solving \(\mathbf{A}\mathbf{X}=\mathbf{X}\mathbf{B}\): \( \begin{align*} ^{b}{\textrm{T}_g}^{(1)} \hspace{0.2em} ^{g}\textrm{T}_c \hspace{0.2em} ^{c}{\textrm{T}_t}^{(1)} &= \hspace{0.1em} ^{b}{\textrm{T}_g}^{(2)} \hspace{0.2em} ^{g}\textrm{T}_c \hspace{0.2em} ^{c}{\textrm{T}_t}^{(2)} \\ (^{b}{\textrm{T}_g}^{(2)})^{-1} \hspace{0.2em} ^{b}{\textrm{T}_g}^{(1)} \hspace{0.2em} ^{g}\textrm{T}_c &= \hspace{0.1em} ^{g}\textrm{T}_c \hspace{0.2em} ^{c}{\textrm{T}_t}^{(2)} (^{c}{\textrm{T}_t}^{(1)})^{-1} \\ \textrm{A}_i \textrm{X} &= \textrm{X} \textrm{B}_i \end{align*} \)
 for an eye-to-hand configuration: \( \begin{align*} ^{g}{\textrm{T}_b}^{(1)} \hspace{0.2em} ^{b}\textrm{T}_c \hspace{0.2em} ^{c}{\textrm{T}_t}^{(1)} &= \hspace{0.1em} ^{g}{\textrm{T}_b}^{(2)} \hspace{0.2em} ^{b}\textrm{T}_c \hspace{0.2em} ^{c}{\textrm{T}_t}^{(2)} \\ (^{g}{\textrm{T}_b}^{(2)})^{-1} \hspace{0.2em} ^{g}{\textrm{T}_b}^{(1)} \hspace{0.2em} ^{b}\textrm{T}_c &= \hspace{0.1em} ^{b}\textrm{T}_c \hspace{0.2em} ^{c}{\textrm{T}_t}^{(2)} (^{c}{\textrm{T}_t}^{(1)})^{-1} \\ \textrm{A}_i \textrm{X} &= \textrm{X} \textrm{B}_i \end{align*} \)
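 A brief eye-in-hand sketch (not part of the generated documentation): the four input pose lists are assumed to be filled already, e.g. gripper-to-base poses from the robot controller and target-to-camera poses from solvePnP on the calibration pattern; the method constant selects the Tsai solver:

 Mat rCam2Gripper = new Mat(), tCam2Gripper = new Mat();
 Calib3d.calibrateHandEye(rGripper2Base, tGripper2Base,
                          rTarget2Cam, tTarget2Cam,
                          rCam2Gripper, tCam2Gripper,
                          Calib3d.CALIB_HAND_EYE_TSAI);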

calibrateHandEye
public static void calibrateHandEye(java.util.List<Mat> R_gripper2base, java.util.List<Mat> t_gripper2base, java.util.List<Mat> R_target2cam, java.util.List<Mat> t_target2cam, Mat R_cam2gripper, Mat t_cam2gripper)
Computes Hand-Eye calibration: \(_{}^{g}\textrm{T}_c\) Parameters:
R_gripper2base
 Rotation part extracted from the homogeneous matrix that transforms a point expressed in the gripper frame to the robot base frame (\(_{}^{b}\textrm{T}_g\)). This is a vector (vector<Mat>) that contains the rotation, (3x3) rotation matrices or (3x1) rotation vectors, for all the transformations from gripper frame to robot base frame.t_gripper2base
 Translation part extracted from the homogeneous matrix that transforms a point expressed in the gripper frame to the robot base frame (\(_{}^{b}\textrm{T}_g\)). This is a vector (vector<Mat>) that contains the (3x1) translation vectors for all the transformations from gripper frame to robot base frame.R_target2cam
 Rotation part extracted from the homogeneous matrix that transforms a point expressed in the target frame to the camera frame (\(_{}^{c}\textrm{T}_t\)). This is a vector (vector<Mat>) that contains the rotation, (3x3) rotation matrices or (3x1) rotation vectors, for all the transformations from calibration target frame to camera frame.t_target2cam
 Translation part extracted from the homogeneous matrix that transforms a point expressed in the target frame to the camera frame (\(_{}^{c}\textrm{T}_t\)). This is a vector (vector<Mat>) that contains the (3x1) translation vectors for all the transformations from calibration target frame to camera frame.R_cam2gripper
 Estimated (3x3) rotation part extracted from the homogeneous matrix that transforms a point expressed in the camera frame to the gripper frame (\(_{}^{g}\textrm{T}_c\)).t_cam2gripper
 Estimated (3x1) translation part extracted from the homogeneous matrix that transforms a point expressed in the camera frame to the gripper frame (\(_{}^{g}\textrm{T}_c\)). The function performs the Hand-Eye calibration using various methods. One approach consists in estimating the rotation and then the translation (separable solutions), with the following implemented methods: R. Tsai, R. Lenz A New Technique for Fully Autonomous and Efficient 3D Robotics Hand/Eye Calibration \cite Tsai89
 F. Park, B. Martin Robot Sensor Calibration: Solving AX = XB on the Euclidean Group \cite Park94
 R. Horaud, F. Dornaika Hand-Eye Calibration \cite Horaud95
 Another approach consists in estimating simultaneously the rotation and the translation (simultaneous solutions), with the following implemented methods: N. Andreff, R. Horaud, B. Espiau On-line Hand-Eye Calibration \cite Andreff99
 K. Daniilidis Hand-Eye Calibration Using Dual Quaternions \cite Daniilidis98
 The calibration procedure is the following: a static calibration pattern is used to estimate the transformation between the target frame and the camera frame
 the robot gripper is moved in order to acquire several poses
 for each pose, the homogeneous transformation between the gripper frame and the robot base frame is recorded using for instance the robot kinematics \( \begin{bmatrix} X_b\\ Y_b\\ Z_b\\ 1 \end{bmatrix} = \begin{bmatrix} _{}^{b}\textrm{R}_g & _{}^{b}\textrm{t}_g \\ 0_{1 \times 3} & 1 \end{bmatrix} \begin{bmatrix} X_g\\ Y_g\\ Z_g\\ 1 \end{bmatrix} \)
 for each pose, the homogeneous transformation between the calibration target frame and the camera frame is recorded using for instance a pose estimation method (PnP) from 2D-3D point correspondences \( \begin{bmatrix} X_c\\ Y_c\\ Z_c\\ 1 \end{bmatrix} = \begin{bmatrix} _{}^{c}\textrm{R}_t & _{}^{c}\textrm{t}_t \\ 0_{1 \times 3} & 1 \end{bmatrix} \begin{bmatrix} X_t\\ Y_t\\ Z_t\\ 1 \end{bmatrix} \)
 for an eye-in-hand configuration, the problem is known as solving \(\mathbf{A}\mathbf{X}=\mathbf{X}\mathbf{B}\): \( \begin{align*} ^{b}{\textrm{T}_g}^{(1)} \hspace{0.2em} ^{g}\textrm{T}_c \hspace{0.2em} ^{c}{\textrm{T}_t}^{(1)} &= \hspace{0.1em} ^{b}{\textrm{T}_g}^{(2)} \hspace{0.2em} ^{g}\textrm{T}_c \hspace{0.2em} ^{c}{\textrm{T}_t}^{(2)} \\ (^{b}{\textrm{T}_g}^{(2)})^{-1} \hspace{0.2em} ^{b}{\textrm{T}_g}^{(1)} \hspace{0.2em} ^{g}\textrm{T}_c &= \hspace{0.1em} ^{g}\textrm{T}_c \hspace{0.2em} ^{c}{\textrm{T}_t}^{(2)} (^{c}{\textrm{T}_t}^{(1)})^{-1} \\ \textrm{A}_i \textrm{X} &= \textrm{X} \textrm{B}_i \end{align*} \)
 for an eye-to-hand configuration: \( \begin{align*} ^{g}{\textrm{T}_b}^{(1)} \hspace{0.2em} ^{b}\textrm{T}_c \hspace{0.2em} ^{c}{\textrm{T}_t}^{(1)} &= \hspace{0.1em} ^{g}{\textrm{T}_b}^{(2)} \hspace{0.2em} ^{b}\textrm{T}_c \hspace{0.2em} ^{c}{\textrm{T}_t}^{(2)} \\ (^{g}{\textrm{T}_b}^{(2)})^{-1} \hspace{0.2em} ^{g}{\textrm{T}_b}^{(1)} \hspace{0.2em} ^{b}\textrm{T}_c &= \hspace{0.1em} ^{b}\textrm{T}_c \hspace{0.2em} ^{c}{\textrm{T}_t}^{(2)} (^{c}{\textrm{T}_t}^{(1)})^{-1} \\ \textrm{A}_i \textrm{X} &= \textrm{X} \textrm{B}_i \end{align*} \)

calibrateRobotWorldHandEye
public static void calibrateRobotWorldHandEye(java.util.List<Mat> R_world2cam, java.util.List<Mat> t_world2cam, java.util.List<Mat> R_base2gripper, java.util.List<Mat> t_base2gripper, Mat R_base2world, Mat t_base2world, Mat R_gripper2cam, Mat t_gripper2cam, int method)
Computes Robot-World/Hand-Eye calibration: \(_{}^{w}\textrm{T}_b\) and \(_{}^{c}\textrm{T}_g\) Parameters:
R_world2cam
 Rotation part extracted from the homogeneous matrix that transforms a point expressed in the world frame to the camera frame (\(_{}^{c}\textrm{T}_w\)). This is a vector (vector<Mat>) that contains the rotation, (3x3) rotation matrices or (3x1) rotation vectors, for all the transformations from world frame to the camera frame.t_world2cam
 Translation part extracted from the homogeneous matrix that transforms a point expressed in the world frame to the camera frame (\(_{}^{c}\textrm{T}_w\)). This is a vector (vector<Mat>) that contains the (3x1) translation vectors for all the transformations from world frame to the camera frame.R_base2gripper
 Rotation part extracted from the homogeneous matrix that transforms a point expressed in the robot base frame to the gripper frame (\(_{}^{g}\textrm{T}_b\)). This is a vector (vector<Mat>) that contains the rotation, (3x3) rotation matrices or (3x1) rotation vectors, for all the transformations from robot base frame to the gripper frame.t_base2gripper
 Translation part extracted from the homogeneous matrix that transforms a point expressed in the robot base frame to the gripper frame (\(_{}^{g}\textrm{T}_b\)). This is a vector (vector<Mat>) that contains the (3x1) translation vectors for all the transformations from robot base frame to the gripper frame.R_base2world
 Estimated (3x3) rotation part extracted from the homogeneous matrix that transforms a point expressed in the robot base frame to the world frame (\(_{}^{w}\textrm{T}_b\)).t_base2world
 Estimated (3x1) translation part extracted from the homogeneous matrix that transforms a point expressed in the robot base frame to the world frame (\(_{}^{w}\textrm{T}_b\)).R_gripper2cam
 Estimated (3x3) rotation part extracted from the homogeneous matrix that transforms a point expressed in the gripper frame to the camera frame (\(_{}^{c}\textrm{T}_g\)).t_gripper2cam
 Estimated (3x1) translation part extracted from the homogeneous matrix that transforms a point expressed in the gripper frame to the camera frame (\(_{}^{c}\textrm{T}_g\)).method
 One of the implemented Robot-World/Hand-Eye calibration methods, see cv::RobotWorldHandEyeCalibrationMethod. The function performs the Robot-World/Hand-Eye calibration using various methods. One approach consists in estimating the rotation and then the translation (separable solutions): M. Shah, Solving the robot-world/hand-eye calibration problem using the Kronecker product \cite Shah2013SolvingTR
 Another approach consists in estimating simultaneously the rotation and the translation (simultaneous solutions): A. Li, L. Wang, and D. Wu, Simultaneous robot-world and hand-eye calibration using dual-quaternions and Kronecker product \cite Li2010SimultaneousRA
 The calibration procedure is the following: a static calibration pattern is used to estimate the transformation between the target frame and the camera frame
 the robot gripper is moved in order to acquire several poses
 for each pose, the homogeneous transformation between the gripper frame and the robot base frame is recorded using for instance the robot kinematics \( \begin{bmatrix} X_g\\ Y_g\\ Z_g\\ 1 \end{bmatrix} = \begin{bmatrix} _{}^{g}\textrm{R}_b & _{}^{g}\textrm{t}_b \\ 0_{1 \times 3} & 1 \end{bmatrix} \begin{bmatrix} X_b\\ Y_b\\ Z_b\\ 1 \end{bmatrix} \)
 for each pose, the homogeneous transformation between the calibration target frame (the world frame) and the camera frame is recorded using for instance a pose estimation method (PnP) from 2D-3D point correspondences \( \begin{bmatrix} X_c\\ Y_c\\ Z_c\\ 1 \end{bmatrix} = \begin{bmatrix} _{}^{c}\textrm{R}_w & _{}^{c}\textrm{t}_w \\ 0_{1 \times 3} & 1 \end{bmatrix} \begin{bmatrix} X_w\\ Y_w\\ Z_w\\ 1 \end{bmatrix} \)
 The problem is then known as solving the \(\mathbf{A}\mathbf{X} = \mathbf{Z}\mathbf{B}\) equation, with: \(\mathbf{A} \Leftrightarrow \hspace{0.1em} _{}^{c}\textrm{T}_w\)
 \(\mathbf{X} \Leftrightarrow \hspace{0.1em} _{}^{w}\textrm{T}_b\)
 \(\mathbf{Z} \Leftrightarrow \hspace{0.1em} _{}^{c}\textrm{T}_g\)
 \(\mathbf{B} \Leftrightarrow \hspace{0.1em} _{}^{g}\textrm{T}_b\)
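 A brief sketch (not part of the generated documentation), with the input pose lists assumed to be filled as described in the procedure above; the method constant selects the Shah solver:

 Mat rBase2World = new Mat(), tBase2World = new Mat();
 Mat rGripper2Cam = new Mat(), tGripper2Cam = new Mat();
 Calib3d.calibrateRobotWorldHandEye(rWorld2Cam, tWorld2Cam, rBase2Gripper, tBase2Gripper,
         rBase2World, tBase2World, rGripper2Cam, tGripper2Cam,
         Calib3d.CALIB_ROBOT_WORLD_HAND_EYE_SHAH);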

calibrateRobotWorldHandEye
public static void calibrateRobotWorldHandEye(java.util.List<Mat> R_world2cam, java.util.List<Mat> t_world2cam, java.util.List<Mat> R_base2gripper, java.util.List<Mat> t_base2gripper, Mat R_base2world, Mat t_base2world, Mat R_gripper2cam, Mat t_gripper2cam)
Computes Robot-World/Hand-Eye calibration: \(_{}^{w}\textrm{T}_b\) and \(_{}^{c}\textrm{T}_g\) Parameters:
R_world2cam
 Rotation part extracted from the homogeneous matrix that transforms a point expressed in the world frame to the camera frame (\(_{}^{c}\textrm{T}_w\)). This is a vector (vector<Mat>) that contains the rotation, (3x3) rotation matrices or (3x1) rotation vectors, for all the transformations from world frame to the camera frame.t_world2cam
 Translation part extracted from the homogeneous matrix that transforms a point expressed in the world frame to the camera frame (\(_{}^{c}\textrm{T}_w\)). This is a vector (vector<Mat>) that contains the (3x1) translation vectors for all the transformations from world frame to the camera frame.R_base2gripper
 Rotation part extracted from the homogeneous matrix that transforms a point expressed in the robot base frame to the gripper frame (\(_{}^{g}\textrm{T}_b\)). This is a vector (vector<Mat>) that contains the rotation, (3x3) rotation matrices or (3x1) rotation vectors, for all the transformations from robot base frame to the gripper frame.t_base2gripper
 Translation part extracted from the homogeneous matrix that transforms a point expressed in the robot base frame to the gripper frame (\(_{}^{g}\textrm{T}_b\)). This is a vector (vector<Mat>) that contains the (3x1) translation vectors for all the transformations from robot base frame to the gripper frame.R_base2world
 Estimated (3x3) rotation part extracted from the homogeneous matrix that transforms a point expressed in the robot base frame to the world frame (\(_{}^{w}\textrm{T}_b\)).t_base2world
 Estimated (3x1) translation part extracted from the homogeneous matrix that transforms a point expressed in the robot base frame to the world frame (\(_{}^{w}\textrm{T}_b\)).R_gripper2cam
 Estimated (3x3) rotation part extracted from the homogeneous matrix that transforms a point expressed in the gripper frame to the camera frame (\(_{}^{c}\textrm{T}_g\)).t_gripper2cam
 Estimated (3x1) translation part extracted from the homogeneous matrix that transforms a point expressed in the gripper frame to the camera frame (\(_{}^{c}\textrm{T}_g\)). The function performs the Robot-World/Hand-Eye calibration using various methods. One approach consists in estimating the rotation and then the translation (separable solutions): M. Shah, Solving the robot-world/hand-eye calibration problem using the Kronecker product \cite Shah2013SolvingTR
 Another approach consists in estimating simultaneously the rotation and the translation (simultaneous solutions): A. Li, L. Wang, and D. Wu, Simultaneous robot-world and hand-eye calibration using dual-quaternions and Kronecker product \cite Li2010SimultaneousRA
 The calibration procedure is the following: a static calibration pattern is used to estimate the transformation between the target frame and the camera frame
 the robot gripper is moved in order to acquire several poses
 for each pose, the homogeneous transformation between the gripper frame and the robot base frame is recorded using for instance the robot kinematics \( \begin{bmatrix} X_g\\ Y_g\\ Z_g\\ 1 \end{bmatrix} = \begin{bmatrix} _{}^{g}\textrm{R}_b & _{}^{g}\textrm{t}_b \\ 0_{1 \times 3} & 1 \end{bmatrix} \begin{bmatrix} X_b\\ Y_b\\ Z_b\\ 1 \end{bmatrix} \)
 for each pose, the homogeneous transformation between the calibration target frame (the world frame) and the camera frame is recorded using for instance a pose estimation method (PnP) from 2D-3D point correspondences \( \begin{bmatrix} X_c\\ Y_c\\ Z_c\\ 1 \end{bmatrix} = \begin{bmatrix} _{}^{c}\textrm{R}_w & _{}^{c}\textrm{t}_w \\ 0_{1 \times 3} & 1 \end{bmatrix} \begin{bmatrix} X_w\\ Y_w\\ Z_w\\ 1 \end{bmatrix} \)
 The problem is then known as solving the \(\mathbf{A}\mathbf{X} = \mathbf{Z}\mathbf{B}\) equation, with: \(\mathbf{A} \Leftrightarrow \hspace{0.1em} _{}^{c}\textrm{T}_w\)
 \(\mathbf{X} \Leftrightarrow \hspace{0.1em} _{}^{w}\textrm{T}_b\)
 \(\mathbf{Z} \Leftrightarrow \hspace{0.1em} _{}^{c}\textrm{T}_g\)
 \(\mathbf{B} \Leftrightarrow \hspace{0.1em} _{}^{g}\textrm{T}_b\)

convertPointsToHomogeneous
public static void convertPointsToHomogeneous(Mat src, Mat dst)
Converts points from Euclidean to homogeneous space. Parameters:
src
 Input vector of N-dimensional points.dst
 Output vector of (N+1)-dimensional points. The function converts points from Euclidean to homogeneous space by appending 1's to the tuple of point coordinates. That is, each point (x1, x2, ..., xn) is converted to (x1, x2, ..., xn, 1).

convertPointsFromHomogeneous
public static void convertPointsFromHomogeneous(Mat src, Mat dst)
Converts points from homogeneous to Euclidean space. Parameters:
src
 Input vector of N-dimensional points.dst
 Output vector of (N-1)-dimensional points. The function converts points from homogeneous to Euclidean space using perspective projection. That is, each point (x1, x2, ..., x(n-1), xn) is converted to (x1/xn, x2/xn, ..., x(n-1)/xn). When xn=0, the output point coordinates will be (0,0,0,...).
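 A brief round-trip sketch (not part of the generated documentation):

 MatOfPoint2f pts = new MatOfPoint2f(new Point(1, 2), new Point(3, 4));
 Mat homog = new Mat(), euclid = new Mat();
 Calib3d.convertPointsToHomogeneous(pts, homog); // (1,2) -> (1,2,1), (3,4) -> (3,4,1)
 Calib3d.convertPointsFromHomogeneous(homog, euclid); // divides by the last coordinate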

findFundamentalMat
public static Mat findFundamentalMat(MatOfPoint2f points1, MatOfPoint2f points2, int method, double ransacReprojThreshold, double confidence, int maxIters, Mat mask)
Calculates a fundamental matrix from the corresponding points in two images. Parameters:
points1
 Array of N points from the first image. The point coordinates should be floatingpoint (single or double precision).points2
 Array of the second image points of the same size and format as points1 .method
 Method for computing a fundamental matrix. REF: FM_7POINT for a 7-point algorithm. \(N = 7\)
 REF: FM_8POINT for an 8-point algorithm. \(N \ge 8\)
 REF: FM_RANSAC for the RANSAC algorithm. \(N \ge 8\)
 REF: FM_LMEDS for the LMedS algorithm. \(N \ge 8\)
ransacReprojThreshold
 Parameter used only for RANSAC. It is the maximum distance from a point to an epipolar line in pixels, beyond which the point is considered an outlier and is not used for computing the final fundamental matrix. It can be set to something like 1-3, depending on the accuracy of the point localization, image resolution, and the image noise.confidence
 Parameter used for the RANSAC and LMedS methods only. It specifies a desirable level of confidence (probability) that the estimated matrix is correct.mask
 optional output maskmaxIters
 The maximum number of robust method iterations. The epipolar geometry is described by the following equation: \([p_2; 1]^T F [p_1; 1] = 0\) where \(F\) is a fundamental matrix, \(p_1\) and \(p_2\) are corresponding points in the first and the second images, respectively. The function calculates the fundamental matrix using one of four methods listed above and returns the found fundamental matrix. Normally just one matrix is found. But in case of the 7-point algorithm, the function may return up to 3 solutions (a \(9 \times 3\) matrix that stores all 3 matrices sequentially). The calculated fundamental matrix may be passed further to #computeCorrespondEpilines that finds the epipolar lines corresponding to the specified points. It can also be passed to #stereoRectifyUncalibrated to compute the rectification transformation.
 // Example. Estimation of fundamental matrix using the RANSAC algorithm
 int point_count = 100;
 vector<Point2f> points1(point_count);
 vector<Point2f> points2(point_count);
 // initialize the points here ...
 for( int i = 0; i < point_count; i++ )
 {
     points1[i] = ...;
     points2[i] = ...;
 }
 Mat fundamental_matrix = findFundamentalMat(points1, points2, FM_RANSAC, 3, 0.99);
 Returns:
 automatically generated
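 A brief Java counterpart of the example above (not part of the generated documentation; matchedPts1/matchedPts2 are assumed List<Point> of corresponding features):

 MatOfPoint2f points1 = new MatOfPoint2f();
 MatOfPoint2f points2 = new MatOfPoint2f();
 points1.fromList(matchedPts1);
 points2.fromList(matchedPts2);
 Mat mask = new Mat(); // inlier mask filled by RANSAC
 Mat F = Calib3d.findFundamentalMat(points1, points2, Calib3d.FM_RANSAC, 3, 0.99, mask);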

findFundamentalMat
public static Mat findFundamentalMat(MatOfPoint2f points1, MatOfPoint2f points2, int method, double ransacReprojThreshold, double confidence, int maxIters)
Calculates a fundamental matrix from the corresponding points in two images. Parameters:
points1
 Array of N points from the first image. The point coordinates should be floatingpoint (single or double precision).points2
 Array of the second image points of the same size and format as points1 .method
 Method for computing a fundamental matrix. REF: FM_7POINT for a 7-point algorithm. \(N = 7\)
 REF: FM_8POINT for an 8-point algorithm. \(N \ge 8\)
 REF: FM_RANSAC for the RANSAC algorithm. \(N \ge 8\)
 REF: FM_LMEDS for the LMedS algorithm. \(N \ge 8\)
ransacReprojThreshold
 Parameter used only for RANSAC. It is the maximum distance from a point to an epipolar line in pixels, beyond which the point is considered an outlier and is not used for computing the final fundamental matrix. It can be set to something like 1-3, depending on the accuracy of the point localization, image resolution, and the image noise.confidence
 Parameter used for the RANSAC and LMedS methods only. It specifies a desirable level of confidence (probability) that the estimated matrix is correct.maxIters
 The maximum number of robust method iterations. The epipolar geometry is described by the following equation: \([p_2; 1]^T F [p_1; 1] = 0\) where \(F\) is a fundamental matrix, \(p_1\) and \(p_2\) are corresponding points in the first and the second images, respectively. The function calculates the fundamental matrix using one of four methods listed above and returns the found fundamental matrix. Normally just one matrix is found. But in case of the 7-point algorithm, the function may return up to 3 solutions (a \(9 \times 3\) matrix that stores all 3 matrices sequentially). The calculated fundamental matrix may be passed further to #computeCorrespondEpilines that finds the epipolar lines corresponding to the specified points. It can also be passed to #stereoRectifyUncalibrated to compute the rectification transformation.
 // Example. Estimation of fundamental matrix using the RANSAC algorithm
 int point_count = 100;
 vector<Point2f> points1(point_count);
 vector<Point2f> points2(point_count);
 // initialize the points here ...
 for( int i = 0; i < point_count; i++ )
 {
     points1[i] = ...;
     points2[i] = ...;
 }
 Mat fundamental_matrix = findFundamentalMat(points1, points2, FM_RANSAC, 3, 0.99);
 Returns:
 automatically generated

findFundamentalMat
public static Mat findFundamentalMat(MatOfPoint2f points1, MatOfPoint2f points2, int method, double ransacReprojThreshold, double confidence, Mat mask)

findFundamentalMat
public static Mat findFundamentalMat(MatOfPoint2f points1, MatOfPoint2f points2, int method, double ransacReprojThreshold, double confidence)

findFundamentalMat
public static Mat findFundamentalMat(MatOfPoint2f points1, MatOfPoint2f points2, int method, double ransacReprojThreshold)

findFundamentalMat
public static Mat findFundamentalMat(MatOfPoint2f points1, MatOfPoint2f points2, int method)

findFundamentalMat
public static Mat findFundamentalMat(MatOfPoint2f points1, MatOfPoint2f points2)

findFundamentalMat
public static Mat findFundamentalMat(MatOfPoint2f points1, MatOfPoint2f points2, Mat mask, UsacParams params)

findEssentialMat
public static Mat findEssentialMat(Mat points1, Mat points2, Mat cameraMatrix, int method, double prob, double threshold, int maxIters, Mat mask)
Calculates an essential matrix from the corresponding points in two images. Parameters:
points1
 Array of N (N >= 5) 2D points from the first image. The point coordinates should be floatingpoint (single or double precision).points2
 Array of the second image points of the same size and format as points1 .cameraMatrix
 Camera intrinsic matrix \(\cameramatrix{A}\) . Note that this function assumes that points1 and points2 are feature points from cameras with the same camera intrinsic matrix. If this assumption does not hold for your use case, use #undistortPoints withP = cv::NoArray()
for both cameras to transform image points to normalized image coordinates, which are valid for the identity camera intrinsic matrix. When passing these coordinates, pass the identity matrix for this parameter.method
 Method for computing an essential matrix. REF: RANSAC for the RANSAC algorithm.
 REF: LMEDS for the LMedS algorithm.
prob
 Parameter used for the RANSAC or LMedS methods only. It specifies a desirable level of confidence (probability) that the estimated matrix is correct.threshold
 Parameter used for RANSAC. It is the maximum distance from a point to an epipolar line in pixels, beyond which the point is considered an outlier and is not used for computing the final fundamental matrix. It can be set to something like 1-3, depending on the accuracy of the point localization, image resolution, and the image noise.mask
 Output array of N elements, every element of which is set to 0 for outliers and to 1 for the other points. The array is computed only in the RANSAC and LMedS methods.maxIters
 The maximum number of robust method iterations. This function estimates the essential matrix based on the five-point algorithm solver in CITE: Nister03 . CITE: SteweniusCFS is also a related solution. The epipolar geometry is described by the following equation: \([p_2; 1]^T K^{-T} E K^{-1} [p_1; 1] = 0\) where \(E\) is an essential matrix, \(p_1\) and \(p_2\) are corresponding points in the first and the second images, respectively. The result of this function may be passed further to #decomposeEssentialMat or #recoverPose to recover the relative pose between cameras. Returns:
 automatically generated
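 A brief sketch (not part of the generated documentation) chaining this function with #recoverPose; points1/points2 and cameraMatrix are assumed to exist:

 Mat mask = new Mat();
 Mat E = Calib3d.findEssentialMat(points1, points2, cameraMatrix,
         Calib3d.RANSAC, 0.999, 1.0, 1000, mask);
 Mat R = new Mat(), t = new Mat();
 Calib3d.recoverPose(E, points1, points2, cameraMatrix, R, t, mask); // pose up to scale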

findEssentialMat
public static Mat findEssentialMat(Mat points1, Mat points2, Mat cameraMatrix, int method, double prob, double threshold, int maxIters)
Calculates an essential matrix from the corresponding points in two images. Parameters:
points1
 Array of N (N >= 5) 2D points from the first image. The point coordinates should be floatingpoint (single or double precision).points2
 Array of the second image points of the same size and format as points1 .cameraMatrix
 Camera intrinsic matrix \(\cameramatrix{A}\) . Note that this function assumes that points1 and points2 are feature points from cameras with the same camera intrinsic matrix. If this assumption does not hold for your use case, use #undistortPoints withP = cv::NoArray()
for both cameras to transform image points to normalized image coordinates, which are valid for the identity camera intrinsic matrix. When passing these coordinates, pass the identity matrix for this parameter.method
 Method for computing an essential matrix. REF: RANSAC for the RANSAC algorithm.
 REF: LMEDS for the LMedS algorithm.
prob
 Parameter used for the RANSAC or LMedS methods only. It specifies a desirable level of confidence (probability) that the estimated matrix is correct.threshold
 Parameter used for RANSAC. It is the maximum distance from a point to an epipolar line in pixels, beyond which the point is considered an outlier and is not used for computing the final fundamental matrix. It can be set to something like 1-3, depending on the accuracy of the point localization, image resolution, and the image noise.maxIters
 The maximum number of robust method iterations. This function estimates the essential matrix based on the five-point algorithm solver in CITE: Nister03 . CITE: SteweniusCFS is also a related solution. The epipolar geometry is described by the following equation: \([p_2; 1]^T K^{-T} E K^{-1} [p_1; 1] = 0\) where \(E\) is an essential matrix, \(p_1\) and \(p_2\) are corresponding points in the first and the second images, respectively. The result of this function may be passed further to #decomposeEssentialMat or #recoverPose to recover the relative pose between cameras. Returns:
 automatically generated

findEssentialMat
public static Mat findEssentialMat(Mat points1, Mat points2, Mat cameraMatrix, int method, double prob, double threshold)
Calculates an essential matrix from the corresponding points in two images. Parameters:
points1
 Array of N (N >= 5) 2D points from the first image. The point coordinates should be floatingpoint (single or double precision).points2
 Array of the second image points of the same size and format as points1 .cameraMatrix
 Camera intrinsic matrix \(\cameramatrix{A}\) . Note that this function assumes that points1 and points2 are feature points from cameras with the same camera intrinsic matrix. If this assumption does not hold for your use case, use #undistortPoints withP = cv::NoArray()
for both cameras to transform image points to normalized image coordinates, which are valid for the identity camera intrinsic matrix. When passing these coordinates, pass the identity matrix for this parameter.method
 Method for computing an essential matrix. REF: RANSAC for the RANSAC algorithm.
 REF: LMEDS for the LMedS algorithm.
prob
 Parameter used for the RANSAC or LMedS methods only. It specifies a desirable level of confidence (probability) that the estimated matrix is correct.threshold
 Parameter used for RANSAC. It is the maximum distance from a point to an epipolar line in pixels, beyond which the point is considered an outlier and is not used for computing the final fundamental matrix. It can be set to something like 1-3, depending on the accuracy of the point localization, image resolution, and the image noise. This function estimates the essential matrix based on the five-point algorithm solver in CITE: Nister03 . CITE: SteweniusCFS is also a related solution. The epipolar geometry is described by the following equation: \([p_2; 1]^T K^{-T} E K^{-1} [p_1; 1] = 0\) where \(E\) is an essential matrix, \(p_1\) and \(p_2\) are corresponding points in the first and the second images, respectively. The result of this function may be passed further to #decomposeEssentialMat or #recoverPose to recover the relative pose between cameras. Returns:
 automatically generated

findEssentialMat
public static Mat findEssentialMat(Mat points1, Mat points2, Mat cameraMatrix, int method, double prob)
Calculates an essential matrix from the corresponding points in two images. Parameters:
points1
 Array of N (N >= 5) 2D points from the first image. The point coordinates should be floatingpoint (single or double precision).points2
 Array of the second image points of the same size and format as points1 .cameraMatrix
 Camera intrinsic matrix \(\cameramatrix{A}\) . Note that this function assumes that points1 and points2 are feature points from cameras with the same camera intrinsic matrix. If this assumption does not hold for your use case, use #undistortPoints withP = cv::NoArray()
for both cameras to transform image points to normalized image coordinates, which are valid for the identity camera intrinsic matrix. When passing these coordinates, pass the identity matrix for this parameter.method
 Method for computing an essential matrix. REF: RANSAC for the RANSAC algorithm.
 REF: LMEDS for the LMedS algorithm.
prob
 Parameter used for the RANSAC or LMedS methods only. It specifies a desirable level of confidence (probability) that the estimated matrix is correct. This function estimates the essential matrix based on the five-point algorithm solver in CITE: Nister03 . CITE: SteweniusCFS is also a related solution. The epipolar geometry is described by the following equation: \([p_2; 1]^T K^{-T} E K^{-1} [p_1; 1] = 0\) where \(E\) is an essential matrix, \(p_1\) and \(p_2\) are corresponding points in the first and the second images, respectively. The result of this function may be passed further to #decomposeEssentialMat or #recoverPose to recover the relative pose between cameras. Returns:
 automatically generated

findEssentialMat
public static Mat findEssentialMat(Mat points1, Mat points2, Mat cameraMatrix, int method)
Calculates an essential matrix from the corresponding points in two images. Parameters:
points1
 Array of N (N >= 5) 2D points from the first image. The point coordinates should be floatingpoint (single or double precision).points2
 Array of the second image points of the same size and format as points1 .cameraMatrix
 Camera intrinsic matrix \(\cameramatrix{A}\) . Note that this function assumes that points1 and points2 are feature points from cameras with the same camera intrinsic matrix. If this assumption does not hold for your use case, use #undistortPoints withP = cv::NoArray()
for both cameras to transform image points to normalized image coordinates, which are valid for the identity camera intrinsic matrix. When passing these coordinates, pass the identity matrix for this parameter.method
 Method for computing an essential matrix. REF: RANSAC for the RANSAC algorithm.
 REF: LMEDS for the LMedS algorithm.
 Returns:
 automatically generated

findEssentialMat
public static Mat findEssentialMat(Mat points1, Mat points2, Mat cameraMatrix)
Calculates an essential matrix from the corresponding points in two images. Parameters:
points1
 Array of N (N >= 5) 2D points from the first image. The point coordinates should be floatingpoint (single or double precision).points2
 Array of the second image points of the same size and format as points1 .cameraMatrix
 Camera intrinsic matrix \(\cameramatrix{A}\) . Note that this function assumes that points1 and points2 are feature points from cameras with the same camera intrinsic matrix. If this assumption does not hold for your use case, use #undistortPoints withP = cv::NoArray()
for both cameras to transform image points to normalized image coordinates, which are valid for the identity camera intrinsic matrix. When passing these coordinates, pass the identity matrix for this parameter. REF: RANSAC for the RANSAC algorithm.
 REF: LMEDS for the LMedS algorithm.
 Returns:
 automatically generated

findEssentialMat
public static Mat findEssentialMat(Mat points1, Mat points2, double focal, Point pp, int method, double prob, double threshold, int maxIters, Mat mask)
 Parameters:
points1
 Array of N (N >= 5) 2D points from the first image. The point coordinates should be floatingpoint (single or double precision).points2
 Array of the second image points of the same size and format as points1 .focal
 focal length of the camera. Note that this function assumes that points1 and points2 are feature points from cameras with same focal length and principal point.pp
 principal point of the camera.method
 Method for computing a fundamental matrix. REF: RANSAC for the RANSAC algorithm.
 REF: LMEDS for the LMedS algorithm.
threshold
 Parameter used for RANSAC. It is the maximum distance from a point to an epipolar line in pixels, beyond which the point is considered an outlier and is not used for computing the final fundamental matrix. It can be set to something like 1-3, depending on the accuracy of the point localization, image resolution, and the image noise.prob
 Parameter used for the RANSAC or LMedS methods only. It specifies a desirable level of confidence (probability) that the estimated matrix is correct.mask
 Output array of N elements, every element of which is set to 0 for outliers and to 1 for the other points. The array is computed only in the RANSAC and LMedS methods.maxIters
 The maximum number of robust method iterations. This function differs from the one above in that it computes the camera intrinsic matrix from the focal length and principal point: \(A = \begin{bmatrix} f & 0 & x_{pp} \\ 0 & f & y_{pp} \\ 0 & 0 & 1 \end{bmatrix}\) Returns:
 automatically generated
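 A brief sketch (not part of the generated documentation) of this focal/principal-point variant; the focal length and the image-center principal point are illustrative assumptions:

 Point pp = new Point(imageWidth / 2.0, imageHeight / 2.0); // assumed principal point
 Mat mask = new Mat();
 Mat E = Calib3d.findEssentialMat(points1, points2, 700.0, pp,
         Calib3d.RANSAC, 0.999, 1.0, 1000, mask);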

findEssentialMat
public static Mat findEssentialMat(Mat points1, Mat points2, double focal, Point pp, int method, double prob, double threshold, int maxIters)
 Parameters:
points1
 Array of N (N >= 5) 2D points from the first image. The point coordinates should be floatingpoint (single or double precision).points2
 Array of the second image points of the same size and format as points1 .focal
 focal length of the camera. Note that this function assumes that points1 and points2 are feature points from cameras with same focal length and principal point.pp
 principal point of the camera.method
 Method for computing a fundamental matrix. REF: RANSAC for the RANSAC algorithm.
 REF: LMEDS for the LMedS algorithm.
threshold
 Parameter used for RANSAC. It is the maximum distance from a point to an epipolar line in pixels, beyond which the point is considered an outlier and is not used for computing the final fundamental matrix. It can be set to something like 1-3, depending on the accuracy of the point localization, image resolution, and the image noise.prob
 Parameter used for the RANSAC or LMedS methods only. It specifies a desirable level of confidence (probability) that the estimated matrix is correct.maxIters
 The maximum number of robust method iterations. This function differs from the one above in that it computes the camera intrinsic matrix from the focal length and principal point: \(A = \begin{bmatrix} f & 0 & x_{pp} \\ 0 & f & y_{pp} \\ 0 & 0 & 1 \end{bmatrix}\) Returns:
 automatically generated

findEssentialMat
public static Mat findEssentialMat(Mat points1, Mat points2, double focal, Point pp, int method, double prob, double threshold)
 Parameters:
points1
 Array of N (N >= 5) 2D points from the first image. The point coordinates should be floatingpoint (single or double precision).points2
 Array of the second image points of the same size and format as points1 .focal
 focal length of the camera. Note that this function assumes that points1 and points2 are feature points from cameras with same focal length and principal point.pp
 principal point of the camera.method
 Method for computing a fundamental matrix. REF: RANSAC for the RANSAC algorithm.
 REF: LMEDS for the LMedS algorithm.
threshold
 Parameter used for RANSAC. It is the maximum distance from a point to an epipolar line in pixels, beyond which the point is considered an outlier and is not used for computing the final fundamental matrix. It can be set to something like 1-3, depending on the accuracy of the point localization, image resolution, and the image noise.prob
 Parameter used for the RANSAC or LMedS methods only. It specifies a desirable level of confidence (probability) that the estimated matrix is correct. This function differs from the one above in that it computes the camera intrinsic matrix from the focal length and principal point: \(A = \begin{bmatrix} f & 0 & x_{pp} \\ 0 & f & y_{pp} \\ 0 & 0 & 1 \end{bmatrix}\) Returns:
 automatically generated

findEssentialMat
public static Mat findEssentialMat(Mat points1, Mat points2, double focal, Point pp, int method, double prob)
 Parameters:
points1
 Array of N (N >= 5) 2D points from the first image. The point coordinates should be floatingpoint (single or double precision).points2
 Array of the second image points of the same size and format as points1 .focal
 focal length of the camera. Note that this function assumes that points1 and points2 are feature points from cameras with same focal length and principal point.pp
 principal point of the camera.method
 Method for computing a fundamental matrix. REF: RANSAC for the RANSAC algorithm.
 REF: LMEDS for the LMedS algorithm.
prob
 Parameter used for the RANSAC or LMedS methods only. It specifies a desirable level of confidence (probability) that the estimated matrix is correct. This function differs from the one above in that it computes the camera intrinsic matrix from the focal length and principal point: \(A = \begin{bmatrix} f & 0 & x_{pp} \\ 0 & f & y_{pp} \\ 0 & 0 & 1 \end{bmatrix}\) Returns:
 automatically generated

findEssentialMat
public static Mat findEssentialMat(Mat points1, Mat points2, double focal, Point pp, int method)
 Parameters:
points1
 Array of N (N >= 5) 2D points from the first image. The point coordinates should be floatingpoint (single or double precision).points2
 Array of the second image points of the same size and format as points1 .focal
 focal length of the camera. Note that this function assumes that points1 and points2 are feature points from cameras with same focal length and principal point.pp
 principal point of the camera.method
 Method for computing a fundamental matrix. REF: RANSAC for the RANSAC algorithm.
 REF: LMEDS for the LMedS algorithm.
 Returns:
 automatically generated

findEssentialMat
public static Mat findEssentialMat(Mat points1, Mat points2, double focal, Point pp)
 Parameters:
points1
 Array of N (N >= 5) 2D points from the first image. The point coordinates should be floatingpoint (single or double precision).points2
 Array of the second image points of the same size and format as points1 .focal
 focal length of the camera. Note that this function assumes that points1 and points2 are feature points from cameras with same focal length and principal point.pp
 principal point of the camera. REF: RANSAC for the RANSAC algorithm.
 REF: LMEDS for the LMedS algorithm.
 Returns:
 automatically generated

findEssentialMat
public static Mat findEssentialMat(Mat points1, Mat points2, double focal)
 Parameters:
points1
 Array of N (N >= 5) 2D points from the first image. The point coordinates should be floating-point (single or double precision).
points2
 Array of the second image points of the same size and format as points1.
focal
 focal length of the camera. Note that this function assumes that points1 and points2 are feature points from cameras with the same focal length and principal point.
 Returns:
 automatically generated

findEssentialMat
public static Mat findEssentialMat(Mat points1, Mat points2)
 Parameters:
points1
 Array of N (N >= 5) 2D points from the first image. The point coordinates should be floating-point (single or double precision).
points2
 Array of the second image points of the same size and format as points1.
 Returns:
 automatically generated

findEssentialMat
public static Mat findEssentialMat(Mat points1, Mat points2, Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, int method, double prob, double threshold, Mat mask)
Calculates an essential matrix from the corresponding points in two images from potentially two different cameras. Parameters:
points1
 Array of N (N >= 5) 2D points from the first image. The point coordinates should be floating-point (single or double precision).
points2
 Array of the second image points of the same size and format as points1.
cameraMatrix1
 Camera matrix \(K = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\) . Note that this function assumes that points1 and points2 are feature points from cameras with the same camera matrix. If this assumption does not hold for your use case, use #undistortPoints with P = cv::NoArray() for both cameras to transform image points to normalized image coordinates, which are valid for the identity camera matrix. When passing these coordinates, pass the identity matrix for this parameter.
cameraMatrix2
 Camera matrix \(K = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\) . Note that this function assumes that points1 and points2 are feature points from cameras with the same camera matrix. If this assumption does not hold for your use case, use #undistortPoints with P = cv::NoArray() for both cameras to transform image points to normalized image coordinates, which are valid for the identity camera matrix. When passing these coordinates, pass the identity matrix for this parameter.
distCoeffs1
 Input vector of distortion coefficients \((k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6[, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\) of 4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, zero distortion coefficients are assumed.
distCoeffs2
 Input vector of distortion coefficients \((k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6[, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\) of 4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, zero distortion coefficients are assumed.
method
 Method for computing an essential matrix. REF: RANSAC for the RANSAC algorithm.
 REF: LMEDS for the LMedS algorithm.
prob
 Parameter used for the RANSAC or LMedS methods only. It specifies a desirable level of confidence (probability) that the estimated matrix is correct.
threshold
 Parameter used for RANSAC. It is the maximum distance from a point to an epipolar line in pixels, beyond which the point is considered an outlier and is not used for computing the final fundamental matrix. It can be set to something like 1-3, depending on the accuracy of the point localization, image resolution, and the image noise.
mask
 Output array of N elements, every element of which is set to 0 for outliers and to 1 for the other points. The array is computed only in the RANSAC and LMedS methods. This function estimates the essential matrix based on the five-point algorithm solver in CITE: Nister03 . CITE: SteweniusCFS is also a related solution. The epipolar geometry is described by the following equation: \([p_2; 1]^T K^{-T} E K^{-1} [p_1; 1] = 0\) where \(E\) is an essential matrix, \(p_1\) and \(p_2\) are corresponding points in the first and the second images, respectively. The result of this function may be passed further to #decomposeEssentialMat or #recoverPose to recover the relative pose between cameras. Returns:
 automatically generated
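For illustration, a minimal Java sketch of this overload (the point values and intrinsics below are made-up placeholders, not part of this reference; identical intrinsics and zero distortion are assumed for both cameras):
 import org.opencv.calib3d.Calib3d;
 import org.opencv.core.*;

 public class FindEssentialMatSketch {
     public static void main(String[] args) {
         System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
         // Synthetic correspondences; real code would use matched feature points.
         Point[] p1 = { new Point(120, 90), new Point(400, 120), new Point(250, 300),
                        new Point(60, 250), new Point(500, 380), new Point(330, 60),
                        new Point(180, 420), new Point(450, 220) };
         Point[] p2 = new Point[p1.length];
         for (int i = 0; i < p1.length; i++)
             p2[i] = new Point(p1[i].x + 4, p1[i].y - 3); // second view, slightly shifted
         Mat points1 = new MatOfPoint2f(p1);
         Mat points2 = new MatOfPoint2f(p2);
         // Assumed pinhole intrinsics shared by both cameras.
         Mat K = Mat.eye(3, 3, CvType.CV_64F);
         K.put(0, 0, 800.0); K.put(1, 1, 800.0);
         K.put(0, 2, 320.0); K.put(1, 2, 240.0);
         Mat dist = new Mat(); // empty => zero distortion coefficients
         Mat mask = new Mat();
         Mat E = Calib3d.findEssentialMat(points1, points2, K, dist, K, dist,
                 Calib3d.RANSAC, 0.999, 1.0, mask);
         System.out.println("E = " + E.dump());
     }
 }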

findEssentialMat
public static Mat findEssentialMat(Mat points1, Mat points2, Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, int method, double prob, double threshold)
Calculates an essential matrix from the corresponding points in two images from potentially two different cameras. Parameters:
points1
 Array of N (N >= 5) 2D points from the first image. The point coordinates should be floating-point (single or double precision).
points2
 Array of the second image points of the same size and format as points1.
cameraMatrix1
 Camera matrix \(K = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\) . Note that this function assumes that points1 and points2 are feature points from cameras with the same camera matrix. If this assumption does not hold for your use case, use #undistortPoints with P = cv::NoArray() for both cameras to transform image points to normalized image coordinates, which are valid for the identity camera matrix. When passing these coordinates, pass the identity matrix for this parameter.
cameraMatrix2
 Camera matrix \(K = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\) . Note that this function assumes that points1 and points2 are feature points from cameras with the same camera matrix. If this assumption does not hold for your use case, use #undistortPoints with P = cv::NoArray() for both cameras to transform image points to normalized image coordinates, which are valid for the identity camera matrix. When passing these coordinates, pass the identity matrix for this parameter.
distCoeffs1
 Input vector of distortion coefficients \((k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6[, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\) of 4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, zero distortion coefficients are assumed.
distCoeffs2
 Input vector of distortion coefficients \((k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6[, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\) of 4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, zero distortion coefficients are assumed.
method
 Method for computing an essential matrix. REF: RANSAC for the RANSAC algorithm.
 REF: LMEDS for the LMedS algorithm.
prob
 Parameter used for the RANSAC or LMedS methods only. It specifies a desirable level of confidence (probability) that the estimated matrix is correct.
threshold
 Parameter used for RANSAC. It is the maximum distance from a point to an epipolar line in pixels, beyond which the point is considered an outlier and is not used for computing the final fundamental matrix. It can be set to something like 1-3, depending on the accuracy of the point localization, image resolution, and the image noise. This function estimates the essential matrix based on the five-point algorithm solver in CITE: Nister03 . CITE: SteweniusCFS is also a related solution. The epipolar geometry is described by the following equation: \([p_2; 1]^T K^{-T} E K^{-1} [p_1; 1] = 0\) where \(E\) is an essential matrix, \(p_1\) and \(p_2\) are corresponding points in the first and the second images, respectively. The result of this function may be passed further to #decomposeEssentialMat or #recoverPose to recover the relative pose between cameras. Returns:
 automatically generated

findEssentialMat
public static Mat findEssentialMat(Mat points1, Mat points2, Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, int method, double prob)
Calculates an essential matrix from the corresponding points in two images from potentially two different cameras. Parameters:
points1
 Array of N (N >= 5) 2D points from the first image. The point coordinates should be floating-point (single or double precision).
points2
 Array of the second image points of the same size and format as points1.
cameraMatrix1
 Camera matrix \(K = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\) . Note that this function assumes that points1 and points2 are feature points from cameras with the same camera matrix. If this assumption does not hold for your use case, use #undistortPoints with P = cv::NoArray() for both cameras to transform image points to normalized image coordinates, which are valid for the identity camera matrix. When passing these coordinates, pass the identity matrix for this parameter.
cameraMatrix2
 Camera matrix \(K = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\) . Note that this function assumes that points1 and points2 are feature points from cameras with the same camera matrix. If this assumption does not hold for your use case, use #undistortPoints with P = cv::NoArray() for both cameras to transform image points to normalized image coordinates, which are valid for the identity camera matrix. When passing these coordinates, pass the identity matrix for this parameter.
distCoeffs1
 Input vector of distortion coefficients \((k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6[, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\) of 4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, zero distortion coefficients are assumed.
distCoeffs2
 Input vector of distortion coefficients \((k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6[, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\) of 4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, zero distortion coefficients are assumed.
method
 Method for computing an essential matrix. REF: RANSAC for the RANSAC algorithm.
 REF: LMEDS for the LMedS algorithm.
prob
 Parameter used for the RANSAC or LMedS methods only. It specifies a desirable level of confidence (probability) that the estimated matrix is correct. This function estimates the essential matrix based on the five-point algorithm solver in CITE: Nister03 . CITE: SteweniusCFS is also a related solution. The epipolar geometry is described by the following equation: \([p_2; 1]^T K^{-T} E K^{-1} [p_1; 1] = 0\) where \(E\) is an essential matrix, \(p_1\) and \(p_2\) are corresponding points in the first and the second images, respectively. The result of this function may be passed further to #decomposeEssentialMat or #recoverPose to recover the relative pose between cameras. Returns:
 automatically generated

findEssentialMat
public static Mat findEssentialMat(Mat points1, Mat points2, Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, int method)
Calculates an essential matrix from the corresponding points in two images from potentially two different cameras. Parameters:
points1
 Array of N (N >= 5) 2D points from the first image. The point coordinates should be floating-point (single or double precision).
points2
 Array of the second image points of the same size and format as points1.
cameraMatrix1
 Camera matrix \(K = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\) . Note that this function assumes that points1 and points2 are feature points from cameras with the same camera matrix. If this assumption does not hold for your use case, use #undistortPoints with P = cv::NoArray() for both cameras to transform image points to normalized image coordinates, which are valid for the identity camera matrix. When passing these coordinates, pass the identity matrix for this parameter.
cameraMatrix2
 Camera matrix \(K = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\) . Note that this function assumes that points1 and points2 are feature points from cameras with the same camera matrix. If this assumption does not hold for your use case, use #undistortPoints with P = cv::NoArray() for both cameras to transform image points to normalized image coordinates, which are valid for the identity camera matrix. When passing these coordinates, pass the identity matrix for this parameter.
distCoeffs1
 Input vector of distortion coefficients \((k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6[, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\) of 4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, zero distortion coefficients are assumed.
distCoeffs2
 Input vector of distortion coefficients \((k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6[, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\) of 4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, zero distortion coefficients are assumed.
method
 Method for computing an essential matrix. REF: RANSAC for the RANSAC algorithm.
 REF: LMEDS for the LMedS algorithm.
 Returns:
 automatically generated

findEssentialMat
public static Mat findEssentialMat(Mat points1, Mat points2, Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2)
Calculates an essential matrix from the corresponding points in two images from potentially two different cameras. Parameters:
points1
 Array of N (N >= 5) 2D points from the first image. The point coordinates should be floating-point (single or double precision).
points2
 Array of the second image points of the same size and format as points1.
cameraMatrix1
 Camera matrix \(K = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\) . Note that this function assumes that points1 and points2 are feature points from cameras with the same camera matrix. If this assumption does not hold for your use case, use #undistortPoints with P = cv::NoArray() for both cameras to transform image points to normalized image coordinates, which are valid for the identity camera matrix. When passing these coordinates, pass the identity matrix for this parameter.
cameraMatrix2
 Camera matrix \(K = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\) . Note that this function assumes that points1 and points2 are feature points from cameras with the same camera matrix. If this assumption does not hold for your use case, use #undistortPoints with P = cv::NoArray() for both cameras to transform image points to normalized image coordinates, which are valid for the identity camera matrix. When passing these coordinates, pass the identity matrix for this parameter.
distCoeffs1
 Input vector of distortion coefficients \((k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6[, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\) of 4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, zero distortion coefficients are assumed.
distCoeffs2
 Input vector of distortion coefficients \((k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6[, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\) of 4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, zero distortion coefficients are assumed.
 Returns:
 automatically generated

findEssentialMat
public static Mat findEssentialMat(Mat points1, Mat points2, Mat cameraMatrix1, Mat cameraMatrix2, Mat dist_coeff1, Mat dist_coeff2, Mat mask, UsacParams params)

decomposeEssentialMat
public static void decomposeEssentialMat(Mat E, Mat R1, Mat R2, Mat t)
Decompose an essential matrix to possible rotations and translation. Parameters:
E
 The input essential matrix.
R1
 One possible rotation matrix.
R2
 Another possible rotation matrix.
t
 One possible translation. This function decomposes the essential matrix E using SVD decomposition CITE: HartleyZ00. In general, four possible poses exist for the decomposition of E. They are \([R_1, t]\), \([R_1, -t]\), \([R_2, t]\), \([R_2, -t]\). If E gives the epipolar constraint \([p_2; 1]^T A^{-T} E A^{-1} [p_1; 1] = 0\) between the image points \(p_1\) in the first image and \(p_2\) in the second image, then any of the tuples \([R_1, t]\), \([R_1, -t]\), \([R_2, t]\), \([R_2, -t]\) is a change of basis from the first camera's coordinate system to the second camera's coordinate system. However, by decomposing E, one can only get the direction of the translation. For this reason, the translation t is returned with unit length.
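A short Java sketch of enumerating the four pose hypotheses (the class and method names are illustrative only; E is assumed to come from findEssentialMat):
 import org.opencv.calib3d.Calib3d;
 import org.opencv.core.Core;
 import org.opencv.core.Mat;
 import org.opencv.core.Scalar;

 class DecomposeSketch {
     // Prints the four candidate poses [R1, t], [R1, -t], [R2, t], [R2, -t].
     static void printPoseHypotheses(Mat E) {
         Mat R1 = new Mat(), R2 = new Mat(), t = new Mat();
         Calib3d.decomposeEssentialMat(E, R1, R2, t);
         Mat tNeg = new Mat();
         Core.multiply(t, new Scalar(-1.0), tNeg); // -t: unit length, opposite direction
         System.out.println("R1 = " + R1.dump());
         System.out.println("R2 = " + R2.dump());
         System.out.println("t = " + t.dump() + ", -t = " + tNeg.dump());
     }
 }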

recoverPose
public static int recoverPose(Mat points1, Mat points2, Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, Mat E, Mat R, Mat t, int method, double prob, double threshold, Mat mask)
Recovers the relative camera rotation and the translation from corresponding points in two images from two different cameras, using cheirality check. Returns the number of inliers that pass the check. Parameters:
points1
 Array of N 2D points from the first image. The point coordinates should be floating-point (single or double precision).
points2
 Array of the second image points of the same size and format as points1.
cameraMatrix1
 Input/output camera matrix for the first camera, the same as in REF: calibrateCamera. Furthermore, for the stereo case, additional flags may be used, see below.
distCoeffs1
 Input/output vector of distortion coefficients, the same as in REF: calibrateCamera.
cameraMatrix2
 Input/output camera matrix for the second camera, the same as in REF: calibrateCamera. Furthermore, for the stereo case, additional flags may be used, see below.
distCoeffs2
 Input/output vector of distortion coefficients, the same as in REF: calibrateCamera.
E
 The output essential matrix.
R
 Output rotation matrix. Together with the translation vector, this matrix makes up a tuple that performs a change of basis from the first camera's coordinate system to the second camera's coordinate system. Note that, in general, t cannot be used for this tuple, see the parameter described below.
t
 Output translation vector. This vector is obtained by REF: decomposeEssentialMat and therefore is only known up to scale, i.e. t is the direction of the translation vector and has unit length.
method
 Method for computing an essential matrix. REF: RANSAC for the RANSAC algorithm.
 REF: LMEDS for the LMedS algorithm.
prob
 Parameter used for the RANSAC or LMedS methods only. It specifies a desirable level of confidence (probability) that the estimated matrix is correct.
threshold
 Parameter used for RANSAC. It is the maximum distance from a point to an epipolar line in pixels, beyond which the point is considered an outlier and is not used for computing the final fundamental matrix. It can be set to something like 1-3, depending on the accuracy of the point localization, image resolution, and the image noise.
mask
 Input/output mask for inliers in points1 and points2. If it is not empty, then it marks inliers in points1 and points2 for the given essential matrix E. Only these inliers will be used to recover pose. In the output mask only inliers which pass the cheirality check are kept. This function decomposes an essential matrix using REF: decomposeEssentialMat and then verifies possible pose hypotheses by doing the cheirality check. The cheirality check means that the triangulated 3D points should have positive depth. Some details can be found in CITE: Nister03. This function can be used to process the output E and mask from REF: findEssentialMat. In this scenario, points1 and points2 are the same input for findEssentialMat:
 // Example. Estimation of the essential matrix using the RANSAC algorithm
 int point_count = 100;
 vector<Point2f> points1(point_count);
 vector<Point2f> points2(point_count);
 // initialize the points here ...
 for( int i = 0; i < point_count; i++ )
 {
     points1[i] = ...;
     points2[i] = ...;
 }
 // Input: camera calibration of both cameras, for example using intrinsic chessboard calibration.
 Mat cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2;
 // Output: Essential matrix, relative rotation and relative translation.
 Mat E, R, t, mask;
 recoverPose(points1, points2, cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2,
             E, R, t, RANSAC, 0.999, 1.0, mask);
 Returns:
 automatically generated
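The same flow in Java (a sketch; the correspondences and calibration values below are placeholders that must come from real matching and calibration):
 MatOfPoint2f points1 = new MatOfPoint2f(/* matched points, first image */);
 MatOfPoint2f points2 = new MatOfPoint2f(/* matched points, second image */);
 // Calibration of both cameras, e.g. from calibrateCamera (identity used as a stand-in).
 Mat cameraMatrix1 = Mat.eye(3, 3, CvType.CV_64F), distCoeffs1 = new Mat();
 Mat cameraMatrix2 = Mat.eye(3, 3, CvType.CV_64F), distCoeffs2 = new Mat();
 Mat E = new Mat(), R = new Mat(), t = new Mat(), mask = new Mat();
 int inliers = Calib3d.recoverPose(points1, points2,
         cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2,
         E, R, t, Calib3d.RANSAC, 0.999, 1.0, mask);
 System.out.println(inliers + " inliers passed the cheirality check");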

recoverPose
public static int recoverPose(Mat points1, Mat points2, Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, Mat E, Mat R, Mat t, int method, double prob, double threshold)
Recovers the relative camera rotation and the translation from corresponding points in two images from two different cameras, using cheirality check. Returns the number of inliers that pass the check. Parameters:
points1
 Array of N 2D points from the first image. The point coordinates should be floating-point (single or double precision).
points2
 Array of the second image points of the same size and format as points1.
cameraMatrix1
 Input/output camera matrix for the first camera, the same as in REF: calibrateCamera. Furthermore, for the stereo case, additional flags may be used, see below.
distCoeffs1
 Input/output vector of distortion coefficients, the same as in REF: calibrateCamera.
cameraMatrix2
 Input/output camera matrix for the second camera, the same as in REF: calibrateCamera. Furthermore, for the stereo case, additional flags may be used, see below.
distCoeffs2
 Input/output vector of distortion coefficients, the same as in REF: calibrateCamera.
E
 The output essential matrix.
R
 Output rotation matrix. Together with the translation vector, this matrix makes up a tuple that performs a change of basis from the first camera's coordinate system to the second camera's coordinate system. Note that, in general, t cannot be used for this tuple, see the parameter described below.
t
 Output translation vector. This vector is obtained by REF: decomposeEssentialMat and therefore is only known up to scale, i.e. t is the direction of the translation vector and has unit length.
method
 Method for computing an essential matrix. REF: RANSAC for the RANSAC algorithm.
 REF: LMEDS for the LMedS algorithm.
prob
 Parameter used for the RANSAC or LMedS methods only. It specifies a desirable level of confidence (probability) that the estimated matrix is correct.
threshold
 Parameter used for RANSAC. It is the maximum distance from a point to an epipolar line in pixels, beyond which the point is considered an outlier and is not used for computing the final fundamental matrix. It can be set to something like 1-3, depending on the accuracy of the point localization, image resolution, and the image noise. This function decomposes an essential matrix using REF: decomposeEssentialMat and then verifies possible pose hypotheses by doing the cheirality check. The cheirality check means that the triangulated 3D points should have positive depth. Some details can be found in CITE: Nister03. This function can be used to process the output E and mask from REF: findEssentialMat. In this scenario, points1 and points2 are the same input for findEssentialMat:
 // Example. Estimation of the essential matrix using the RANSAC algorithm
 int point_count = 100;
 vector<Point2f> points1(point_count);
 vector<Point2f> points2(point_count);
 // initialize the points here ...
 for( int i = 0; i < point_count; i++ )
 {
     points1[i] = ...;
     points2[i] = ...;
 }
 // Input: camera calibration of both cameras, for example using intrinsic chessboard calibration.
 Mat cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2;
 // Output: Essential matrix, relative rotation and relative translation.
 Mat E, R, t, mask;
 recoverPose(points1, points2, cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2,
             E, R, t, RANSAC, 0.999, 1.0, mask);
 Returns:
 automatically generated

recoverPose
public static int recoverPose(Mat points1, Mat points2, Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, Mat E, Mat R, Mat t, int method, double prob)
Recovers the relative camera rotation and the translation from corresponding points in two images from two different cameras, using cheirality check. Returns the number of inliers that pass the check. Parameters:
points1
 Array of N 2D points from the first image. The point coordinates should be floating-point (single or double precision).
points2
 Array of the second image points of the same size and format as points1.
cameraMatrix1
 Input/output camera matrix for the first camera, the same as in REF: calibrateCamera. Furthermore, for the stereo case, additional flags may be used, see below.
distCoeffs1
 Input/output vector of distortion coefficients, the same as in REF: calibrateCamera.
cameraMatrix2
 Input/output camera matrix for the second camera, the same as in REF: calibrateCamera. Furthermore, for the stereo case, additional flags may be used, see below.
distCoeffs2
 Input/output vector of distortion coefficients, the same as in REF: calibrateCamera.
E
 The output essential matrix.
R
 Output rotation matrix. Together with the translation vector, this matrix makes up a tuple that performs a change of basis from the first camera's coordinate system to the second camera's coordinate system. Note that, in general, t cannot be used for this tuple, see the parameter described below.
t
 Output translation vector. This vector is obtained by REF: decomposeEssentialMat and therefore is only known up to scale, i.e. t is the direction of the translation vector and has unit length.
method
 Method for computing an essential matrix. REF: RANSAC for the RANSAC algorithm.
 REF: LMEDS for the LMedS algorithm.
prob
 Parameter used for the RANSAC or LMedS methods only. It specifies a desirable level of confidence (probability) that the estimated matrix is correct. This function decomposes an essential matrix using REF: decomposeEssentialMat and then verifies possible pose hypotheses by doing the cheirality check. The cheirality check means that the triangulated 3D points should have positive depth. Some details can be found in CITE: Nister03. This function can be used to process the output E and mask from REF: findEssentialMat. In this scenario, points1 and points2 are the same input for findEssentialMat:
 // Example. Estimation of the essential matrix using the RANSAC algorithm
 int point_count = 100;
 vector<Point2f> points1(point_count);
 vector<Point2f> points2(point_count);
 // initialize the points here ...
 for( int i = 0; i < point_count; i++ )
 {
     points1[i] = ...;
     points2[i] = ...;
 }
 // Input: camera calibration of both cameras, for example using intrinsic chessboard calibration.
 Mat cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2;
 // Output: Essential matrix, relative rotation and relative translation.
 Mat E, R, t, mask;
 recoverPose(points1, points2, cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2,
             E, R, t, RANSAC, 0.999, 1.0, mask);
 Returns:
 automatically generated

recoverPose
public static int recoverPose(Mat points1, Mat points2, Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, Mat E, Mat R, Mat t, int method)
Recovers the relative camera rotation and the translation from corresponding points in two images from two different cameras, using cheirality check. Returns the number of inliers that pass the check. Parameters:
points1
 Array of N 2D points from the first image. The point coordinates should be floating-point (single or double precision).
points2
 Array of the second image points of the same size and format as points1.
cameraMatrix1
 Input/output camera matrix for the first camera, the same as in REF: calibrateCamera. Furthermore, for the stereo case, additional flags may be used, see below.
distCoeffs1
 Input/output vector of distortion coefficients, the same as in REF: calibrateCamera.
cameraMatrix2
 Input/output camera matrix for the second camera, the same as in REF: calibrateCamera. Furthermore, for the stereo case, additional flags may be used, see below.
distCoeffs2
 Input/output vector of distortion coefficients, the same as in REF: calibrateCamera.
E
 The output essential matrix.
R
 Output rotation matrix. Together with the translation vector, this matrix makes up a tuple that performs a change of basis from the first camera's coordinate system to the second camera's coordinate system. Note that, in general, t cannot be used for this tuple, see the parameter described below.
t
 Output translation vector. This vector is obtained by REF: decomposeEssentialMat and therefore is only known up to scale, i.e. t is the direction of the translation vector and has unit length.
method
 Method for computing an essential matrix. REF: RANSAC for the RANSAC algorithm.
 REF: LMEDS for the LMedS algorithm.
 // Example. Estimation of the essential matrix using the RANSAC algorithm
 int point_count = 100;
 vector<Point2f> points1(point_count);
 vector<Point2f> points2(point_count);
 // initialize the points here ...
 for( int i = 0; i < point_count; i++ )
 {
     points1[i] = ...;
     points2[i] = ...;
 }
 // Input: camera calibration of both cameras, for example using intrinsic chessboard calibration.
 Mat cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2;
 // Output: Essential matrix, relative rotation and relative translation.
 Mat E, R, t, mask;
 recoverPose(points1, points2, cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2,
             E, R, t, RANSAC, 0.999, 1.0, mask);
 Returns:
 automatically generated

recoverPose
public static int recoverPose(Mat points1, Mat points2, Mat cameraMatrix1, Mat distCoeffs1, Mat cameraMatrix2, Mat distCoeffs2, Mat E, Mat R, Mat t)
Recovers the relative camera rotation and the translation from corresponding points in two images from two different cameras, using cheirality check. Returns the number of inliers that pass the check. Parameters:
points1
 Array of N 2D points from the first image. The point coordinates should be floating-point (single or double precision).
points2
 Array of the second image points of the same size and format as points1.
cameraMatrix1
 Input/output camera matrix for the first camera, the same as in REF: calibrateCamera. Furthermore, for the stereo case, additional flags may be used, see below.
distCoeffs1
 Input/output vector of distortion coefficients, the same as in REF: calibrateCamera.
cameraMatrix2
 Input/output camera matrix for the second camera, the same as in REF: calibrateCamera. Furthermore, for the stereo case, additional flags may be used, see below.
distCoeffs2
 Input/output vector of distortion coefficients, the same as in REF: calibrateCamera.
E
 The output essential matrix.
R
 Output rotation matrix. Together with the translation vector, this matrix makes up a tuple that performs a change of basis from the first camera's coordinate system to the second camera's coordinate system. Note that, in general, t cannot be used for this tuple, see the parameter described below.
t
 Output translation vector. This vector is obtained by REF: decomposeEssentialMat and therefore is only known up to scale, i.e. t is the direction of the translation vector and has unit length.
 // Example. Estimation of the essential matrix using the RANSAC algorithm
 int point_count = 100;
 vector<Point2f> points1(point_count);
 vector<Point2f> points2(point_count);
 // initialize the points here ...
 for( int i = 0; i < point_count; i++ )
 {
     points1[i] = ...;
     points2[i] = ...;
 }
 // Input: camera calibration of both cameras, for example using intrinsic chessboard calibration.
 Mat cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2;
 // Output: Essential matrix, relative rotation and relative translation.
 Mat E, R, t, mask;
 recoverPose(points1, points2, cameraMatrix1, distCoeffs1, cameraMatrix2, distCoeffs2,
             E, R, t, RANSAC, 0.999, 1.0, mask);
 Returns:
 automatically generated

recoverPose
public static int recoverPose(Mat E, Mat points1, Mat points2, Mat cameraMatrix, Mat R, Mat t, Mat mask)
Recovers the relative camera rotation and the translation from an estimated essential matrix and the corresponding points in two images, using chirality check. Returns the number of inliers that pass the check. Parameters:
E
 The input essential matrix.
points1
 Array of N 2D points from the first image. The point coordinates should be floating-point (single or double precision).
points2
 Array of the second image points of the same size and format as points1.
cameraMatrix
 Camera intrinsic matrix \(\cameramatrix{A}\) . Note that this function assumes that points1 and points2 are feature points from cameras with the same camera intrinsic matrix.
R
 Output rotation matrix. Together with the translation vector, this matrix makes up a tuple that performs a change of basis from the first camera's coordinate system to the second camera's coordinate system. Note that, in general, t cannot be used for this tuple, see the parameter described below.
t
 Output translation vector. This vector is obtained by REF: decomposeEssentialMat and therefore is only known up to scale, i.e. t is the direction of the translation vector and has unit length.
mask
 Input/output mask for inliers in points1 and points2. If it is not empty, then it marks inliers in points1 and points2 for the given essential matrix E. Only these inliers will be used to recover pose. In the output mask only inliers which pass the chirality check are kept. This function decomposes an essential matrix using REF: decomposeEssentialMat and then verifies possible pose hypotheses by doing the chirality check. The chirality check means that the triangulated 3D points should have positive depth. Some details can be found in CITE: Nister03. This function can be used to process the output E and mask from REF: findEssentialMat. In this scenario, points1 and points2 are the same input for #findEssentialMat :
 // Example. Estimation of the essential matrix using the RANSAC algorithm
 int point_count = 100;
 vector<Point2f> points1(point_count);
 vector<Point2f> points2(point_count);
 // initialize the points here ...
 for( int i = 0; i < point_count; i++ )
 {
     points1[i] = ...;
     points2[i] = ...;
 }
 // camera matrix with both focal lengths = 1, and principal point = (0, 0)
 Mat cameraMatrix = Mat::eye(3, 3, CV_64F);
 Mat E, R, t, mask;
 E = findEssentialMat(points1, points2, cameraMatrix, RANSAC, 0.999, 1.0, mask);
 recoverPose(E, points1, points2, cameraMatrix, R, t, mask);
 Returns:
 automatically generated
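A Java counterpart of the snippet above (a sketch; points1 and points2 are assumed to be MatOfPoint2f correspondences, and E is computed with the two-camera-matrix overload of findEssentialMat documented earlier, using identity intrinsics and empty distortion vectors):
 // Camera matrix with both focal lengths = 1 and principal point = (0, 0).
 Mat cameraMatrix = Mat.eye(3, 3, CvType.CV_64F);
 Mat noDist = new Mat(); // empty => zero distortion
 Mat mask = new Mat();
 Mat E = Calib3d.findEssentialMat(points1, points2, cameraMatrix, noDist,
         cameraMatrix, noDist, Calib3d.RANSAC, 0.999, 1.0, mask);
 Mat R = new Mat(), t = new Mat();
 int inliers = Calib3d.recoverPose(E, points1, points2, cameraMatrix, R, t, mask);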

recoverPose
public static int recoverPose(Mat E, Mat points1, Mat points2, Mat cameraMatrix, Mat R, Mat t)
Recovers the relative camera rotation and the translation from an estimated essential matrix and the corresponding points in two images, using chirality check. Returns the number of inliers that pass the check. Parameters:
E
 The input essential matrix.
points1
 Array of N 2D points from the first image. The point coordinates should be floating-point (single or double precision).
points2
 Array of the second image points of the same size and format as points1.
cameraMatrix
 Camera intrinsic matrix \(\cameramatrix{A}\) . Note that this function assumes that points1 and points2 are feature points from cameras with the same camera intrinsic matrix.
R
 Output rotation matrix. Together with the translation vector, this matrix makes up a tuple that performs a change of basis from the first camera's coordinate system to the second camera's coordinate system. Note that, in general, t cannot be used for this tuple, see the parameter description below.
t
 Output translation vector. This vector is obtained by REF: decomposeEssentialMat and therefore is only known up to scale, i.e. t is the direction of the translation vector and has unit length. This function decomposes an essential matrix using REF: decomposeEssentialMat and then verifies possible pose hypotheses by doing the chirality check. The chirality check means that the triangulated 3D points should have positive depth. Some details can be found in CITE: Nister03. This function can be used to process the output E and mask from REF: findEssentialMat. In this scenario, points1 and points2 are the same input for #findEssentialMat :
 // Example. Estimation of the essential matrix using the RANSAC algorithm
 int point_count = 100;
 vector<Point2f> points1(point_count);
 vector<Point2f> points2(point_count);
 // initialize the points here ...
 for( int i = 0; i < point_count; i++ )
 {
     points1[i] = ...;
     points2[i] = ...;
 }
 // camera matrix with both focal lengths = 1, and principal point = (0, 0)
 Mat cameraMatrix = Mat::eye(3, 3, CV_64F);
 Mat E, R, t, mask;
 E = findEssentialMat(points1, points2, cameraMatrix, RANSAC, 0.999, 1.0, mask);
 recoverPose(E, points1, points2, cameraMatrix, R, t, mask);
 Returns:
 automatically generated

recoverPose
public static int recoverPose(Mat E, Mat points1, Mat points2, Mat R, Mat t, double focal, Point pp, Mat mask)
 Parameters:
E
 The input essential matrix.
points1
 Array of N 2D points from the first image. The point coordinates should be floating-point (single or double precision).
points2
 Array of the second image points of the same size and format as points1.
R
 Output rotation matrix. Together with the translation vector, this matrix makes up a tuple that performs a change of basis from the first camera's coordinate system to the second camera's coordinate system. Note that, in general, t cannot be used for this tuple, see the parameter description below.
t
 Output translation vector. This vector is obtained by REF: decomposeEssentialMat and therefore is only known up to scale, i.e. t is the direction of the translation vector and has unit length.
focal
 Focal length of the camera. Note that this function assumes that points1 and points2 are feature points from cameras with the same focal length and principal point.
pp
 principal point of the camera.
mask
 Input/output mask for inliers in points1 and points2. If it is not empty, then it marks inliers in points1 and points2 for the given essential matrix E. Only these inliers will be used to recover pose. In the output mask only inliers which pass the chirality check are kept. This function differs from the one above in that it computes the camera intrinsic matrix from the focal length and principal point: \(A = \begin{bmatrix} f & 0 & x_{pp} \\ 0 & f & y_{pp} \\ 0 & 0 & 1 \end{bmatrix}\) Returns:
 automatically generated

recoverPose
public static int recoverPose(Mat E, Mat points1, Mat points2, Mat R, Mat t, double focal, Point pp)
 Parameters:
E
 The input essential matrix.
points1
 Array of N 2D points from the first image. The point coordinates should be floating-point (single or double precision).
points2
 Array of the second image points of the same size and format as points1.
R
 Output rotation matrix. Together with the translation vector, this matrix makes up a tuple that performs a change of basis from the first camera's coordinate system to the second camera's coordinate system. Note that, in general, t cannot be used for this tuple, see the parameter description below.
t
 Output translation vector. This vector is obtained by REF: decomposeEssentialMat and therefore is only known up to scale, i.e. t is the direction of the translation vector and has unit length.
focal
 Focal length of the camera. Note that this function assumes that points1 and points2 are feature points from cameras with the same focal length and principal point.
pp
 principal point of the camera. This function differs from the one above in that it computes the camera intrinsic matrix from the focal length and principal point: \(A = \begin{bmatrix} f & 0 & x_{pp} \\ 0 & f & y_{pp} \\ 0 & 0 & 1 \end{bmatrix}\) Returns:
 automatically generated

recoverPose
public static int recoverPose(Mat E, Mat points1, Mat points2, Mat R, Mat t, double focal)
 Parameters:
E
 The input essential matrix.
points1
 Array of N 2D points from the first image. The point coordinates should be floating-point (single or double precision).
points2
 Array of the second image points of the same size and format as points1.
R
 Output rotation matrix. Together with the translation vector, this matrix makes up a tuple that performs a change of basis from the first camera's coordinate system to the second camera's coordinate system. Note that, in general, t cannot be used for this tuple, see the parameter description below.
t
 Output translation vector. This vector is obtained by REF: decomposeEssentialMat and therefore is only known up to scale, i.e. t is the direction of the translation vector and has unit length.
focal
 Focal length of the camera. Note that this function assumes that points1 and points2 are feature points from cameras with the same focal length and principal point. This function differs from the one above in that it computes the camera intrinsic matrix from the focal length and principal point: \(A = \begin{bmatrix} f & 0 & x_{pp} \\ 0 & f & y_{pp} \\ 0 & 0 & 1 \end{bmatrix}\) Returns:
 automatically generated

recoverPose
public static int recoverPose(Mat E, Mat points1, Mat points2, Mat R, Mat t)
 Parameters:
E
 The input essential matrix.
points1
 Array of N 2D points from the first image. The point coordinates should be floating-point (single or double precision).
points2
 Array of the second image points of the same size and format as points1.
R
 Output rotation matrix. Together with the translation vector, this matrix makes up a tuple that performs a change of basis from the first camera's coordinate system to the second camera's coordinate system. Note that, in general, t cannot be used for this tuple, see the parameter description below.
t
 Output translation vector. This vector is obtained by REF: decomposeEssentialMat and therefore is only known up to scale, i.e. t is the direction of the translation vector and has unit length. This function differs from the one above in that it computes the camera intrinsic matrix from the focal length and principal point: \(A = \begin{bmatrix} f & 0 & x_{pp} \\ 0 & f & y_{pp} \\ 0 & 0 & 1 \end{bmatrix}\) Returns:
 automatically generated

recoverPose
public static int recoverPose(Mat E, Mat points1, Mat points2, Mat cameraMatrix, Mat R, Mat t, double distanceThresh, Mat mask, Mat triangulatedPoints)
 Parameters:
E
 The input essential matrix.
points1
 Array of N 2D points from the first image. The point coordinates should be floating-point (single or double precision).
points2
 Array of the second image points of the same size and format as points1.
cameraMatrix
 Camera intrinsic matrix \(\cameramatrix{A}\) . Note that this function assumes that points1 and points2 are feature points from cameras with the same camera intrinsic matrix.
R
 Output rotation matrix. Together with the translation vector, this matrix makes up a tuple that performs a change of basis from the first camera's coordinate system to the second camera's coordinate system. Note that, in general, t cannot be used for this tuple, see the parameter description below.
t
 Output translation vector. This vector is obtained by REF: decomposeEssentialMat and therefore is only known up to scale, i.e. t is the direction of the translation vector and has unit length.
distanceThresh
 threshold distance which is used to filter out far away points (i.e. infinite points).
mask
 Input/output mask for inliers in points1 and points2. If it is not empty, then it marks inliers in points1 and points2 for the given essential matrix E. Only these inliers will be used to recover pose. In the output mask only inliers which pass the chirality check are kept.
triangulatedPoints
 3D points which were reconstructed by triangulation. This function differs from the one above in that it outputs the triangulated 3D points that are used for the chirality check. Returns:
 automatically generated
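A brief Java sketch of this overload (E, points1, points2 and cameraMatrix are assumed to exist; the 50.0 distance threshold is an arbitrary illustration):
 Mat R = new Mat(), t = new Mat(), mask = new Mat(), points4D = new Mat();
 int inliers = Calib3d.recoverPose(E, points1, points2, cameraMatrix,
         R, t, 50.0, mask, points4D);
 // points4D holds the homogeneous 3D points triangulated for the chirality
 // check; points whose mask entry is 0 were rejected.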

recoverPose
public static int recoverPose(Mat E, Mat points1, Mat points2, Mat cameraMatrix, Mat R, Mat t, double distanceThresh, Mat mask)
 Parameters:
E
 The input essential matrix.
points1
 Array of N 2D points from the first image. The point coordinates should be floating-point (single or double precision).
points2
 Array of the second image points of the same size and format as points1.
cameraMatrix
 Camera intrinsic matrix \(\cameramatrix{A}\) . Note that this function assumes that points1 and points2 are feature points from cameras with the same camera intrinsic matrix.
R
 Output rotation matrix. Together with the translation vector, this matrix makes up a tuple that performs a change of basis from the first camera's coordinate system to the second camera's coordinate system. Note that, in general, t cannot be used for this tuple, see the parameter description below.
t
 Output translation vector. This vector is obtained by REF: decomposeEssentialMat and therefore is only known up to scale, i.e. t is the direction of the translation vector and has unit length.
distanceThresh
 threshold distance which is used to filter out far away points (i.e. infinite points).
mask
 Input/output mask for inliers in points1 and points2. If it is not empty, then it marks inliers in points1 and points2 for the given essential matrix E. Only these inliers will be used to recover pose. In the output mask only inliers which pass the chirality check are kept. This function differs from the one above in that it outputs the triangulated 3D points that are used for the chirality check. Returns:
 automatically generated

recoverPose
public static int recoverPose(Mat E, Mat points1, Mat points2, Mat cameraMatrix, Mat R, Mat t, double distanceThresh)
 Parameters:
E
 The input essential matrix.
points1
 Array of N 2D points from the first image. The point coordinates should be floating-point (single or double precision).
points2
 Array of the second image points of the same size and format as points1.
cameraMatrix
 Camera intrinsic matrix \(\cameramatrix{A}\) . Note that this function assumes that points1 and points2 are feature points from cameras with the same camera intrinsic matrix.
R
 Output rotation matrix. Together with the translation vector, this matrix makes up a tuple that performs a change of basis from the first camera's coordinate system to the second camera's coordinate system. Note that, in general, t cannot be used for this tuple, see the parameter description below.
t
 Output translation vector. This vector is obtained by REF: decomposeEssentialMat and therefore is only known up to scale, i.e. t is the direction of the translation vector and has unit length.
distanceThresh
 threshold distance which is used to filter out far away points (i.e. infinite points). This function differs from the one above in that it outputs the triangulated 3D points that are used for the chirality check. Returns:
 automatically generated

computeCorrespondEpilines
public static void computeCorrespondEpilines(Mat points, int whichImage, Mat F, Mat lines)
For points in an image of a stereo pair, computes the corresponding epilines in the other image. Parameters:
points
 Input points. \(N \times 1\) or \(1 \times N\) matrix of type CV_32FC2 or vector<Point2f> .
whichImage
 Index of the image (1 or 2) that contains the points.
F
 Fundamental matrix that can be estimated using #findFundamentalMat or #stereoRectify .
lines
 Output vector of the epipolar lines corresponding to the points in the other image. Each line \(ax + by + c=0\) is encoded by 3 numbers \((a, b, c)\) . For every point in one of the two images of a stereo pair, the function finds the equation of the corresponding epipolar line in the other image. From the fundamental matrix definition (see #findFundamentalMat ), line \(l^{(2)}_i\) in the second image for the point \(p^{(1)}_i\) in the first image (when whichImage=1 ) is computed as: \(l^{(2)}_i = F p^{(1)}_i\) And vice versa, when whichImage=2, \(l^{(1)}_i\) is computed from \(p^{(2)}_i\) as: \(l^{(1)}_i = F^T p^{(2)}_i\) Line coefficients are defined up to a scale. They are normalized so that \(a_i^2+b_i^2=1\) .
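For example, a Java sketch applying the formulas above (F is assumed to be a 3x3 fundamental matrix computed elsewhere; the point values are made up):
 MatOfPoint2f pts = new MatOfPoint2f(new Point(120, 90), new Point(300, 200));
 Mat lines = new Mat(); // one (a, b, c) triple per input point, with a^2 + b^2 = 1
 Calib3d.computeCorrespondEpilines(pts, 1, F, lines); // epilines in the second image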

triangulatePoints
public static void triangulatePoints(Mat projMatr1, Mat projMatr2, Mat projPoints1, Mat projPoints2, Mat points4D)
This function reconstructs 3-dimensional points (in homogeneous coordinates) by using their observations with a stereo camera. Parameters:
projMatr1
 3x4 projection matrix of the first camera, i.e. this matrix projects 3D points given in the world's coordinate system into the first image.
projMatr2
 3x4 projection matrix of the second camera, i.e. this matrix projects 3D points given in the world's coordinate system into the second image.
projPoints1
 2xN array of feature points in the first image. In the case of the C++ version, it can be also a vector of feature points or a two-channel matrix of size 1xN or Nx1.
projPoints2
 2xN array of corresponding points in the second image. In the case of the C++ version, it can be also a vector of feature points or a two-channel matrix of size 1xN or Nx1.
points4D
 4xN array of reconstructed points in homogeneous coordinates. These points are returned in the world's coordinate system. Note: Keep in mind that all input data should be of float type in order for this function to work. Note: If the projection matrices from REF: stereoRectify are used, then the returned points are represented in the first camera's rectified coordinate system. SEE: reprojectImageTo3D
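A minimal Java sketch with made-up projection matrices (identity intrinsics, a pure 0.1 baseline along x) and float inputs, per the note above:
 // P1 = [I | 0], P2 = [I | t] in CV_32F (all inputs must be float, see above).
 Mat P1 = Mat.eye(3, 4, CvType.CV_32F);
 Mat P2 = P1.clone();
 P2.put(0, 3, -0.1);
 MatOfPoint2f x1 = new MatOfPoint2f(new Point(0.10, 0.20), new Point(-0.05, 0.12));
 MatOfPoint2f x2 = new MatOfPoint2f(new Point(0.08, 0.20), new Point(-0.07, 0.12));
 Mat points4D = new Mat();
 Calib3d.triangulatePoints(P1, P2, x1, x2, points4D);
 // points4D is 4xN; divide each column by its W entry to get (X, Y, Z).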

correctMatches
public static void correctMatches(Mat F, Mat points1, Mat points2, Mat newPoints1, Mat newPoints2)
Refines coordinates of corresponding points. Parameters:
F
 3x3 fundamental matrix.
points1
 1xN array containing the first set of points.
points2
 1xN array containing the second set of points.
newPoints1
 The optimized points1.
newPoints2
 The optimized points2. The function implements the Optimal Triangulation Method (see Multiple View Geometry CITE: HartleyZ00 for details). For each given point correspondence points1[i] <-> points2[i], and a fundamental matrix F, it computes the corrected correspondences newPoints1[i] <-> newPoints2[i] that minimize the geometric error \(d(points1[i], newPoints1[i])^2 + d(points2[i], newPoints2[i])^2\) (where \(d(a,b)\) is the geometric distance between points \(a\) and \(b\) ) subject to the epipolar constraint \(newPoints2^T \cdot F \cdot newPoints1 = 0\) .
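In Java this looks like the sketch below (F, points1 and points2 are assumed to exist; note the point arrays must be 1xN two-channel Mats, so reshaping may be needed):
 Mat newPoints1 = new Mat(), newPoints2 = new Mat();
 Calib3d.correctMatches(F, points1, points2, newPoints1, newPoints2);
 // The corrected pairs satisfy the epipolar constraint newPoints2^T * F * newPoints1 = 0.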

filterSpeckles
public static void filterSpeckles(Mat img, double newVal, int maxSpeckleSize, double maxDiff, Mat buf)
Filters off small noise blobs (speckles) in the disparity map. Parameters:
img
 The input 16-bit signed disparity image
newVal
 The disparity value used to paint off the speckles
maxSpeckleSize
 The maximum speckle size to consider it a speckle. Larger blobs are not affected by the algorithm
maxDiff
 Maximum difference between neighbor disparity pixels to put them into the same blob. Note that since StereoBM, StereoSGBM and maybe other algorithms return a fixed-point disparity map, where disparity values are multiplied by 16, this scale factor should be taken into account when specifying this parameter value.
buf
 The optional temporary buffer to avoid memory allocation within the function.

filterSpeckles
public static void filterSpeckles(Mat img, double newVal, int maxSpeckleSize, double maxDiff)
Filters off small noise blobs (speckles) in the disparity map. Parameters:
img
 The input 16-bit signed disparity image
newVal
 The disparity value used to paint off the speckles
maxSpeckleSize
 The maximum speckle size to consider it a speckle. Larger blobs are not affected by the algorithm
maxDiff
 Maximum difference between neighbor disparity pixels to put them into the same blob. Note that since StereoBM, StereoSGBM and maybe other algorithms return a fixed-point disparity map, where disparity values are multiplied by 16, this scale factor should be taken into account when specifying this parameter value.
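For instance, on a fixed-point disparity map from StereoBM/StereoSGBM (values scaled by 16), a tolerance of one disparity level is a maxDiff of 16 (a sketch; the 200-pixel blob size is an arbitrary choice):
 // In-place: connected blobs of at most 200 pixels are repainted with newVal = 0;
 // neighbors whose values differ by at most 16 belong to the same blob.
 Calib3d.filterSpeckles(disparity, 0, 200, 16);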

getValidDisparityROI
public static Rect getValidDisparityROI(Rect roi1, Rect roi2, int minDisparity, int numberOfDisparities, int blockSize)

validateDisparity
public static void validateDisparity(Mat disparity, Mat cost, int minDisparity, int numberOfDisparities, int disp12MaxDisp)

validateDisparity
public static void validateDisparity(Mat disparity, Mat cost, int minDisparity, int numberOfDisparities)

reprojectImageTo3D
public static void reprojectImageTo3D(Mat disparity, Mat _3dImage, Mat Q, boolean handleMissingValues, int ddepth)
Reprojects a disparity image to 3D space. Parameters:
disparity
 Input single-channel 8-bit unsigned, 16-bit signed, 32-bit signed or 32-bit floating-point disparity image. The values of 8-bit / 16-bit signed formats are assumed to have no fractional bits. If the disparity is in 16-bit signed format, as computed by REF: StereoBM or REF: StereoSGBM and maybe other algorithms, it should be divided by 16 (and scaled to float) before being used here.
_3dImage
 Output 3-channel floating-point image of the same size as disparity. Each element of _3dImage(x,y) contains 3D coordinates of the point (x,y) computed from the disparity map. If one uses Q obtained by REF: stereoRectify, then the returned points are represented in the first camera's rectified coordinate system.
Q
 \(4 \times 4\) perspective transformation matrix that can be obtained with REF: stereoRectify.
handleMissingValues
 Indicates, whether the function should handle missing values (i.e. points where the disparity was not computed). If handleMissingValues=true, then pixels with the minimal disparity that corresponds to the outliers (see StereoMatcher::compute ) are transformed to 3D points with a very large Z value (currently set to 10000).
ddepth
 The optional output array depth. If it is -1, the output image will have CV_32F depth. ddepth can also be set to CV_16S, CV_32S or CV_32F. The function transforms a single-channel disparity map to a 3-channel image representing a 3D surface. That is, for each pixel (x,y) and the corresponding disparity d=disparity(x,y) , it computes: \(\begin{bmatrix} X \\ Y \\ Z \\ W \end{bmatrix} = Q \begin{bmatrix} x \\ y \\ \texttt{disparity}(x,y) \\ 1 \end{bmatrix}.\) SEE: To reproject a sparse set of points {(x,y,d),...} to 3D space, use perspectiveTransform.
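For example (a sketch; disparity and Q are assumed to come from a stereo matcher and stereoRectify, respectively):
 // StereoBM/StereoSGBM output is 16-bit fixed-point with 4 fractional bits,
 // so divide by 16 and convert to float before reprojecting (see above).
 Mat dispFloat = new Mat();
 disparity.convertTo(dispFloat, CvType.CV_32F, 1.0 / 16.0);
 Mat xyz = new Mat(); // 3-channel float image of (X, Y, Z) per pixel
 Calib3d.reprojectImageTo3D(dispFloat, xyz, Q, true);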

reprojectImageTo3D
public static void reprojectImageTo3D(Mat disparity, Mat _3dImage, Mat Q, boolean handleMissingValues)
Reprojects a disparity image to 3D space. Parameters:
disparity
 Input single-channel 8-bit unsigned, 16-bit signed, 32-bit signed or 32-bit floating-point disparity image. The values of 8-bit / 16-bit signed formats are assumed to have no fractional bits. If the disparity is in 16-bit signed format, as computed by REF: StereoBM or REF: StereoSGBM and maybe other algorithms, it should be divided by 16 (and scaled to float) before being used here.
_3dImage
 Output 3-channel floating-point image of the same size as disparity. Each element of _3dImage(x,y) contains 3D coordinates of the point (x,y) computed from the disparity map. If one uses Q obtained by REF: stereoRectify, then the returned points are represented in the first camera's rectified coordinate system.
Q
 \(4 \times 4\) perspective transformation matrix that can be obtained with REF: stereoRectify.
handleMissingValues
 Indicates, whether the function should handle missing values (i.e. points where the disparity was not computed). If handleMissingValues=true, then pixels with the minimal disparity that corresponds to the outliers (see StereoMatcher::compute ) are transformed to 3D points with a very large Z value (currently set to 10000). The function transforms a single-channel disparity map to a 3-channel image representing a 3D surface. That is, for each pixel (x,y) and the corresponding disparity d=disparity(x,y) , it computes: \(\begin{bmatrix} X \\ Y \\ Z \\ W \end{bmatrix} = Q \begin{bmatrix} x \\ y \\ \texttt{disparity}(x,y) \\ 1 \end{bmatrix}.\) SEE: To reproject a sparse set of points {(x,y,d),...} to 3D space, use perspectiveTransform.

reprojectImageTo3D
public static void reprojectImageTo3D(Mat disparity, Mat _3dImage, Mat Q)
Reprojects a disparity image to 3D space. Parameters:
disparity
 Input single-channel 8-bit unsigned, 16-bit signed, 32-bit signed or 32-bit floating-point disparity image. The values of 8-bit / 16-bit signed formats are assumed to have no fractional bits. If the disparity is in 16-bit signed format, as computed by REF: StereoBM or REF: StereoSGBM and maybe other algorithms, it should be divided by 16 (and scaled to float) before being used here.
_3dImage
 Output 3-channel floating-point image of the same size as disparity. Each element of _3dImage(x,y) contains 3D coordinates of the point (x,y) computed from the disparity map. If one uses Q obtained by REF: stereoRectify, then the returned points are represented in the first camera's rectified coordinate system.
Q
 \(4 \times 4\) perspective transformation matrix that can be obtained with REF: stereoRectify. The function transforms a single-channel disparity map to a 3-channel image representing a 3D surface. That is, for each pixel (x,y) and the corresponding disparity d=disparity(x,y) , it computes: \(\begin{bmatrix} X \\ Y \\ Z \\ W \end{bmatrix} = Q \begin{bmatrix} x \\ y \\ \texttt{disparity}(x,y) \\ 1 \end{bmatrix}.\) SEE: To reproject a sparse set of points {(x,y,d),...} to 3D space, use perspectiveTransform.

sampsonDistance
public static double sampsonDistance(Mat pt1, Mat pt2, Mat F)
Calculates the Sampson Distance between two points. The function cv::sampsonDistance calculates and returns the first order approximation of the geometric error as: \( sd( \texttt{pt1} , \texttt{pt2} )= \frac{(\texttt{pt2}^t \cdot \texttt{F} \cdot \texttt{pt1})^2} {((\texttt{F} \cdot \texttt{pt1})(0))^2 + ((\texttt{F} \cdot \texttt{pt1})(1))^2 + ((\texttt{F}^t \cdot \texttt{pt2})(0))^2 + ((\texttt{F}^t \cdot \texttt{pt2})(1))^2} \) The fundamental matrix may be calculated using the #findFundamentalMat function. See CITE: HartleyZ00 11.4.3 for details. Parameters:
pt1
 first homogeneous 2d point
pt2
 second homogeneous 2d point
F
 fundamental matrix Returns:
 The computed Sampson distance.
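A hedged Java sketch with arbitrary stand-in values; the points are passed as 3x1 homogeneous vectors and F stands in for a matrix from #findFundamentalMat:
Mat pt1 = new Mat(3, 1, CvType.CV_64F);
pt1.put(0, 0, 170.0, 95.0, 1.0);  // homogeneous 2D point (170, 95)
Mat pt2 = new Mat(3, 1, CvType.CV_64F);
pt2.put(0, 0, 173.5, 93.0, 1.0);
Mat F = Mat.eye(3, 3, CvType.CV_64F); // stand-in; use findFundamentalMat in practice
double sd = Calib3d.sampsonDistance(pt1, pt2, F);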

estimateAffine3D
public static int estimateAffine3D(Mat src, Mat dst, Mat out, Mat inliers, double ransacThreshold, double confidence)
Computes an optimal affine transformation between two 3D point sets. It computes \( \begin{bmatrix} x\\ y\\ z\\ \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23}\\ a_{31} & a_{32} & a_{33}\\ \end{bmatrix} \begin{bmatrix} X\\ Y\\ Z\\ \end{bmatrix} + \begin{bmatrix} b_1\\ b_2\\ b_3\\ \end{bmatrix} \) Parameters:
src
 First input 3D point set containing \((X,Y,Z)\).
dst
 Second input 3D point set containing \((x,y,z)\).
out
 Output 3D affine transformation matrix \(3 \times 4\) of the form \( \begin{bmatrix} a_{11} & a_{12} & a_{13} & b_1\\ a_{21} & a_{22} & a_{23} & b_2\\ a_{31} & a_{32} & a_{33} & b_3\\ \end{bmatrix} \)
inliers
 Output vector indicating which points are inliers (1 - inlier, 0 - outlier).
ransacThreshold
 Maximum reprojection error in the RANSAC algorithm to consider a point as an inlier.
confidence
 Confidence level, between 0 and 1, for the estimated transformation. Anything between 0.95 and 0.99 is usually good enough. Values too close to 1 can slow down the estimation significantly. Values lower than 0.8-0.9 can result in an incorrectly estimated transformation. The function estimates an optimal 3D affine transformation between two 3D point sets using the RANSAC algorithm. Returns:
 automatically generated
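A minimal Java sketch (synthetic data, not from the original docs): two point sets related by a pure translation of (1, 2, 3), from which the 3x4 transform is recovered:
MatOfPoint3f src = new MatOfPoint3f(
    new Point3(0, 0, 0), new Point3(1, 0, 0), new Point3(0, 1, 0),
    new Point3(0, 0, 1), new Point3(1, 1, 1), new Point3(2, 1, 0));
MatOfPoint3f dst = new MatOfPoint3f(
    new Point3(1, 2, 3), new Point3(2, 2, 3), new Point3(1, 3, 3),
    new Point3(1, 2, 4), new Point3(2, 3, 4), new Point3(3, 3, 3));
Mat affine = new Mat();  // receives the 3x4 [A|b] matrix
Mat inliers = new Mat(); // one entry per point: 1 = inlier, 0 = outlier
int ok = Calib3d.estimateAffine3D(src, dst, affine, inliers, 3.0, 0.99);
if (ok != 0) System.out.println(affine.dump()); // approximately [I | (1, 2, 3)]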

estimateAffine3D
public static int estimateAffine3D(Mat src, Mat dst, Mat out, Mat inliers, double ransacThreshold)
Computes an optimal affine transformation between two 3D point sets. It computes \( \begin{bmatrix} x\\ y\\ z\\ \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23}\\ a_{31} & a_{32} & a_{33}\\ \end{bmatrix} \begin{bmatrix} X\\ Y\\ Z\\ \end{bmatrix} + \begin{bmatrix} b_1\\ b_2\\ b_3\\ \end{bmatrix} \) Parameters:
src
 First input 3D point set containing \((X,Y,Z)\).
dst
 Second input 3D point set containing \((x,y,z)\).
out
 Output 3D affine transformation matrix \(3 \times 4\) of the form \( \begin{bmatrix} a_{11} & a_{12} & a_{13} & b_1\\ a_{21} & a_{22} & a_{23} & b_2\\ a_{31} & a_{32} & a_{33} & b_3\\ \end{bmatrix} \)
inliers
 Output vector indicating which points are inliers (1 - inlier, 0 - outlier).
ransacThreshold
 Maximum reprojection error in the RANSAC algorithm to consider a point as an inlier. The function estimates an optimal 3D affine transformation between two 3D point sets using the RANSAC algorithm. Returns:
 automatically generated

estimateAffine3D
public static int estimateAffine3D(Mat src, Mat dst, Mat out, Mat inliers)
Computes an optimal affine transformation between two 3D point sets. It computes \( \begin{bmatrix} x\\ y\\ z\\ \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23}\\ a_{31} & a_{32} & a_{33}\\ \end{bmatrix} \begin{bmatrix} X\\ Y\\ Z\\ \end{bmatrix} + \begin{bmatrix} b_1\\ b_2\\ b_3\\ \end{bmatrix} \) Parameters:
src
 First input 3D point set containing \((X,Y,Z)\).
dst
 Second input 3D point set containing \((x,y,z)\).
out
 Output 3D affine transformation matrix \(3 \times 4\) of the form \( \begin{bmatrix} a_{11} & a_{12} & a_{13} & b_1\\ a_{21} & a_{22} & a_{23} & b_2\\ a_{31} & a_{32} & a_{33} & b_3\\ \end{bmatrix} \)
inliers
 Output vector indicating which points are inliers (1 - inlier, 0 - outlier). The function estimates an optimal 3D affine transformation between two 3D point sets using the RANSAC algorithm. Returns:
 automatically generated

estimateAffine3D
public static Mat estimateAffine3D(Mat src, Mat dst, double[] scale, boolean force_rotation)
Computes an optimal affine transformation between two 3D point sets. It computes \(R, c, t\) minimizing \(\sum_{i} \| \texttt{dst}_i - (c \cdot R \cdot \texttt{src}_i + t) \|^2\), where \(R\) is a 3x3 rotation matrix, \(t\) is a 3x1 translation vector and \(c\) is a scalar scale value. This is an implementation of the algorithm by Umeyama CITE: umeyama1991least . The estimated affine transform has a homogeneous (uniform) scale, which makes it a subclass of affine transformations with 7 degrees of freedom. The paired point sets need to comprise at least 3 points each. Parameters:
src
 First input 3D point set.
dst
 Second input 3D point set.
scale
 If null is passed, the scale parameter c will be assumed to be 1.0. Else the pointed-to variable will be set to the optimal scale.
force_rotation
 If true, the returned rotation will never be a reflection. This might be unwanted, e.g. when optimizing a transform between a right- and a left-handed coordinate system. Returns:
 3D affine transformation matrix \(3 \times 4\) of the form \(T = \begin{bmatrix} R & t\\ \end{bmatrix} \)
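A minimal Java sketch under synthetic data: dst is src scaled by 2, so the recovered scale should be close to 2.0:
MatOfPoint3f src = new MatOfPoint3f(
    new Point3(0, 0, 0), new Point3(1, 0, 0), new Point3(0, 1, 0), new Point3(0, 0, 1));
MatOfPoint3f dst = new MatOfPoint3f(
    new Point3(0, 0, 0), new Point3(2, 0, 0), new Point3(0, 2, 0), new Point3(0, 0, 2));
double[] scale = new double[1];            // receives the optimal scale c
Mat T = Calib3d.estimateAffine3D(src, dst, scale, true); // 3x4 [R|t], reflection disallowed
System.out.println("scale = " + scale[0]); // ~2.0 for this data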

estimateAffine3D
public static Mat estimateAffine3D(Mat src, Mat dst, double[] scale)
Computes an optimal affine transformation between two 3D point sets. It computes \(R, c, t\) minimizing \(\sum_{i} \| \texttt{dst}_i - (c \cdot R \cdot \texttt{src}_i + t) \|^2\), where \(R\) is a 3x3 rotation matrix, \(t\) is a 3x1 translation vector and \(c\) is a scalar scale value. This is an implementation of the algorithm by Umeyama CITE: umeyama1991least . The estimated affine transform has a homogeneous (uniform) scale, which makes it a subclass of affine transformations with 7 degrees of freedom. The paired point sets need to comprise at least 3 points each. Parameters:
src
 First input 3D point set.
dst
 Second input 3D point set.
scale
 If null is passed, the scale parameter c will be assumed to be 1.0. Else the pointed-to variable will be set to the optimal scale. Returns:
 3D affine transformation matrix \(3 \times 4\) of the form \(T = \begin{bmatrix} R & t\\ \end{bmatrix} \)

estimateAffine3D
public static Mat estimateAffine3D(Mat src, Mat dst)
Computes an optimal affine transformation between two 3D point sets. It computes \(R, c, t\) minimizing \(\sum_{i} \| \texttt{dst}_i - (c \cdot R \cdot \texttt{src}_i + t) \|^2\), where \(R\) is a 3x3 rotation matrix, \(t\) is a 3x1 translation vector and \(c\) is a scalar scale value. This is an implementation of the algorithm by Umeyama CITE: umeyama1991least . The estimated affine transform has a homogeneous (uniform) scale, which makes it a subclass of affine transformations with 7 degrees of freedom. The paired point sets need to comprise at least 3 points each. Parameters:
src
 First input 3D point set.
dst
 Second input 3D point set. Returns:
 3D affine transformation matrix \(3 \times 4\) of the form \(T = \begin{bmatrix} R & t\\ \end{bmatrix} \)

estimateTranslation3D
public static int estimateTranslation3D(Mat src, Mat dst, Mat out, Mat inliers, double ransacThreshold, double confidence)
Computes an optimal translation between two 3D point sets. It computes \( \begin{bmatrix} x\\ y\\ z\\ \end{bmatrix} = \begin{bmatrix} X\\ Y\\ Z\\ \end{bmatrix} + \begin{bmatrix} b_1\\ b_2\\ b_3\\ \end{bmatrix} \) Parameters:
src
 First input 3D point set containing \((X,Y,Z)\).
dst
 Second input 3D point set containing \((x,y,z)\).
out
 Output 3D translation vector \(3 \times 1\) of the form \( \begin{bmatrix} b_1 \\ b_2 \\ b_3 \\ \end{bmatrix} \)
inliers
 Output vector indicating which points are inliers (1 - inlier, 0 - outlier).
ransacThreshold
 Maximum reprojection error in the RANSAC algorithm to consider a point as an inlier.
confidence
 Confidence level, between 0 and 1, for the estimated transformation. Anything between 0.95 and 0.99 is usually good enough. Values too close to 1 can slow down the estimation significantly. Values lower than 0.8-0.9 can result in an incorrectly estimated transformation. The function estimates an optimal 3D translation between two 3D point sets using the RANSAC algorithm. Returns:
 automatically generated
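A minimal Java sketch with synthetic data: dst is src shifted by (0.5, -1, 2), and the 3x1 translation is recovered:
MatOfPoint3f src = new MatOfPoint3f(
    new Point3(0, 0, 0), new Point3(1, 0, 0), new Point3(0, 1, 0), new Point3(0, 0, 1));
MatOfPoint3f dst = new MatOfPoint3f(
    new Point3(0.5, -1, 2), new Point3(1.5, -1, 2), new Point3(0.5, 0, 2), new Point3(0.5, -1, 3));
Mat t = new Mat();       // receives the 3x1 translation vector
Mat inliers = new Mat();
int ok = Calib3d.estimateTranslation3D(src, dst, t, inliers, 3.0, 0.99);
if (ok != 0) System.out.println(t.dump()); // approximately [0.5; -1; 2]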

estimateTranslation3D
public static int estimateTranslation3D(Mat src, Mat dst, Mat out, Mat inliers, double ransacThreshold)
Computes an optimal translation between two 3D point sets. It computes \( \begin{bmatrix} x\\ y\\ z\\ \end{bmatrix} = \begin{bmatrix} X\\ Y\\ Z\\ \end{bmatrix} + \begin{bmatrix} b_1\\ b_2\\ b_3\\ \end{bmatrix} \) Parameters:
src
 First input 3D point set containing \((X,Y,Z)\).
dst
 Second input 3D point set containing \((x,y,z)\).
out
 Output 3D translation vector \(3 \times 1\) of the form \( \begin{bmatrix} b_1 \\ b_2 \\ b_3 \\ \end{bmatrix} \)
inliers
 Output vector indicating which points are inliers (1 - inlier, 0 - outlier).
ransacThreshold
 Maximum reprojection error in the RANSAC algorithm to consider a point as an inlier. The function estimates an optimal 3D translation between two 3D point sets using the RANSAC algorithm. Returns:
 automatically generated

estimateTranslation3D
public static int estimateTranslation3D(Mat src, Mat dst, Mat out, Mat inliers)
Computes an optimal translation between two 3D point sets. It computes \( \begin{bmatrix} x\\ y\\ z\\ \end{bmatrix} = \begin{bmatrix} X\\ Y\\ Z\\ \end{bmatrix} + \begin{bmatrix} b_1\\ b_2\\ b_3\\ \end{bmatrix} \) Parameters:
src
 First input 3D point set containing \((X,Y,Z)\).
dst
 Second input 3D point set containing \((x,y,z)\).
out
 Output 3D translation vector \(3 \times 1\) of the form \( \begin{bmatrix} b_1 \\ b_2 \\ b_3 \\ \end{bmatrix} \)
inliers
 Output vector indicating which points are inliers (1 - inlier, 0 - outlier). The function estimates an optimal 3D translation between two 3D point sets using the RANSAC algorithm. Returns:
 automatically generated

estimateAffine2D
public static Mat estimateAffine2D(Mat from, Mat to, Mat inliers, int method, double ransacReprojThreshold, long maxIters, double confidence, long refineIters)
Computes an optimal affine transformation between two 2D point sets. It computes \( \begin{bmatrix} x\\ y\\ \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12}\\ a_{21} & a_{22}\\ \end{bmatrix} \begin{bmatrix} X\\ Y\\ \end{bmatrix} + \begin{bmatrix} b_1\\ b_2\\ \end{bmatrix} \) Parameters:
from
 First input 2D point set containing \((X,Y)\).
to
 Second input 2D point set containing \((x,y)\).
inliers
 Output vector indicating which points are inliers (1 - inlier, 0 - outlier).
method
 Robust method used to compute transformation. The following methods are possible: REF: RANSAC - RANSAC-based robust method
 REF: LMEDS - Least-Median robust method. RANSAC is the default method.
ransacReprojThreshold
 Maximum reprojection error in the RANSAC algorithm to consider a point as an inlier. Applies only to RANSAC.
maxIters
 The maximum number of robust method iterations.
confidence
 Confidence level, between 0 and 1, for the estimated transformation. Anything between 0.95 and 0.99 is usually good enough. Values too close to 1 can slow down the estimation significantly. Values lower than 0.8-0.9 can result in an incorrectly estimated transformation.
refineIters
 Maximum number of iterations of the refining algorithm (Levenberg-Marquardt). Passing 0 will disable refining, so the output matrix will be the output of the robust method. Returns:
 Output 2D affine transformation matrix \(2 \times 3\) or empty matrix if transformation could not be estimated. The returned matrix has the following form: \( \begin{bmatrix} a_{11} & a_{12} & b_1\\ a_{21} & a_{22} & b_2\\ \end{bmatrix} \) The function estimates an optimal 2D affine transformation between two 2D point sets using the selected robust algorithm. The computed transformation is then refined further (using only inliers) with the Levenberg-Marquardt method to reduce the reprojection error even more. Note: The RANSAC method can handle practically any ratio of outliers but needs a threshold to distinguish inliers from outliers. The method LMeDS does not need any threshold but it works correctly only when there are more than 50% of inliers. SEE: estimateAffinePartial2D, getAffineTransform
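A minimal Java sketch (synthetic correspondences under a pure shift of (10, 20) pixels):
MatOfPoint2f from = new MatOfPoint2f(
    new Point(0, 0), new Point(100, 0), new Point(0, 100), new Point(100, 100));
MatOfPoint2f to = new MatOfPoint2f(
    new Point(10, 20), new Point(110, 20), new Point(10, 120), new Point(110, 120));
Mat inliers = new Mat();
Mat A = Calib3d.estimateAffine2D(from, to, inliers,
        Calib3d.RANSAC, 3.0, 2000, 0.99, 10);
if (!A.empty()) System.out.println(A.dump()); // approximately [1 0 10; 0 1 20]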

estimateAffine2D
public static Mat estimateAffine2D(Mat from, Mat to, Mat inliers, int method, double ransacReprojThreshold, long maxIters, double confidence)
Computes an optimal affine transformation between two 2D point sets. It computes \( \begin{bmatrix} x\\ y\\ \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12}\\ a_{21} & a_{22}\\ \end{bmatrix} \begin{bmatrix} X\\ Y\\ \end{bmatrix} + \begin{bmatrix} b_1\\ b_2\\ \end{bmatrix} \) Parameters:
from
 First input 2D point set containing \((X,Y)\).
to
 Second input 2D point set containing \((x,y)\).
inliers
 Output vector indicating which points are inliers (1 - inlier, 0 - outlier).
method
 Robust method used to compute transformation. The following methods are possible: REF: RANSAC - RANSAC-based robust method
 REF: LMEDS - Least-Median robust method. RANSAC is the default method.
ransacReprojThreshold
 Maximum reprojection error in the RANSAC algorithm to consider a point as an inlier. Applies only to RANSAC.
maxIters
 The maximum number of robust method iterations.
confidence
 Confidence level, between 0 and 1, for the estimated transformation. Anything between 0.95 and 0.99 is usually good enough. Values too close to 1 can slow down the estimation significantly. Values lower than 0.8-0.9 can result in an incorrectly estimated transformation. Returns:
 Output 2D affine transformation matrix \(2 \times 3\) or empty matrix if transformation could not be estimated. The returned matrix has the following form: \( \begin{bmatrix} a_{11} & a_{12} & b_1\\ a_{21} & a_{22} & b_2\\ \end{bmatrix} \) The function estimates an optimal 2D affine transformation between two 2D point sets using the selected robust algorithm. The computed transformation is then refined further (using only inliers) with the Levenberg-Marquardt method to reduce the reprojection error even more. Note: The RANSAC method can handle practically any ratio of outliers but needs a threshold to distinguish inliers from outliers. The method LMeDS does not need any threshold but it works correctly only when there are more than 50% of inliers. SEE: estimateAffinePartial2D, getAffineTransform

estimateAffine2D
public static Mat estimateAffine2D(Mat from, Mat to, Mat inliers, int method, double ransacReprojThreshold, long maxIters)
Computes an optimal affine transformation between two 2D point sets. It computes \( \begin{bmatrix} x\\ y\\ \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12}\\ a_{21} & a_{22}\\ \end{bmatrix} \begin{bmatrix} X\\ Y\\ \end{bmatrix} + \begin{bmatrix} b_1\\ b_2\\ \end{bmatrix} \) Parameters:
from
 First input 2D point set containing \((X,Y)\).
to
 Second input 2D point set containing \((x,y)\).
inliers
 Output vector indicating which points are inliers (1 - inlier, 0 - outlier).
method
 Robust method used to compute transformation. The following methods are possible: REF: RANSAC - RANSAC-based robust method
 REF: LMEDS - Least-Median robust method. RANSAC is the default method.
ransacReprojThreshold
 Maximum reprojection error in the RANSAC algorithm to consider a point as an inlier. Applies only to RANSAC.
maxIters
 The maximum number of robust method iterations. Returns:
 Output 2D affine transformation matrix \(2 \times 3\) or empty matrix if transformation could not be estimated. The returned matrix has the following form: \( \begin{bmatrix} a_{11} & a_{12} & b_1\\ a_{21} & a_{22} & b_2\\ \end{bmatrix} \) The function estimates an optimal 2D affine transformation between two 2D point sets using the selected robust algorithm. The computed transformation is then refined further (using only inliers) with the Levenberg-Marquardt method to reduce the reprojection error even more. Note: The RANSAC method can handle practically any ratio of outliers but needs a threshold to distinguish inliers from outliers. The method LMeDS does not need any threshold but it works correctly only when there are more than 50% of inliers. SEE: estimateAffinePartial2D, getAffineTransform

estimateAffine2D
public static Mat estimateAffine2D(Mat from, Mat to, Mat inliers, int method, double ransacReprojThreshold)
Computes an optimal affine transformation between two 2D point sets. It computes \( \begin{bmatrix} x\\ y\\ \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12}\\ a_{21} & a_{22}\\ \end{bmatrix} \begin{bmatrix} X\\ Y\\ \end{bmatrix} + \begin{bmatrix} b_1\\ b_2\\ \end{bmatrix} \) Parameters:
from
 First input 2D point set containing \((X,Y)\).
to
 Second input 2D point set containing \((x,y)\).
inliers
 Output vector indicating which points are inliers (1 - inlier, 0 - outlier).
method
 Robust method used to compute transformation. The following methods are possible: REF: RANSAC - RANSAC-based robust method
 REF: LMEDS - Least-Median robust method. RANSAC is the default method.
ransacReprojThreshold
 Maximum reprojection error in the RANSAC algorithm to consider a point as an inlier. Applies only to RANSAC. Returns:
 Output 2D affine transformation matrix \(2 \times 3\) or empty matrix if transformation could not be estimated. The returned matrix has the following form: \( \begin{bmatrix} a_{11} & a_{12} & b_1\\ a_{21} & a_{22} & b_2\\ \end{bmatrix} \) The function estimates an optimal 2D affine transformation between two 2D point sets using the selected robust algorithm. The computed transformation is then refined further (using only inliers) with the Levenberg-Marquardt method to reduce the reprojection error even more. Note: The RANSAC method can handle practically any ratio of outliers but needs a threshold to distinguish inliers from outliers. The method LMeDS does not need any threshold but it works correctly only when there are more than 50% of inliers. SEE: estimateAffinePartial2D, getAffineTransform

estimateAffine2D
public static Mat estimateAffine2D(Mat from, Mat to, Mat inliers, int method)
Computes an optimal affine transformation between two 2D point sets. It computes \( \begin{bmatrix} x\\ y\\ \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12}\\ a_{21} & a_{22}\\ \end{bmatrix} \begin{bmatrix} X\\ Y\\ \end{bmatrix} + \begin{bmatrix} b_1\\ b_2\\ \end{bmatrix} \) Parameters:
from
 First input 2D point set containing \((X,Y)\).
to
 Second input 2D point set containing \((x,y)\).
inliers
 Output vector indicating which points are inliers (1 - inlier, 0 - outlier).
method
 Robust method used to compute transformation. The following methods are possible: REF: RANSAC - RANSAC-based robust method
 REF: LMEDS - Least-Median robust method. RANSAC is the default method.
 Returns:
 Output 2D affine transformation matrix \(2 \times 3\) or empty matrix if transformation could not be estimated. The returned matrix has the following form: \( \begin{bmatrix} a_{11} & a_{12} & b_1\\ a_{21} & a_{22} & b_2\\ \end{bmatrix} \) The function estimates an optimal 2D affine transformation between two 2D point sets using the selected robust algorithm. The computed transformation is then refined further (using only inliers) with the Levenberg-Marquardt method to reduce the reprojection error even more. Note: The RANSAC method can handle practically any ratio of outliers but needs a threshold to distinguish inliers from outliers. The method LMeDS does not need any threshold but it works correctly only when there are more than 50% of inliers. SEE: estimateAffinePartial2D, getAffineTransform

estimateAffine2D
public static Mat estimateAffine2D(Mat from, Mat to, Mat inliers)
Computes an optimal affine transformation between two 2D point sets. It computes \( \begin{bmatrix} x\\ y\\ \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12}\\ a_{21} & a_{22}\\ \end{bmatrix} \begin{bmatrix} X\\ Y\\ \end{bmatrix} + \begin{bmatrix} b_1\\ b_2\\ \end{bmatrix} \) Parameters:
from
 First input 2D point set containing \((X,Y)\).
to
 Second input 2D point set containing \((x,y)\).
inliers
 Output vector indicating which points are inliers (1 - inlier, 0 - outlier).
 Returns:
 Output 2D affine transformation matrix \(2 \times 3\) or empty matrix if transformation could not be estimated. The returned matrix has the following form: \( \begin{bmatrix} a_{11} & a_{12} & b_1\\ a_{21} & a_{22} & b_2\\ \end{bmatrix} \) The function estimates an optimal 2D affine transformation between two 2D point sets using the selected robust algorithm. The computed transformation is then refined further (using only inliers) with the Levenberg-Marquardt method to reduce the reprojection error even more. Note: The RANSAC method can handle practically any ratio of outliers but needs a threshold to distinguish inliers from outliers. The method LMeDS does not need any threshold but it works correctly only when there are more than 50% of inliers. SEE: estimateAffinePartial2D, getAffineTransform

estimateAffine2D
public static Mat estimateAffine2D(Mat from, Mat to)
Computes an optimal affine transformation between two 2D point sets. It computes \( \begin{bmatrix} x\\ y\\ \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12}\\ a_{21} & a_{22}\\ \end{bmatrix} \begin{bmatrix} X\\ Y\\ \end{bmatrix} + \begin{bmatrix} b_1\\ b_2\\ \end{bmatrix} \) Parameters:
from
 First input 2D point set containing \((X,Y)\).
to
 Second input 2D point set containing \((x,y)\).
 Returns:
 Output 2D affine transformation matrix \(2 \times 3\) or empty matrix if transformation could not be estimated. The returned matrix has the following form: \( \begin{bmatrix} a_{11} & a_{12} & b_1\\ a_{21} & a_{22} & b_2\\ \end{bmatrix} \) The function estimates an optimal 2D affine transformation between two 2D point sets using the selected robust algorithm. The computed transformation is then refined further (using only inliers) with the Levenberg-Marquardt method to reduce the reprojection error even more. Note: The RANSAC method can handle practically any ratio of outliers but needs a threshold to distinguish inliers from outliers. The method LMeDS does not need any threshold but it works correctly only when there are more than 50% of inliers. SEE: estimateAffinePartial2D, getAffineTransform

estimateAffine2D
public static Mat estimateAffine2D(Mat pts1, Mat pts2, Mat inliers, UsacParams params)

estimateAffinePartial2D
public static Mat estimateAffinePartial2D(Mat from, Mat to, Mat inliers, int method, double ransacReprojThreshold, long maxIters, double confidence, long refineIters)
Computes an optimal limited affine transformation with 4 degrees of freedom between two 2D point sets. Parameters:
from
 First input 2D point set.
to
 Second input 2D point set.
inliers
 Output vector indicating which points are inliers.
method
 Robust method used to compute transformation. The following methods are possible: REF: RANSAC - RANSAC-based robust method
 REF: LMEDS - Least-Median robust method. RANSAC is the default method.
ransacReprojThreshold
 Maximum reprojection error in the RANSAC algorithm to consider a point as an inlier. Applies only to RANSAC.
maxIters
 The maximum number of robust method iterations.
confidence
 Confidence level, between 0 and 1, for the estimated transformation. Anything between 0.95 and 0.99 is usually good enough. Values too close to 1 can slow down the estimation significantly. Values lower than 0.8-0.9 can result in an incorrectly estimated transformation.
refineIters
 Maximum number of iterations of the refining algorithm (Levenberg-Marquardt). Passing 0 will disable refining, so the output matrix will be the output of the robust method. Returns:
 Output 2D affine transformation (4 degrees of freedom) matrix \(2 \times 3\) or empty matrix if transformation could not be estimated. The function estimates an optimal 2D affine transformation with 4 degrees of freedom limited to combinations of translation, rotation, and uniform scaling. Uses the selected algorithm for robust estimation. The computed transformation is then refined further (using only inliers) with the Levenberg-Marquardt method to reduce the reprojection error even more. The estimated transformation matrix is: \( \begin{bmatrix} \cos(\theta) \cdot s & -\sin(\theta) \cdot s & t_x \\ \sin(\theta) \cdot s & \cos(\theta) \cdot s & t_y \end{bmatrix} \) Where \( \theta \) is the rotation angle, \( s \) the scaling factor and \( t_x, t_y \) are translations in \( x, y \) axes respectively. Note: The RANSAC method can handle practically any ratio of outliers but needs a threshold to distinguish inliers from outliers. The method LMeDS does not need any threshold but it works correctly only when there are more than 50% of inliers. SEE: estimateAffine2D, getAffineTransform
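A minimal Java sketch: points rotated 90 degrees counter-clockwise about the origin; scale and angle are then read back from the returned 2x3 matrix:
MatOfPoint2f from = new MatOfPoint2f(
    new Point(0, 0), new Point(100, 0), new Point(0, 100), new Point(100, 100));
MatOfPoint2f to = new MatOfPoint2f(
    new Point(0, 0), new Point(0, 100), new Point(-100, 0), new Point(-100, 100));
Mat inliers = new Mat();
Mat M = Calib3d.estimateAffinePartial2D(from, to, inliers,
        Calib3d.RANSAC, 3.0, 2000, 0.99, 10);
if (!M.empty()) {
    double a11 = M.get(0, 0)[0], a21 = M.get(1, 0)[0];
    double s = Math.hypot(a11, a21);     // uniform scale, ~1.0 here
    double theta = Math.atan2(a21, a11); // rotation angle in radians, ~pi/2 here
}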

estimateAffinePartial2D
public static Mat estimateAffinePartial2D(Mat from, Mat to, Mat inliers, int method, double ransacReprojThreshold, long maxIters, double confidence)
Computes an optimal limited affine transformation with 4 degrees of freedom between two 2D point sets. Parameters:
from
 First input 2D point set.
to
 Second input 2D point set.
inliers
 Output vector indicating which points are inliers.
method
 Robust method used to compute transformation. The following methods are possible: REF: RANSAC - RANSAC-based robust method
 REF: LMEDS - Least-Median robust method. RANSAC is the default method.
ransacReprojThreshold
 Maximum reprojection error in the RANSAC algorithm to consider a point as an inlier. Applies only to RANSAC.
maxIters
 The maximum number of robust method iterations.
confidence
 Confidence level, between 0 and 1, for the estimated transformation. Anything between 0.95 and 0.99 is usually good enough. Values too close to 1 can slow down the estimation significantly. Values lower than 0.8-0.9 can result in an incorrectly estimated transformation. Returns:
 Output 2D affine transformation (4 degrees of freedom) matrix \(2 \times 3\) or empty matrix if transformation could not be estimated. The function estimates an optimal 2D affine transformation with 4 degrees of freedom limited to combinations of translation, rotation, and uniform scaling. Uses the selected algorithm for robust estimation. The computed transformation is then refined further (using only inliers) with the Levenberg-Marquardt method to reduce the reprojection error even more. The estimated transformation matrix is: \( \begin{bmatrix} \cos(\theta) \cdot s & -\sin(\theta) \cdot s & t_x \\ \sin(\theta) \cdot s & \cos(\theta) \cdot s & t_y \end{bmatrix} \) Where \( \theta \) is the rotation angle, \( s \) the scaling factor and \( t_x, t_y \) are translations in \( x, y \) axes respectively. Note: The RANSAC method can handle practically any ratio of outliers but needs a threshold to distinguish inliers from outliers. The method LMeDS does not need any threshold but it works correctly only when there are more than 50% of inliers. SEE: estimateAffine2D, getAffineTransform

estimateAffinePartial2D
public static Mat estimateAffinePartial2D(Mat from, Mat to, Mat inliers, int method, double ransacReprojThreshold, long maxIters)
Computes an optimal limited affine transformation with 4 degrees of freedom between two 2D point sets. Parameters:
from
 First input 2D point set.
to
 Second input 2D point set.
inliers
 Output vector indicating which points are inliers.
method
 Robust method used to compute transformation. The following methods are possible: REF: RANSAC - RANSAC-based robust method
 REF: LMEDS - Least-Median robust method. RANSAC is the default method.
ransacReprojThreshold
 Maximum reprojection error in the RANSAC algorithm to consider a point as an inlier. Applies only to RANSAC.
maxIters
 The maximum number of robust method iterations. Returns:
 Output 2D affine transformation (4 degrees of freedom) matrix \(2 \times 3\) or empty matrix if transformation could not be estimated. The function estimates an optimal 2D affine transformation with 4 degrees of freedom limited to combinations of translation, rotation, and uniform scaling. Uses the selected algorithm for robust estimation. The computed transformation is then refined further (using only inliers) with the Levenberg-Marquardt method to reduce the reprojection error even more. The estimated transformation matrix is: \( \begin{bmatrix} \cos(\theta) \cdot s & -\sin(\theta) \cdot s & t_x \\ \sin(\theta) \cdot s & \cos(\theta) \cdot s & t_y \end{bmatrix} \) Where \( \theta \) is the rotation angle, \( s \) the scaling factor and \( t_x, t_y \) are translations in \( x, y \) axes respectively. Note: The RANSAC method can handle practically any ratio of outliers but needs a threshold to distinguish inliers from outliers. The method LMeDS does not need any threshold but it works correctly only when there are more than 50% of inliers. SEE: estimateAffine2D, getAffineTransform

estimateAffinePartial2D
public static Mat estimateAffinePartial2D(Mat from, Mat to, Mat inliers, int method, double ransacReprojThreshold)
Computes an optimal limited affine transformation with 4 degrees of freedom between two 2D point sets. Parameters:
from
 First input 2D point set.
to
 Second input 2D point set.
inliers
 Output vector indicating which points are inliers.
method
 Robust method used to compute transformation. The following methods are possible: REF: RANSAC - RANSAC-based robust method
 REF: LMEDS - Least-Median robust method. RANSAC is the default method.
ransacReprojThreshold
 Maximum reprojection error in the RANSAC algorithm to consider a point as an inlier. Applies only to RANSAC. Returns:
 Output 2D affine transformation (4 degrees of freedom) matrix \(2 \times 3\) or empty matrix if transformation could not be estimated. The function estimates an optimal 2D affine transformation with 4 degrees of freedom limited to combinations of translation, rotation, and uniform scaling. Uses the selected algorithm for robust estimation. The computed transformation is then refined further (using only inliers) with the Levenberg-Marquardt method to reduce the reprojection error even more. The estimated transformation matrix is: \( \begin{bmatrix} \cos(\theta) \cdot s & -\sin(\theta) \cdot s & t_x \\ \sin(\theta) \cdot s & \cos(\theta) \cdot s & t_y \end{bmatrix} \) Where \( \theta \) is the rotation angle, \( s \) the scaling factor and \( t_x, t_y \) are translations in \( x, y \) axes respectively. Note: The RANSAC method can handle practically any ratio of outliers but needs a threshold to distinguish inliers from outliers. The method LMeDS does not need any threshold but it works correctly only when there are more than 50% of inliers. SEE: estimateAffine2D, getAffineTransform

estimateAffinePartial2D
public static Mat estimateAffinePartial2D(Mat from, Mat to, Mat inliers, int method)
Computes an optimal limited affine transformation with 4 degrees of freedom between two 2D point sets. Parameters:
from
 First input 2D point set.
to
 Second input 2D point set.
inliers
 Output vector indicating which points are inliers.
method
 Robust method used to compute transformation. The following methods are possible: REF: RANSAC - RANSAC-based robust method
 REF: LMEDS - Least-Median robust method. RANSAC is the default method.
 Returns:
 Output 2D affine transformation (4 degrees of freedom) matrix \(2 \times 3\) or empty matrix if transformation could not be estimated. The function estimates an optimal 2D affine transformation with 4 degrees of freedom limited to combinations of translation, rotation, and uniform scaling. Uses the selected algorithm for robust estimation. The computed transformation is then refined further (using only inliers) with the Levenberg-Marquardt method to reduce the reprojection error even more. The estimated transformation matrix is: \( \begin{bmatrix} \cos(\theta) \cdot s & -\sin(\theta) \cdot s & t_x \\ \sin(\theta) \cdot s & \cos(\theta) \cdot s & t_y \end{bmatrix} \) Where \( \theta \) is the rotation angle, \( s \) the scaling factor and \( t_x, t_y \) are translations in \( x, y \) axes respectively. Note: The RANSAC method can handle practically any ratio of outliers but needs a threshold to distinguish inliers from outliers. The method LMeDS does not need any threshold but it works correctly only when there are more than 50% of inliers. SEE: estimateAffine2D, getAffineTransform

estimateAffinePartial2D
public static Mat estimateAffinePartial2D(Mat from, Mat to, Mat inliers)
Computes an optimal limited affine transformation with 4 degrees of freedom between two 2D point sets. Parameters:
from
 First input 2D point set.
to
 Second input 2D point set.
inliers
 Output vector indicating which points are inliers.
 Returns:
 Output 2D affine transformation (4 degrees of freedom) matrix \(2 \times 3\) or empty matrix if transformation could not be estimated. The function estimates an optimal 2D affine transformation with 4 degrees of freedom limited to combinations of translation, rotation, and uniform scaling. Uses the selected algorithm for robust estimation. The computed transformation is then refined further (using only inliers) with the Levenberg-Marquardt method to reduce the reprojection error even more. The estimated transformation matrix is: \( \begin{bmatrix} \cos(\theta) \cdot s & -\sin(\theta) \cdot s & t_x \\ \sin(\theta) \cdot s & \cos(\theta) \cdot s & t_y \end{bmatrix} \) Where \( \theta \) is the rotation angle, \( s \) the scaling factor and \( t_x, t_y \) are translations in \( x, y \) axes respectively. Note: The RANSAC method can handle practically any ratio of outliers but needs a threshold to distinguish inliers from outliers. The method LMeDS does not need any threshold but it works correctly only when there are more than 50% of inliers. SEE: estimateAffine2D, getAffineTransform

estimateAffinePartial2D
public static Mat estimateAffinePartial2D(Mat from, Mat to)
Computes an optimal limited affine transformation with 4 degrees of freedom between two 2D point sets. Parameters:
from
 First input 2D point set.
to
 Second input 2D point set.
 Returns:
 Output 2D affine transformation (4 degrees of freedom) matrix \(2 \times 3\) or empty matrix if transformation could not be estimated. The function estimates an optimal 2D affine transformation with 4 degrees of freedom limited to combinations of translation, rotation, and uniform scaling. Uses the selected algorithm for robust estimation. The computed transformation is then refined further (using only inliers) with the Levenberg-Marquardt method to reduce the reprojection error even more. The estimated transformation matrix is: \( \begin{bmatrix} \cos(\theta) \cdot s & -\sin(\theta) \cdot s & t_x \\ \sin(\theta) \cdot s & \cos(\theta) \cdot s & t_y \end{bmatrix} \) Where \( \theta \) is the rotation angle, \( s \) the scaling factor and \( t_x, t_y \) are translations in \( x, y \) axes respectively. Note: The RANSAC method can handle practically any ratio of outliers but needs a threshold to distinguish inliers from outliers. The method LMeDS does not need any threshold but it works correctly only when there are more than 50% of inliers. SEE: estimateAffine2D, getAffineTransform

decomposeHomographyMat
public static int decomposeHomographyMat(Mat H, Mat K, java.util.List<Mat> rotations, java.util.List<Mat> translations, java.util.List<Mat> normals)
Decompose a homography matrix to rotation(s), translation(s) and plane normal(s). Parameters:
H
 The input homography matrix between two images.
K
 The input camera intrinsic matrix.
rotations
 Array of rotation matrices.
translations
 Array of translation matrices.
normals
 Array of plane normal matrices. This function extracts relative camera motion between two views of a planar object and returns up to four mathematical solution tuples of rotation, translation, and plane normal. The decomposition of the homography matrix H is described in detail in CITE: Malis2007. If the homography H, induced by the plane, gives the constraint \(s_i \vecthree{x'_i}{y'_i}{1} \sim H \vecthree{x_i}{y_i}{1}\) on the source image points \(p_i\) and the destination image points \(p'_i\), then the tuple of rotations[k] and translations[k] is a change of basis from the source camera's coordinate system to the destination camera's coordinate system. However, by decomposing H, one can only get the translation normalized by the (typically unknown) depth of the scene, i.e. its direction but with normalized length. If point correspondences are available, at least two solutions may further be invalidated by applying a positive depth constraint, i.e. all points must be in front of the camera. Returns:
 automatically generated
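A minimal Java sketch; H and K are stand-ins (in practice H comes from #findHomography and K from calibration), and java.util.List / java.util.ArrayList are assumed imported:
Mat H = Mat.eye(3, 3, CvType.CV_64F); // stand-in homography
Mat K = Mat.eye(3, 3, CvType.CV_64F); // stand-in camera intrinsic matrix
List<Mat> rotations = new ArrayList<>();
List<Mat> translations = new ArrayList<>();
List<Mat> normals = new ArrayList<>();
int n = Calib3d.decomposeHomographyMat(H, K, rotations, translations, normals);
for (int i = 0; i < n; i++) {
    // rotations.get(i): 3x3 R; translations.get(i): 3x1 t (normalized by the plane depth);
    // normals.get(i): 3x1 plane normal of solution i
}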

filterHomographyDecompByVisibleRefpoints
public static void filterHomographyDecompByVisibleRefpoints(java.util.List<Mat> rotations, java.util.List<Mat> normals, Mat beforePoints, Mat afterPoints, Mat possibleSolutions, Mat pointsMask)
Filters homography decompositions based on additional information. Parameters:
rotations
 Vector of rotation matrices.
normals
 Vector of plane normal matrices.
beforePoints
 Vector of (rectified) visible reference points before the homography is applied.
afterPoints
 Vector of (rectified) visible reference points after the homography is applied.
possibleSolutions
 Vector of int indices representing the viable solution set after filtering.
pointsMask
 Optional Mat/Vector of 8u type representing the mask for the inliers as given by the #findHomography function. This function is intended to filter the output of the #decomposeHomographyMat based on additional information as described in CITE: Malis2007 . The summary of the method: the #decomposeHomographyMat function returns 2 unique solutions and their "opposites" for a total of 4 solutions. If we have access to the sets of points visible in the camera frame before and after the homography transformation is applied, we can determine which are the true potential solutions and which are the opposites by verifying which homographies are consistent with all visible reference points being in front of the camera. The inputs are left unchanged; the filtered solution set is returned as indices into the existing one.
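A hedged Java sketch continuing the decomposition example above (rotations and normals from #decomposeHomographyMat); the point values are illustrative stand-ins:
MatOfPoint2f beforePts = new MatOfPoint2f(
    new Point(10, 10), new Point(50, 10), new Point(10, 50));
MatOfPoint2f afterPts = new MatOfPoint2f(
    new Point(12, 11), new Point(52, 12), new Point(11, 53));
Mat possibleSolutions = new Mat(); // receives indices into rotations/translations/normals
Calib3d.filterHomographyDecompByVisibleRefpoints(
        rotations, normals, beforePts, afterPts, possibleSolutions);
// each row of possibleSolutions (CV_32S) indexes a surviving solution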

filterHomographyDecompByVisibleRefpoints
public static void filterHomographyDecompByVisibleRefpoints(java.util.List<Mat> rotations, java.util.List<Mat> normals, Mat beforePoints, Mat afterPoints, Mat possibleSolutions)
Filters homography decompositions based on additional information. Parameters:
rotations
 Vector of rotation matrices.
normals
 Vector of plane normal matrices.
beforePoints
 Vector of (rectified) visible reference points before the homography is applied.
afterPoints
 Vector of (rectified) visible reference points after the homography is applied.
possibleSolutions
 Vector of int indices representing the viable solution set after filtering. This function is intended to filter the output of the #decomposeHomographyMat based on additional information as described in CITE: Malis2007 . The summary of the method: the #decomposeHomographyMat function returns 2 unique solutions and their "opposites" for a total of 4 solutions. If we have access to the sets of points visible in the camera frame before and after the homography transformation is applied, we can determine which are the true potential solutions and which are the opposites by verifying which homographies are consistent with all visible reference points being in front of the camera. The inputs are left unchanged; the filtered solution set is returned as indices into the existing one.

undistort
public static void undistort(Mat src, Mat dst, Mat cameraMatrix, Mat distCoeffs, Mat newCameraMatrix)
Transforms an image to compensate for lens distortion. The function transforms an image to compensate for radial and tangential lens distortion. The function is simply a combination of #initUndistortRectifyMap (with unity R ) and #remap (with bilinear interpolation). See the former function for details of the transformation being performed. Those pixels in the destination image, for which there are no corresponding pixels in the source image, are filled with zeros (black color). A particular subset of the source image that will be visible in the corrected image can be regulated by newCameraMatrix. You can use #getOptimalNewCameraMatrix to compute the appropriate newCameraMatrix depending on your requirements. The camera matrix and the distortion parameters can be determined using #calibrateCamera. If the resolution of images is different from the resolution used at the calibration stage, \(f_x, f_y, c_x\) and \(c_y\) need to be scaled accordingly, while the distortion coefficients remain the same. Parameters:
src
 Input (distorted) image.
dst
 Output (corrected) image that has the same size and type as src .
cameraMatrix
 Input camera matrix \(A = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\) .
distCoeffs
 Input vector of distortion coefficients \((k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6[, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\) of 4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, the zero distortion coefficients are assumed.
newCameraMatrix
 Camera matrix of the distorted image. By default, it is the same as cameraMatrix but you may additionally scale and shift the result by using a different matrix.
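A minimal Java sketch; the image and calibration values are synthetic stand-ins (in practice they come from a capture device and #calibrateCamera):
Mat img = Mat.zeros(480, 640, CvType.CV_8UC3); // stand-in frame
Mat K = new Mat(3, 3, CvType.CV_64F);
K.put(0, 0, 800, 0, 320, 0, 800, 240, 0, 0, 1); // stand-in fx, fy, cx, cy
MatOfDouble dist = new MatOfDouble(-0.25, 0.08, 0, 0, 0); // stand-in k1, k2, p1, p2, k3
Mat undistorted = new Mat();
Calib3d.undistort(img, undistorted, K, dist, K); // newCameraMatrix = K keeps the framing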

undistort
public static void undistort(Mat src, Mat dst, Mat cameraMatrix, Mat distCoeffs)
Transforms an image to compensate for lens distortion. The function transforms an image to compensate for radial and tangential lens distortion. The function is simply a combination of #initUndistortRectifyMap (with unity R ) and #remap (with bilinear interpolation). See the former function for details of the transformation being performed. Those pixels in the destination image, for which there are no corresponding pixels in the source image, are filled with zeros (black color). A particular subset of the source image that will be visible in the corrected image can be regulated by newCameraMatrix. You can use #getOptimalNewCameraMatrix to compute the appropriate newCameraMatrix depending on your requirements. The camera matrix and the distortion parameters can be determined using #calibrateCamera. If the resolution of images is different from the resolution used at the calibration stage, \(f_x, f_y, c_x\) and \(c_y\) need to be scaled accordingly, while the distortion coefficients remain the same. Parameters:
src
 Input (distorted) image.
dst
 Output (corrected) image that has the same size and type as src .
cameraMatrix
 Input camera matrix \(A = \vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\) .
distCoeffs
 Input vector of distortion coefficients \((k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6[, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\) of 4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, the zero distortion coefficients are assumed.

initUndistortRectifyMap
public static void initUndistortRectifyMap(Mat cameraMatrix, Mat distCoeffs, Mat R, Mat newCameraMatrix, Size size, int m1type, Mat map1, Mat map2)
Computes the undistortion and rectification transformation map. The function computes the joint undistortion and rectification transformation and represents the result in the form of maps for #remap. The undistorted image looks like the original, as if it were captured with a camera using the camera matrix = newCameraMatrix and zero distortion. In case of a monocular camera, newCameraMatrix is usually equal to cameraMatrix, or it can be computed by #getOptimalNewCameraMatrix for better control over scaling. In case of a stereo camera, newCameraMatrix is normally set to P1 or P2 computed by #stereoRectify . Also, this new camera is oriented differently in the coordinate space, according to R. That, for example, helps to align two heads of a stereo camera so that the epipolar lines on both images become horizontal and have the same y-coordinate (in case of a horizontally aligned stereo camera). The function actually builds the maps for the inverse mapping algorithm that is used by #remap. That is, for each pixel \((u, v)\) in the destination (corrected and rectified) image, the function computes the corresponding coordinates in the source image (that is, in the original image from camera). The following process is applied: \( \begin{array}{l} x \leftarrow (u - {c'}_x)/{f'}_x \\ y \leftarrow (v - {c'}_y)/{f'}_y \\ {[X\,Y\,W]} ^T \leftarrow R^{-1}*[x \, y \, 1]^T \\ x' \leftarrow X/W \\ y' \leftarrow Y/W \\ r^2 \leftarrow x'^2 + y'^2 \\ x'' \leftarrow x' \frac{1 + k_1 r^2 + k_2 r^4 + k_3 r^6}{1 + k_4 r^2 + k_5 r^4 + k_6 r^6} + 2p_1 x' y' + p_2(r^2 + 2 x'^2) + s_1 r^2 + s_2 r^4\\ y'' \leftarrow y' \frac{1 + k_1 r^2 + k_2 r^4 + k_3 r^6}{1 + k_4 r^2 + k_5 r^4 + k_6 r^6} + p_1 (r^2 + 2 y'^2) + 2 p_2 x' y' + s_3 r^2 + s_4 r^4 \\ s\vecthree{x'''}{y'''}{1} = \vecthreethree{R_{33}(\tau_x, \tau_y)}{0}{-R_{13}(\tau_x, \tau_y)} {0}{R_{33}(\tau_x, \tau_y)}{-R_{23}(\tau_x, \tau_y)} {0}{0}{1} R(\tau_x, \tau_y) \vecthree{x''}{y''}{1}\\ map_x(u,v) \leftarrow x''' f_x + c_x \\ map_y(u,v) \leftarrow y''' f_y + c_y \end{array} \) where \((k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6[, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\) are the distortion coefficients. In case of a stereo camera, this function is called twice: once for each camera head, after #stereoRectify, which in turn is called after #stereoCalibrate. But if the stereo camera was not calibrated, it is still possible to compute the rectification transformations directly from the fundamental matrix using #stereoRectifyUncalibrated. For each camera, the function computes homography H as the rectification transformation in a pixel domain, not a rotation matrix R in 3D space. R can be computed from H as \(\texttt{R} = \texttt{cameraMatrix}^{-1} \cdot \texttt{H} \cdot \texttt{cameraMatrix}\) where cameraMatrix can be chosen arbitrarily. Parameters:
cameraMatrix
 Input camera matrix \(A=\vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\) .
distCoeffs
 Input vector of distortion coefficients \((k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6[, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\) of 4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, the zero distortion coefficients are assumed.
R
 Optional rectification transformation in the object space (3x3 matrix). R1 or R2 , computed by #stereoRectify can be passed here. If the matrix is empty, the identity transformation is assumed. In #initUndistortRectifyMap, R is assumed to be an identity matrix.
newCameraMatrix
 New camera matrix \(A'=\vecthreethree{f_x'}{0}{c_x'}{0}{f_y'}{c_y'}{0}{0}{1}\).
size
 Undistorted image size.
m1type
 Type of the first output map that can be CV_32FC1, CV_32FC2 or CV_16SC2, see #convertMaps
map1
 The first output map.
map2
 The second output map.
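A minimal Java sketch: build the maps once, then rectify each frame with #remap (assumes org.opencv.imgproc.Imgproc is imported; K, dist and img are stand-ins as in the undistort sketch above):
Mat map1 = new Mat(), map2 = new Mat();
Calib3d.initUndistortRectifyMap(K, dist, new Mat(), K,
        new Size(640, 480), CvType.CV_16SC2, map1, map2); // empty R = identity
Mat rectified = new Mat();
Imgproc.remap(img, rectified, map1, map2, Imgproc.INTER_LINEAR); // repeat per frame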

initInverseRectificationMap
public static void initInverseRectificationMap(Mat cameraMatrix, Mat distCoeffs, Mat R, Mat newCameraMatrix, Size size, int m1type, Mat map1, Mat map2)
Computes the projection and inverse-rectification transformation map. In essence, this is the inverse of #initUndistortRectifyMap to accommodate stereo-rectification of projectors ('inverse-cameras') in projector-camera pairs. The function computes the joint projection and inverse rectification transformation and represents the result in the form of maps for #remap. The projected image looks like a distorted version of the original which, once projected by a projector, should visually match the original. In case of a monocular camera, newCameraMatrix is usually equal to cameraMatrix, or it can be computed by #getOptimalNewCameraMatrix for better control over scaling. In case of a projector-camera pair, newCameraMatrix is normally set to P1 or P2 computed by #stereoRectify . The projector is oriented differently in the coordinate space, according to R. In case of projector-camera pairs, this helps align the projector (in the same manner as #initUndistortRectifyMap for the camera) to create a stereo-rectified pair. This allows epipolar lines on both images to become horizontal and have the same y-coordinate (in case of a horizontally aligned projector-camera pair). The function builds the maps for the inverse mapping algorithm that is used by #remap. That is, for each pixel \((u, v)\) in the destination (projected and inverse-rectified) image, the function computes the corresponding coordinates in the source image (that is, in the original digital image). The following process is applied: \( \begin{array}{l} \text{newCameraMatrix}\\ x \leftarrow (u - {c'}_x)/{f'}_x \\ y \leftarrow (v - {c'}_y)/{f'}_y \\ \\\text{Undistortion} \\\scriptsize{\textit{though equation shown is for radial undistortion, function implements cv::undistortPoints()}}\\ r^2 \leftarrow x^2 + y^2 \\ \theta \leftarrow \frac{1 + k_1 r^2 + k_2 r^4 + k_3 r^6}{1 + k_4 r^2 + k_5 r^4 + k_6 r^6}\\ x' \leftarrow \frac{x}{\theta} \\ y' \leftarrow \frac{y}{\theta} \\ \\\text{Rectification}\\ {[X\,Y\,W]} ^T \leftarrow R*[x' \, y' \, 1]^T \\ x'' \leftarrow X/W \\ y'' \leftarrow Y/W \\ \\\text{cameraMatrix}\\ map_x(u,v) \leftarrow x'' f_x + c_x \\ map_y(u,v) \leftarrow y'' f_y + c_y \end{array} \) where \((k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6[, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\) are the distortion coefficients vector distCoeffs. In case of a stereo-rectified projector-camera pair, this function is called for the projector while #initUndistortRectifyMap is called for the camera head. This is done after #stereoRectify, which in turn is called after #stereoCalibrate. If the projector-camera pair is not calibrated, it is still possible to compute the rectification transformations directly from the fundamental matrix using #stereoRectifyUncalibrated. For the projector and camera, the function computes homography H as the rectification transformation in a pixel domain, not a rotation matrix R in 3D space. R can be computed from H as \(\texttt{R} = \texttt{cameraMatrix}^{-1} \cdot \texttt{H} \cdot \texttt{cameraMatrix}\) where cameraMatrix can be chosen arbitrarily. Parameters:
cameraMatrix
 Input camera matrix \(A=\vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\) .
distCoeffs
 Input vector of distortion coefficients \((k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6[, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\) of 4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, the zero distortion coefficients are assumed.
R
 Optional rectification transformation in the object space (3x3 matrix). R1 or R2, computed by #stereoRectify can be passed here. If the matrix is empty, the identity transformation is assumed.
newCameraMatrix
 New camera matrix \(A'=\vecthreethree{f_x'}{0}{c_x'}{0}{f_y'}{c_y'}{0}{0}{1}\).
size
 Distorted image size.
m1type
 Type of the first output map. Can be CV_32FC1, CV_32FC2 or CV_16SC2, see #convertMaps
map1
 The first output map for #remap.
map2
 The second output map for #remap.
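A hedged Java sketch for the projector side of a stereo-rectified projector-camera pair; Kp, distP and P2 below are stand-ins for the projector intrinsics and the #stereoRectify output (assumes org.opencv.imgproc.Imgproc is imported):
Mat Kp = Mat.eye(3, 3, CvType.CV_64F);        // stand-in projector intrinsic matrix
MatOfDouble distP = new MatOfDouble(0, 0, 0, 0, 0);
Mat P2 = Mat.eye(3, 4, CvType.CV_64F);        // stand-in; normally P2 from stereoRectify
Mat m1 = new Mat(), m2 = new Mat();
Calib3d.initInverseRectificationMap(Kp, distP, new Mat(), P2,
        new Size(1280, 800), CvType.CV_32FC1, m1, m2);
// warp the content so it appears rectified once emitted by the projector:
Mat content = Mat.zeros(800, 1280, CvType.CV_8UC3);
Mat toProject = new Mat();
Imgproc.remap(content, toProject, m1, m2, Imgproc.INTER_LINEAR);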

getDefaultNewCameraMatrix
public static Mat getDefaultNewCameraMatrix(Mat cameraMatrix, Size imgsize, boolean centerPrincipalPoint)
Returns the default new camera matrix. The function returns the camera matrix that is either an exact copy of the input cameraMatrix (when centerPrincipalPoint=false ), or the modified one (when centerPrincipalPoint=true). In the latter case, the new camera matrix will be: \(\begin{bmatrix} f_x && 0 && ( \texttt{imgSize.width} -1)*0.5 \\ 0 && f_y && ( \texttt{imgSize.height} -1)*0.5 \\ 0 && 0 && 1 \end{bmatrix} ,\) where \(f_x\) and \(f_y\) are \((0,0)\) and \((1,1)\) elements of cameraMatrix, respectively. By default, the undistortion functions in OpenCV (see #initUndistortRectifyMap, #undistort) do not move the principal point. However, when you work with stereo, it is important to move the principal points in both views to the same y-coordinate (which is required by most stereo correspondence algorithms), and maybe to the same x-coordinate too. So, you can form the new camera matrix for each view where the principal points are located at the center. Parameters:
cameraMatrix
 Input camera matrix.
imgsize
 Camera view image size in pixels.
centerPrincipalPoint
 Location of the principal point in the new camera matrix. The parameter indicates whether this location should be at the image center or not. Returns:
 automatically generated
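A minimal Java sketch recentering the principal point of a stand-in camera matrix:
Mat K = new Mat(3, 3, CvType.CV_64F);
K.put(0, 0, 800, 0, 300, 0, 800, 260, 0, 0, 1); // stand-in intrinsics
Mat newK = Calib3d.getDefaultNewCameraMatrix(K, new Size(640, 480), true);
// newK keeps fx, fy but moves the principal point to ((640-1)*0.5, (480-1)*0.5)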

getDefaultNewCameraMatrix
public static Mat getDefaultNewCameraMatrix(Mat cameraMatrix, Size imgsize)
Returns the default new camera matrix. The function returns the camera matrix that is either an exact copy of the input cameraMatrix (when centerPrincipalPoint=false ), or the modified one (when centerPrincipalPoint=true). In the latter case, the new camera matrix will be: \(\begin{bmatrix} f_x && 0 && ( \texttt{imgSize.width} -1)*0.5 \\ 0 && f_y && ( \texttt{imgSize.height} -1)*0.5 \\ 0 && 0 && 1 \end{bmatrix} ,\) where \(f_x\) and \(f_y\) are \((0,0)\) and \((1,1)\) elements of cameraMatrix, respectively. By default, the undistortion functions in OpenCV (see #initUndistortRectifyMap, #undistort) do not move the principal point. However, when you work with stereo, it is important to move the principal points in both views to the same y-coordinate (which is required by most stereo correspondence algorithms), and maybe to the same x-coordinate too. So, you can form the new camera matrix for each view where the principal points are located at the center. Parameters:
cameraMatrix - Input camera matrix.
imgsize - Camera view image size in pixels.
Returns:
automatically generated

getDefaultNewCameraMatrix
public static Mat getDefaultNewCameraMatrix(Mat cameraMatrix)
Returns the default new camera matrix. The function returns the camera matrix that is either an exact copy of the input cameraMatrix (when centerPrincipalPoint=false), or the modified one (when centerPrincipalPoint=true). In the latter case, the new camera matrix will be:

\(\begin{bmatrix} f_x & 0 & (\texttt{imgSize.width} - 1)*0.5 \\ 0 & f_y & (\texttt{imgSize.height} - 1)*0.5 \\ 0 & 0 & 1 \end{bmatrix},\)

where \(f_x\) and \(f_y\) are the \((0,0)\) and \((1,1)\) elements of cameraMatrix, respectively. By default, the undistortion functions in OpenCV (see #initUndistortRectifyMap, #undistort) do not move the principal point. However, when you work with stereo, it is important to move the principal points in both views to the same y-coordinate (which is required by most stereo correspondence algorithms), and possibly to the same x-coordinate too. So, you can form the new camera matrix for each view where the principal points are located at the center. Parameters:
cameraMatrix - Input camera matrix.
Returns:
automatically generated

undistortPoints
public static void undistortPoints(MatOfPoint2f src, MatOfPoint2f dst, Mat cameraMatrix, Mat distCoeffs, Mat R, Mat P)
Computes the ideal point coordinates from the observed point coordinates. The function is similar to #undistort and #initUndistortRectifyMap but it operates on a sparse set of points instead of a raster image. The function also performs a reverse transformation to #projectPoints. In case of a 3D object, it does not reconstruct its 3D coordinates, but for a planar object, it does, up to a translation vector, if the proper R is specified. For each observed point coordinate \((u, v)\) the function computes:

\( \begin{array}{l} x^{"} \leftarrow (u - c_x)/f_x \\ y^{"} \leftarrow (v - c_y)/f_y \\ (x',y') = undistort(x^{"},y^{"}, \texttt{distCoeffs}) \\ {[X\,Y\,W]}^T \leftarrow R*[x' \, y' \, 1]^T \\ x \leftarrow X/W \\ y \leftarrow Y/W \\ \text{only performed if P is specified:} \\ u' \leftarrow x {f'}_x + {c'}_x \\ v' \leftarrow y {f'}_y + {c'}_y \end{array} \)

where *undistort* is an approximate iterative algorithm that estimates the normalized original point coordinates out of the normalized distorted point coordinates ("normalized" means that the coordinates do not depend on the camera matrix). The function can be used for both a stereo camera head and a monocular camera (when R is empty). Parameters:
src - Observed point coordinates, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel (CV_32FC2 or CV_64FC2) (or vector<Point2f>).
dst - Output ideal point coordinates (1xN/Nx1 2-channel or vector<Point2f>) after undistortion and reverse perspective transformation. If matrix P is identity or omitted, dst will contain normalized point coordinates.
cameraMatrix - Camera matrix \(\vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\).
distCoeffs - Input vector of distortion coefficients \((k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6[, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\) of 4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, zero distortion coefficients are assumed.
R - Rectification transformation in the object space (3x3 matrix). R1 or R2 computed by #stereoRectify can be passed here. If the matrix is empty, the identity transformation is used.
P - New camera matrix (3x3) or new projection matrix (3x4) \(\begin{bmatrix} {f'}_x & 0 & {c'}_x & t_x \\ 0 & {f'}_y & {c'}_y & t_y \\ 0 & 0 & 1 & t_z \end{bmatrix}\). P1 or P2 computed by #stereoRectify can be passed here. If the matrix is empty, the identity new camera matrix is used.
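A short sketch (placeholder intrinsics and distortion). Without R and P the result is in normalized coordinates; passing K as P maps the undistorted points back into pixel coordinates:

// imports: org.opencv.core.*, org.opencv.calib3d.Calib3d
Mat K = Mat.eye(3, 3, CvType.CV_64F);
K.put(0, 0, 800.0); K.put(1, 1, 800.0);
K.put(0, 2, 320.0); K.put(1, 2, 240.0);
Mat distCoeffs = new Mat(1, 5, CvType.CV_64F);
distCoeffs.put(0, 0, 0.1, -0.05, 0.0, 0.0, 0.0);  // placeholder k1, k2, p1, p2, k3

MatOfPoint2f observed = new MatOfPoint2f(new Point(400, 300), new Point(100, 50));
MatOfPoint2f normalized = new MatOfPoint2f();
Calib3d.undistortPoints(observed, normalized, K, distCoeffs);  // normalized coordinates

MatOfPoint2f pixels = new MatOfPoint2f();
Calib3d.undistortPoints(observed, pixels, K, distCoeffs,
        Mat.eye(3, 3, CvType.CV_64F), K);                      // back in pixel coordinates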

undistortPoints
public static void undistortPoints(MatOfPoint2f src, MatOfPoint2f dst, Mat cameraMatrix, Mat distCoeffs, Mat R)
Computes the ideal point coordinates from the observed point coordinates. The function is similar to #undistort and #initUndistortRectifyMap but it operates on a sparse set of points instead of a raster image. The function also performs a reverse transformation to #projectPoints. In case of a 3D object, it does not reconstruct its 3D coordinates, but for a planar object, it does, up to a translation vector, if the proper R is specified. For each observed point coordinate \((u, v)\) the function computes:

\( \begin{array}{l} x^{"} \leftarrow (u - c_x)/f_x \\ y^{"} \leftarrow (v - c_y)/f_y \\ (x',y') = undistort(x^{"},y^{"}, \texttt{distCoeffs}) \\ {[X\,Y\,W]}^T \leftarrow R*[x' \, y' \, 1]^T \\ x \leftarrow X/W \\ y \leftarrow Y/W \\ \text{only performed if P is specified:} \\ u' \leftarrow x {f'}_x + {c'}_x \\ v' \leftarrow y {f'}_y + {c'}_y \end{array} \)

where *undistort* is an approximate iterative algorithm that estimates the normalized original point coordinates out of the normalized distorted point coordinates ("normalized" means that the coordinates do not depend on the camera matrix). The function can be used for both a stereo camera head and a monocular camera (when R is empty). Parameters:
src - Observed point coordinates, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel (CV_32FC2 or CV_64FC2) (or vector<Point2f>).
dst - Output ideal point coordinates (1xN/Nx1 2-channel or vector<Point2f>) after undistortion and reverse perspective transformation. If matrix P is identity or omitted, dst will contain normalized point coordinates.
cameraMatrix - Camera matrix \(\vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\).
distCoeffs - Input vector of distortion coefficients \((k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6[, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\) of 4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, zero distortion coefficients are assumed.
R - Rectification transformation in the object space (3x3 matrix). R1 or R2 computed by #stereoRectify can be passed here. If the matrix is empty, the identity transformation is used.

undistortPoints
public static void undistortPoints(MatOfPoint2f src, MatOfPoint2f dst, Mat cameraMatrix, Mat distCoeffs)
Computes the ideal point coordinates from the observed point coordinates. The function is similar to #undistort and #initUndistortRectifyMap but it operates on a sparse set of points instead of a raster image. The function also performs a reverse transformation to #projectPoints. In case of a 3D object, it does not reconstruct its 3D coordinates, but for a planar object, it does, up to a translation vector, if the proper R is specified. For each observed point coordinate \((u, v)\) the function computes:

\( \begin{array}{l} x^{"} \leftarrow (u - c_x)/f_x \\ y^{"} \leftarrow (v - c_y)/f_y \\ (x',y') = undistort(x^{"},y^{"}, \texttt{distCoeffs}) \\ {[X\,Y\,W]}^T \leftarrow R*[x' \, y' \, 1]^T \\ x \leftarrow X/W \\ y \leftarrow Y/W \\ \text{only performed if P is specified:} \\ u' \leftarrow x {f'}_x + {c'}_x \\ v' \leftarrow y {f'}_y + {c'}_y \end{array} \)

where *undistort* is an approximate iterative algorithm that estimates the normalized original point coordinates out of the normalized distorted point coordinates ("normalized" means that the coordinates do not depend on the camera matrix). The function can be used for both a stereo camera head and a monocular camera (when R is empty). Parameters:
src - Observed point coordinates, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel (CV_32FC2 or CV_64FC2) (or vector<Point2f>).
dst - Output ideal point coordinates (1xN/Nx1 2-channel or vector<Point2f>) after undistortion and reverse perspective transformation. If matrix P is identity or omitted, dst will contain normalized point coordinates.
cameraMatrix - Camera matrix \(\vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\).
distCoeffs - Input vector of distortion coefficients \((k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6[, s_1, s_2, s_3, s_4[, \tau_x, \tau_y]]]])\) of 4, 5, 8, 12 or 14 elements. If the vector is NULL/empty, zero distortion coefficients are assumed.

undistortPointsIter
public static void undistortPointsIter(Mat src, Mat dst, Mat cameraMatrix, Mat distCoeffs, Mat R, Mat P, TermCriteria criteria)
Note: The default version of #undistortPoints does 5 iterations to compute undistorted points. Parameters:
src - automatically generated
dst - automatically generated
cameraMatrix - automatically generated
distCoeffs - automatically generated
R - automatically generated
P - automatically generated
criteria - automatically generated
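A sketch raising the iteration budget beyond the default 5, which can help under strong distortion (placeholder values; identity R and P):

// imports: org.opencv.core.*, org.opencv.calib3d.Calib3d
Mat K = Mat.eye(3, 3, CvType.CV_64F);
K.put(0, 0, 800.0); K.put(1, 1, 800.0);
K.put(0, 2, 320.0); K.put(1, 2, 240.0);
Mat distCoeffs = new Mat(1, 5, CvType.CV_64F);
distCoeffs.put(0, 0, 0.3, -0.1, 0.0, 0.0, 0.0);  // stronger placeholder distortion

Mat src = new MatOfPoint2f(new Point(620, 460));
Mat dst = new Mat();
// Iterate up to 20 times or until the update falls below 1e-8:
TermCriteria criteria = new TermCriteria(TermCriteria.COUNT + TermCriteria.EPS, 20, 1e-8);
Calib3d.undistortPointsIter(src, dst, K, distCoeffs,
        Mat.eye(3, 3, CvType.CV_64F), Mat.eye(3, 3, CvType.CV_64F), criteria);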

undistortImagePoints
public static void undistortImagePoints(Mat src, Mat dst, Mat cameraMatrix, Mat distCoeffs, TermCriteria arg1)
Computes undistorted image point positions. Parameters:
src - Observed point coordinates, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel (CV_32FC2 or CV_64FC2) (or vector<Point2f>).
dst - Output undistorted point coordinates (1xN/Nx1 2-channel or vector<Point2f>).
cameraMatrix - Camera matrix \(\vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\).
distCoeffs - Distortion coefficients
arg1 - automatically generated

undistortImagePoints
public static void undistortImagePoints(Mat src, Mat dst, Mat cameraMatrix, Mat distCoeffs)
Computes undistorted image point positions. Parameters:
src - Observed point coordinates, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel (CV_32FC2 or CV_64FC2) (or vector<Point2f>).
dst - Output undistorted point coordinates (1xN/Nx1 2-channel or vector<Point2f>).
cameraMatrix - Camera matrix \(\vecthreethree{f_x}{0}{c_x}{0}{f_y}{c_y}{0}{0}{1}\).
distCoeffs - Distortion coefficients
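A minimal sketch (placeholder values). Per the description above, the output holds undistorted image point positions, i.e. it stays in the pixel frame rather than normalized coordinates:

// imports: org.opencv.core.*, org.opencv.calib3d.Calib3d
Mat K = Mat.eye(3, 3, CvType.CV_64F);
K.put(0, 0, 800.0); K.put(1, 1, 800.0);
K.put(0, 2, 320.0); K.put(1, 2, 240.0);
Mat distCoeffs = new Mat(1, 5, CvType.CV_64F);
distCoeffs.put(0, 0, 0.1, -0.05, 0.0, 0.0, 0.0);  // placeholder coefficients

Mat src = new MatOfPoint2f(new Point(400, 300));
Mat dst = new Mat();
Calib3d.undistortImagePoints(src, dst, K, distCoeffs);  // undistorted pixel positions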

fisheye_projectPoints
public static void fisheye_projectPoints(Mat objectPoints, Mat imagePoints, Mat rvec, Mat tvec, Mat K, Mat D, double alpha, Mat jacobian)

fisheye_projectPoints
public static void fisheye_projectPoints(Mat objectPoints, Mat imagePoints, Mat rvec, Mat tvec, Mat K, Mat D, double alpha)

fisheye_projectPoints
public static void fisheye_projectPoints(Mat objectPoints, Mat imagePoints, Mat rvec, Mat tvec, Mat K, Mat D)
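These overloads carry no generated description; they wrap cv::fisheye::projectPoints, which projects 3D object points into the image plane using the fisheye model. A minimal sketch of the shortest overload (placeholder intrinsics and pose):

// imports: org.opencv.core.*, org.opencv.calib3d.Calib3d
Mat K = Mat.eye(3, 3, CvType.CV_64F);
K.put(0, 0, 400.0); K.put(1, 1, 400.0);     // placeholder focal lengths
K.put(0, 2, 320.0); K.put(1, 2, 240.0);     // placeholder principal point
Mat D = new Mat(1, 4, CvType.CV_64F);
D.put(0, 0, 0.05, 0.01, 0.0, 0.0);          // fisheye k1..k4 (placeholder)
Mat rvec = Mat.zeros(3, 1, CvType.CV_64F);  // no rotation
Mat tvec = Mat.zeros(3, 1, CvType.CV_64F);  // no translation

MatOfPoint3f objectPoints = new MatOfPoint3f(new Point3(0.1, -0.1, 1.0));
Mat imagePoints = new Mat();
Calib3d.fisheye_projectPoints(objectPoints, imagePoints, rvec, tvec, K, D);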

fisheye_distortPoints
public static void fisheye_distortPoints(Mat undistorted, Mat distorted, Mat K, Mat D, double alpha)
Distorts 2D points using the fisheye model. Parameters:
undistorted - Array of object points, 1xN/Nx1 2-channel (or vector<Point2f>), where N is the number of points in the view.
K - Camera intrinsic matrix \(cameramatrix{K}\).
D - Input vector of distortion coefficients \(\distcoeffsfisheye\).
alpha - The skew coefficient.
distorted - Output array of image points, 1xN/Nx1 2-channel, or vector<Point2f>. Note that the function assumes the camera intrinsic matrix of the undistorted points to be identity. This means if you want to distort image points you have to multiply them with \(K^{-1}\).
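A minimal sketch (placeholder K and D). Per the note above, the input must already be in normalized coordinates, e.g. \((u - c_x)/f_x\); the output lands in pixel coordinates under K:

// imports: org.opencv.core.*, org.opencv.calib3d.Calib3d
Mat K = Mat.eye(3, 3, CvType.CV_64F);
K.put(0, 0, 400.0); K.put(1, 1, 400.0);
K.put(0, 2, 320.0); K.put(1, 2, 240.0);
Mat D = new Mat(1, 4, CvType.CV_64F);
D.put(0, 0, 0.05, 0.01, 0.0, 0.0);  // placeholder fisheye coefficients

// Normalized input point (identity intrinsics assumed by the function):
MatOfPoint2f undistorted = new MatOfPoint2f(new Point(0.2, -0.1));
Mat distorted = new Mat();
Calib3d.fisheye_distortPoints(undistorted, distorted, K, D);  // pixel coordinates out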

fisheye_distortPoints
public static void fisheye_distortPoints(Mat undistorted, Mat distorted, Mat K, Mat D)
Distorts 2D points using the fisheye model. Parameters:
undistorted - Array of object points, 1xN/Nx1 2-channel (or vector<Point2f>), where N is the number of points in the view.
K - Camera intrinsic matrix \(cameramatrix{K}\).
D - Input vector of distortion coefficients \(\distcoeffsfisheye\).
distorted - Output array of image points, 1xN/Nx1 2-channel, or vector<Point2f>. Note that the function assumes the camera intrinsic matrix of the undistorted points to be identity. This means if you want to distort image points you have to multiply them with \(K^{-1}\).

fisheye_undistortPoints
public static void fisheye_undistortPoints(Mat distorted, Mat undistorted, Mat K, Mat D, Mat R, Mat P, TermCriteria criteria)
Undistorts 2D points using the fisheye model. Parameters:
distorted - Array of object points, 1xN/Nx1 2-channel (or vector<Point2f>), where N is the number of points in the view.
K - Camera intrinsic matrix \(cameramatrix{K}\).
D - Input vector of distortion coefficients \(\distcoeffsfisheye\).
R - Rectification transformation in the object space: 3x3 1-channel, or vector: 3x1/1x3 1-channel or 1x1 3-channel
P - New camera intrinsic matrix (3x3) or new projection matrix (3x4)
criteria - Termination criteria
undistorted - Output array of image points, 1xN/Nx1 2-channel, or vector<Point2f>.
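A sketch of the full overload (placeholder values). Passing K as P returns the undistorted points in pixel coordinates; an identity R skips rectification:

// imports: org.opencv.core.*, org.opencv.calib3d.Calib3d
Mat K = Mat.eye(3, 3, CvType.CV_64F);
K.put(0, 0, 400.0); K.put(1, 1, 400.0);
K.put(0, 2, 320.0); K.put(1, 2, 240.0);
Mat D = new Mat(1, 4, CvType.CV_64F);
D.put(0, 0, 0.05, 0.01, 0.0, 0.0);  // placeholder fisheye coefficients

MatOfPoint2f distorted = new MatOfPoint2f(new Point(500, 400));
Mat undistorted = new Mat();
TermCriteria criteria = new TermCriteria(TermCriteria.COUNT + TermCriteria.EPS, 10, 1e-8);
Calib3d.fisheye_undistortPoints(distorted, undistorted, K, D,
        Mat.eye(3, 3, CvType.CV_64F), K, criteria);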

fisheye_undistortPoints
public static void fisheye_undistortPoints(Mat distorted, Mat undistorted, Mat K, Mat D, Mat R, Mat P)
Undistorts 2D points using the fisheye model. Parameters:
distorted - Array of object points, 1xN/Nx1 2-channel (or vector<Point2f>), where N is the number of points in the view.
K - Camera intrinsic matrix \(cameramatrix{K}\).
D - Input vector of distortion coefficients \(\distcoeffsfisheye\).
R - Rectification transformation in the object space: 3x3 1-channel, or vector: 3x1/1x3 1-channel or 1x1 3-channel
P - New camera intrinsic matrix (3x3) or new projection matrix (3x4)
undistorted - Output array of image points, 1xN/Nx1 2-channel, or vector<Point2f>.

fisheye_undistortPoints
public static void fisheye_undistortPoints(Mat distorted, Mat undistorted, Mat K, Mat D, Mat R)
Undistorts 2D points using the fisheye model. Parameters:
distorted - Array of object points, 1xN/Nx1 2-channel (or vector<Point2f>), where N is the number of points in the view.
K - Camera intrinsic matrix \(cameramatrix{K}\).
D - Input vector of distortion coefficients \(\distcoeffsfisheye\).
R - Rectification transformation in the object space: 3x3 1-channel, or vector: 3x1/1x3 1-channel or 1x1 3-channel
undistorted - Output array of image points, 1xN/Nx1 2-channel, or vector<Point2f>.

fisheye_undistortPoints
public static void fisheye_undistortPoints(Mat distorted, Mat undistorted, Mat K, Mat D)
Undistorts 2D points using the fisheye model. Parameters:
distorted - Array of object points, 1xN/Nx1 2-channel (or vector<Point2f>), where N is the number of points in the view.
K - Camera intrinsic matrix \(cameramatrix{K}\).
D - Input vector of distortion coefficients \(\distcoeffsfisheye\).
undistorted - Output array of image points, 1xN/Nx1 2-channel, or vector<Point2f>.

fisheye_initUndistortRectifyMap
public static void fisheye_initUndistortRectifyMap(Mat K, Mat D, Mat R, Mat P, Size size, int m1type, Mat map1, Mat map2)
Computes undistortion and rectification maps for an image transform by #remap. If D is empty, zero distortion is used; if R or P is empty, identity matrices are used. Parameters:
K - Camera intrinsic matrix \(cameramatrix{K}\).
D - Input vector of distortion coefficients \(\distcoeffsfisheye\).
R - Rectification transformation in the object space: 3x3 1-channel, or vector: 3x1/1x3 1-channel or 1x1 3-channel
P - New camera intrinsic matrix (3x3) or new projection matrix (3x4)
size - Undistorted image size.
m1type - Type of the first output map that can be CV_32FC1 or CV_16SC2. See #convertMaps for details.
map1 - The first output map.
map2 - The second output map.
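The usual pattern is to compute the maps once and reuse them for every frame; a sketch (placeholder K and D, stand-in frame variables):

// imports: org.opencv.core.*, org.opencv.calib3d.Calib3d, org.opencv.imgproc.Imgproc
Mat K = Mat.eye(3, 3, CvType.CV_64F);
K.put(0, 0, 400.0); K.put(1, 1, 400.0);
K.put(0, 2, 320.0); K.put(1, 2, 240.0);
Mat D = new Mat(1, 4, CvType.CV_64F);
D.put(0, 0, 0.05, 0.01, 0.0, 0.0);  // placeholder fisheye coefficients

Mat map1 = new Mat(), map2 = new Mat();
Calib3d.fisheye_initUndistortRectifyMap(K, D, Mat.eye(3, 3, CvType.CV_64F), K,
        new Size(640, 480), CvType.CV_16SC2, map1, map2);

// Reuse the precomputed maps for each incoming frame:
Mat frame = new Mat(480, 640, CvType.CV_8UC3);  // stand-in for a captured frame
Mat rectified = new Mat();
Imgproc.remap(frame, rectified, map1, map2, Imgproc.INTER_LINEAR);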

fisheye_undistortImage
public static void fisheye_undistortImage(Mat distorted, Mat undistorted, Mat K, Mat D, Mat Knew, Size new_size)
Transforms an image to compensate for fisheye lens distortion. Parameters:
distorted - Image with fisheye lens distortion.
undistorted - Output image with compensated fisheye lens distortion.
K - Camera intrinsic matrix \(cameramatrix{K}\).
D - Input vector of distortion coefficients \(\distcoeffsfisheye\).
Knew - Camera intrinsic matrix of the distorted image. By default, it is the identity matrix, but you may additionally scale and shift the result by using a different matrix.
new_size - The new size.

The function transforms an image to compensate for radial and tangential lens distortion. It is simply a combination of #fisheye::initUndistortRectifyMap (with unity R) and #remap (with bilinear interpolation). See the former function for details of the transformation being performed. See below the results of undistortImage.
a) result of undistort of perspective camera model (all possible coefficients (k_1, k_2, k_3, k_4, k_5, k_6) of distortion were optimized under calibration); b) result of #fisheye::undistortImage of fisheye camera model (all possible coefficients (k_1, k_2, k_3, k_4) of fisheye distortion were optimized under calibration); c) original image was captured with fisheye lens
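A minimal sketch (placeholder K, D, and file name). Passing K as Knew keeps the original scale; shrinking Knew's focal lengths keeps more of the fisheye field of view in frame:

// imports: org.opencv.core.*, org.opencv.calib3d.Calib3d, org.opencv.imgcodecs.Imgcodecs
Mat K = Mat.eye(3, 3, CvType.CV_64F);
K.put(0, 0, 400.0); K.put(1, 1, 400.0);
K.put(0, 2, 320.0); K.put(1, 2, 240.0);
Mat D = new Mat(1, 4, CvType.CV_64F);
D.put(0, 0, 0.05, 0.01, 0.0, 0.0);  // placeholder fisheye coefficients

Mat distorted = Imgcodecs.imread("fisheye.jpg");  // placeholder file name
Mat undistorted = new Mat();
Mat Knew = K.clone();
Knew.put(0, 0, 0.8 * 400.0);  // smaller focal length -> wider view retained
Knew.put(1, 1, 0.8 * 400.0);
Calib3d.fisheye_undistortImage(distorted, undistorted, K, D, Knew);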

fisheye_undistortImage
public static void fisheye_undistortImage(Mat distorted, Mat undistorted, Mat K, Mat D, Mat Knew)
Transforms an image to compensate for fisheye lens distortion. Parameters:
distorted - Image with fisheye lens distortion.
undistorted - Output image with compensated fisheye lens distortion.
K - Camera intrinsic matrix \(cameramatrix{K}\).
D - Input vector of distortion coefficients \(\distcoeffsfisheye\).
Knew - Camera intrinsic matrix of the distorted image. By default, it is the identity matrix, but you may additionally scale and shift the result by using a different matrix.

The function transforms an image to compensate for radial and tangential lens distortion. It is simply a combination of #fisheye::initUndistortRectifyMap (with unity R) and #remap (with bilinear interpolation). See the former function for details of the transformation being performed. See below the results of undistortImage.
a) result of undistort of perspective camera model (all possible coefficients (k_1, k_2, k_3, k_4, k_5, k_6) of distortion were optimized under calibration); b) result of #fisheye::undistortImage of fisheye camera model (all possible coefficients (k_1, k_2, k_3, k_4) of fisheye distortion were optimized under calibration); c) original image was captured with fisheye lens

fisheye_undistortImage
public static void fisheye_undistortImage(Mat distorted, Mat undistorted, Mat K, Mat D)
Transforms an image to compensate for fisheye lens distortion. Parameters:
distorted - Image with fisheye lens distortion.
undistorted - Output image with compensated fisheye lens distortion.
K - Camera intrinsic matrix \(cameramatrix{K}\).
D - Input vector of distortion coefficients \(\distcoeffsfisheye\).

The function transforms an image to compensate for radial and tangential lens distortion. It is simply a combination of #fisheye::initUndistortRectifyMap (with unity R) and #remap (with bilinear interpolation). See the former function for details of the transformation being performed. See below the results of undistortImage.
a) result of undistort of perspective camera model (all possible coefficients (k_1, k_2, k_3, k_4, k_5, k_6) of distortion were optimized under calibration); b) result of #fisheye::undistortImage of fisheye camera model (all possible coefficients (k_1, k_2, k_3, k_4) of fisheye distortion were optimized under calibration); c) original image was captured with fisheye lens

