OpenCV 4.10.0-dev
Open Source Computer Vision
Calibration with ArUco and ChArUco

Prev Tutorial: Detection of Diamond Markers
Next Tutorial: Aruco module FAQ
The ArUco module can also be used to calibrate a camera. Camera calibration consists in obtaining the camera intrinsic parameters and distortion coefficients. These parameters remain fixed unless the camera optics are modified, so calibration only needs to be done once.

Camera calibration is usually performed using the OpenCV cv::calibrateCamera() function. This function requires some correspondences between environment points and their projection in the camera image from different viewpoints. In general, these correspondences are obtained from the corners of chessboard patterns. See cv::calibrateCamera() function documentation or the OpenCV calibration tutorial for more detailed information.

Using the ArUco module, calibration can be performed based on ArUco marker corners or ChArUco corners. Calibrating with ArUco is much more versatile than using traditional chessboard patterns, since it allows occlusions and partial views.

As stated, calibration can be done using either marker corners or ChArUco corners. However, the ChArUco approach is highly recommended, since the corners it provides are much more accurate than marker corners. Calibration using a standard Board should only be employed in scenarios where a ChArUco board cannot be used because of some restriction.

Calibration with ChArUco Boards

To calibrate using a ChArUco board, it is necessary to detect the board from different viewpoints, in the same way that the standard calibration does with the traditional chessboard pattern. However, due to the benefits of using ChArUco, occlusions and partial views are allowed, and not all the corners need to be visible in all the viewpoints.

ChArUco calibration viewpoints

An example of using cv::calibrateCamera() with a cv::aruco::CharucoBoard:

// Create charuco board object and CharucoDetector
// (squaresX, squaresY, squareLength, markerLength, dictionary, charucoParams,
//  detectorParams and inputVideo are set up earlier in the full sample)
aruco::CharucoBoard board(Size(squaresX, squaresY), squareLength, markerLength, dictionary);
aruco::CharucoDetector detector(board, charucoParams, detectorParams);

// Collect data from each frame
vector<Mat> allCharucoCorners, allCharucoIds;
vector<vector<Point2f>> allImagePoints;
vector<vector<Point3f>> allObjectPoints;
vector<Mat> allImages;
Size imageSize;

while(inputVideo.grab()) {
    Mat image, imageCopy;
    inputVideo.retrieve(image);

    vector<int> markerIds;
    vector<vector<Point2f>> markerCorners;
    Mat currentCharucoCorners, currentCharucoIds;
    vector<Point3f> currentObjectPoints;
    vector<Point2f> currentImagePoints;

    // Detect ChArUco board
    detector.detectBoard(image, currentCharucoCorners, currentCharucoIds);

    // Capture a frame when the user presses 'c'
    // ('key' is read with waitKey() in the full sample)
    if(key == 'c' && currentCharucoCorners.total() > 3) {
        // Match image points
        board.matchImagePoints(currentCharucoCorners, currentCharucoIds, currentObjectPoints, currentImagePoints);

        if(currentImagePoints.empty() || currentObjectPoints.empty()) {
            cout << "Point matching failed, try again." << endl;
            continue;
        }

        cout << "Frame captured" << endl;

        allCharucoCorners.push_back(currentCharucoCorners);
        allCharucoIds.push_back(currentCharucoIds);
        allImagePoints.push_back(currentImagePoints);
        allObjectPoints.push_back(currentObjectPoints);
        allImages.push_back(image);
        imageSize = image.size();
    }
}

Mat cameraMatrix, distCoeffs;
if(calibrationFlags & CALIB_FIX_ASPECT_RATIO) {
    cameraMatrix = Mat::eye(3, 3, CV_64F);
    cameraMatrix.at<double>(0, 0) = aspectRatio;
}

// Calibrate camera using ChArUco
double repError = calibrateCamera(allObjectPoints, allImagePoints, imageSize, cameraMatrix, distCoeffs,
                                  noArray(), noArray(), noArray(), noArray(), noArray(), calibrationFlags);

The ChArUco corners and ChArUco identifiers captured on each viewpoint are stored in the vectors allCharucoCorners and allCharucoIds, one element per viewpoint.

The calibrateCamera() function fills the cameraMatrix and distCoeffs arrays with the camera calibration parameters and returns the reprojection error obtained from the calibration. If output arrays are supplied for rvecs and tvecs (the snippet above passes noArray() to skip them), they are filled with the estimated pose of the ChArUco board relative to the camera in each of the viewpoints.

Finally, the calibrationFlags parameter determines some of the options for the calibration.

A full working example is included in calibrate_camera_charuco.cpp in the samples/cpp/tutorial_code/objectDetection folder.

The samples take their input from the command line via cv::CommandLineParser. For this file, the example parameters will look like:

"camera_calib.txt" -w=5 -h=7 -sl=0.04 -ml=0.02 -d=10
-v=path/img_%02d.jpg

The camera calibration parameters in opencv/samples/cpp/tutorial_code/objectDetection/tutorial_camera_charuco.yml were obtained using the images img_00.jpg to img_03.jpg from that folder.

Calibration with ArUco Boards

As stated above, ChArUco boards are recommended over ArUco boards for camera calibration, since ChArUco corners are more accurate than marker corners. However, in some special cases calibration based on ArUco boards may be required. As in the previous case, it requires detecting an ArUco board from different viewpoints.

ArUco calibration viewpoints

An example of using cv::calibrateCamera() with a cv::aruco::GridBoard:

// Create board object and ArucoDetector
// (markersX, markersY, markerLength, markerSeparation, dictionary,
//  detectorParams and inputVideo are set up earlier in the full sample)
aruco::GridBoard gridboard(Size(markersX, markersY), markerLength, markerSeparation, dictionary);
aruco::ArucoDetector detector(dictionary, detectorParams);

// Collected frames for calibration
vector<vector<vector<Point2f>>> allMarkerCorners;
vector<vector<int>> allMarkerIds;
Size imageSize;

while(inputVideo.grab()) {
    Mat image, imageCopy;
    inputVideo.retrieve(image);

    vector<int> markerIds;
    vector<vector<Point2f>> markerCorners, rejectedMarkers;

    // Detect markers
    detector.detectMarkers(image, markerCorners, markerIds, rejectedMarkers);

    // Refind strategy to detect more markers
    if(refindStrategy) {
        detector.refineDetectedMarkers(image, gridboard, markerCorners, markerIds, rejectedMarkers);
    }

    // Capture a frame when the user presses 'c'
    // ('key' is read with waitKey() in the full sample)
    if(key == 'c' && !markerIds.empty()) {
        cout << "Frame captured" << endl;
        allMarkerCorners.push_back(markerCorners);
        allMarkerIds.push_back(markerIds);
        imageSize = image.size();
    }
}

Mat cameraMatrix, distCoeffs;
if(calibrationFlags & CALIB_FIX_ASPECT_RATIO) {
    cameraMatrix = Mat::eye(3, 3, CV_64F);
    cameraMatrix.at<double>(0, 0) = aspectRatio;
}

// Prepare data for calibration: match detected marker corners to board points
vector<Mat> processedObjectPoints, processedImagePoints;
size_t nFrames = allMarkerCorners.size();
for(size_t frame = 0; frame < nFrames; frame++) {
    Mat currentImgPoints, currentObjPoints;
    gridboard.matchImagePoints(allMarkerCorners[frame], allMarkerIds[frame], currentObjPoints, currentImgPoints);
    if(currentImgPoints.total() > 0 && currentObjPoints.total() > 0) {
        processedImagePoints.push_back(currentImgPoints);
        processedObjectPoints.push_back(currentObjPoints);
    }
}

// Calibrate camera
double repError = calibrateCamera(processedObjectPoints, processedImagePoints, imageSize, cameraMatrix, distCoeffs,
                                  noArray(), noArray(), noArray(), noArray(), noArray(), calibrationFlags);

A full working example is included in calibrate_camera.cpp in the samples/cpp/tutorial_code/objectDetection folder.

The samples take their input from the command line via cv::CommandLineParser. For this file, the example parameters will look like:

"camera_calib.txt" -w=5 -h=7 -l=100 -s=10 -d=10 -v=path/aruco_videos_or_images