The functions in this section perform various geometrical transformations of 2D images. They do not change the image content but deform the pixel grid and map this deformed grid to the destination image. In fact, to avoid sampling artifacts, the mapping is done in the reverse order, from destination to the source. That is, for each pixel \((x, y)\) of the destination image, the functions compute coordinates of the corresponding “donor” pixel in the source image and copy the pixel value:

\[ \texttt{dst}(x, y) = \texttt{src}(f_x(x, y), f_y(x, y)) \]
In case when you specify the forward mapping \(\langle g_x, g_y \rangle: \texttt{src} \rightarrow \texttt{dst}\), the OpenCV functions first compute the corresponding inverse mapping \(\langle f_x, f_y \rangle: \texttt{dst} \rightarrow \texttt{src}\) and then use the above formula.
The actual implementations of the geometrical transformations, from the most generic remap() and to the simplest and the fastest resize(), need to solve two main problems with the above formula:

- Extrapolation of non-existing pixels. For some \((x, y)\), either \(f_x(x, y)\), or \(f_y(x, y)\), or both of them may fall outside of the image. In this case, an extrapolation method needs to be used. OpenCV provides the same selection of extrapolation methods as in the filtering functions. In addition, it provides the method BORDER_TRANSPARENT. This means that the corresponding pixels in the destination image will not be modified at all.

- Interpolation of pixel values. Usually \(f_x(x, y)\) and \(f_y(x, y)\) are floating-point numbers, so a pixel value at fractional coordinates needs to be retrieved. In the simplest case, the coordinates can be rounded to the nearest integer coordinates and the corresponding pixel can be used (nearest-neighbor interpolation). A better result can be achieved with more sophisticated interpolation methods. See resize() for details.

Converts image transformation maps from one representation to another.
void convertMaps(InputArray map1, InputArray map2, OutputArray dstmap1, OutputArray dstmap2, int dstmap1type, bool nninterpolation=false)
cv2.convertMaps(map1, map2, dstmap1type[, dstmap1[, dstmap2[, nninterpolation]]]) → dstmap1, dstmap2
The function converts a pair of maps for remap() from one representation to another. The following options ((map1.type(), map2.type()) → (dstmap1.type(), dstmap2.type())) are supported:

- (CV_32FC1, CV_32FC1) → (CV_16SC2, CV_16UC1). This is the most frequently used conversion operation, in which the original floating-point maps (see remap()) are converted to a more compact and much faster fixed-point representation. The first output array contains the rounded coordinates and the second array (created only when nninterpolation=false) contains indices in the interpolation tables.

- (CV_32FC2) → (CV_16SC2, CV_16UC1). The same as above but the original maps are stored in one 2-channel matrix.

- Reverse conversion. Obviously, the reconstructed floating-point maps will not be exactly the same as the originals.

See also remap(), undistort(), initUndistortRectifyMap()
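For illustration, a minimal sketch of the float-to-fixed-point conversion (the map contents, a horizontal mirror, are arbitrary; src and dst are assumed to be existing Mat images):

// Separate CV_32FC1 maps that mirror the image horizontally.
Mat mapX(src.size(), CV_32FC1), mapY(src.size(), CV_32FC1);
for (int y = 0; y < src.rows; y++)
    for (int x = 0; x < src.cols; x++)
    {
        mapX.at<float>(y, x) = (float)(src.cols - 1 - x);
        mapY.at<float>(y, x) = (float)y;
    }
// Convert to the compact fixed-point representation ...
Mat fixedMap1, fixedMap2;
convertMaps(mapX, mapY, fixedMap1, fixedMap2, CV_16SC2, false);
// ... and remap with the converted maps.
remap(src, dst, fixedMap1, fixedMap2, INTER_LINEAR);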
Calculates an affine transform from three pairs of the corresponding points.
Mat getAffineTransform(InputArray src, InputArray dst)
Mat getAffineTransform(const Point2f src[], const Point2f dst[])
cv2.getAffineTransform(src, dst) → retval
CvMat* cvGetAffineTransform(const CvPoint2D32f* src, const CvPoint2D32f* dst, CvMat* map_matrix)
cv.GetAffineTransform(src, dst, mapMatrix) → None
The function calculates the 2×3 matrix of an affine transform so that:

\[ \begin{bmatrix} x'_i \\ y'_i \end{bmatrix} = \texttt{map\_matrix} \cdot \begin{bmatrix} x_i \\ y_i \\ 1 \end{bmatrix} \]

where

\[ \texttt{dst}(i) = (x'_i, y'_i), \quad \texttt{src}(i) = (x_i, y_i), \quad i = 0, 1, 2 \]
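A minimal usage sketch (the three point pairs are made up for illustration; src and dst are assumed to be existing images):

// Three pairs of corresponding points define the affine transform.
Point2f srcTri[] = { Point2f(0, 0), Point2f(1, 0), Point2f(0, 1) };
Point2f dstTri[] = { Point2f(10, 10), Point2f(20, 10), Point2f(10, 30) };
Mat warpMat = getAffineTransform(srcTri, dstTri); // 2x3 matrix of type CV_64F
warpAffine(src, dst, warpMat, src.size());        // apply it to an image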
See also warpAffine(), transform()
Calculates a perspective transform from four pairs of the corresponding points.
Mat getPerspectiveTransform(InputArray src, InputArray dst)
Mat getPerspectiveTransform(const Point2f src[], const Point2f dst[])
cv2.getPerspectiveTransform(src, dst) → retval
CvMat* cvGetPerspectiveTransform(const CvPoint2D32f* src, const CvPoint2D32f* dst, CvMat* map_matrix)
cv.GetPerspectiveTransform(src, dst, mapMatrix) → None
The function calculates the 3×3 matrix of a perspective transform so that:

\[ \begin{bmatrix} t_i x'_i \\ t_i y'_i \\ t_i \end{bmatrix} = \texttt{map\_matrix} \cdot \begin{bmatrix} x_i \\ y_i \\ 1 \end{bmatrix} \]

where

\[ \texttt{dst}(i) = (x'_i, y'_i), \quad \texttt{src}(i) = (x_i, y_i), \quad i = 0, 1, 2, 3 \]
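A minimal usage sketch (the four point pairs are made up for illustration):

// Four pairs of corresponding points define the perspective transform.
Point2f srcQuad[] = { Point2f(0, 0),     Point2f(100, 0),
                      Point2f(100, 100), Point2f(0, 100) };
Point2f dstQuad[] = { Point2f(10, 5),    Point2f(90, 0),
                      Point2f(95, 100),  Point2f(0, 95) };
Mat perspMat = getPerspectiveTransform(srcQuad, dstQuad); // 3x3, CV_64F
warpPerspective(src, dst, perspMat, src.size());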
Retrieves a pixel rectangle from an image with sub-pixel accuracy.
void getRectSubPix(InputArray image, Size patchSize, Point2f center, OutputArray patch, int patchType=-1)
cv2.getRectSubPix(image, patchSize, center[, patch[, patchType]]) → patch
void cvGetRectSubPix(const CvArr* src, CvArr* dst, CvPoint2D32f center)
cv.GetRectSubPix(src, dst, center) → None
The function getRectSubPix extracts pixels from src:

\[ \texttt{dst}(x, y) = \texttt{src}(x + \texttt{center.x} - (\texttt{dst.cols} - 1) \cdot 0.5, \; y + \texttt{center.y} - (\texttt{dst.rows} - 1) \cdot 0.5) \]
where the values of the pixels at non-integer coordinates are retrieved using bilinear interpolation. Every channel of multi-channel images is processed independently. While the center of the rectangle must be inside the image, parts of the rectangle may be outside. In this case, the replication border mode (see borderInterpolate()) is used to extrapolate the pixel values outside of the image.
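For example, to cut a 21x21 patch around a sub-pixel location (the coordinates are arbitrary):

// Extract a 21x21 patch centered at a non-integer position;
// bilinear interpolation handles the fractional offsets.
Mat patch;
getRectSubPix(image, Size(21, 21), Point2f(100.3f, 64.7f), patch);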
See also warpAffine(), warpPerspective()
Calculates an affine matrix of 2D rotation.
Mat getRotationMatrix2D(Point2f center, double angle, double scale)
cv2.getRotationMatrix2D(center, angle, scale) → retval
CvMat* cv2DRotationMatrix(CvPoint2D32f center, double angle, double scale, CvMat* map_matrix)
cv.GetRotationMatrix2D(center, angle, scale, mapMatrix) → None
The function calculates the following matrix:

\[ \begin{bmatrix} \alpha & \beta & (1 - \alpha) \cdot \texttt{center.x} - \beta \cdot \texttt{center.y} \\ -\beta & \alpha & \beta \cdot \texttt{center.x} + (1 - \alpha) \cdot \texttt{center.y} \end{bmatrix} \]

where

\[ \alpha = \texttt{scale} \cdot \cos(\texttt{angle}), \quad \beta = \texttt{scale} \cdot \sin(\texttt{angle}) \]
The transformation maps the rotation center to itself. If this is not the target, adjust the shift.
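For example, to rotate an image by 30 degrees around its center without scaling (a sketch; src and dst are assumed to be existing images):

// The matrix maps the center to itself, so the result stays centered.
Point2f center(src.cols * 0.5f, src.rows * 0.5f);
Mat rot = getRotationMatrix2D(center, 30.0, 1.0);
warpAffine(src, dst, rot, src.size());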
See also getAffineTransform(), warpAffine(), transform()
Inverts an affine transformation.
void invertAffineTransform(InputArray M, OutputArray iM)
cv2.invertAffineTransform(M[, iM]) → iM
The function computes an inverse affine transformation represented by a 2×3 matrix M:

\[ \begin{bmatrix} a_{11} & a_{12} & b_1 \\ a_{21} & a_{22} & b_2 \end{bmatrix} \]

The result is also a 2×3 matrix of the same type as M.
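A short sketch (the input matrix here comes from getRotationMatrix2D(), but any 2x3 affine matrix works):

// Invert a forward affine transform, e.g. to map points back to the source.
Mat M = getRotationMatrix2D(Point2f(0, 0), 45.0, 2.0);
Mat iM;
invertAffineTransform(M, iM); // iM is 2x3 and of the same type as M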
Remaps an image to polar space.
void cvLinearPolar(const CvArr* src, CvArr* dst, CvPoint2D32f center, double maxRadius, int flags=CV_INTER_LINEAR+CV_WARP_FILL_OUTLIERS)
The function cvLinearPolar transforms the source image using the following transformation:

Forward transformation (CV_WARP_INVERSE_MAP is not set):

\[ \texttt{dst}(\phi, \rho) = \texttt{src}(x, y) \]

Inverse transformation (CV_WARP_INVERSE_MAP is set):

\[ \texttt{dst}(x, y) = \texttt{src}(\phi, \rho) \]

where

\[ \rho = \frac{\texttt{src.width}}{\texttt{maxRadius}} \cdot \sqrt{x^2 + y^2} \]

and

\[ \phi = \operatorname{atan}(y / x) \]

Note: cvCartToPolar() is used internally; thus, angles are measured from 0 to 360 degrees with an accuracy of about 0.3 degrees.

Remaps an image to log-polar space.
void cvLogPolar(const CvArr* src, CvArr* dst, CvPoint2D32f center, double M, int flags=CV_INTER_LINEAR+CV_WARP_FILL_OUTLIERS)
cv.LogPolar(src, dst, center, M, flags=CV_INTER_LINEAR+CV_WARP_FILL_OUTLIERS) → None
The function cvLogPolar transforms the source image using the following transformation:

Forward transformation (CV_WARP_INVERSE_MAP is not set):

\[ \texttt{dst}(\phi, \rho) = \texttt{src}(x, y) \]

Inverse transformation (CV_WARP_INVERSE_MAP is set):

\[ \texttt{dst}(x, y) = \texttt{src}(\phi, \rho) \]

where

\[ \rho = M \cdot \log{\sqrt{x^2 + y^2}} \]

and

\[ \phi = \operatorname{atan}(y / x) \]
The function emulates the human “foveal” vision and can be used for fast scale and rotation-invariant template matching, for object tracking and so forth.
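A minimal sketch using the legacy C API (the file name and the scale M = 40 are arbitrary):

IplImage* src = cvLoadImage("image.jpg", CV_LOAD_IMAGE_COLOR);
IplImage* dst = cvCreateImage(cvGetSize(src), src->depth, src->nChannels);
// Map to log-polar space around the image center with scale M = 40.
cvLogPolar(src, dst, cvPoint2D32f(src->width * 0.5f, src->height * 0.5f),
           40.0, CV_INTER_LINEAR + CV_WARP_FILL_OUTLIERS);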
Note: cvCartToPolar() is used internally; thus, angles are measured from 0 to 360 degrees with an accuracy of about 0.3 degrees.

Applies a generic geometrical transformation to an image.
void remap(InputArray src, OutputArray dst, InputArray map1, InputArray map2, int interpolation, int borderMode=BORDER_CONSTANT, const Scalar& borderValue=Scalar())
cv2.remap(src, map1, map2, interpolation[, dst[, borderMode[, borderValue]]]) → dst
void cvRemap(const CvArr* src, CvArr* dst, const CvArr* mapx, const CvArr* mapy, int flags=CV_INTER_LINEAR+CV_WARP_FILL_OUTLIERS, CvScalar fillval=cvScalarAll(0))
cv.Remap(src, dst, mapx, mapy, flags=CV_INTER_LINEAR+CV_WARP_FILL_OUTLIERS, fillval=(0, 0, 0, 0)) → None
The function remap transforms the source image using the specified map:

\[ \texttt{dst}(x, y) = \texttt{src}(map_x(x, y), map_y(x, y)) \]

where values of pixels with non-integer coordinates are computed using one of the available interpolation methods. \(map_x\) and \(map_y\) can be encoded as separate floating-point maps in \(map_1\) and \(map_2\) respectively, or as interleaved floating-point maps of \((x, y)\) in \(map_1\), or as fixed-point maps created by using convertMaps(). The reason you might want to convert from floating to fixed-point representations of a map is that they can yield much faster (~2x) remapping operations. In the converted case, \(map_1\) contains pairs (cvFloor(x), cvFloor(y)) and \(map_2\) contains indices in a table of interpolation coefficients.
This function cannot operate in-place.
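For illustration, a sketch that builds floating-point maps for a wave-like distortion (the sine parameters are arbitrary; src and dst are assumed to be existing images):

// Shift each row horizontally by a sine offset; identity in y.
Mat mapX(src.size(), CV_32FC1), mapY(src.size(), CV_32FC1);
for (int y = 0; y < src.rows; y++)
    for (int x = 0; x < src.cols; x++)
    {
        mapX.at<float>(y, x) = (float)(x + 10.0 * sin(y / 20.0));
        mapY.at<float>(y, x) = (float)y;
    }
remap(src, dst, mapX, mapY, INTER_LINEAR, BORDER_CONSTANT, Scalar(0));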
Resizes an image.
void resize(InputArray src, OutputArray dst, Size dsize, double fx=0, double fy=0, int interpolation=INTER_LINEAR)
cv2.resize(src, dsize[, dst[, fx[, fy[, interpolation]]]]) → dst
void cvResize(const CvArr* src, CvArr* dst, int interpolation=CV_INTER_LINEAR)
cv.Resize(src, dst, interpolation=CV_INTER_LINEAR) → None
The function resize resizes the image src down to or up to the specified size. Note that the initial dst type or size are not taken into account. Instead, the size and type are derived from src, dsize, fx, and fy. If you want to resize src so that it fits the pre-created dst, you may call the function as follows:
// explicitly specify dsize=dst.size(); fx and fy will be computed from that.
resize(src, dst, dst.size(), 0, 0, interpolation);
If you want to decimate the image by factor of 2 in each direction, you can call the function this way:
// specify fx and fy and let the function compute the destination image size.
resize(src, dst, Size(), 0.5, 0.5, interpolation);
To shrink an image, it will generally look best with CV_INTER_AREA interpolation, whereas to enlarge an image, it will generally look best with CV_INTER_CUBIC (slow) or CV_INTER_LINEAR (faster but still looks OK).
See also warpAffine(), warpPerspective(), remap()
Applies an affine transformation to an image.
void warpAffine(InputArray src, OutputArray dst, InputArray M, Size dsize, int flags=INTER_LINEAR, int borderMode=BORDER_CONSTANT, const Scalar& borderValue=Scalar())
cv2.warpAffine(src, M, dsize[, dst[, flags[, borderMode[, borderValue]]]]) → dst
void cvWarpAffine(const CvArr* src, CvArr* dst, const CvMat* map_matrix, int flags=CV_INTER_LINEAR+CV_WARP_FILL_OUTLIERS, CvScalar fillval=cvScalarAll(0))
cv.WarpAffine(src, dst, mapMatrix, flags=CV_INTER_LINEAR+CV_WARP_FILL_OUTLIERS, fillval=(0, 0, 0, 0)) → None
void cvGetQuadrangleSubPix(const CvArr* src, CvArr* dst, const CvMat* map_matrix)
cv.GetQuadrangleSubPix(src, dst, mapMatrix) → None
The function warpAffine transforms the source image using the specified matrix:

\[ \texttt{dst}(x, y) = \texttt{src}(M_{11} x + M_{12} y + M_{13}, \; M_{21} x + M_{22} y + M_{23}) \]

when the flag WARP_INVERSE_MAP is set. Otherwise, the transformation is first inverted with invertAffineTransform() and then put in the formula above instead of M.
The function cannot operate in-place.
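For example, when the matrix already maps destination coordinates to source coordinates, the inversion step can be skipped by setting the flag (M and dsize are assumed to be defined):

// M is interpreted as the dst->src mapping, so no inversion is performed.
warpAffine(src, dst, M, dsize, INTER_LINEAR | WARP_INVERSE_MAP);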
See also warpPerspective(), resize(), remap(), getRectSubPix(), transform()
Note: cvGetQuadrangleSubPix is similar to cvWarpAffine, but the outliers are extrapolated using replication border mode.
Applies a perspective transformation to an image.
void warpPerspective(InputArray src, OutputArray dst, InputArray M, Size dsize, int flags=INTER_LINEAR, int borderMode=BORDER_CONSTANT, const Scalar& borderValue=Scalar())
cv2.warpPerspective(src, M, dsize[, dst[, flags[, borderMode[, borderValue]]]]) → dst
void cvWarpPerspective(const CvArr* src, CvArr* dst, const CvMat* map_matrix, int flags=CV_INTER_LINEAR+CV_WARP_FILL_OUTLIERS, CvScalar fillval=cvScalarAll(0))
cv.WarpPerspective(src, dst, mapMatrix, flags=CV_INTER_LINEAR+CV_WARP_FILL_OUTLIERS, fillval=(0, 0, 0, 0)) → None
The function warpPerspective transforms the source image using the specified matrix:

\[ \texttt{dst}(x, y) = \texttt{src}\left( \frac{M_{11} x + M_{12} y + M_{13}}{M_{31} x + M_{32} y + M_{33}}, \; \frac{M_{21} x + M_{22} y + M_{23}}{M_{31} x + M_{32} y + M_{33}} \right) \]

when the flag WARP_INVERSE_MAP is set. Otherwise, the transformation is first inverted with invert() and then put in the formula above instead of M.
The function cannot operate in-place.
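A short sketch (H is assumed to be a 3x3 homography, for example from getPerspectiveTransform(); the output size and border color are arbitrary):

// Pixels that map outside src are filled with the gray border value.
warpPerspective(src, dst, H, Size(640, 480),
                INTER_LINEAR, BORDER_CONSTANT, Scalar(128, 128, 128));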
See also warpAffine(), resize(), remap(), getRectSubPix(), perspectiveTransform()
Computes the undistortion and rectification transformation map.
void initUndistortRectifyMap(InputArray cameraMatrix, InputArray distCoeffs, InputArray R, InputArray newCameraMatrix, Size size, int m1type, OutputArray map1, OutputArray map2)
cv2.initUndistortRectifyMap(cameraMatrix, distCoeffs, R, newCameraMatrix, size, m1type[, map1[, map2]]) → map1, map2
void cvInitUndistortRectifyMap(const CvMat* camera_matrix, const CvMat* dist_coeffs, const CvMat* R, const CvMat* new_camera_matrix, CvArr* mapx, CvArr* mapy)
void cvInitUndistortMap(const CvMat* camera_matrix, const CvMat* distortion_coeffs, CvArr* mapx, CvArr* mapy)
cv.InitUndistortRectifyMap(cameraMatrix, distCoeffs, R, newCameraMatrix, map1, map2) → None
cv.InitUndistortMap(cameraMatrix, distCoeffs, map1, map2) → None
The function computes the joint undistortion and rectification transformation and represents the result in the form of maps for remap(). The undistorted image looks like the original, as if it were captured with a camera using the camera matrix = newCameraMatrix and zero distortion. In case of a monocular camera, newCameraMatrix is usually equal to cameraMatrix, or it can be computed by getOptimalNewCameraMatrix() for a better control over scaling. In case of a stereo camera, newCameraMatrix is normally set to P1 or P2 computed by stereoRectify().

Also, this new camera is oriented differently in the coordinate space, according to R. That, for example, helps to align two heads of a stereo camera so that the epipolar lines on both images become horizontal and have the same y-coordinate (in case of a horizontally aligned stereo camera).
The function actually builds the maps for the inverse mapping algorithm that is used by remap(). That is, for each pixel \((u, v)\) in the destination (corrected and rectified) image, the function computes the corresponding coordinates in the source image (that is, in the original image from the camera). The following process is applied:

\[
\begin{array}{l}
x \leftarrow (u - c'_x) / f'_x \\
y \leftarrow (v - c'_y) / f'_y \\
[X \; Y \; W]^T \leftarrow R^{-1} \cdot [x \; y \; 1]^T \\
x' \leftarrow X / W \\
y' \leftarrow Y / W \\
x'' \leftarrow x' (1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + 2 p_1 x' y' + p_2 (r^2 + 2 x'^2) \\
y'' \leftarrow y' (1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + p_1 (r^2 + 2 y'^2) + 2 p_2 x' y' \\
\texttt{map}_x(u, v) \leftarrow x'' f_x + c_x \\
\texttt{map}_y(u, v) \leftarrow y'' f_y + c_y
\end{array}
\]

where \((k_1, k_2, p_1, p_2[, k_3])\) are the distortion coefficients.
In case of a stereo camera, this function is called twice: once for each camera head, after stereoRectify(), which in its turn is called after stereoCalibrate(). But if the stereo camera was not calibrated, it is still possible to compute the rectification transformations directly from the fundamental matrix using stereoRectifyUncalibrated(). For each camera, the function computes homography H as the rectification transformation in a pixel domain, not a rotation matrix R in 3D space. R can be computed from H as

\[ \texttt{R} = \texttt{cameraMatrix}^{-1} \cdot \texttt{H} \cdot \texttt{cameraMatrix} \]

where cameraMatrix can be chosen arbitrarily.
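A typical monocular sketch (cameraMatrix, distCoeffs, and imageSize are assumed to come from calibrateCamera()): precompute the maps once, then remap every frame.

// An empty matrix for R means identity (no rectification), and
// newCameraMatrix = cameraMatrix keeps the original scaling.
Mat map1, map2;
initUndistortRectifyMap(cameraMatrix, distCoeffs, Mat(),
                        cameraMatrix, imageSize, CV_16SC2, map1, map2);
// Per-frame undistortion is then just a cheap remap.
remap(frame, undistorted, map1, map2, INTER_LINEAR);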
Returns the default new camera matrix.
Mat getDefaultNewCameraMatrix(InputArray cameraMatrix, Size imgsize=Size(), bool centerPrincipalPoint=false)
cv2.getDefaultNewCameraMatrix(cameraMatrix[, imgsize[, centerPrincipalPoint]]) → retval
The function returns the camera matrix that is either an exact copy of the input cameraMatrix (when centerPrincipalPoint=false), or the modified one (when centerPrincipalPoint=true).

In the latter case, the new camera matrix will be:

\[ \begin{bmatrix} f_x & 0 & (\texttt{imgSize.width} - 1) \cdot 0.5 \\ 0 & f_y & (\texttt{imgSize.height} - 1) \cdot 0.5 \\ 0 & 0 & 1 \end{bmatrix} \]

where \(f_x\) and \(f_y\) are the \((0, 0)\) and \((1, 1)\) elements of cameraMatrix, respectively.
By default, the undistortion functions in OpenCV (see initUndistortRectifyMap(), undistort()) do not move the principal point. However, when you work with stereo, it is important to move the principal points in both views to the same y-coordinate (which is required by most of the stereo correspondence algorithms), and possibly to the same x-coordinate too. So, you can form the new camera matrix for each view where the principal points are located at the center.
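A short sketch of centering the principal point before undistortion (cameraMatrix, distCoeffs, and imgSize are assumed known):

// Build a camera matrix with the principal point at the image center
// and use it as newCameraMatrix for undistortion.
Mat newCam = getDefaultNewCameraMatrix(cameraMatrix, imgSize, true);
undistort(src, dst, cameraMatrix, distCoeffs, newCam);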
Transforms an image to compensate for lens distortion.
void undistort(InputArray src, OutputArray dst, InputArray cameraMatrix, InputArray distCoeffs, InputArray newCameraMatrix=noArray())
cv2.undistort(src, cameraMatrix, distCoeffs[, dst[, newCameraMatrix]]) → dst
void cvUndistort2(const CvArr* src, CvArr* dst, const CvMat* camera_matrix, const CvMat* distortion_coeffs, const CvMat* new_camera_matrix=0)
cv.Undistort2(src, dst, cameraMatrix, distCoeffs) → None
The function transforms an image to compensate for radial and tangential lens distortion.

The function is simply a combination of initUndistortRectifyMap() (with unity R) and remap() (with bilinear interpolation). See the former function for details of the transformation being performed.

Those pixels in the destination image, for which there are no corresponding pixels in the source image, are filled with zeros (black color).
A particular subset of the source image that will be visible in the corrected image can be regulated by newCameraMatrix. You can use getOptimalNewCameraMatrix() to compute the appropriate newCameraMatrix depending on your requirements.
The camera matrix and the distortion parameters can be determined using calibrateCamera(). If the resolution of images is different from the resolution used at the calibration stage, \(f_x\), \(f_y\), \(c_x\), and \(c_y\) need to be scaled accordingly, while the distortion coefficients remain the same.
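For instance, if the working resolution differs from the calibration resolution, the focal lengths and the principal point can be scaled first (a sketch; calibSize and imageSize are assumed known, and cameraMatrix is assumed to be CV_64F as returned by calibrateCamera()):

// Scale fx, fy, cx, cy; the distortion coefficients stay unchanged.
double sx = (double)imageSize.width / calibSize.width;
double sy = (double)imageSize.height / calibSize.height;
Mat scaledCam = cameraMatrix.clone();
scaledCam.at<double>(0, 0) *= sx; // fx
scaledCam.at<double>(1, 1) *= sy; // fy
scaledCam.at<double>(0, 2) *= sx; // cx
scaledCam.at<double>(1, 2) *= sy; // cy
undistort(src, dst, scaledCam, distCoeffs);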
Computes the ideal point coordinates from the observed point coordinates.
void undistortPoints(InputArray src, OutputArray dst, InputArray cameraMatrix, InputArray distCoeffs, InputArray R=noArray(), InputArray P=noArray())
cv2.undistortPoints(src, cameraMatrix, distCoeffs[, dst[, R[, P]]]) → dst
void cvUndistortPoints(const CvMat* src, CvMat* dst, const CvMat* camera_matrix, const CvMat* dist_coeffs, const CvMat* R=0, const CvMat* P=0)
cv.UndistortPoints(src, dst, cameraMatrix, distCoeffs, R=None, P=None) → None
The function is similar to undistort() and initUndistortRectifyMap() but it operates on a sparse set of points instead of a raster image. Also, the function performs a reverse transformation to projectPoints(). In case of a 3D object, it does not reconstruct its 3D coordinates, but for a planar object, it does, up to a translation vector, if the proper R is specified.
// (u,v) is the input point, (u', v') is the output point
// camera_matrix=[fx 0 cx; 0 fy cy; 0 0 1]
// P=[fx' 0 cx' tx; 0 fy' cy' ty; 0 0 1 tz]
x" = (u - cx)/fx
y" = (v - cy)/fy
(x',y') = undistort(x",y",dist_coeffs)
[X,Y,W]T = R*[x' y' 1]T
x = X/W, y = Y/W
// only performed if P=[fx' 0 cx' [tx]; 0 fy' cy' [ty]; 0 0 1 [tz]] is specified
u' = x*fx' + cx'
v' = y*fy' + cy',
where undistort()
is an approximate iterative algorithm that estimates the normalized original point coordinates out of the normalized distorted point coordinates (“normalized” means that the coordinates do not depend on the camera matrix).
The function can be used for both a stereo camera head and a monocular camera (when R is empty).
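A short sketch (the pixel coordinate is arbitrary; cameraMatrix and distCoeffs are assumed to come from calibration):

// With R and P empty, the output is in normalized coordinates.
vector<Point2f> distorted(1, Point2f(320.5f, 240.25f)), ideal;
undistortPoints(distorted, ideal, cameraMatrix, distCoeffs);
// Pass P = cameraMatrix to get the result back in pixel coordinates.
undistortPoints(distorted, ideal, cameraMatrix, distCoeffs,
                noArray(), cameraMatrix);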