Geometric Image Transformations

Enumerations

enum  cv::InterpolationFlags {
cv::INTER_NEAREST = 0,
cv::INTER_LINEAR = 1,
cv::INTER_CUBIC = 2,
cv::INTER_AREA = 3,
cv::INTER_LANCZOS4 = 4,
cv::INTER_LINEAR_EXACT = 5,
cv::INTER_MAX = 7,
cv::WARP_FILL_OUTLIERS = 8,
cv::WARP_INVERSE_MAP = 16
}
interpolation algorithm More...

enum  cv::InterpolationMasks {
cv::INTER_BITS = 5,
cv::INTER_BITS2 = INTER_BITS * 2,
cv::INTER_TAB_SIZE = 1 << INTER_BITS,
cv::INTER_TAB_SIZE2 = INTER_TAB_SIZE * INTER_TAB_SIZE
}

enum  cv::WarpPolarMode {
cv::WARP_POLAR_LINEAR = 0,
cv::WARP_POLAR_LOG = 256
}
Specify the polar mapping mode. More...

Functions

void cv::convertMaps (InputArray map1, InputArray map2, OutputArray dstmap1, OutputArray dstmap2, int dstmap1type, bool nninterpolation=false)
Converts image transformation maps from one representation to another. More...

Mat cv::getAffineTransform (const Point2f src[], const Point2f dst[])
Calculates an affine transform from three pairs of the corresponding points. More...

Mat cv::getAffineTransform (InputArray src, InputArray dst)

Mat cv::getPerspectiveTransform (InputArray src, InputArray dst, int solveMethod=DECOMP_LU)
Calculates a perspective transform from four pairs of the corresponding points. More...

Mat cv::getPerspectiveTransform (const Point2f src[], const Point2f dst[], int solveMethod=DECOMP_LU)

void cv::getRectSubPix (InputArray image, Size patchSize, Point2f center, OutputArray patch, int patchType=-1)
Retrieves a pixel rectangle from an image with sub-pixel accuracy. More...

Mat cv::getRotationMatrix2D (Point2f center, double angle, double scale)
Calculates an affine matrix of 2D rotation. More...

Matx23d cv::getRotationMatrix2D_ (Point2f center, double angle, double scale)

void cv::invertAffineTransform (InputArray M, OutputArray iM)
Inverts an affine transformation. More...

void cv::linearPolar (InputArray src, OutputArray dst, Point2f center, double maxRadius, int flags)
Remaps an image to polar coordinates space. More...

void cv::logPolar (InputArray src, OutputArray dst, Point2f center, double M, int flags)
Remaps an image to semilog-polar coordinates space. More...

void cv::remap (InputArray src, OutputArray dst, InputArray map1, InputArray map2, int interpolation, int borderMode=BORDER_CONSTANT, const Scalar &borderValue=Scalar())
Applies a generic geometrical transformation to an image. More...

void cv::resize (InputArray src, OutputArray dst, Size dsize, double fx=0, double fy=0, int interpolation=INTER_LINEAR)
Resizes an image. More...

void cv::warpAffine (InputArray src, OutputArray dst, InputArray M, Size dsize, int flags=INTER_LINEAR, int borderMode=BORDER_CONSTANT, const Scalar &borderValue=Scalar())
Applies an affine transformation to an image. More...

void cv::warpPerspective (InputArray src, OutputArray dst, InputArray M, Size dsize, int flags=INTER_LINEAR, int borderMode=BORDER_CONSTANT, const Scalar &borderValue=Scalar())
Applies a perspective transformation to an image. More...

void cv::warpPolar (InputArray src, OutputArray dst, Size dsize, Point2f center, double maxRadius, int flags)
Remaps an image to polar or semilog-polar coordinates space. More...

Detailed Description

The functions in this section perform various geometrical transformations of 2D images. They do not change the image content but deform the pixel grid and map this deformed grid to the destination image. In fact, to avoid sampling artifacts, the mapping is done in the reverse order, from destination to the source. That is, for each pixel $$(x, y)$$ of the destination image, the functions compute coordinates of the corresponding "donor" pixel in the source image and copy the pixel value:

$\texttt{dst} (x,y)= \texttt{src} (f_x(x,y), f_y(x,y))$

In case when you specify the forward mapping $$\left<g_x, g_y\right>: \texttt{src} \rightarrow \texttt{dst}$$, the OpenCV functions first compute the corresponding inverse mapping $$\left<f_x, f_y\right>: \texttt{dst} \rightarrow \texttt{src}$$ and then use the above formula.

The actual implementations of the geometrical transformations, from the most generic remap to the simplest and fastest resize, need to solve two main problems with the above formula:

• Extrapolation of non-existing pixels. Similarly to the filtering functions described in the previous section, for some $$(x,y)$$, either one of $$f_x(x,y)$$, or $$f_y(x,y)$$, or both of them may fall outside of the image. In this case, an extrapolation method needs to be used. OpenCV provides the same selection of extrapolation methods as in the filtering functions. In addition, it provides the method BORDER_TRANSPARENT. This means that the corresponding pixels in the destination image will not be modified at all.
• Interpolation of pixel values. Usually $$f_x(x,y)$$ and $$f_y(x,y)$$ are floating-point numbers. This means that $$\left<f_x, f_y\right>$$ can be either an affine or perspective transformation, or radial lens distortion correction, and so on. So, a pixel value at fractional coordinates needs to be retrieved. In the simplest case, the coordinates can be just rounded to the nearest integer coordinates and the corresponding pixel can be used. This is called a nearest-neighbor interpolation. However, a better result can be achieved by using more sophisticated interpolation methods , where a polynomial function is fit into some neighborhood of the computed pixel $$(f_x(x,y), f_y(x,y))$$, and then the value of the polynomial at $$(f_x(x,y), f_y(x,y))$$ is taken as the interpolated pixel value. In OpenCV, you can choose between several interpolation methods. See resize for details.
Note
The geometrical transformations do not work with CV_8S or CV_32S images.
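
For illustration, the following minimal sketch (assuming src is an already loaded BGR image; the variable names are illustrative) expresses a horizontal flip as a destination-to-source map and applies it with remap:

// Build explicit maps: for each destination pixel (x, y), store the
// coordinates of the "donor" pixel in the source image.
cv::Mat map_x(src.size(), CV_32FC1), map_y(src.size(), CV_32FC1);
for (int y = 0; y < src.rows; y++)
{
    for (int x = 0; x < src.cols; x++)
    {
        map_x.at<float>(y, x) = (float)(src.cols - 1 - x); // mirror along x
        map_y.at<float>(y, x) = (float)y;                  // keep y unchanged
    }
}
cv::Mat dst;
cv::remap(src, dst, map_x, map_y, cv::INTER_LINEAR);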

◆ InterpolationFlags

 enum cv::InterpolationFlags

#include <opencv2/imgproc.hpp>

interpolation algorithm

Enumerator
INTER_NEAREST
Python: cv.INTER_NEAREST

nearest neighbor interpolation

INTER_LINEAR
Python: cv.INTER_LINEAR

bilinear interpolation

INTER_CUBIC
Python: cv.INTER_CUBIC

bicubic interpolation

INTER_AREA
Python: cv.INTER_AREA

resampling using pixel area relation. It may be a preferred method for image decimation, as it gives moiré-free results. But when the image is zoomed, it is similar to the INTER_NEAREST method.

INTER_LANCZOS4
Python: cv.INTER_LANCZOS4

Lanczos interpolation over 8x8 neighborhood

INTER_LINEAR_EXACT
Python: cv.INTER_LINEAR_EXACT

Bit exact bilinear interpolation

INTER_MAX
Python: cv.INTER_MAX

mask for interpolation codes

WARP_FILL_OUTLIERS
Python: cv.WARP_FILL_OUTLIERS

flag, fills all of the destination image pixels. If some of them correspond to outliers in the source image, they are set to zero

WARP_INVERSE_MAP
Python: cv.WARP_INVERSE_MAP

flag, inverse transformation

For example, linearPolar or logPolar transforms:

• flag is not set: $$dst( \rho , \phi ) = src(x,y)$$
• flag is set: $$dst(x,y) = src( \rho , \phi )$$

◆ InterpolationMasks

 enum cv::InterpolationMasks

#include <opencv2/imgproc.hpp>

Enumerator
INTER_BITS
Python: cv.INTER_BITS
INTER_BITS2
Python: cv.INTER_BITS2
INTER_TAB_SIZE
Python: cv.INTER_TAB_SIZE
INTER_TAB_SIZE2
Python: cv.INTER_TAB_SIZE2

◆ WarpPolarMode

 enum cv::WarpPolarMode

#include <opencv2/imgproc.hpp>

Specify the polar mapping mode.

See also
warpPolar
Enumerator
WARP_POLAR_LINEAR
Python: cv.WARP_POLAR_LINEAR

Remaps an image to/from polar space.

WARP_POLAR_LOG
Python: cv.WARP_POLAR_LOG

Remaps an image to/from semilog-polar space.

◆ convertMaps()

 void cv::convertMaps ( InputArray map1, InputArray map2, OutputArray dstmap1, OutputArray dstmap2, int dstmap1type, bool nninterpolation = false )
Python:
dstmap1, dstmap2=cv.convertMaps(map1, map2, dstmap1type[, dstmap1[, dstmap2[, nninterpolation]]])

#include <opencv2/imgproc.hpp>

Converts image transformation maps from one representation to another.

The function converts a pair of maps for remap from one representation to another. The following options ( (map1.type(), map2.type()) $$\rightarrow$$ (dstmap1.type(), dstmap2.type()) ) are supported:

• $$\texttt{(CV_32FC1, CV_32FC1)} \rightarrow \texttt{(CV_16SC2, CV_16UC1)}$$. This is the most frequently used conversion operation, in which the original floating-point maps (see remap ) are converted to a more compact and much faster fixed-point representation. The first output array contains the rounded coordinates and the second array (created only when nninterpolation=false ) contains indices in the interpolation tables.
• $$\texttt{(CV_32FC2)} \rightarrow \texttt{(CV_16SC2, CV_16UC1)}$$. The same as above but the original maps are stored in one 2-channel matrix.
• Reverse conversion. Obviously, the reconstructed floating-point maps will not be exactly the same as the originals.
Parameters
map1  The first input map of type CV_16SC2, CV_32FC1, or CV_32FC2.
map2  The second input map of type CV_16UC1, CV_32FC1, or none (empty matrix), respectively.
dstmap1  The first output map that has the type dstmap1type and the same size as src.
dstmap2  The second output map.
dstmap1type  Type of the first output map that should be CV_16SC2, CV_32FC1, or CV_32FC2.
nninterpolation  Flag indicating whether the fixed-point maps are used for the nearest-neighbor or for a more complex interpolation.
See also
remap, undistort, initUndistortRectifyMap
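
As a brief sketch (assuming map_x and map_y are CV_32FC1 maps prepared for remap, such as those built in the Detailed Description example above), the typical fixed-point conversion looks like this:

// Convert the floating-point maps to the compact fixed-point representation.
cv::Mat fixed_xy, interp_table;
cv::convertMaps(map_x, map_y, fixed_xy, interp_table, CV_16SC2, false);
// remap accepts the converted pair directly and typically runs noticeably faster.
cv::Mat dst_fixed;
cv::remap(src, dst_fixed, fixed_xy, interp_table, cv::INTER_LINEAR);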

◆ getAffineTransform() [1/2]

 Mat cv::getAffineTransform ( const Point2f src[], const Point2f dst[] )
Python:
retval=cv.getAffineTransform(src, dst)

#include <opencv2/imgproc.hpp>

Calculates an affine transform from three pairs of the corresponding points.

The function calculates the $$2 \times 3$$ matrix of an affine transform so that:

$\begin{bmatrix} x'_i \\ y'_i \end{bmatrix} = \texttt{map_matrix} \cdot \begin{bmatrix} x_i \\ y_i \\ 1 \end{bmatrix}$

where

$dst(i)=(x'_i,y'_i), src(i)=(x_i, y_i), i=0,1,2$

Parameters
src  Coordinates of triangle vertices in the source image.
dst  Coordinates of the corresponding triangle vertices in the destination image.
See also
warpAffine, transform
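
A minimal sketch (the triangle coordinates are illustrative assumptions, assuming src is a loaded image): estimate the affine map from three point pairs and apply it with warpAffine.

// Map three source corners to three shifted destination corners.
cv::Point2f srcTri[3] = { {0.f, 0.f},
                          {(float)(src.cols - 1), 0.f},
                          {0.f, (float)(src.rows - 1)} };
cv::Point2f dstTri[3] = { {0.f, (float)(src.rows * 0.33)},
                          {(float)(src.cols * 0.85), (float)(src.rows * 0.25)},
                          {(float)(src.cols * 0.15), (float)(src.rows * 0.7)} };
cv::Mat warp_mat = cv::getAffineTransform(srcTri, dstTri); // 2x3, CV_64F
cv::Mat warp_dst;
cv::warpAffine(src, warp_dst, warp_mat, src.size());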

◆ getAffineTransform() [2/2]

 Mat cv::getAffineTransform ( InputArray src, InputArray dst )
Python:
retval=cv.getAffineTransform(src, dst)

#include <opencv2/imgproc.hpp>

◆ getPerspectiveTransform() [1/2]

 Mat cv::getPerspectiveTransform ( InputArray src, InputArray dst, int solveMethod = DECOMP_LU )
Python:
retval=cv.getPerspectiveTransform(src, dst[, solveMethod])

#include <opencv2/imgproc.hpp>

Calculates a perspective transform from four pairs of the corresponding points.

The function calculates the $$3 \times 3$$ matrix of a perspective transform so that:

$\begin{bmatrix} t_i x'_i \\ t_i y'_i \\ t_i \end{bmatrix} = \texttt{map_matrix} \cdot \begin{bmatrix} x_i \\ y_i \\ 1 \end{bmatrix}$

where

$dst(i)=(x'_i,y'_i), src(i)=(x_i, y_i), i=0,1,2,3$

Parameters
src  Coordinates of quadrangle vertices in the source image.
dst  Coordinates of the corresponding quadrangle vertices in the destination image.
solveMethod  Method passed to cv::solve (DecompTypes).
See also
findHomography, warpPerspective, perspectiveTransform
Examples:
samples/cpp/warpPerspective_demo.cpp, and samples/dnn/text_detection.cpp.
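
A minimal sketch (the quadrangle coordinates are illustrative assumptions, not taken from the samples above): map a quadrangle in src onto a 300x300 axis-aligned rectangle.

// Four corners of a quadrangle in the source and the rectangle they should map to.
cv::Point2f srcQuad[4] = { {56.f, 65.f}, {368.f, 52.f}, {389.f, 390.f}, {28.f, 387.f} };
cv::Point2f dstQuad[4] = { {0.f, 0.f}, {299.f, 0.f}, {299.f, 299.f}, {0.f, 299.f} };
cv::Mat H = cv::getPerspectiveTransform(srcQuad, dstQuad); // 3x3, CV_64F
cv::Mat rectified;
cv::warpPerspective(src, rectified, H, cv::Size(300, 300));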

◆ getPerspectiveTransform() [2/2]

 Mat cv::getPerspectiveTransform ( const Point2f src[], const Point2f dst[], int solveMethod = DECOMP_LU )
Python:
retval=cv.getPerspectiveTransform(src, dst[, solveMethod])

#include <opencv2/imgproc.hpp>

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

◆ getRectSubPix()

 void cv::getRectSubPix ( InputArray image, Size patchSize, Point2f center, OutputArray patch, int patchType = -1 )
Python:
patch=cv.getRectSubPix(image, patchSize, center[, patch[, patchType]])

#include <opencv2/imgproc.hpp>

Retrieves a pixel rectangle from an image with sub-pixel accuracy.

The function getRectSubPix extracts pixels from src:

$patch(x, y) = src(x + \texttt{center.x} - ( \texttt{dst.cols} -1)*0.5, y + \texttt{center.y} - ( \texttt{dst.rows} -1)*0.5)$

where the values of the pixels at non-integer coordinates are retrieved using bilinear interpolation. Every channel of multi-channel images is processed independently. The image must be a single-channel or three-channel image. While the center of the rectangle must be inside the image, parts of the rectangle may be outside.

Parameters
image  Source image.
patchSize  Size of the extracted patch.
center  Floating point coordinates of the center of the extracted rectangle within the source image. The center must be inside the image.
patch  Extracted patch that has the size patchSize and the same number of channels as src.
patchType  Depth of the extracted pixels. By default, they have the same depth as src.
See also
warpAffine, warpPerspective
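
For example, a minimal sketch (assuming src is a sufficiently large single- or three-channel image) that extracts a 21x21 patch centered at a fractional location:

// The center lies between pixels; values are bilinearly interpolated.
cv::Mat patch;
cv::getRectSubPix(src, cv::Size(21, 21), cv::Point2f(100.5f, 64.25f), patch);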

◆ getRotationMatrix2D()

 Mat cv::getRotationMatrix2D ( Point2f center, double angle, double scale )
inline
Python:
retval=cv.getRotationMatrix2D(center, angle, scale)

#include <opencv2/imgproc.hpp>

Calculates an affine matrix of 2D rotation.

The function calculates the following matrix:

$\begin{bmatrix} \alpha & \beta & (1- \alpha ) \cdot \texttt{center.x} - \beta \cdot \texttt{center.y} \\ - \beta & \alpha & \beta \cdot \texttt{center.x} + (1- \alpha ) \cdot \texttt{center.y} \end{bmatrix}$

where

$\begin{array}{l} \alpha = \texttt{scale} \cdot \cos \texttt{angle} , \\ \beta = \texttt{scale} \cdot \sin \texttt{angle} \end{array}$

The transformation maps the rotation center to itself. If this is not the target, adjust the shift.

Parameters
center  Center of the rotation in the source image.
angle  Rotation angle in degrees. Positive values mean counter-clockwise rotation (the coordinate origin is assumed to be the top-left corner).
scale  Isotropic scale factor.
See also
getAffineTransform, warpAffine, transform
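
A minimal sketch (variable names are illustrative): rotate src by 30 degrees counter-clockwise about its center without scaling.

cv::Point2f center(src.cols / 2.0f, src.rows / 2.0f);
cv::Mat rot = cv::getRotationMatrix2D(center, 30.0, 1.0); // 2x3 rotation matrix
cv::Mat rotated;
cv::warpAffine(src, rotated, rot, src.size());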

◆ getRotationMatrix2D_()

 Matx23d cv::getRotationMatrix2D_ ( Point2f center, double angle, double scale )

#include <opencv2/imgproc.hpp>

See also
getRotationMatrix2D

◆ invertAffineTransform()

 void cv::invertAffineTransform ( InputArray M, OutputArray iM )
Python:
iM=cv.invertAffineTransform(M[, iM])

#include <opencv2/imgproc.hpp>

Inverts an affine transformation.

The function computes an inverse affine transformation represented by $$2 \times 3$$ matrix M:

$\begin{bmatrix} a_{11} & a_{12} & b_1 \\ a_{21} & a_{22} & b_2 \end{bmatrix}$

The result is also a $$2 \times 3$$ matrix of the same type as M.

Parameters
M  Original affine transformation.
iM  Output reverse affine transformation.
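
A minimal sketch (assuming rot and rotated come from the getRotationMatrix2D example above): inverting the matrix and warping back recovers the original up to interpolation and border effects, which is equivalent to passing WARP_INVERSE_MAP to warpAffine.

cv::Mat rot_inv;
cv::invertAffineTransform(rot, rot_inv);          // rot is the 2x3 matrix from above
cv::Mat restored;
cv::warpAffine(rotated, restored, rot_inv, src.size());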

◆ linearPolar()

 void cv::linearPolar ( InputArray src, OutputArray dst, Point2f center, double maxRadius, int flags )
Python:
dst=cv.linearPolar(src, center, maxRadius, flags[, dst])
#include <opencv2/imgproc.hpp>

Remaps an image to polar coordinates space.

Deprecated:
This function produces the same result as cv::warpPolar(src, dst, src.size(), center, maxRadius, flags).
Examples:
samples/cpp/polar_transforms.cpp.

◆ logPolar()

 void cv::logPolar ( InputArray src, OutputArray dst, Point2f center, double M, int flags )
Python:
dst=cv.logPolar(src, center, M, flags[, dst])

#include <opencv2/imgproc.hpp>

Remaps an image to semilog-polar coordinates space.

Deprecated:
This function produces the same result as cv::warpPolar(src, dst, src.size(), center, maxRadius, flags + WARP_POLAR_LOG).
Examples:
samples/cpp/polar_transforms.cpp.

◆ remap()

 void cv::remap ( InputArray src, OutputArray dst, InputArray map1, InputArray map2, int interpolation, int borderMode = BORDER_CONSTANT, const Scalar & borderValue = Scalar() )
Python:
dst=cv.remap(src, map1, map2, interpolation[, dst[, borderMode[, borderValue]]])

#include <opencv2/imgproc.hpp>

Applies a generic geometrical transformation to an image.

The function remap transforms the source image using the specified map:

$\texttt{dst} (x,y) = \texttt{src} (map_x(x,y),map_y(x,y))$

where values of pixels with non-integer coordinates are computed using one of available interpolation methods. $$map_x$$ and $$map_y$$ can be encoded as separate floating-point maps in $$map_1$$ and $$map_2$$ respectively, or interleaved floating-point maps of $$(x,y)$$ in $$map_1$$, or fixed-point maps created by using convertMaps. The reason you might want to convert from floating to fixed-point representations of a map is that they can yield much faster (2x) remapping operations. In the converted case, $$map_1$$ contains pairs (cvFloor(x), cvFloor(y)) and $$map_2$$ contains indices in a table of interpolation coefficients.

This function cannot operate in-place.

Parameters
src  Source image.
dst  Destination image. It has the same size as map1 and the same type as src.
map1  The first map of either (x,y) points or just x values having the type CV_16SC2, CV_32FC1, or CV_32FC2. See convertMaps for details on converting a floating point representation to fixed-point for speed.
map2  The second map of y values having the type CV_16UC1, CV_32FC1, or none (empty map if map1 is (x,y) points), respectively.
interpolation  Interpolation method (see InterpolationFlags). The method INTER_AREA is not supported by this function.
borderMode  Pixel extrapolation method (see BorderTypes). When borderMode=BORDER_TRANSPARENT, it means that the pixels in the destination image that correspond to the "outliers" in the source image are not modified by the function.
borderValue  Value used in case of a constant border. By default, it is 0.
Note
Due to current implementation limitations, the size of the input and output images should be less than 32767x32767.
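
As another sketch of a non-trivial map (purely illustrative; src and the wave parameters are assumptions), shift each row sinusoidally to produce a wave distortion; pixels that fall outside the source are filled with the constant border value.

cv::Mat wave_x(src.size(), CV_32FC1), wave_y(src.size(), CV_32FC1);
for (int y = 0; y < src.rows; y++)
{
    for (int x = 0; x < src.cols; x++)
    {
        // Sample up to 10 pixels to the left/right depending on the row.
        wave_x.at<float>(y, x) = x + 10.0f * std::sin(y / 20.0f);
        wave_y.at<float>(y, x) = (float)y;
    }
}
cv::Mat waved;
cv::remap(src, waved, wave_x, wave_y, cv::INTER_LINEAR, cv::BORDER_CONSTANT, cv::Scalar(0, 0, 0));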

◆ resize()

 void cv::resize ( InputArray src, OutputArray dst, Size dsize, double fx = 0, double fy = 0, int interpolation = INTER_LINEAR )
Python:
dst=cv.resize(src, dsize[, dst[, fx[, fy[, interpolation]]]])

#include <opencv2/imgproc.hpp>

Resizes an image.

The function resize resizes the image src down to or up to the specified size. Note that the initial dst type or size are not taken into account. Instead, the size and type are derived from src, dsize, fx, and fy. If you want to resize src so that it fits the pre-created dst, you may call the function as follows:

// explicitly specify dsize=dst.size(); fx and fy will be computed from that.
resize(src, dst, dst.size(), 0, 0, interpolation);

If you want to decimate the image by factor of 2 in each direction, you can call the function this way:

// specify fx and fy and let the function compute the destination image size.
resize(src, dst, Size(), 0.5, 0.5, interpolation);

To shrink an image, it will generally look best with INTER_AREA interpolation, whereas to enlarge an image, it will generally look best with INTER_CUBIC (slow) or INTER_LINEAR (faster but still looks OK).

Parameters
src  Input image.
dst  Output image; it has the size dsize (when it is non-zero) or the size computed from src.size(), fx, and fy; the type of dst is the same as of src.
dsize  Output image size; if it equals zero, it is computed as: $\texttt{dsize = Size(round(fx*src.cols), round(fy*src.rows))}$. Either dsize or both fx and fy must be non-zero.
fx  Scale factor along the horizontal axis; when it equals 0, it is computed as $\texttt{(double)dsize.width/src.cols}$.
fy  Scale factor along the vertical axis; when it equals 0, it is computed as $\texttt{(double)dsize.height/src.rows}$.
interpolation  Interpolation method, see InterpolationFlags.
See also
warpAffine, warpPerspective, remap
Examples:
samples/cpp/image_alignment.cpp, samples/cpp/train_HOG.cpp, samples/dnn/colorization.cpp, samples/dnn/object_detection.cpp, and samples/dnn/segmentation.cpp.

◆ warpAffine()

 void cv::warpAffine ( InputArray src, OutputArray dst, InputArray M, Size dsize, int flags = INTER_LINEAR, int borderMode = BORDER_CONSTANT, const Scalar & borderValue = Scalar() )
Python:
dst=cv.warpAffine(src, M, dsize[, dst[, flags[, borderMode[, borderValue]]]])

#include <opencv2/imgproc.hpp>

Applies an affine transformation to an image.

The function warpAffine transforms the source image using the specified matrix:

$\texttt{dst} (x,y) = \texttt{src} ( \texttt{M} _{11} x + \texttt{M} _{12} y + \texttt{M} _{13}, \texttt{M} _{21} x + \texttt{M} _{22} y + \texttt{M} _{23})$

when the flag WARP_INVERSE_MAP is set. Otherwise, the transformation is first inverted with invertAffineTransform and then put in the formula above instead of M. The function cannot operate in-place.

Parameters
src  Input image.
dst  Output image that has the size dsize and the same type as src.
M  $$2\times 3$$ transformation matrix.
dsize  Size of the output image.
flags  Combination of interpolation methods (see InterpolationFlags) and the optional flag WARP_INVERSE_MAP that means that M is the inverse transformation ( $$\texttt{dst}\rightarrow\texttt{src}$$ ).
borderMode  Pixel extrapolation method (see BorderTypes); when borderMode=BORDER_TRANSPARENT, it means that the pixels in the destination image corresponding to the "outliers" in the source image are not modified by the function.
borderValue  Value used in case of a constant border; by default, it is 0.
See also
warpPerspective, resize, remap, getRectSubPix, transform
Examples:
samples/cpp/image_alignment.cpp.
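
A brief sketch of the WARP_INVERSE_MAP flag (assuming M is a 2x3 forward matrix, e.g. from getRotationMatrix2D): without the flag, M is interpreted as the src-to-dst mapping; with the flag, the same M is used directly as the dst-to-src mapping, so the second call approximately undoes the first.

cv::Mat forward, roundtrip;
cv::warpAffine(src, forward, M, src.size());              // M interpreted as src -> dst
cv::warpAffine(forward, roundtrip, M, src.size(),
               cv::INTER_LINEAR | cv::WARP_INVERSE_MAP);  // M interpreted as dst -> src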

◆ warpPerspective()

 void cv::warpPerspective ( InputArray src, OutputArray dst, InputArray M, Size dsize, int flags = INTER_LINEAR, int borderMode = BORDER_CONSTANT, const Scalar & borderValue = Scalar() )
Python:
dst=cv.warpPerspective(src, M, dsize[, dst[, flags[, borderMode[, borderValue]]]])

#include <opencv2/imgproc.hpp>

Applies a perspective transformation to an image.

The function warpPerspective transforms the source image using the specified matrix:

$\texttt{dst} (x,y) = \texttt{src} \left ( \frac{M_{11} x + M_{12} y + M_{13}}{M_{31} x + M_{32} y + M_{33}} , \frac{M_{21} x + M_{22} y + M_{23}}{M_{31} x + M_{32} y + M_{33}} \right )$

when the flag WARP_INVERSE_MAP is set. Otherwise, the transformation is first inverted with invert and then put in the formula above instead of M. The function cannot operate in-place.

Parameters
src  Input image.
dst  Output image that has the size dsize and the same type as src.
M  $$3\times 3$$ transformation matrix.
dsize  Size of the output image.
flags  Combination of interpolation methods (INTER_LINEAR or INTER_NEAREST) and the optional flag WARP_INVERSE_MAP, which sets M as the inverse transformation ( $$\texttt{dst}\rightarrow\texttt{src}$$ ).
borderMode  Pixel extrapolation method (BORDER_CONSTANT or BORDER_REPLICATE).
borderValue  Value used in case of a constant border; by default, it equals 0.
See also
warpAffine, resize, remap, getRectSubPix, perspectiveTransform
Examples:
samples/cpp/image_alignment.cpp, and samples/dnn/text_detection.cpp.

◆ warpPolar()

 void cv::warpPolar ( InputArray src, OutputArray dst, Size dsize, Point2f center, double maxRadius, int flags )
Python:
dst=cv.warpPolar(src, dsize, center, maxRadius, flags[, dst])

#include <opencv2/imgproc.hpp>

Remaps an image to polar or semilog-polar coordinates space.

(Polar remaps reference image)

Transform the source image using the following transformation:

$dst(\rho , \phi ) = src(x,y)$

where

$\begin{array}{l} \vec{I} = (x - center.x, \;y - center.y) \\ \phi = Kangle \cdot \texttt{angle} (\vec{I}) \\ \rho = \left\{\begin{matrix} Klin \cdot \texttt{magnitude} (\vec{I}) & default \\ Klog \cdot log_e(\texttt{magnitude} (\vec{I})) & if \; semilog \\ \end{matrix}\right. \end{array}$

and

$\begin{array}{l} Kangle = dsize.height / 2\Pi \\ Klin = dsize.width / maxRadius \\ Klog = dsize.width / log_e(maxRadius) \\ \end{array}$

Linear vs semilog mapping

Polar mapping can be linear or semi-log. Add one of WarpPolarMode to flags to specify the polar mapping mode.

Linear is the default mode.

The semilog mapping emulates the human "foveal" vision, which permits very high acuity along the line of sight (central vision) in contrast to peripheral vision, where acuity is lower.

Option on dsize:
• if both values in dsize <= 0 (default), the destination image will have (almost) the same area as the source bounding circle:

$\begin{array}{l} dsize.area \leftarrow (maxRadius^2 \cdot \Pi) \\ dsize.width = \texttt{cvRound}(maxRadius) \\ dsize.height = \texttt{cvRound}(maxRadius \cdot \Pi) \\ \end{array}$

• if only dsize.height <= 0, the destination image area will be proportional to the bounding circle area but scaled by Kx * Kx:

$\begin{array}{l} dsize.height = \texttt{cvRound}(dsize.width \cdot \Pi) \\ \end{array}$

• if both values in dsize > 0, the destination image will have the given size; therefore, the area of the bounding circle will be scaled to dsize.
Reverse mapping

You can get the reverse mapping by adding WARP_INVERSE_MAP to flags:

// direct transform
warpPolar(src, lin_polar_img, Size(), center, maxRadius, flags); // linear polar
warpPolar(src, log_polar_img, Size(), center, maxRadius, flags + WARP_POLAR_LOG); // semilog polar
// inverse transform
warpPolar(lin_polar_img, recovered_lin_polar_img, src.size(), center, maxRadius, flags + WARP_INVERSE_MAP);
warpPolar(log_polar_img, recovered_log_polar, src.size(), center, maxRadius, flags + WARP_POLAR_LOG + WARP_INVERSE_MAP);

In addition, to calculate the original coordinates from a polar-mapped coordinate $$(\rho, \phi) \rightarrow (x, y)$$:

// recover angle and magnitude from the polar-mapped coordinate (rho, phi)
double angleRad, magnitude;
double Kangle = dst.rows / CV_2PI;
angleRad = phi / Kangle;
if (flags & WARP_POLAR_LOG)
{
    double Klog = dst.cols / std::log(maxRadius);
    magnitude = std::exp(rho / Klog);
}
else
{
    double Klin = dst.cols / maxRadius;
    magnitude = rho / Klin;
}
int x = cvRound(center.x + magnitude * cos(angleRad));
int y = cvRound(center.y + magnitude * sin(angleRad));
Parameters
src  Source image.
dst  Destination image. It will have the same type as src.
dsize  The destination image size (see description for valid options).
center  The transformation center.
maxRadius  The radius of the bounding circle to transform. It also determines the inverse magnitude scale parameter.
flags  A combination of interpolation methods (InterpolationFlags) and WarpPolarMode:
  - Add WARP_POLAR_LINEAR to select linear polar mapping (default).
  - Add WARP_POLAR_LOG to select semilog polar mapping.
  - Add WARP_INVERSE_MAP for reverse mapping.
Note
• The function cannot operate in-place.
• To calculate magnitude and angle in degrees, cartToPolar is used internally, so angles are measured from 0 to 360 degrees with an accuracy of about 0.3 degrees.
• This function uses remap. Due to current implementation limitations, the size of the input and output images should be less than 32767x32767.