Affine Transformations

Prev Tutorial: Remapping
Next Tutorial: Histogram Equalization

Original author Ana Huamán
Compatibility OpenCV >= 3.0

Goal

In this tutorial you will learn how to:

  • Use the OpenCV function cv::warpAffine to implement simple remapping routines
  • Use the OpenCV function cv::getRotationMatrix2D to obtain a 2×3 rotation matrix

Theory

What is an Affine Transformation?

  1. A transformation that can be expressed in the form of a matrix multiplication (linear transformation) followed by a vector addition (translation).
  2. From the above, we can use an Affine Transformation to express:

    1. Rotations (linear transformation)
    2. Translations (vector addition)
    3. Scale operations (linear transformation)

    In essence, an Affine Transformation represents a relation between two images.

  3. The usual way to represent an Affine Transformation is by using a 2×3 matrix.

    A = \begin{bmatrix} a_{00} & a_{01} \\ a_{10} & a_{11} \end{bmatrix}_{2 \times 2} \qquad B = \begin{bmatrix} b_{00} \\ b_{10} \end{bmatrix}_{2 \times 1}

    M = \begin{bmatrix} A & B \end{bmatrix} = \begin{bmatrix} a_{00} & a_{01} & b_{00} \\ a_{10} & a_{11} & b_{10} \end{bmatrix}_{2 \times 3}

    Considering that we want to transform a 2D vector X = \begin{bmatrix} x \\ y \end{bmatrix} by using A and B, we can do the same with:

    T = A \cdot \begin{bmatrix} x \\ y \end{bmatrix} + B \quad \text{or} \quad T = M \cdot [x, y, 1]^{T}

    T = \begin{bmatrix} a_{00}x + a_{01}y + b_{00} \\ a_{10}x + a_{11}y + b_{10} \end{bmatrix}
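
    As a minimal sketch (not part of the original tutorial; the rotation angle and translation values below are arbitrary choices), the formula T = M ⋅ [x, y, 1]^T can be reproduced directly with cv::transform, which applies a 2×3 matrix to every point of an array:

    #include <cmath>
    #include <iostream>
    #include <vector>
    #include <opencv2/core.hpp>

    int main()
    {
        // M = [A | B]: A is a 45-degree rotation, B a translation of (10, 20)
        double c = std::cos(CV_PI/4), s = std::sin(CV_PI/4);
        cv::Mat M = (cv::Mat_<double>(2, 3) << c, -s, 10.0,
                                               s,  c, 20.0);
        std::vector<cv::Point2f> X = { {1.f, 0.f}, {0.f, 1.f} };
        std::vector<cv::Point2f> T;
        cv::transform( X, T, M );   // T_i = A * X_i + B for every point
        for( size_t i = 0; i < X.size(); i++ )
            std::cout << X[i] << " -> " << T[i] << std::endl;
        return 0;
    }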

How do we get an Affine Transformation?

  1. We mentioned that an Affine Transformation is basically a relation between two images. The information about this relation can come, roughly, in two ways:
    a. We know both X and T and we also know that they are related. Then our task is to find M.
    b. We know M and X. To obtain T we only need to apply T = M ⋅ X. Our information for M may be explicit (i.e. we have the 2-by-3 matrix) or it can come as a geometric relation between points.
  2. Let's explain (b) in a better way. Since M relates two images, we can analyze the simplest case in which it relates three points in both images. Look at the figure below:

The points 1, 2 and 3 (forming a triangle in image 1) are mapped into image 2, still forming a triangle, but now their size and orientation have changed noticeably. If we find the Affine Transformation with these 3 points (you can choose them as you like), then we can apply this found relation to all the pixels in an image.
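
As a rough sketch (the point coordinates below are made up purely for illustration), finding this Affine Transformation from three point correspondences is a single call to cv::getAffineTransform:

    // Hypothetical correspondences; in practice they come from the two images.
    cv::Point2f srcPts[3] = { {0.f, 0.f}, {100.f, 0.f}, {0.f, 100.f} };
    cv::Point2f dstPts[3] = { {10.f, 30.f}, {95.f, 20.f}, {15.f, 90.f} };
    // warp_mat is the 2x3 matrix M relating the two triangles.
    cv::Mat warp_mat = cv::getAffineTransform( srcPts, dstPts );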

Code

  • What does this program do?
    • Loads an image
    • Applies an Affine Transform to the image. This transform is obtained from the relation between three points. We use the function cv::warpAffine for that purpose.
    • Applies a rotation to the transformed image, with respect to the image center
    • Waits until the user exits the program
  • The tutorial's code is shown below. You can also download it here
    #include "opencv2/imgcodecs.hpp"
    #include "opencv2/highgui.hpp"
    #include "opencv2/imgproc.hpp"
    #include <iostream>

    using namespace cv;
    using namespace std;

    int main( int argc, char** argv )
    {
        CommandLineParser parser( argc, argv, "{@input | lena.jpg | input image}" );
        Mat src = imread( samples::findFile( parser.get<String>( "@input" ) ) );
        if( src.empty() )
        {
            cout << "Could not open or find the image!\n" << endl;
            cout << "Usage: " << argv[0] << " <Input image>" << endl;
            return -1;
        }

        // Source triangle: three corners of the input image
        Point2f srcTri[3];
        srcTri[0] = Point2f( 0.f, 0.f );
        srcTri[1] = Point2f( src.cols - 1.f, 0.f );
        srcTri[2] = Point2f( 0.f, src.rows - 1.f );

        // Destination triangle: where those corners should end up
        Point2f dstTri[3];
        dstTri[0] = Point2f( 0.f, src.rows*0.33f );
        dstTri[1] = Point2f( src.cols*0.85f, src.rows*0.25f );
        dstTri[2] = Point2f( src.cols*0.15f, src.rows*0.7f );

        // Affine transform relating the two triangles, applied to the whole image
        Mat warp_mat = getAffineTransform( srcTri, dstTri );
        Mat warp_dst = Mat::zeros( src.rows, src.cols, src.type() );
        warpAffine( src, warp_dst, warp_mat, warp_dst.size() );

        // Rotation (and scale) about the image center, applied to the warped image
        Point center = Point( warp_dst.cols/2, warp_dst.rows/2 );
        double angle = -50.0;
        double scale = 0.6;
        Mat rot_mat = getRotationMatrix2D( center, angle, scale );
        Mat warp_rotate_dst;
        warpAffine( warp_dst, warp_rotate_dst, rot_mat, warp_dst.size() );

        imshow( "Source image", src );
        imshow( "Warp", warp_dst );
        imshow( "Warp + Rotate", warp_rotate_dst );
        waitKey();
        return 0;
    }

Explanation

  • Load an image:

    CommandLineParser parser( argc, argv, "{@input | lena.jpg | input image}" );
    Mat src = imread( samples::findFile( parser.get<String>( "@input" ) ) );
    if( src.empty() )
    {
    cout << "Could not open or find the image!\n" << endl;
    cout << "Usage: " << argv[0] << " <Input image>" << endl;
    return -1;
    }
  • Affine Transform: As explained above, we need two sets of 3 points to derive the affine transform relation. Have a look:

    Point2f srcTri[3];
    srcTri[0] = Point2f( 0.f, 0.f );
    srcTri[1] = Point2f( src.cols - 1.f, 0.f );
    srcTri[2] = Point2f( 0.f, src.rows - 1.f );
    Point2f dstTri[3];
    dstTri[0] = Point2f( 0.f, src.rows*0.33f );
    dstTri[1] = Point2f( src.cols*0.85f, src.rows*0.25f );
    dstTri[2] = Point2f( src.cols*0.15f, src.rows*0.7f );

    You may want to draw these points to get a better idea of how they change. Their locations are approximately the same as the ones depicted in the example figure (in the Theory section). You may note that the size and orientation of the triangle defined by the 3 points change.
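
    For instance, a quick way to draw them (assuming the lines are added to the program above right after dstTri is filled; colors and radius are arbitrary choices, not part of the tutorial) is:

    Mat vis = src.clone();
    for( int i = 0; i < 3; i++ )
        circle( vis, Point(dstTri[i]), 5, Scalar(0, 0, 255), FILLED );
    imshow( "Destination points", vis );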

  • Armed with both sets of points, we calculate the Affine Transform by using the OpenCV function cv::getAffineTransform:

    Mat warp_mat = getAffineTransform( srcTri, dstTri );

    We get a 2×3 matrix as an output (in this case warp_mat)
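
    If you are curious about the actual coefficients, a cv::Mat can be printed directly to the console:

    cout << "warp_mat = " << endl << warp_mat << endl;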

  • We then apply the Affine Transform just found to the src image

    Mat warp_dst = Mat::zeros( src.rows, src.cols, src.type() );
    warpAffine( src, warp_dst, warp_mat, warp_dst.size() );

    with the following arguments:

    • src: Input image
    • warp_dst: Output image
    • warp_mat: Affine transform
    • warp_dst.size(): The desired size of the output image

    We just got our first transformed image! We will display it in a bit. Before that, we also want to rotate it...
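
    Note that cv::warpAffine also takes optional arguments for the interpolation method and for how pixels that map from outside the source image are filled; the call above is equivalent to spelling out the defaults, as in this sketch:

    warpAffine( src, warp_dst, warp_mat, warp_dst.size(),
                INTER_LINEAR, BORDER_CONSTANT, Scalar() );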

  • Rotate: To rotate an image, we need to know two things:

    1. The center with respect to which the image will rotate
    2. The angle of rotation; in OpenCV a positive angle is counter-clockwise
    3. Optional: A scale factor

    We define these parameters with the following snippet:

    Point center = Point( warp_dst.cols/2, warp_dst.rows/2 );
    double angle = -50.0;
    double scale = 0.6;
  • We generate the rotation matrix with the OpenCV function cv::getRotationMatrix2D , which returns a 2×3 matrix (in this case rot_mat)

    Mat rot_mat = getRotationMatrix2D( center, angle, scale );
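
    For reference, the matrix returned by cv::getRotationMatrix2D has the following documented form, where \alpha = scale \cdot \cos(angle) and \beta = scale \cdot \sin(angle):

    \begin{bmatrix}
    \alpha & \beta & (1-\alpha) \cdot center.x - \beta \cdot center.y \\
    -\beta & \alpha & \beta \cdot center.x + (1-\alpha) \cdot center.y
    \end{bmatrix}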
  • We now apply the found rotation to the output of our previous Transformation:

    Mat warp_rotate_dst;
    warpAffine( warp_dst, warp_rotate_dst, rot_mat, warp_dst.size() );
  • Finally, we display our results in two windows plus the original image for good measure:

    imshow( "Source image", src );
    imshow( "Warp", warp_dst );
    imshow( "Warp + Rotate", warp_rotate_dst );
  • We just have to wait until the user exits the program
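
    This is done with cv::waitKey(), which blocks until the user presses a key:

    waitKey();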

Result

  • After compiling the code above, we can give it the path of an image as argument. For instance, for a picture like:

after applying the first Affine Transform we obtain:

and finally, after applying a negative rotation (remember negative means clockwise) and a scale factor, we get: