In this tutorial you will learn how to:

Use the OpenCV function Sobel to calculate the derivatives of an image.
Use the OpenCV function Scharr to calculate a more accurate derivative for a 3x3 kernel.
Note
The explanation below belongs to the book Learning OpenCV by Bradski and Kaehler.
In the last two tutorials we have seen applied examples of convolutions. One of the most important convolutions is the computation of derivatives of an image (or an approximation to them).
Why might the computation of derivatives in an image be important? Let's imagine we want to detect the edges present in an image. For instance:
You can easily notice that at an edge, the pixel intensity changes noticeably. A good way to express change is by using derivatives: a high change in the gradient indicates a major change in the image.
To be more graphical, let's assume we have a 1D image. An edge is shown by the "jump" in intensity in the plot below:

The edge "jump" can be seen more easily if we take the first derivative (it actually appears here as a maximum).
So, from the explanation above, we can deduce that one method to detect edges in an image is to locate the pixels where the gradient is higher than that of their neighbors (or, to generalize, higher than a threshold).
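As a minimal sketch of this idea (not part of the tutorial's sample code), the short standalone program below takes a made-up 1D signal, computes its first derivative with a simple finite difference, and reports the positions where the absolute change exceeds an arbitrary threshold:

#include <cstdio>
#include <cstdlib>

int main()
{
  // A made-up 1D "image" with a dark-to-bright edge between positions 3 and 4
  int signal[] = { 10, 10, 11, 12, 200, 201, 200, 202 };
  int n = sizeof(signal) / sizeof(signal[0]);
  int threshold = 50;   // arbitrary threshold for this illustration

  for( int i = 1; i < n; i++ )
  {
    int derivative = signal[i] - signal[i-1];   // simple finite difference
    if( abs(derivative) > threshold )
      printf( "Edge between positions %d and %d (jump = %d)\n", i - 1, i, derivative );
  }
  return 0;
}

Running it prints a single edge between positions 3 and 4, exactly where the intensity jump is.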
For a more detailed explanation, please refer to Learning OpenCV by Bradski and Kaehler.
Assuming that the image to be operated on is \(I\):
We calculate two derivatives:
Horizontal changes: This is computed by convolving \(I\) with a kernel \(G_{x}\) of odd size. For example, for a kernel size of 3, \(G_{x}\) would be computed as:

\[ G_{x} = \begin{bmatrix} -1 & 0 & +1 \\ -2 & 0 & +2 \\ -1 & 0 & +1 \end{bmatrix} * I \]

Vertical changes: This is computed by convolving \(I\) with a kernel \(G_{y}\) of odd size. For example, for a kernel size of 3, \(G_{y}\) would be computed as:

\[ G_{y} = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ +1 & +2 & +1 \end{bmatrix} * I \]
At each point of the image we calculate an approximation of the gradient at that point by combining both results above:

\[ G = \sqrt{ G_{x}^{2} + G_{y}^{2} } \]

Although sometimes the following simpler equation is used:

\[ G = |G_{x}| + |G_{y}| \]
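To make the connection between these formulas and code explicit, here is a small sketch (not part of the tutorial's sample program) that writes the two 3x3 Sobel kernels out by hand and applies them with the OpenCV function filter2D. Note that filter2D computes a correlation rather than a true convolution, which for these kernels only flips the sign of the response; since we take absolute values afterwards, the displayed result matches what the Sobel call used later in this tutorial produces for a kernel size of 3 (minus the Gaussian smoothing step):

#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"

using namespace cv;

int main( int argc, char** argv )
{
  /// Load the input image directly as grayscale
  Mat src_gray = imread( argv[1], 0 );
  if( !src_gray.data )
  { return -1; }

  /// The 3x3 kernels exactly as written in the formulas above
  Mat kernel_x = (Mat_<float>(3,3) << -1, 0, 1,
                                      -2, 0, 2,
                                      -1, 0, 1);
  Mat kernel_y = (Mat_<float>(3,3) << -1, -2, -1,
                                       0,  0,  0,
                                       1,  2,  1);

  /// Horizontal and vertical changes (kept in float to preserve negative values)
  Mat grad_x, grad_y;
  filter2D( src_gray, grad_x, CV_32F, kernel_x );
  filter2D( src_gray, grad_y, CV_32F, kernel_y );

  /// Combine both results using the simpler formula (each scaled by 0.5)
  Mat abs_grad_x, abs_grad_y, grad;
  convertScaleAbs( grad_x, abs_grad_x );
  convertScaleAbs( grad_y, abs_grad_y );
  addWeighted( abs_grad_x, 0.5, abs_grad_y, 0.5, 0, grad );

  imshow( "Manual Sobel kernels", grad );
  waitKey(0);
  return 0;
}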
Note
When the size of the kernel is 3, the Sobel kernel shown above may produce noticeable inaccuracies (after all, Sobel is only an approximation of the derivative). OpenCV addresses this inaccuracy for kernels of size 3 by using the Scharr function. This is as fast as but more accurate than the standard Sobel function. It implements the following kernels:

\[ G_{x} = \begin{bmatrix} -3 & 0 & +3 \\ -10 & 0 & +10 \\ -3 & 0 & +3 \end{bmatrix} \qquad G_{y} = \begin{bmatrix} -3 & -10 & -3 \\ 0 & 0 & 0 \\ +3 & +10 & +3 \end{bmatrix} \]
You can check out more information on this function in the OpenCV reference (Scharr). Also, in the sample code below, you will notice that above the code for the Sobel function there is also code for the Scharr function, commented out. Uncommenting it (and, obviously, commenting out the Sobel code) should give you an idea of how this function works.
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <stdlib.h>
#include <stdio.h>
using namespace cv;
/** @function main */
int main( int argc, char** argv )
{
Mat src, src_gray;
Mat grad;
char* window_name = "Sobel Demo - Simple Edge Detector";
int scale = 1;
int delta = 0;
int ddepth = CV_16S;
/// Load an image
src = imread( argv[1] );
if( !src.data )
{ return -1; }
GaussianBlur( src, src, Size(3,3), 0, 0, BORDER_DEFAULT );
/// Convert it to gray
cvtColor( src, src_gray, CV_BGR2GRAY );
/// Create window
namedWindow( window_name, CV_WINDOW_AUTOSIZE );
/// Generate grad_x and grad_y
Mat grad_x, grad_y;
Mat abs_grad_x, abs_grad_y;
/// Gradient X
//Scharr( src_gray, grad_x, ddepth, 1, 0, scale, delta, BORDER_DEFAULT );
Sobel( src_gray, grad_x, ddepth, 1, 0, 3, scale, delta, BORDER_DEFAULT );
convertScaleAbs( grad_x, abs_grad_x );
/// Gradient Y
//Scharr( src_gray, grad_y, ddepth, 0, 1, scale, delta, BORDER_DEFAULT );
Sobel( src_gray, grad_y, ddepth, 0, 1, 3, scale, delta, BORDER_DEFAULT );
convertScaleAbs( grad_y, abs_grad_y );
/// Total Gradient (approximate)
addWeighted( abs_grad_x, 0.5, abs_grad_y, 0.5, 0, grad );
imshow( window_name, grad );
waitKey(0);
return 0;
}
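Assuming the listing above is saved as, say, sobel_demo.cpp (the file name is just an example), it can be compiled and run against any image; on a typical Linux installation where OpenCV's pkg-config package is named opencv, something like the following works:

g++ sobel_demo.cpp -o sobel_demo `pkg-config --cflags --libs opencv`
./sobel_demo lena.jpg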
First we declare the variables we are going to use:
Mat src, src_gray;
Mat grad;
char* window_name = "Sobel Demo - Simple Edge Detector";
int scale = 1;
int delta = 0;
int ddepth = CV_16S;
As usual we load our source image src:
src = imread( argv[1] );
if( !src.data )
{ return -1; }
First, we apply a GaussianBlur to our image to reduce the noise ( kernel size = 3 )
GaussianBlur( src, src, Size(3,3), 0, 0, BORDER_DEFAULT );
Now we convert our filtered image to grayscale:
cvtColor( src, src_gray, CV_BGR2GRAY );
Second, we calculate the “derivatives” in x and y directions. For this, we use the function Sobel as shown below:
Mat grad_x, grad_y;
Mat abs_grad_x, abs_grad_y;
/// Gradient X
Sobel( src_gray, grad_x, ddepth, 1, 0, 3, scale, delta, BORDER_DEFAULT );
/// Gradient Y
Sobel( src_gray, grad_y, ddepth, 0, 1, 3, scale, delta, BORDER_DEFAULT );
The function takes the following arguments:

src_gray: In our example, the input image. Here it is CV_8U.
grad_x / grad_y: The output image.
ddepth: The depth of the output image. We set it to CV_16S to avoid overflow.
x_order: The order of the derivative in the x direction.
y_order: The order of the derivative in the y direction.
ksize: The size of the Sobel kernel; here it is 3.
scale, delta and BORDER_DEFAULT: We use default values.

Notice that to calculate the gradient in the x direction we use x_order = 1 and y_order = 0. We do analogously for the y direction.
We convert our partial results back to CV_8U:
convertScaleAbs( grad_x, abs_grad_x );
convertScaleAbs( grad_y, abs_grad_y );
Finally, we approximate the total gradient by adding both directional gradients (note that this is not an exact calculation, but it is good enough for our purposes).
addWeighted( abs_grad_x, 0.5, abs_grad_y, 0.5, 0, grad );
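If you prefer the exact magnitude from the first formula instead of this approximation, a possible alternative (a sketch, not part of the original sample) is to compute the derivatives of src_gray in floating point and combine them with the OpenCV function magnitude:

Mat grad_x_f, grad_y_f, magnitude_f, magnitude_8u;

/// Compute the derivatives in 32-bit float so the squares do not overflow
Sobel( src_gray, grad_x_f, CV_32F, 1, 0, 3, scale, delta, BORDER_DEFAULT );
Sobel( src_gray, grad_y_f, CV_32F, 0, 1, 3, scale, delta, BORDER_DEFAULT );

/// Exact per-pixel magnitude: sqrt( Gx^2 + Gy^2 )
magnitude( grad_x_f, grad_y_f, magnitude_f );

/// Convert back to CV_8U for display
convertScaleAbs( magnitude_f, magnitude_8u );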
Finally, we show our result:
imshow( window_name, grad );
Here is the output of applying our basic detector to lena.jpg: