The scene text detection algorithm described below was initially proposed by Lukáš Neumann and Jiří Matas [Neumann12]. The main idea behind Class-specific Extremal Regions is similar to MSER in that suitable Extremal Regions (ERs) are selected from the whole component tree of the image. However, this technique differs from MSER in that the selection of suitable ERs is done by a sequential classifier trained for character detection, i.e. it drops the stability requirement of MSERs and selects class-specific (not necessarily stable) regions.
The component tree of an image is constructed by thresholding the image with a value increased step by step from 0 to 255 and then linking the connected components obtained at successive levels into a hierarchy by their inclusion relation:
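This thresholding-and-linking step can be sketched in plain C++ on a toy one-dimensional grey-level image. This is an illustration of the idea only, not the OpenCV implementation (real images are 2-D and the tree is built incrementally with a union-find structure):

```cpp
#include <cassert>
#include <utility>
#include <vector>
using namespace std;

// Connected components (maximal runs) of the pixels with value <= t in a
// 1-D grey-level "image". Each component is a half-open range [begin, end).
vector<pair<int,int>> components(const vector<int>& img, int t) {
    vector<pair<int,int>> comps;
    int i = 0, n = (int)img.size();
    while (i < n) {
        if (img[i] <= t) {
            int j = i;
            while (j < n && img[j] <= t) ++j;  // grow the run
            comps.push_back(make_pair(i, j));
            i = j;
        } else {
            ++i;
        }
    }
    return comps;
}

// Linking by inclusion: a component at level t1 is the child of the unique
// component at a higher level t2 whose range contains it.
bool contains(pair<int,int> outer, pair<int,int> inner) {
    return outer.first <= inner.first && inner.second <= outer.second;
}
```

For `img = {5, 1, 3, 9, 2}`, level 1 yields one component, level 3 yields two, and level 9 a single component covering the whole image; every level-1 component is contained in some level-3 component, which gives the parent links of the tree.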
The component tree may contain a huge number of regions, even for a very simple image such as the one shown in the previous image. This number can easily reach the order of 1 x 10^6 regions for an average 1 megapixel image. In order to efficiently select suitable regions among all the ERs, the algorithm makes use of a sequential classifier with two differentiated stages.
In the first stage, incrementally computable descriptors (area, perimeter, bounding box, and Euler number) are computed (in O(1)) for each region r and used as features for a classifier which estimates the class-conditional probability p(r|character). Only the ERs which correspond to local maxima of the probability p(r|character) are selected (if their probability is above a global limit p_min and the difference between local maximum and local minimum is greater than a delta_min value).
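The reason these descriptors are "incrementally computable" is that each can be updated in O(1) as one pixel joins a region while the threshold increases. The sketch below illustrates this for area, perimeter, and bounding box on a growing region (the Euler number can be updated the same way from 2x2 pixel patterns; it is omitted to keep the sketch short):

```cpp
#include <algorithm>
#include <cassert>

// Incrementally maintained descriptors of a growing region. Each update is
// O(1): only the local neighbourhood of the new pixel is inspected.
struct RegionFeatures {
    int area = 0, perimeter = 0;
    int minx = 1 << 30, miny = 1 << 30, maxx = -(1 << 30), maxy = -(1 << 30);
};

// nbrs = how many 4-neighbours of (x, y) already belong to the region.
void addPixel(RegionFeatures& f, int x, int y, int nbrs) {
    f.area += 1;
    f.perimeter += 4 - 2 * nbrs;  // every shared edge removes 2 boundary edges
    f.minx = std::min(f.minx, x);  f.maxx = std::max(f.maxx, x);
    f.miny = std::min(f.miny, y);  f.maxy = std::max(f.maxy, y);
}

// Example: a 1x2 "domino" built pixel by pixel.
RegionFeatures domino() {
    RegionFeatures f;
    addPixel(f, 0, 0, 0);  // isolated pixel: area 1, perimeter 4
    addPixel(f, 1, 0, 1);  // neighbour joins: area 2, perimeter 6
    return f;
}
```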
In the second stage, the ERs that passed the first stage are classified into character and non-character classes using more informative but also more computationally expensive features: hole area ratio, convex hull ratio, and the number of outer boundary inflexion points.
This ER filtering process is done on different single-channel projections of the input image in order to increase the character localization recall.
After ER filtering is done on each input channel, character candidates must be grouped into high-level text blocks (i.e. words, text lines, paragraphs, ...). The opencv_text module implements two different grouping algorithms: the Exhaustive Search algorithm proposed in [Neumann11] for grouping horizontally aligned text, and the method proposed by Lluis Gomez and Dimosthenis Karatzas in [Gomez13][Gomez14] for grouping arbitrarily oriented text (see erGrouping()).
To see the text detector at work, have a look at the textdetection demo: https://github.com/Itseez/opencv_contrib/blob/master/modules/text/samples/textdetection.cpp
[Neumann12] Neumann L., Matas J.: Real-Time Scene Text Localization and Recognition, CVPR 2012. The paper is available online at http://cmp.felk.cvut.cz/~neumalu1/neumanncvpr2012.pdf
[Neumann11] Neumann L., Matas J.: Text Localization in Real-world Images using Efficiently Pruned Exhaustive Search, ICDAR 2011. The paper is available online at http://cmp.felk.cvut.cz/~neumalu1/icdar2011_article.pdf
[Gomez13] Gomez L. and Karatzas D.: Multi-script Text Extraction from Natural Scenes, ICDAR 2013. The paper is available online at http://158.109.8.37/files/GoK2013.pdf
[Gomez14] Gomez L. and Karatzas D.: A Fast Hierarchical Method for Multi-script and Arbitrary Oriented Scene Text Extraction, arXiv:1407.7504 [cs.CV]. The paper is available online at http://arxiv.org/abs/1407.7504
The ERStat structure represents a class-specific Extremal Region (ER).
An ER is a 4-connected set of pixels with all its grey-level values smaller than the values in its outer boundary. A class-specific ER is selected (using a classifier) from all the ERs in the component tree of the image.
struct CV_EXPORTS ERStat
{
public:
    //! Constructor
    explicit ERStat(int level = 256, int pixel = 0, int x = 0, int y = 0);
    //! Destructor
    ~ERStat() { }

    //! seed point and threshold (max grey-level value)
    int pixel;
    int level;

    //! incrementally computable features
    int area;
    int perimeter;
    int euler;                  //!< Euler number
    Rect rect;                  //!< bounding box
    double raw_moments[2];      //!< order 1 raw moments to derive the centroid
    double central_moments[3];  //!< order 2 central moments to construct the covariance matrix
    std::deque<int> *crossings; //!< horizontal crossings
    float med_crossings;        //!< median of the crossings at three different height levels

    //! 2nd stage features
    float hole_area_ratio;
    float convex_hull_ratio;
    float num_inflexion_points;

    //! probability that the ER belongs to the class we are looking for
    double probability;

    //! pointers preserving the tree structure of the component tree
    ERStat* parent;
    ERStat* child;
    ERStat* next;
    ERStat* prev;
};
Converts MSER contours (vector<Point>) to ERStat regions.
It takes as input the contours provided by the OpenCV MSER feature detector and returns as output two vectors of ERStats. This is because the MSER() output contains both MSER+ and MSER- regions in a single vector<Point>; the function separates them into two different vectors (this is as if the ERStats were extracted from two different channels).
An example of MSERsToERStats in use can be found in the text detection webcam_demo: https://github.com/Itseez/opencv_contrib/blob/master/modules/text/samples/webcam_demo.cpp
Compute the different channels to be processed independently in the N&M algorithm [Neumann12].
In the N&M algorithm, the combination of intensity (I), hue (H), saturation (S), and gradient magnitude (Grad) channels is used in order to obtain high localization recall. This implementation also provides an alternative combination of red (R), green (G), blue (B), lightness (L), and gradient magnitude (Grad).
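The channel computations themselves are simple per-pixel operations. The sketch below (an illustration, not the OpenCV code, which uses Sobel filters and HLS conversion) derives an intensity channel from interleaved RGB data and a gradient magnitude channel via central differences:

```cpp
#include <cassert>
#include <cmath>
#include <vector>
using namespace std;

// Intensity channel from an interleaved RGB buffer (3 values per pixel).
vector<int> intensityChannel(const vector<int>& rgb) {
    vector<int> I;
    for (size_t i = 0; i + 2 < rgb.size(); i += 3)
        I.push_back((rgb[i] + rgb[i + 1] + rgb[i + 2]) / 3);
    return I;
}

// Gradient magnitude of a single-channel row via central differences
// (a 1-D stand-in for the Sobel-based Grad channel).
vector<double> gradientMagnitude(const vector<int>& channel) {
    vector<double> g(channel.size(), 0.0);
    for (size_t i = 1; i + 1 < channel.size(); ++i)
        g[i] = fabs((channel[i + 1] - channel[i - 1]) / 2.0);
    return g;
}
```

Each resulting channel is then fed independently through the two-stage ER filter.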
Base class for 1st and 2nd stages of Neumann and Matas scene text detection algorithm [Neumann12].
class CV_EXPORTS ERFilter : public Algorithm
{
public:
    //! callback with the classifier is made a class.
    //! By doing it we hide SVM, Boost etc. Developers can provide their own classifiers
    class CV_EXPORTS Callback
    {
    public:
        virtual ~Callback() { }
        //! The classifier must return probability measure for the region.
        virtual double eval(const ERStat& stat) = 0;
    };

    /*!
        the key method. Takes image on input and returns the selected regions in a vector of ERStat
        only distinctive ERs which correspond to characters are selected by a sequential classifier
    */
    virtual void run( InputArray image, std::vector<ERStat>& regions ) = 0;

    (...)
};
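A custom classifier is plugged in by subclassing Callback and implementing eval(). The sketch below uses stand-in types with no OpenCV dependency to illustrate the pattern; the compactness-based score is a toy heuristic, not one of the trained classifiers shipped with the module:

```cpp
#include <cassert>

// Stand-ins for the OpenCV types, for illustration only.
struct ERStat { int area = 0; int perimeter = 0; };

struct Callback {
    virtual ~Callback() { }
    //! must return a probability measure for the region
    virtual double eval(const ERStat& stat) = 0;
};

// Toy classifier: scores compact regions higher (a perfect circle scores 1.0).
struct CompactnessClassifier : public Callback {
    double eval(const ERStat& s) override {
        if (s.perimeter == 0) return 0.0;
        const double PI = 3.14159265358979;
        double c = 4.0 * PI * s.area / double(s.perimeter * s.perimeter);
        return c > 1.0 ? 1.0 : c;
    }
};
```

With the real API, an instance of such a subclass would be handed to the ERFilter at creation time in place of the default trained classifier.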
The classifier callback is wrapped in a class so that the underlying learning machinery (SVM, Boost, etc.) stays hidden. Developers can provide their own classifiers to the ERFilter algorithm.
The classifier must return a probability measure for the region.
The key method of the ERFilter algorithm. Takes an image as input and returns the selected regions in a vector of ERStat. Only distinctive ERs which correspond to characters are selected by a sequential classifier.
Extracts the component tree (if needed) and filters the extremal regions (ERs) using a given classifier.
Create an Extremal Region Filter for the 1st stage classifier of N&M algorithm [Neumann12].
The component tree of the image is extracted by a threshold increased step by step from 0 to 255; incrementally computable descriptors (aspect_ratio, compactness, number of holes, and number of horizontal crossings) are computed for each ER and used as features for a classifier which estimates the class-conditional probability P(er|character). The value of P(er|character) is tracked using the inclusion relation of ERs across all thresholds, and only the ERs which correspond to local maxima of the probability P(er|character) are selected (if the local maximum of the probability is above a global limit p_min and the difference between local maximum and local minimum is greater than minProbabilityDiff).
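The selection rule can be sketched on a single branch of the component tree. Given the probability tracked at each threshold, the simplified version below keeps a local maximum only if it clears both limits, comparing it against the smallest value seen since the previous selected maximum (the real implementation does this while walking the tree, not on a flat array):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>
using namespace std;

// p[i] = P(er|character) at threshold i along one inclusion branch.
// Returns the indices of the selected local maxima.
vector<int> selectMaxima(const vector<double>& p, double pMin, double minDiff) {
    vector<int> keep;
    double localMin = 1.0;
    for (size_t i = 0; i < p.size(); ++i) {
        localMin = min(localMin, p[i]);
        bool isMax = (i == 0 || p[i] >= p[i - 1]) &&
                     (i + 1 == p.size() || p[i] >= p[i + 1]);
        if (isMax && p[i] >= pMin && p[i] - localMin >= minDiff) {
            keep.push_back((int)i);
            localMin = p[i];  // reset the reference minimum after a selection
        }
    }
    return keep;
}
```

For p = {0.1, 0.6, 0.2, 0.3, 0.9, 0.4} with pMin = 0.5 and minDiff = 0.2, the maxima at indices 1 and 4 are selected.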
Create an Extremal Region Filter for the 2nd stage classifier of N&M algorithm [Neumann12].
In the second stage, the ERs that passed the first stage are classified into character and non-character classes using more informative but also more computationally expensive features. The classifier uses all the features calculated in the first stage and the following additional features: hole area ratio, convex hull ratio, and number of outer inflexion points.
Allows the default classifier to be loaded implicitly when creating an ERFilter object.
Returns a pointer to ERFilter::Callback.
Allows the default classifier to be loaded implicitly when creating an ERFilter object.
Returns a pointer to ERFilter::Callback.
Find groups of Extremal Regions that are organized as text blocks.
This function implements two different grouping algorithms:
 ERGROUPING_ORIENTATION_HORIZ
Exhaustive Search algorithm proposed in [Neumann11] for grouping horizontally aligned text. The algorithm models a verification function for all possible ER sequences. The verification function for ER pairs consists of a set of threshold-based pairwise rules which compare measurements of the two regions (height ratio, centroid angle, and region distance). The verification function for ER triplets creates a word text line estimate using Least Median of Squares fitting for a given triplet and then verifies that the estimate is valid (based on thresholds created during training). Verification functions for sequences larger than 3 are approximated by verifying that the text line parameters of all (sub)sequences of length 3 are consistent.
 ERGROUPING_ORIENTATION_ANY
Text grouping method proposed in [Gomez13][Gomez14] for grouping arbitrarily oriented text. Regions are agglomerated by Single Linkage Clustering in a weighted feature space that combines proximity (x, y coordinates) and similarity measures (color, size, gradient magnitude, stroke width, etc.). SLC provides a dendrogram where each node represents a text group hypothesis. The algorithm then finds the branches corresponding to text groups by traversing this dendrogram with a stopping rule that combines the output of a rotation-invariant text group classifier and a probabilistic measure for hierarchical clustering validity assessment.
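The pairwise rules of the horizontal (Exhaustive Search) grouping can be sketched as threshold tests on region geometry. The thresholds below are invented for illustration; the real values come from training:

```cpp
#include <cassert>
#include <cmath>

// Axis-aligned bounding box of a character candidate: top-left corner + size.
struct Box { double x, y, w, h; };

// Toy version of the pairwise verification: two candidates may share a
// horizontal text line if their heights are similar, their centroids are
// roughly level, and they are not too far apart relative to their height.
bool pairOk(const Box& a, const Box& b) {
    double hRatio = a.h > b.h ? a.h / b.h : b.h / a.h;         // height ratio
    double cxA = a.x + a.w / 2, cxB = b.x + b.w / 2;
    double cyA = a.y + a.h / 2, cyB = b.y + b.h / 2;
    double angle = atan2(fabs(cyB - cyA), fabs(cxB - cxA));    // centroid angle
    double dist  = fabs(cxB - cxA);                            // region distance
    double meanH = (a.h + b.h) / 2;
    return hRatio < 2.0 && angle < 0.3 && dist < 3.0 * meanH;  // illustrative thresholds
}
```

Triplets passing an analogous (line-fitting) test are then chained into longer sequences, as described above.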
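The Single Linkage Clustering step of the arbitrary-orientation method can also be sketched in isolation. The toy version below clusters plain 2-D points and stops at a fixed distance cut-off; in the real method the feature space mixes proximity with the similarity cues listed above, and the cut-off is replaced by the learned stopping rule over the dendrogram:

```cpp
#include <cassert>
#include <cmath>
#include <vector>
using namespace std;

struct Pt { double x, y; };
double dist(const Pt& a, const Pt& b) { return hypot(a.x - b.x, a.y - b.y); }

// Single Linkage Clustering: repeatedly merge the two clusters whose closest
// members are nearest, until that distance exceeds the cut-off.
vector<vector<Pt>> slc(vector<vector<Pt>> cl, double cutoff) {
    while (cl.size() > 1) {
        double best = 1e18; size_t bi = 0, bj = 0;
        for (size_t i = 0; i < cl.size(); ++i)
            for (size_t j = i + 1; j < cl.size(); ++j)
                for (const Pt& p : cl[i])
                    for (const Pt& q : cl[j])
                        if (dist(p, q) < best) { best = dist(p, q); bi = i; bj = j; }
        if (best > cutoff) break;                         // stopping rule (toy)
        cl[bi].insert(cl[bi].end(), cl[bj].begin(), cl[bj].end());
        cl.erase(cl.begin() + bj);
    }
    return cl;
}
```

Starting from singleton clusters, two well-separated groups of nearby points collapse into exactly two clusters, which in the real algorithm would be two text group hypotheses.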