OpenCV 4.10.0-dev
Open Source Computer Vision

Detailed Description

Class-specific Extremal Regions for Scene Text Detection

The scene text detection algorithm described below was initially proposed by Lukás Neumann & Jiri Matas [204]. The main idea behind Class-specific Extremal Regions is similar to MSER in that suitable Extremal Regions (ERs) are selected from the whole component tree of the image. However, this technique differs from MSER in that the selection of suitable ERs is done by a sequential classifier trained for character detection, i.e. it drops the stability requirement of MSERs and selects class-specific (not necessarily stable) regions.

The component tree of an image is constructed by thresholding the image step by step with increasing values from 0 to 255 and then linking the connected components obtained at successive levels into a hierarchy by their inclusion relation:

[Figure: component tree of an example image, built by successive thresholding and linking connected components by inclusion]

The component tree may contain a huge number of regions even for a very simple image, as shown in the previous figure. This number can easily reach the order of 10^6 regions for an average 1 megapixel image. In order to efficiently select suitable regions among all the ERs, the algorithm makes use of a sequential classifier with two differentiated stages.

In the first stage, incrementally computable descriptors (area, perimeter, bounding box, and Euler's number) are computed (in O(1)) for each region r and used as features for a classifier which estimates the class-conditional probability p(r|character). Only the ERs which correspond to a local maximum of the probability p(r|character) are selected (if their probability is above a global limit p_min and the difference between local maximum and local minimum is greater than a delta_min value).

In the second stage, the ERs that passed the first stage are classified into character and non-character classes using more informative but also more computationally expensive features: hole area ratio, convex hull ratio, and the number of outer boundary inflexion points.

This ER filtering process is done in different single-channel projections of the input image in order to increase the character localization recall.

After the ER filtering is done on each input channel, character candidates must be grouped in high-level text blocks (i.e. words, text lines, paragraphs, ...). The opencv_text module implements two different grouping algorithms: the Exhaustive Search algorithm proposed in [205] for grouping horizontally aligned text, and the method proposed by Lluis Gomez and Dimosthenis Karatzas in [107] [133] for grouping arbitrary oriented text (see erGrouping).

To see the text detector at work, have a look at the textdetection demo: https://github.com/opencv/opencv_contrib/blob/master/modules/text/samples/textdetection.cpp
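
For orientation, the whole pipeline described above can be wired together roughly as follows. This is a minimal sketch in the spirit of the linked textdetection sample: the input image name and classifier file paths are placeholders, and the filters are created with their documented default parameters.

    #include <opencv2/imgcodecs.hpp>
    #include <opencv2/text.hpp>
    #include <vector>

    int main()
    {
        cv::Mat src = cv::imread("scene.jpg");   // input must be color (CV_8UC3)

        // 1. Compute the single-channel projections (R, G, B, lightness, gradient by default).
        std::vector<cv::Mat> channels;
        cv::text::computeNMChannels(src, channels);

        // 2. Create the two sequential classifier stages; the classifier file paths are
        //    placeholders for the trained_classifierNM*.xml files shipped with the module samples.
        cv::Ptr<cv::text::ERFilter> er_filter1 = cv::text::createERFilterNM1(
            cv::text::loadClassifierNM1("trained_classifierNM1.xml"));
        cv::Ptr<cv::text::ERFilter> er_filter2 = cv::text::createERFilterNM2(
            cv::text::loadClassifierNM2("trained_classifierNM2.xml"));

        // 3. Run both stages on every channel.
        std::vector<std::vector<cv::text::ERStat> > regions(channels.size());
        for (size_t c = 0; c < channels.size(); c++)
        {
            er_filter1->run(channels[c], regions[c]);
            er_filter2->run(channels[c], regions[c]);
        }

        // 4. Group the surviving character candidates into horizontally aligned text blocks.
        std::vector<std::vector<cv::Vec2i> > groups;
        std::vector<cv::Rect> groups_boxes;
        cv::text::erGrouping(src, channels, regions, groups, groups_boxes,
                             cv::text::ERGROUPING_ORIENTATION_HORIZ);
        return 0;
    }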

Classes

class  cv::text::ERFilter
 Base class for 1st and 2nd stages of Neumann and Matas scene text detection algorithm [205]. More...
 
struct  cv::text::ERStat
 The ERStat structure represents a class-specific Extremal Region (ER). More...
 
class  cv::text::TextDetector
 An abstract class providing interface for text detection algorithms. More...
 
class  cv::text::TextDetectorCNN
 TextDetectorCNN class provides the functionality of text bounding box detection. It finds the bounding boxes of text words in an input image. This class uses the OpenCV dnn module to load a pre-trained model described in [166]. The original repository with the modified SSD Caffe version: https://github.com/MhLiao/TextBoxes. The model can be downloaded from DropBox. A modified .prototxt file with the model description can be found in opencv_contrib/modules/text/samples/textbox.prototxt. A minimal usage sketch is shown after this class list. More...
 
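As referenced in the TextDetectorCNN entry above, a minimal usage sketch might look as follows. The model architecture and weights file names are placeholders for the .prototxt and Caffe model files mentioned in the class description.

    #include <opencv2/imgcodecs.hpp>
    #include <opencv2/text.hpp>
    #include <vector>

    int main()
    {
        cv::Mat img = cv::imread("scene.jpg");

        // Load the pre-trained TextBoxes model (placeholder file names).
        cv::Ptr<cv::text::TextDetectorCNN> detector =
            cv::text::TextDetectorCNN::create("textbox.prototxt", "textbox_model.caffemodel");

        std::vector<cv::Rect> boxes;       // detected word bounding boxes
        std::vector<float> confidences;    // one confidence value per box
        detector->detect(img, boxes, confidences);
        return 0;
    }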

Enumerations

enum  {
  cv::text::ERFILTER_NM_RGBLGrad ,
  cv::text::ERFILTER_NM_IHSGrad
}
 computeNMChannels operation modes More...
 
enum  cv::text::erGrouping_Modes {
  cv::text::ERGROUPING_ORIENTATION_HORIZ ,
  cv::text::ERGROUPING_ORIENTATION_ANY
}
 text::erGrouping operation modes More...
 

Functions

void cv::text::computeNMChannels (InputArray _src, OutputArrayOfArrays _channels, int _mode=ERFILTER_NM_RGBLGrad)
 Compute the different channels to be processed independently in the N&M algorithm [205].
 
Ptr< ERFilter > cv::text::createERFilterNM1 (const Ptr< ERFilter::Callback > &cb, int thresholdDelta=1, float minArea=(float) 0.00025, float maxArea=(float) 0.13, float minProbability=(float) 0.4, bool nonMaxSuppression=true, float minProbabilityDiff=(float) 0.1)
 Create an Extremal Region Filter for the 1st stage classifier of N&M algorithm [205].
 
Ptr< ERFilter > cv::text::createERFilterNM1 (const String &filename, int thresholdDelta=1, float minArea=(float) 0.00025, float maxArea=(float) 0.13, float minProbability=(float) 0.4, bool nonMaxSuppression=true, float minProbabilityDiff=(float) 0.1)
 Reads an Extremal Region Filter for the 1st stage classifier of N&M algorithm from the provided path e.g. /path/to/cpp/trained_classifierNM1.xml.
 
Ptr< ERFilter > cv::text::createERFilterNM2 (const Ptr< ERFilter::Callback > &cb, float minProbability=(float) 0.3)
 Create an Extremal Region Filter for the 2nd stage classifier of N&M algorithm [205].
 
Ptr< ERFilter > cv::text::createERFilterNM2 (const String &filename, float minProbability=(float) 0.3)
 Reads an Extremal Region Filter for the 2nd stage classifier of N&M algorithm from the provided path e.g. /path/to/cpp/trained_classifierNM2.xml.
 
void cv::text::detectRegions (InputArray image, const Ptr< ERFilter > &er_filter1, const Ptr< ERFilter > &er_filter2, std::vector< Rect > &groups_rects, int method=ERGROUPING_ORIENTATION_HORIZ, const String &filename=String(), float minProbability=(float) 0.5)
 Extracts text regions from image.
 
void cv::text::detectRegions (InputArray image, const Ptr< ERFilter > &er_filter1, const Ptr< ERFilter > &er_filter2, std::vector< std::vector< Point > > &regions)
 
void cv::text::erGrouping (InputArray image, InputArray channel, std::vector< std::vector< Point > > regions, std::vector< Rect > &groups_rects, int method=ERGROUPING_ORIENTATION_HORIZ, const String &filename=String(), float minProbablity=(float) 0.5)
 
void cv::text::erGrouping (InputArray img, InputArrayOfArrays channels, std::vector< std::vector< ERStat > > &regions, std::vector< std::vector< Vec2i > > &groups, std::vector< Rect > &groups_rects, int method=ERGROUPING_ORIENTATION_HORIZ, const std::string &filename=std::string(), float minProbablity=0.5)
 Find groups of Extremal Regions that are organized as text blocks.
 
Ptr< ERFilter::Callback > cv::text::loadClassifierNM1 (const String &filename)
 Allows the default classifier to be implicitly loaded when creating an ERFilter object.
 
Ptr< ERFilter::Callback > cv::text::loadClassifierNM2 (const String &filename)
 Allows the default classifier to be implicitly loaded when creating an ERFilter object.
 
void cv::text::MSERsToERStats (InputArray image, std::vector< std::vector< Point > > &contours, std::vector< std::vector< ERStat > > &regions)
 Converts MSER contours (vector<Point>) to ERStat regions.
 

Enumeration Type Documentation

◆ anonymous enum

anonymous enum

#include <opencv2/text/erfilter.hpp>

computeNMChannels operation modes

Enumerator
ERFILTER_NM_RGBLGrad 
Python: cv.text.ERFILTER_NM_RGBLGrad
ERFILTER_NM_IHSGrad 
Python: cv.text.ERFILTER_NM_IHSGrad

◆ erGrouping_Modes

#include <opencv2/text/erfilter.hpp>

text::erGrouping operation modes

Enumerator
ERGROUPING_ORIENTATION_HORIZ 
Python: cv.text.ERGROUPING_ORIENTATION_HORIZ

Exhaustive Search algorithm proposed in [204] for grouping horizontally aligned text. The algorithm models a verification function for all the possible ER sequences. The verification function for ER pairs consists of a set of threshold-based pairwise rules which compare measurements of two regions (height ratio, centroid angle, and region distance). The verification function for ER triplets creates a word text line estimate using Least Median-Squares fitting for a given triplet and then verifies that the estimate is valid (based on thresholds created during training). Verification functions for sequences larger than 3 are approximated by verifying that the text line parameters of all (sub)sequences of length 3 are consistent.

ERGROUPING_ORIENTATION_ANY 
Python: cv.text.ERGROUPING_ORIENTATION_ANY

Text grouping method proposed in [107] [133] for grouping arbitrary oriented text. Regions are agglomerated by Single Linkage Clustering in a weighted feature space that combines proximity (x,y coordinates) and similarity measures (color, size, gradient magnitude, stroke width, etc.). SLC provides a dendrogram where each node represents a text group hypothesis. Then the algorithm finds the branches corresponding to text groups by traversing this dendrogram with a stopping rule that combines the output of a rotation invariant text group classifier and a probabilistic measure for hierarchical clustering validity assessment.

Note
This mode is not supported due to the NFA code removal ( https://github.com/opencv/opencv_contrib/issues/2235 )

Function Documentation

◆ computeNMChannels()

void cv::text::computeNMChannels ( InputArray _src,
OutputArrayOfArrays _channels,
int _mode = ERFILTER_NM_RGBLGrad )
Python:
cv.text.computeNMChannels(_src[, _channels[, _mode]]) -> _channels

#include <opencv2/text/erfilter.hpp>

Compute the different channels to be processed independently in the N&M algorithm [205].

Parameters
_src: Source image. Must be RGB CV_8UC3.
_channels: Output vector<Mat> where computed channels are stored.
_mode: Mode of operation. Currently the only available options are ERFILTER_NM_RGBLGrad (used by default) and ERFILTER_NM_IHSGrad.

In the N&M algorithm, the combination of intensity (I), hue (H), saturation (S), and gradient magnitude (Grad) channels is used in order to obtain high localization recall. This implementation also provides an alternative combination of red (R), green (G), blue (B), lightness (L), and gradient magnitude (Grad).
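
A minimal usage sketch, assuming an RGB image loaded from a placeholder path:

    #include <opencv2/imgcodecs.hpp>
    #include <opencv2/text.hpp>
    #include <vector>

    int main()
    {
        cv::Mat src = cv::imread("scene.jpg");   // must be RGB CV_8UC3
        std::vector<cv::Mat> channels;
        cv::text::computeNMChannels(src, channels, cv::text::ERFILTER_NM_RGBLGrad);
        // In this mode the output holds the R, G, B, lightness and gradient-magnitude
        // projections, each as a single-channel image ready for ERFilter processing.
        return 0;
    }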

◆ createERFilterNM1() [1/2]

Ptr< ERFilter > cv::text::createERFilterNM1 ( const Ptr< ERFilter::Callback > & cb,
int thresholdDelta = 1,
float minArea = (float) 0.00025,
float maxArea = (float) 0.13,
float minProbability = (float) 0.4,
bool nonMaxSuppression = true,
float minProbabilityDiff = (float) 0.1 )
Python:
cv.text.createERFilterNM1(cb[, thresholdDelta[, minArea[, maxArea[, minProbability[, nonMaxSuppression[, minProbabilityDiff]]]]]]) -> retval
cv.text.createERFilterNM1(filename[, thresholdDelta[, minArea[, maxArea[, minProbability[, nonMaxSuppression[, minProbabilityDiff]]]]]]) -> retval

#include <opencv2/text/erfilter.hpp>

Create an Extremal Region Filter for the 1st stage classifier of N&M algorithm [205].

Parameters
cb: Callback with the classifier. The default classifier can be implicitly loaded with the function loadClassifierNM1, e.g. from the file in samples/cpp/trained_classifierNM1.xml
thresholdDelta: Threshold step in subsequent thresholds when extracting the component tree
minArea: The minimum area (% of image size) allowed for retrieved ERs
maxArea: The maximum area (% of image size) allowed for retrieved ERs
minProbability: The minimum probability P(er|character) allowed for retrieved ERs
nonMaxSuppression: Whether non-maximum suppression is done over the branch probabilities
minProbabilityDiff: The minimum probability difference between local maxima and local minima ERs

The component tree of the image is extracted by a threshold increased step by step from 0 to 255; incrementally computable descriptors (aspect_ratio, compactness, number of holes, and number of horizontal crossings) are computed for each ER and used as features for a classifier which estimates the class-conditional probability P(er|character). The value of P(er|character) is tracked using the inclusion relation of ERs across all thresholds, and only the ERs which correspond to a local maximum of the probability P(er|character) are selected (if the local maximum of the probability is above a global limit pmin and the difference between local maximum and local minimum is greater than minProbabilityDiff).
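
A minimal sketch of the first stage; the classifier path is a placeholder and the numeric arguments spell out the documented defaults. The channel is assumed to come from computeNMChannels.

    #include <opencv2/text.hpp>
    #include <vector>

    // Runs the 1st-stage filter on one channel produced by computeNMChannels.
    void runFirstStage(const cv::Mat& channel, std::vector<cv::text::ERStat>& regions)
    {
        cv::Ptr<cv::text::ERFilter> er_filter1 = cv::text::createERFilterNM1(
            cv::text::loadClassifierNM1("trained_classifierNM1.xml"),
            1,          // thresholdDelta
            0.00025f,   // minArea (fraction of image area)
            0.13f,      // maxArea
            0.4f,       // minProbability
            true,       // nonMaxSuppression
            0.1f);      // minProbabilityDiff
        er_filter1->run(channel, regions);
    }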

◆ createERFilterNM1() [2/2]

Ptr< ERFilter > cv::text::createERFilterNM1 ( const String & filename,
int thresholdDelta = 1,
float minArea = (float) 0.00025,
float maxArea = (float) 0.13,
float minProbability = (float) 0.4,
bool nonMaxSuppression = true,
float minProbabilityDiff = (float) 0.1 )
Python:
cv.text.createERFilterNM1(cb[, thresholdDelta[, minArea[, maxArea[, minProbability[, nonMaxSuppression[, minProbabilityDiff]]]]]]) -> retval
cv.text.createERFilterNM1(filename[, thresholdDelta[, minArea[, maxArea[, minProbability[, nonMaxSuppression[, minProbabilityDiff]]]]]]) -> retval

#include <opencv2/text/erfilter.hpp>

Reads an Extremal Region Filter for the 1st stage classifier of N&M algorithm from the provided path e.g. /path/to/cpp/trained_classifierNM1.xml.

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

◆ createERFilterNM2() [1/2]

Ptr< ERFilter > cv::text::createERFilterNM2 ( const Ptr< ERFilter::Callback > & cb,
float minProbability = (float) 0.3 )
Python:
cv.text.createERFilterNM2(cb[, minProbability]) -> retval
cv.text.createERFilterNM2(filename[, minProbability]) -> retval

#include <opencv2/text/erfilter.hpp>

Create an Extremal Region Filter for the 2nd stage classifier of N&M algorithm [205].

Parameters
cb: Callback with the classifier. The default classifier can be implicitly loaded with the function loadClassifierNM2, e.g. from the file in samples/cpp/trained_classifierNM2.xml
minProbability: The minimum probability P(er|character) allowed for retrieved ERs

In the second stage, the ERs that passed the first stage are classified into character and non-character classes using more informative but also more computationally expensive features. The classifier uses all the features calculated in the first stage and the following additional features: hole area ratio, convex hull ratio, and number of outer inflexion points.
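
A minimal sketch of the second stage; it assumes regions already hold the output of the first-stage filter for the same channel, the classifier path is a placeholder, and 0.3f is the documented default minProbability.

    #include <opencv2/text.hpp>
    #include <vector>

    // Refines, in place, the regions selected by the 1st stage for the same channel.
    void runSecondStage(const cv::Mat& channel, std::vector<cv::text::ERStat>& regions)
    {
        cv::Ptr<cv::text::ERFilter> er_filter2 = cv::text::createERFilterNM2(
            cv::text::loadClassifierNM2("trained_classifierNM2.xml"), 0.3f);
        er_filter2->run(channel, regions);
    }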

◆ createERFilterNM2() [2/2]

Ptr< ERFilter > cv::text::createERFilterNM2 ( const String & filename,
float minProbability = (float) 0.3 )
Python:
cv.text.createERFilterNM2(cb[, minProbability]) -> retval
cv.text.createERFilterNM2(filename[, minProbability]) -> retval

#include <opencv2/text/erfilter.hpp>

Reads an Extremal Region Filter for the 2nd stage classifier of N&M algorithm from the provided path e.g. /path/to/cpp/trained_classifierNM2.xml.

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

◆ detectRegions() [1/2]

void cv::text::detectRegions ( InputArray image,
const Ptr< ERFilter > & er_filter1,
const Ptr< ERFilter > & er_filter2,
std::vector< Rect > & groups_rects,
int method = ERGROUPING_ORIENTATION_HORIZ,
const String & filename = String(),
float minProbability = (float) 0.5 )
Python:
cv.text.detectRegions(image, er_filter1, er_filter2) -> regions
cv.text.detectRegions(image, er_filter1, er_filter2[, method[, filename[, minProbability]]]) -> groups_rects

#include <opencv2/text/erfilter.hpp>

Extracts text regions from image.

Parameters
image: Source image from which text blocks are to be extracted. Should be CV_8UC3 (color).
er_filter1: Extremal Region Filter for the 1st stage classifier of N&M algorithm [205]
er_filter2: Extremal Region Filter for the 2nd stage classifier of N&M algorithm [205]
groups_rects: Output list of rectangular blocks containing text
method: Grouping method (see text::erGrouping_Modes). Can be one of ERGROUPING_ORIENTATION_HORIZ, ERGROUPING_ORIENTATION_ANY.
filename: The XML or YAML file with the classifier model (e.g. samples/trained_classifier_erGrouping.xml). Only used when the grouping method is ERGROUPING_ORIENTATION_ANY.
minProbability: The minimum probability for accepting a group. Only used when the grouping method is ERGROUPING_ORIENTATION_ANY.
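
A minimal sketch of this convenience overload, using the default classifier files (paths are placeholders) and horizontal grouping:

    #include <opencv2/imgcodecs.hpp>
    #include <opencv2/text.hpp>
    #include <vector>

    int main()
    {
        cv::Mat img = cv::imread("scene.jpg");   // CV_8UC3

        cv::Ptr<cv::text::ERFilter> er1 = cv::text::createERFilterNM1(
            cv::text::loadClassifierNM1("trained_classifierNM1.xml"));
        cv::Ptr<cv::text::ERFilter> er2 = cv::text::createERFilterNM2(
            cv::text::loadClassifierNM2("trained_classifierNM2.xml"));

        // Run both ER stages plus horizontal grouping in one call.
        std::vector<cv::Rect> groups_rects;
        cv::text::detectRegions(img, er1, er2, groups_rects,
                                cv::text::ERGROUPING_ORIENTATION_HORIZ);
        return 0;
    }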

◆ detectRegions() [2/2]

void cv::text::detectRegions ( InputArray image,
const Ptr< ERFilter > & er_filter1,
const Ptr< ERFilter > & er_filter2,
std::vector< std::vector< Point > > & regions )
Python:
cv.text.detectRegions(image, er_filter1, er_filter2) -> regions
cv.text.detectRegions(image, er_filter1, er_filter2[, method[, filename[, minProbability]]]) -> groups_rects
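
This overload returns the candidate region contours instead of grouped rectangles. A minimal sketch, assuming the two filters were created as in the previous example:

    #include <opencv2/text.hpp>
    #include <vector>

    // Returns one vector<Point> contour per detected character-candidate region.
    std::vector<std::vector<cv::Point> > candidateContours(
        const cv::Mat& img,
        const cv::Ptr<cv::text::ERFilter>& er_filter1,
        const cv::Ptr<cv::text::ERFilter>& er_filter2)
    {
        std::vector<std::vector<cv::Point> > regions;
        cv::text::detectRegions(img, er_filter1, er_filter2, regions);
        return regions;
    }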

◆ erGrouping() [1/2]

void cv::text::erGrouping ( InputArray image,
InputArray channel,
std::vector< std::vector< Point > > regions,
std::vector< Rect > & groups_rects,
int method = ERGROUPING_ORIENTATION_HORIZ,
const String & filename = String(),
float minProbablity = (float) 0.5 )
Python:
cv.text.erGrouping(image, channel, regions[, method[, filename[, minProbablity]]]) -> groups_rects

◆ erGrouping() [2/2]

void cv::text::erGrouping ( InputArray img,
InputArrayOfArrays channels,
std::vector< std::vector< ERStat > > & regions,
std::vector< std::vector< Vec2i > > & groups,
std::vector< Rect > & groups_rects,
int method = ERGROUPING_ORIENTATION_HORIZ,
const std::string & filename = std::string(),
float minProbablity = 0.5 )
Python:
cv.text.erGrouping(image, channel, regions[, method[, filename[, minProbablity]]]) -> groups_rects

#include <opencv2/text/erfilter.hpp>

Find groups of Extremal Regions that are organized as text blocks.

Parameters
img: Original RGB or grayscale image from which the regions were extracted.
channels: Vector of single-channel (CV_8UC1) images from which the regions were extracted.
regions: Vector of ERs retrieved from the ERFilter algorithm for each channel.
groups: The output of the algorithm is stored in this parameter as a set of lists of indexes into the provided regions.
groups_rects: The output of the algorithm is stored in this parameter as a list of rectangles.
method: Grouping method (see text::erGrouping_Modes). Can be one of ERGROUPING_ORIENTATION_HORIZ, ERGROUPING_ORIENTATION_ANY.
filename: The XML or YAML file with the classifier model (e.g. samples/trained_classifier_erGrouping.xml). Only used when the grouping method is ERGROUPING_ORIENTATION_ANY.
minProbablity: The minimum probability for accepting a group. Only used when the grouping method is ERGROUPING_ORIENTATION_ANY.
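
A minimal sketch, assuming channels and regions were produced by computeNMChannels and the two ERFilter stages shown earlier:

    #include <opencv2/text.hpp>
    #include <vector>

    // Groups per-channel ERStat regions into horizontally aligned text blocks.
    void groupRegions(const cv::Mat& img,
                      const std::vector<cv::Mat>& channels,
                      std::vector<std::vector<cv::text::ERStat> >& regions,
                      std::vector<cv::Rect>& groups_rects)
    {
        std::vector<std::vector<cv::Vec2i> > groups;   // indexes into 'regions' for each group
        cv::text::erGrouping(img, channels, regions, groups, groups_rects,
                             cv::text::ERGROUPING_ORIENTATION_HORIZ);
    }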

◆ loadClassifierNM1()

Ptr< ERFilter::Callback > cv::text::loadClassifierNM1 ( const String & filename)
Python:
cv.text.loadClassifierNM1(filename) -> retval

#include <opencv2/text/erfilter.hpp>

Allows the default classifier to be implicitly loaded when creating an ERFilter object.

Parameters
filename: The XML or YAML file with the classifier model (e.g. trained_classifierNM1.xml)

Returns a pointer to ERFilter::Callback.

◆ loadClassifierNM2()

Ptr< ERFilter::Callback > cv::text::loadClassifierNM2 ( const String & filename)
Python:
cv.text.loadClassifierNM2(filename) -> retval

#include <opencv2/text/erfilter.hpp>

Allows the default classifier to be implicitly loaded when creating an ERFilter object.

Parameters
filename: The XML or YAML file with the classifier model (e.g. trained_classifierNM2.xml)

Returns a pointer to ERFilter::Callback.

◆ MSERsToERStats()

void cv::text::MSERsToERStats ( InputArray image,
std::vector< std::vector< Point > > & contours,
std::vector< std::vector< ERStat > > & regions )

#include <opencv2/text/erfilter.hpp>

Converts MSER contours (vector<Point>) to ERStat regions.

Parameters
image: Source image CV_8UC1 from which the MSERs were extracted.
contours: Input vector with all the contours (vector<Point>).
regions: Output where the ERStat regions are stored.

It takes as input the contours provided by the OpenCV MSER feature detector and returns as output two vectors of ERStats. Because the MSER() output contains both MSER+ and MSER- regions in a single vector<Point>, the function separates them into two different vectors (as if the ERStats were extracted from two different channels).

An example of MSERsToERStats in use can be found in the text detection webcam_demo: https://github.com/opencv/opencv_contrib/blob/master/modules/text/samples/webcam_demo.cpp
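
A minimal sketch of the conversion, using the MSER detector from the features2d module (the input image path is a placeholder):

    #include <opencv2/features2d.hpp>
    #include <opencv2/imgcodecs.hpp>
    #include <opencv2/text.hpp>
    #include <vector>

    int main()
    {
        cv::Mat gray = cv::imread("scene.jpg", cv::IMREAD_GRAYSCALE);   // CV_8UC1

        // Detect MSER contours (MSER+ and MSER- are mixed in a single vector).
        std::vector<std::vector<cv::Point> > contours;
        std::vector<cv::Rect> bboxes;
        cv::MSER::create()->detectRegions(gray, contours, bboxes);

        // Convert to ERStat regions, separated into two vectors as if from two channels.
        std::vector<std::vector<cv::text::ERStat> > regions;
        cv::text::MSERsToERStats(gray, contours, regions);
        return 0;
    }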