In this tutorial you will learn how to use a neural network to improve the accuracy of the chart detection algorithm.
Building
When building OpenCV, run the following command to build all the contrib modules:
```
cmake -D OPENCV_EXTRA_MODULES_PATH=<opencv_contrib>/modules/
```
Or only build the mcc module:
```
cmake -D OPENCV_EXTRA_MODULES_PATH=<opencv_contrib>/modules/mcc
```
Or make sure you check the mcc module in the GUI version of CMake: cmake-gui.
Source Code of the sample
You can run the sample code as follows:

```
<path_of_your_opencv_build_directory>/bin/example_mcc_chart_detection_with_network -t=<type_of_chart> -m=<path_to_neural_network> -pb=<path_to_models_pbtxt> -v=<optional_path_to_video_if_not_provided_webcam_will_be_used.mp4> --ci=<optional_camera_id_needed_only_if_video_not_provided> --nc=<optional_maximum_number_of_charts_in_image> --use_gpu <optional_should_gpu_be_used>
```
* -t=# is the chart type where 0 (Standard), 1 (DigitalSG), 2 (Vinyl)
* --ci=# is the camera ID where 0 (default is the main camera), 1 (secondary camera) etc
* --nc=# is the maximum number of charts to detect; by default its value is 1, which means only the best chart will be detected
Examples:

Simple run on CPU (GPU won't be used):
```
/home/opencv/build/bin/example_mcc_chart_detection_with_network -t=0 -m=/home/model.pb --pb=/home/model.pbtxt -v=mcc24.mp4
```

To run on GPU:
```
/home/opencv/build/bin/example_mcc_chart_detection_with_network -t=0 -m=/home/model.pb --pb=/home/model.pbtxt -v=mcc24.mp4 --use_gpu
```

To run on GPU and detect the best 5 charts (there can be fewer than 5 detections, but not more than 5):
```
/home/opencv/build/bin/example_mcc_chart_detection_with_network -t=0 -m=/home/model.pb --pb=/home/model.pbtxt -v=mcc24.mp4 --use_gpu --nc=5
```
```cpp
#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/dnn.hpp>
#include <opencv2/mcc.hpp>

#include <iostream>

using namespace std;
using namespace cv;
using namespace mcc;

const char *about = "Basic chart detection using neural network";
const char *keys =
    "{ help h usage ? |    | show this message }"
    "{t               | 0  | chartType: 0-Standard, 1-DigitalSG, 2-Vinyl, default:0}"
    "{m               |    | File path of model, if you don't have the model you can "
                           "find the link in the documentation}"
    "{pb              |    | File path of pbtxt file, available along with the model}"
    "{v               |    | Input from video file, if omitted, input comes from camera }"
    "{ci              | 0  | Camera id if input doesn't come from video (-v) }"
    "{nc              | 1  | Maximum number of charts in the image }"
    "{use_gpu         |    | Add this flag if you want to use gpu}";

int main(int argc, char *argv[])
{
    CommandLineParser parser(argc, argv, keys);
    parser.about(about);

    if (parser.has("help"))
    {
        parser.printMessage();
        return 0;
    }

    int t = parser.get<int>("t");
    CV_Assert(0 <= t && t <= 2);
    TYPECHART chartType = TYPECHART(t);

    string model_path = parser.get<string>("m");
    string pbtxt_path = parser.get<string>("pb");
    int camId = parser.get<int>("ci");
    int nc = parser.get<int>("nc");

    String video;
    if (parser.has("v"))
        video = parser.get<String>("v");

    bool use_gpu = parser.has("use_gpu");

    // Open the input: a video file if one was given, the camera otherwise
    VideoCapture inputVideo;
    int waitTime = 10;
    if (!video.empty())
        inputVideo.open(video);
    else
        inputVideo.open(camId);

    // Load the TensorFlow model and, if requested, run it on the GPU
    dnn::Net net = dnn::readNetFromTensorflow(model_path, pbtxt_path);
    if (use_gpu)
    {
        net.setPreferableBackend(dnn::DNN_BACKEND_CUDA);
        net.setPreferableTarget(dnn::DNN_TARGET_CUDA);
    }

    Ptr<CCheckerDetector> detector = CCheckerDetector::create();
    if (!detector->setNet(net))
    {
        cout << "Loading Model failed: Aborting" << endl;
        return 0;
    }

    while (inputVideo.grab())
    {
        Mat image, imageCopy;
        inputVideo.retrieve(image);
        imageCopy = image.clone();

        // Detect up to nc charts; the last argument enables the neural network
        if (!detector->process(image, chartType, nc, true))
        {
            printf("ChartColor not detected \n");
        }
        else
        {
            // Draw every detected chart back onto the frame
            std::vector<Ptr<mcc::CChecker>> checkers = detector->getListColorChecker();
            for (Ptr<mcc::CChecker> checker : checkers)
            {
                Ptr<CCheckerDraw> cdraw = CCheckerDraw::create(checker);
                cdraw->draw(image);
            }
        }

        imshow("image result | q or esc to quit", image);
        imshow("original", imageCopy);
        char key = (char)waitKey(waitTime);
        if (key == 27 || key == 'q')
            break;
    }

    return 0;
}
```
Explanation
Set header and namespaces
```cpp
#include <opencv2/mcc.hpp>

using namespace std;
using namespace cv;
using namespace mcc;
```
Setting the namespaces as above is optional; it just saves writing the cv:: and mcc:: prefixes in the rest of the code.
Create the detector object
This is just to create the object.
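A minimal sketch of this step, using the detector factory provided by the mcc module:

```cpp
cv::Ptr<cv::mcc::CCheckerDetector> detector = cv::mcc::CCheckerDetector::create();
```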
Load the model
The model supplied for this tutorial was trained in TensorFlow, so we load it with the TensorFlow reader of the dnn module; if you have a model trained in another supported framework, you can load it with the corresponding reader instead.
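As a sketch, assuming model_path and pbtxt_path hold the file paths passed with -m and -pb:

```cpp
// Read the TensorFlow graph together with its pbtxt description
cv::dnn::Net net = cv::dnn::readNetFromTensorflow(model_path, pbtxt_path);

// Hand the network to the detector; abort if it could not be set
if (!detector->setNet(net))
{
    std::cout << "Loading Model failed: Aborting" << std::endl;
    return 0;
}
```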
(Optional) Set the dnn backend to CUDA
Models run much faster on CUDA, so use CUDA if possible.
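A sketch of this step; it assumes OpenCV was built with CUDA support for the dnn module and that use_gpu reflects the --use_gpu flag:

```cpp
if (use_gpu)
{
    net.setPreferableBackend(cv::dnn::DNN_BACKEND_CUDA);
    net.setPreferableTarget(cv::dnn::DNN_TARGET_CUDA);
}
```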
Run the detector
```cpp
detector->process(image, chartType, max_number_charts_in_image, true);
```
If the detector successfully detects at least one chart, it returns true; otherwise it returns false. In the code above we print a failure message if no chart was detected. On success, the list of detected color charts is stored inside the detector itself; the next step shows how to extract it. The fourth parameter decides whether the neural network is used or not.
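For example, the failure case can be handled as in the sample above, where nc is the maximum number of charts parsed from --nc:

```cpp
// The last argument enables the neural network based detection
if (!detector->process(image, chartType, nc, true))
{
    printf("ChartColor not detected \n");
}
```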
Get List of ColorCheckers
```cpp
std::vector<cv::Ptr<mcc::CChecker>> checkers = detector->getListColorChecker();
```
All the colorcheckers that were detected are now stored in the 'checkers' vector.
Draw the colorcheckers back to the image
Loop through all the checkers one by one and then draw them.
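A sketch of the drawing loop, using the CCheckerDraw helper from the mcc module (it draws in green by default):

```cpp
for (cv::Ptr<cv::mcc::CChecker> checker : checkers)
{
    // Create a drawer for the current checker and render it onto the frame
    cv::Ptr<cv::mcc::CCheckerDraw> cdraw = cv::mcc::CCheckerDraw::create(checker);
    cdraw->draw(image);
}
```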