Interactive Visual Debugging of Computer Vision applications

What is the most common way to debug computer vision applications? Usually the answer is temporary, hacked-together custom code that has to be removed from the program again before the release build.

In this tutorial we will show how to use the visual debugging features of the cvv module (opencv2/cvv.hpp) instead.
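
As a quick taste of what that looks like, here is a minimal, self-contained sketch (not part of the tutorial's example code): you define CVVISUAL_DEBUGMODE, sprinkle cvv calls over your pipeline, and end with cvv::finalShow(). The file name "input.png" is only an illustrative assumption.

#define CVVISUAL_DEBUGMODE
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/cvv/show_image.hpp>
#include <opencv2/cvv/filter.hpp>
#include <opencv2/cvv/final_show.hpp>

int main()
{
    // "input.png" is a placeholder; use any image you have at hand
    cv::Mat img = cv::imread("input.png");
    if (img.empty())
        return 1;

    // add the raw input to the debug GUI; the call blocks until you react in the GUI
    cvv::showImage(img, CVVISUAL_LOCATION, "raw input");

    // compare an image before and after a filter operation
    cv::Mat gray;
    cv::cvtColor(img, gray, cv::COLOR_BGR2GRAY);
    cvv::debugFilter(img, gray, CVVISUAL_LOCATION, "to gray");

    // pass control to the debug window one last time before exiting
    cvv::finalShow();
    return 0;
}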

Goals

In this tutorial you will learn how to:

- add cvv debug calls such as cvv::showImage, cvv::debugFilter and cvv::debugDMatch to a computer vision application
- inspect images, filter results and descriptor matches interactively in the visual debug GUI
- compile the application with and without visual debugging

Code

The example code

If the program is compiled without visual debugging (see the CMakeLists.txt below), the only result is some information printed to the command line. We want to demonstrate how much debugging and development functionality is added by just a few lines of cvv commands.

// system includes
#include <iostream>
#include <sstream>     // std::stringstream used in toString()
#include <algorithm>   // std::sort

// library includes
#include <opencv2/imgproc.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/videoio.hpp>

// visual debugging: the define has to come before the cvv includes
#define CVVISUAL_DEBUGMODE
#include <opencv2/cvv/show_image.hpp>
#include <opencv2/cvv/filter.hpp>
#include <opencv2/cvv/dmatch.hpp>
#include <opencv2/cvv/final_show.hpp>

using namespace std;
using namespace cv;

// helper function to convert anything streamable to std::string
template<class T> std::string toString(const T& p_arg)
{
    std::stringstream ss;

    ss << p_arg;

    return ss.str();
}


int
main(int argc, char** argv)
{
    // parser keys
    const char *keys =
        "{ help h usage ? | | show this message }"
        "{ width W | 0| camera resolution width. leave at 0 to use defaults }"
        "{ height H | 0| camera resolution height. leave at 0 to use defaults }";

    CommandLineParser parser(argc, argv, keys);
    if (parser.has("help")) {
        parser.printMessage();
        return 0;
    }
    int res_w = parser.get<int>("width");
    int res_h = parser.get<int>("height");

    // setup video capture
    cv::VideoCapture capture(0);
    if (!capture.isOpened()) {
        std::cout << "Could not open VideoCapture" << std::endl;
        return 1;
    }

    if (res_w > 0 && res_h > 0) {
        printf("Setting resolution to %dx%d\n", res_w, res_h);
        capture.set(cv::CAP_PROP_FRAME_WIDTH, res_w);
        capture.set(cv::CAP_PROP_FRAME_HEIGHT, res_h);
    }


    cv::Mat prevImgGray;
    std::vector<cv::KeyPoint> prevKeypoints;
    cv::Mat prevDescriptors;

    int maxFeatureCount = 500;
    Ptr<ORB> detector = ORB::create(maxFeatureCount);
    // ORB descriptors are binary, so match them with the Hamming norm
    cv::BFMatcher matcher(cv::NORM_HAMMING);

    for (int imgId = 0; imgId < 10; imgId++) {
        // capture a frame
        cv::Mat imgRead;
        capture >> imgRead;
        printf("%d: image captured\n", imgId);

        std::string imgIdString{"imgRead"};
        imgIdString += toString(imgId);
        cvv::showImage(imgRead, CVVISUAL_LOCATION, imgIdString.c_str());

        // convert to grayscale
        cv::Mat imgGray;
        cv::cvtColor(imgRead, imgGray, COLOR_BGR2GRAY);
        cvv::debugFilter(imgRead, imgGray, CVVISUAL_LOCATION, "to gray");

        // detect ORB features
        std::vector<cv::KeyPoint> keypoints;
        cv::Mat descriptors;
        detector->detectAndCompute(imgGray, cv::noArray(), keypoints, descriptors);
        printf("%d: detected %zu keypoints\n", imgId, keypoints.size());

        // match them to previous image (if available)
        if (!prevImgGray.empty()) {
            std::vector<cv::DMatch> matches;
            matcher.match(prevDescriptors, descriptors, matches);
            printf("%d: all matches size=%zu\n", imgId, matches.size());
            std::string allMatchIdString{"all matches "};
            allMatchIdString += toString(imgId-1) + "<->" + toString(imgId);
            cvv::debugDMatch(prevImgGray, prevKeypoints, imgGray, keypoints, matches, CVVISUAL_LOCATION, allMatchIdString.c_str());

            // keep only the best (as defined by match distance) bestRatio quantile;
            // DMatch::operator< compares by distance, so the best matches come first after sorting
            double bestRatio = 0.8;
            std::sort(matches.begin(), matches.end());
            matches.resize(int(bestRatio * matches.size()));
            printf("%d: best matches size=%zu\n", imgId, matches.size());
            std::string bestMatchIdString{"best " + toString(bestRatio) + " matches "};
            bestMatchIdString += toString(imgId-1) + "<->" + toString(imgId);
            cvv::debugDMatch(prevImgGray, prevKeypoints, imgGray, keypoints, matches, CVVISUAL_LOCATION, bestMatchIdString.c_str());
        }

        prevImgGray = imgGray;
        prevKeypoints = keypoints;
        prevDescriptors = descriptors;
    }

    // pass control to the debug GUI one last time before exiting
    cvv::finalShow();

    return 0;
}
The CMakeLists.txt file

cmake_minimum_required(VERSION 2.8)
project(cvvisual_test)
SET(CMAKE_PREFIX_PATH ~/software/opencv/install)
SET(CMAKE_CXX_COMPILER "g++-4.8")
SET(CMAKE_CXX_FLAGS "-std=c++11 -O2 -pthread -Wall -Werror")
# (un)set: cmake -DCVV_DEBUG_MODE=OFF ..
OPTION(CVV_DEBUG_MODE "cvvisual-debug-mode" ON)
if(CVV_DEBUG_MODE MATCHES ON)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -DCVVISUAL_DEBUGMODE")
endif()
FIND_PACKAGE(OpenCV REQUIRED)
include_directories(${OpenCV_INCLUDE_DIRS})
add_executable(cvvt main.cpp)
target_link_libraries(cvvt
opencv_core opencv_videoio opencv_imgproc opencv_features2d
opencv_cvv
)
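
Since CVVISUAL_DEBUGMODE is an ordinary preprocessor define, the same switch can guard your own debug-only computations so that they, too, disappear from the release build. The following sketch illustrates the idea with an invented helper; debugNormalized and the score-map visualization are assumptions for illustration, not part of the tutorial code.

#include <opencv2/imgproc.hpp>
#include <opencv2/cvv/show_image.hpp>

// Hypothetical helper: visualize a floating-point score map in the debug GUI.
// The body is compiled only when CVVISUAL_DEBUGMODE is defined
// (e.g. via the CVV_DEBUG_MODE=ON option above).
static void debugNormalized(const cv::Mat& score)
{
#ifdef CVVISUAL_DEBUGMODE
    cv::Mat vis;
    cv::normalize(score, vis, 0, 255, cv::NORM_MINMAX, CV_8U);
    cvv::showImage(vis, CVVISUAL_LOCATION, "normalized score map");
#else
    (void)score; // release build: nothing to do
#endif
}

Strictly speaking, the cvv calls themselves already do nothing without the define; the guard additionally removes the surrounding preparation work (here the normalization) from the release build.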

Explanation

  1. We compile the program either using the above CMakeLists.txt with the option CVV_DEBUG_MODE=ON (cmake -DCVV_DEBUG_MODE=ON) or by adding the corresponding define CVVISUAL_DEBUGMODE to our compiler (e.g. g++ -DCVVISUAL_DEBUGMODE).
  2. The first cvv call simply shows the image (similar to imshow) with the imgIdString as comment.

    cvv::showImage(imgRead, CVVISUAL_LOCATION, imgIdString.c_str());

    The image is added to the overview tab in the visual debug GUI, and the cvv call blocks until you react in the GUI.

    [image: 01_overview_single.jpg]

    The image can then be selected and viewed

    [image: 02_single_image_view.jpg]

    Whenever you want to continue in the code, i.e. unblock the cvv call, you can either continue until the next cvv call (Step), continue until the last cvv call (*>>*) or run the application until it exits (Close).

    We decide to press the green Step button.

  3. The next cvv calls are used to debug all kinds of filter operations, i.e. operations that take an image as input and return an image as output.

    cvv::debugFilter(imgRead, imgGray, CVVISUAL_LOCATION, "to gray");

    As with every cvv call, you first end up in the overview.

    [image: 03_overview_two.jpg]

    We decide not to care about the conversion to grayscale and press Step. (Note that the smoothing and edge-dilation calls quoted in this step are not part of the code listing above; a sketch of how they could look is given after this walkthrough.)

    cvv::debugFilter(imgGray, imgGraySmooth, CVVISUAL_LOCATION, "smoothed");

    If you open the filter call, you will end up in the so-called "DefaultFilterView". Both images are shown next to each other, and you can zoom into them in a synchronized way.

    [image: 04_default_filter_view.jpg]

    When you go to very high zoom levels, each pixel is annotated with its numeric values.

    [image: 05_default_filter_view_high_zoom.jpg]

    We press Step twice and have a look at the dilated image.

    cvv::debugFilter(imgEdges, imgEdgesDilated, CVVISUAL_LOCATION, "dilated edges");

    The DefaultFilterView showing both images

    [image: 06_default_filter_view_edges.jpg]

    Now we use the View selector in the top right and select the "DualFilterView". We select "Changed Pixels" as filter and apply it (middle image).

    [image: 07_dual_filter_view_edges.jpg]

    After taking a close look at these images, perhaps using different views, filters or other GUI features, we decide to let the program run through. Therefore, we press the yellow *>>* button.

    The program will block at

    cvv::finalShow();

    and display the overview with everything that was passed to cvv in the meantime.

    [image: 08_overview_all.jpg]
  4. The cvv debugDMatch call is used in a situation where there are two images, each with a set of descriptors, that are matched to each other.

    We pass both images, both sets of keypoints and their matching to the visual debug module.

    cvv::debugDMatch(prevImgGray, prevKeypoints, imgGray, keypoints, matches, CVVISUAL_LOCATION, allMatchIdString.c_str());

    Since we want to have a look at matches, we use the filter capabilities (*#type match*) in the overview to only show match calls.

    [image: 09_overview_filtered_type_match.jpg]

    We want to have a closer look at one of them, e.g. in order to tune the parameters of the code that uses the matching. The view has various settings for how to display keypoints and matches. Furthermore, there is a mouseover tooltip.

    [image: 10_line_match_view.jpg]

    We see (visual debugging!) that there are many bad matches. We decide that only 70% of the matches should be shown - those 70% with the lowest match distance.

    [image: 11_line_match_view_portion_selector.jpg]

    Having successfully reduced the visual distraction, we want to see more clearly what changed between the two images. We select the "TranslationMatchView", which visualizes in a different way where each keypoint was matched to.

    [image: 12_translation_match_view_portion_selector.jpg]

    It is easy to see that the cup was moved to the left between the two images.

    Although cvv is all about interactively spotting computer vision bugs, it is complemented by a "RawView" that allows you to look at the underlying numeric data.

    [image: 13_raw_view.jpg]
  5. There are many more useful features contained in the cvv GUI. For instance, one can group the calls in the overview tab, e.g. by the line of code they come from.

    [image: 14_overview_group_by_line.jpg]
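
Note: the smoothing and edge-dilation calls quoted in step 3 are not part of the code listing above. For completeness, here is a sketch of how they could look, wrapped into a hypothetical helper that would be called from inside the capture loop once imgGray is available; the blur kernel size, the Canny thresholds and the dilation kernel are assumptions chosen only for illustration.

#include <opencv2/imgproc.hpp>
#include <opencv2/cvv/filter.hpp>

// Sketch of the additional filter steps referenced in step 3 of the explanation;
// call it from inside the capture loop after imgGray has been computed.
// All parameter values are illustrative assumptions.
static void debugEdgeFilters(const cv::Mat& imgGray)
{
    // smooth the grayscale image and show the before/after pair in the GUI
    cv::Mat imgGraySmooth;
    cv::GaussianBlur(imgGray, imgGraySmooth, cv::Size(9, 9), 2, 2);
    cvv::debugFilter(imgGray, imgGraySmooth, CVVISUAL_LOCATION, "smoothed");

    // detect edges on the smoothed image
    cv::Mat imgEdges;
    cv::Canny(imgGraySmooth, imgEdges, 50, 150);

    // dilate the edges and show the before/after pair in the GUI
    cv::Mat imgEdgesDilated;
    cv::dilate(imgEdges, imgEdgesDilated, cv::Mat::ones(3, 3, CV_8UC1));
    cvv::debugFilter(imgEdges, imgEdgesDilated, CVVISUAL_LOCATION, "dilated edges");
}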

Result

Enjoy computer vision!