OpenCV  4.5.3
Open Source Computer Vision
Using Orbbec Astra 3D cameras

Prev Tutorial: Using Kinect and other OpenNI compatible depth sensors

Next Tutorial: Using Creative Senz3D and other Intel RealSense SDK compatible depth sensors

Introduction

This tutorial is devoted to the Astra Series of Orbbec 3D cameras (https://orbbec3d.com/product-astra-pro/). These cameras have a depth sensor in addition to a common color sensor. The depth sensor can be read using the open source OpenNI API with the cv::VideoCapture class, while the color video stream is provided through the regular camera interface.

Installation Instructions

To use the Astra camera's depth sensor with OpenCV, follow these steps:

  1. Download the latest version of the Orbbec OpenNI SDK (from https://orbbec3d.com/develop/). Unzip the archive, choose the build matching your operating system and follow the installation steps in the Readme file. For instance, on 64-bit GNU/Linux run:
    $ cd Linux/OpenNI-Linux-x64-2.3.0.63/
    $ sudo ./install.sh
    When you are done with the installation, make sure to replug your device so the udev rules take effect. The camera should then work as a general camera device. Note that your current user must belong to the video group to have access to the camera. Also, make sure to source the OpenNIDevEnvironment file:
    $ source OpenNIDevEnvironment
  2. Run the following commands to verify that the OpenNI library and header files can be found. You should see output similar to the following in your terminal:
    $ echo $OPENNI2_INCLUDE
    /home/user/OpenNI_2.3.0.63/Linux/OpenNI-Linux-x64-2.3.0.63/Include
    $ echo $OPENNI2_REDIST
    /home/user/OpenNI_2.3.0.63/Linux/OpenNI-Linux-x64-2.3.0.63/Redist
    If the above two variables are empty, you need to source OpenNIDevEnvironment again. Now you can configure OpenCV with OpenNI support enabled by setting the WITH_OPENNI2 flag in CMake. You may also want to enable the BUILD_EXAMPLES flag to get a code sample working with your Astra camera. Run the following commands in the directory containing the OpenCV source code to enable OpenNI support:
    $ mkdir build
    $ cd build
    $ cmake -DWITH_OPENNI2=ON ..
    If the OpenNI library is found, OpenCV will be built with OpenNI2 support. You can see the status of OpenNI2 support in the CMake log:
    --   Video I/O:
    --     DC1394:       YES (2.2.6)
    --     FFMPEG:       YES
    --       avcodec:    YES (58.91.100)
    --       avformat:   YES (58.45.100)
    --       avutil:     YES (56.51.100)
    --       swscale:    YES (5.7.100)
    --       avresample: NO
    --     GStreamer:    YES (1.18.1)
    --     OpenNI2:      YES (2.3.0)
    --     v4l/v4l2:     YES (linux/videodev2.h)
  3. Build OpenCV:
    $ make

Code

The Astra Pro camera has two sensors: a depth sensor and a color sensor. The depth sensor can be read using the OpenNI interface with the cv::VideoCapture class. The video stream is not available through the OpenNI API and is only provided via the regular camera interface, so to get both depth and color frames, two cv::VideoCapture objects should be created:

    // Open depth stream
    VideoCapture depthStream(CAP_OPENNI2_ASTRA);
    // Open color stream
    VideoCapture colorStream(0, CAP_V4L2);

The first object uses the OpenNI2 API to retrieve depth data; the second uses the Video4Linux2 interface to access the color sensor. Note that the example above assumes the Astra camera is the first camera in the system. If more than one camera is connected, you may need to set the proper camera index explicitly.

Before using the created VideoCapture objects you may want to set up stream parameters by setting the objects' properties. The most important parameters are frame width, frame height and FPS. For this example we configure both streams to VGA resolution, the maximum resolution available for both sensors, and keep the stream parameters the same to make color-to-depth data registration easier:

    // Set color and depth stream parameters
    colorStream.set(CAP_PROP_FRAME_WIDTH, 640);
    colorStream.set(CAP_PROP_FRAME_HEIGHT, 480);
    depthStream.set(CAP_PROP_FRAME_WIDTH, 640);
    depthStream.set(CAP_PROP_FRAME_HEIGHT, 480);
    depthStream.set(CAP_PROP_OPENNI2_MIRROR, 0);

To set and retrieve properties of the sensor data generators, use the cv::VideoCapture::set and cv::VideoCapture::get methods respectively, e.g.:

    // Print depth stream parameters
    cout << "Depth stream: "
         << depthStream.get(CAP_PROP_FRAME_WIDTH) << "x" << depthStream.get(CAP_PROP_FRAME_HEIGHT)
         << " @" << depthStream.get(CAP_PROP_FPS) << " fps" << endl;

The depth generator supports a number of camera properties available through the OpenNI interface, such as cv::CAP_PROP_FRAME_WIDTH, cv::CAP_PROP_FRAME_HEIGHT, cv::CAP_PROP_FPS and cv::CAP_PROP_OPENNI2_MIRROR.

After the VideoCapture objects have been set up, you can start reading frames from them.

Note
OpenCV's VideoCapture provides a synchronous API, so you have to grab frames in separate threads to avoid one stream blocking while another is being read. VideoCapture is not a thread-safe class, so be careful to avoid data races and deadlocks.

Since the two video sources must be read simultaneously, two threads are needed to avoid blocking. The example implementation below grabs frames from each sensor in its own thread and stores them, together with their timestamps, in a list:

    // Create two lists to store frames
    std::list<Frame> depthFrames, colorFrames;
    const std::size_t maxFrames = 64;

    // Synchronization objects
    std::mutex mtx;
    std::condition_variable dataReady;
    std::atomic<bool> isFinish;

    isFinish = false;

    // Start depth reading thread
    std::thread depthReader([&]
    {
        while (!isFinish)
        {
            // Grab and decode new frame
            if (depthStream.grab())
            {
                Frame f;
                f.timestamp = cv::getTickCount();
                depthStream.retrieve(f.frame, CAP_OPENNI_DEPTH_MAP);
                if (f.frame.empty())
                {
                    cerr << "ERROR: Failed to decode frame from depth stream" << endl;
                    break;
                }

                {
                    std::lock_guard<std::mutex> lk(mtx);
                    if (depthFrames.size() >= maxFrames)
                        depthFrames.pop_front();
                    depthFrames.push_back(f);
                }
                dataReady.notify_one();
            }
        }
    });

    // Start color reading thread
    std::thread colorReader([&]
    {
        while (!isFinish)
        {
            // Grab and decode new frame
            if (colorStream.grab())
            {
                Frame f;
                f.timestamp = cv::getTickCount();
                colorStream.retrieve(f.frame);
                if (f.frame.empty())
                {
                    cerr << "ERROR: Failed to decode frame from color stream" << endl;
                    break;
                }

                {
                    std::lock_guard<std::mutex> lk(mtx);
                    if (colorFrames.size() >= maxFrames)
                        colorFrames.pop_front();
                    colorFrames.push_back(f);
                }
                dataReady.notify_one();
            }
        }
    });

VideoCapture can retrieve the following data:

  1. data given from the depth generator:
     - CAP_OPENNI_DEPTH_MAP: depth values in mm (CV_16UC1)
     - CAP_OPENNI_POINT_CLOUD_MAP: XYZ in meters (CV_32FC3)
     - CAP_OPENNI_DISPARITY_MAP: disparity in pixels (CV_8UC1)
     - CAP_OPENNI_DISPARITY_MAP_32F: disparity in pixels (CV_32FC1)
     - CAP_OPENNI_VALID_DEPTH_MASK: mask of valid pixels (CV_8UC1)
  2. data given from the color sensor: a regular BGR image (CV_8UC3).

When new data are available, each reading thread notifies the main thread using a condition variable. Frames are stored in an ordered list: the first frame in the list is the earliest captured and the last is the latest. Since depth and color frames are read from independent sources, the two streams can drift out of sync even when both are set to the same frame rate. A post-synchronization procedure can be applied to combine depth and color frames into pairs. The sample code below demonstrates this procedure:

    // Pair depth and color frames
    while (!isFinish)
    {
        std::unique_lock<std::mutex> lk(mtx);
        while (!isFinish && (depthFrames.empty() || colorFrames.empty()))
            dataReady.wait(lk);

        while (!depthFrames.empty() && !colorFrames.empty())
        {
            if (!lk.owns_lock())
                lk.lock();

            // Get a frame from the list
            Frame depthFrame = depthFrames.front();
            int64 depthT = depthFrame.timestamp;

            // Get a frame from the list
            Frame colorFrame = colorFrames.front();
            int64 colorT = colorFrame.timestamp;

            // Half of frame period is a maximum time diff between frames
            const int64 maxTdiff = int64(1000000000 / (2 * colorStream.get(CAP_PROP_FPS)));
            if (depthT + maxTdiff < colorT)
            {
                depthFrames.pop_front();
                continue;
            }
            else if (colorT + maxTdiff < depthT)
            {
                colorFrames.pop_front();
                continue;
            }
            depthFrames.pop_front();
            colorFrames.pop_front();
            lk.unlock();

            // Show depth frame
            Mat d8, dColor;
            depthFrame.frame.convertTo(d8, CV_8U, 255.0 / 2500);
            applyColorMap(d8, dColor, COLORMAP_OCEAN);
            imshow("Depth (colored)", dColor);

            // Show color frame
            imshow("Color", colorFrame.frame);

            // Exit on Esc key press
            int key = waitKey(1);
            if (key == 27) // ESC
            {
                isFinish = true;
                break;
            }
        }
    }

In the code snippet above, execution is blocked until both frame lists contain at least one frame. When new frames arrive, their timestamps are checked: if they differ by more than half of the frame period, one of the frames is dropped; if the timestamps are close enough, the two frames are paired. We then have two frames, one containing color information and one containing depth information. In the example above the retrieved frames are simply shown with the cv::imshow function, but you can insert any other processing code here.

In the sample images below you can see the color frame and the depth frame representing the same scene. Looking at the color frame it's hard to distinguish plant leaves from leaves painted on a wall, but the depth data makes it easy.

astra_color.jpg
Color frame
astra_depth.png
Depth frame

The complete implementation can be found in orbbec_astra.cpp in the samples/cpp/tutorial_code/videoio directory.