OpenCV  5.0.0alpha
Open Source Computer Vision
Use OpenCL in Android camera preview based CV application

Prev Tutorial: How to run deep networks on Android device
Next Tutorial: Installation in MacOS

Original authors Andrey Pavlenko, Alexander Panov
Compatibility OpenCV >= 4.9

This guide is designed to help you use OpenCL™ in an Android camera preview based CV application. The tutorial was written for Android Studio 2022.2.1 and tested on Ubuntu 22.04.

This tutorial assumes you have the following installed and configured:

  • Android Studio (2022.2.1+)
  • JDK 17
  • Android SDK
  • Android NDK (25.2.9519653+)
  • OpenCV source code, downloaded from GitHub or from the releases page and built per the instructions on the wiki

It also assumes that you are familiar with Android Java and JNI programming basics. If you need help with any of the above, you may refer to our Introduction into Android Development guide.

This tutorial also assumes you have an Android device with OpenCL support.

The related source code is located within the OpenCV samples, in the opencv/samples/android/tutorial-4-opencl directory.

How to build a custom OpenCV Android SDK with OpenCL

  1. Assemble and configure the Android OpenCL SDK. The JNI part of the sample depends on the standard Khronos OpenCL headers, the C++ wrapper for OpenCL, and libOpenCL.so. The standard OpenCL headers may be copied from the 3rdparty directory in the OpenCV repository or from your Linux distribution package. The C++ wrapper is available in the official Khronos repository on GitHub. Copy the header files to a dedicated directory in the following way:
    cd your_path/ && mkdir ANDROID_OPENCL_SDK && mkdir ANDROID_OPENCL_SDK/include && cd ANDROID_OPENCL_SDK/include
    cp -r path_to_opencv/opencv/3rdparty/include/opencl/1.2/CL . && cd CL
    wget https://github.com/KhronosGroup/OpenCL-CLHPP/raw/main/include/CL/opencl.hpp
    wget https://github.com/KhronosGroup/OpenCL-CLHPP/raw/main/include/CL/cl2.hpp
    libOpenCL.so may be provided with the BSP or simply pulled from any OpenCL-capable Android device with the relevant architecture.
    cd your_path/ANDROID_OPENCL_SDK && mkdir lib && cd lib
    adb pull /system/vendor/lib64/libOpenCL.so
    The system version of libOpenCL.so may have a lot of platform-specific dependencies. The -Wl,--allow-shlib-undefined flag allows the linker to ignore third-party symbols if they are not used during the build. The following CMake line links the JNI part against the standard OpenCL library without packaging libOpenCL.so into the application; the system OpenCL implementation is used at run time.
    target_link_libraries(${target} -lOpenCL)
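    # A hedged sketch of how the pieces above can fit together in the JNI
    # CMakeLists.txt; the ANDROID_OPENCL_SDK variable and the ${target} name
    # are assumptions, not the sample's literal build script:
    target_include_directories(${target} PRIVATE ${ANDROID_OPENCL_SDK}/include)
    target_link_directories(${target} PRIVATE ${ANDROID_OPENCL_SDK}/lib)
    target_link_options(${target} PRIVATE "-Wl,--allow-shlib-undefined")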
  2. Build the custom OpenCV Android SDK with OpenCL. OpenCL support (T-API) is disabled in OpenCV builds for Android OS by default, but it's possible to rebuild OpenCV for Android locally with OpenCL/T-API enabled: use the -DWITH_OPENCL=ON option for CMake. You also need to specify the path to the Android OpenCL SDK: use the -DANDROID_OPENCL_SDK=path_to_your_Android_OpenCL_SDK option for CMake. If you are building OpenCV using build_sdk.py, please follow the instructions on the wiki. Set these CMake parameters in your .config.py, e.g. ndk-18-api-level-21.config.py:
    ABI("3", "arm64-v8a", None, 21, cmake_vars=dict(WITH_OPENCL='ON', ANDROID_OPENCL_SDK='path_to_your_Android_OpenCL_SDK'))
    If you are building OpenCV using cmake/ninja, use this bash script (substituting your NDK_VERSION and your own paths for the example paths):
    cd path_to_opencv && mkdir build && cd build
    export NDK_VERSION=25.2.9519653
    export ANDROID_SDK=/home/user/Android/Sdk/
    export ANDROID_OPENCL_SDK=/path_to_ANDROID_OPENCL_SDK/
    export ANDROID_HOME=$ANDROID_SDK
    export ANDROID_NDK_HOME=$ANDROID_SDK/ndk/$NDK_VERSION/
    cmake -GNinja -DCMAKE_TOOLCHAIN_FILE=$ANDROID_NDK_HOME/build/cmake/android.toolchain.cmake -DANDROID_STL=c++_shared -DANDROID_NATIVE_API_LEVEL=24 \
    -DANDROID_SDK=$ANDROID_SDK -DANDROID_NDK=$ANDROID_NDK_HOME -DBUILD_JAVA=ON -DANDROID_HOME=$ANDROID_SDK -DBUILD_ANDROID_EXAMPLES=ON \
    -DINSTALL_ANDROID_EXAMPLES=ON -DANDROID_ABI=arm64-v8a -DWITH_OPENCL=ON -DANDROID_OPENCL_SDK=$ANDROID_OPENCL_SDK ..
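    # Assumed final step (not in the original script): build and install
    ninja
    ninja install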

Preface

Using GPGPU via OpenCL to enhance application performance is quite a modern trend now. Some CV algorithms (e.g. image filtering) run much faster on a GPU than on a CPU, and recently this has become possible on Android OS too.

The most popular CV application scenario on an Android device is starting the camera in preview mode, applying some CV algorithm to every preview frame, and displaying the frames modified by that algorithm.

Let's consider how we can use OpenCL in this scenario. In particular, let's try two ways: direct calls to the OpenCL API, and the recently introduced OpenCV T-API (aka Transparent API), which provides implicit OpenCL acceleration of some OpenCV algorithms.

Application structure

Starting with Android API level 11 (Android 3.0), the Camera API allows using an OpenGL texture as a target for preview frames. Android API level 21 brings the new Camera2 API, which provides much more control over camera settings and usage modes; it allows several targets for preview frames, an OpenGL texture in particular.

Having a preview frame in an OpenGL texture is a good fit for OpenCL because there is an OpenGL-OpenCL interoperability API (cl_khr_gl_sharing) that allows OpenCL functions to share OpenGL texture data without copying (with some restrictions, of course).

Let's create a base for our application that just configures the Android camera to send preview frames to an OpenGL texture and displays these frames without any processing.

A minimal Activity class for that purpose looks like the following:

public class Tutorial4Activity extends Activity {

    private MyGLSurfaceView mView;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        requestWindowFeature(Window.FEATURE_NO_TITLE);
        getWindow().setFlags(WindowManager.LayoutParams.FLAG_FULLSCREEN,
                WindowManager.LayoutParams.FLAG_FULLSCREEN);
        getWindow().setFlags(WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON,
                WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON);
        setRequestedOrientation(ActivityInfo.SCREEN_ORIENTATION_LANDSCAPE);
        mView = new MyGLSurfaceView(this);
        setContentView(mView);
    }

    @Override
    protected void onPause() {
        mView.onPause();
        super.onPause();
    }

    @Override
    protected void onResume() {
        super.onResume();
        mView.onResume();
    }
}

And here is a corresponding minimal View class:

public class MyGLSurfaceView extends CameraGLSurfaceView implements CameraGLSurfaceView.CameraTextureListener {

    static final String LOGTAG = "MyGLSurfaceView";
    protected int procMode = NativePart.PROCESSING_MODE_NO_PROCESSING;
    static final String[] procModeName = new String[] {"No Processing", "CPU", "OpenCL Direct", "OpenCL via OpenCV"};
    protected int frameCounter;
    protected long lastNanoTime;
    TextView mFpsText = null;

    public MyGLSurfaceView(Context context, AttributeSet attrs) {
        super(context, attrs);
    }

    @Override
    public boolean onTouchEvent(MotionEvent e) {
        if(e.getAction() == MotionEvent.ACTION_DOWN)
            ((Activity)getContext()).openOptionsMenu();
        return true;
    }

    @Override
    public void surfaceCreated(SurfaceHolder holder) {
        super.surfaceCreated(holder);
        //NativePart.initCL();
    }

    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {
        //NativePart.closeCL();
        super.surfaceDestroyed(holder);
    }

    public void setProcessingMode(int newMode) {
        if(newMode>=0 && newMode<procModeName.length)
            procMode = newMode;
        else
            Log.e(LOGTAG, "Ignoring invalid processing mode: " + newMode);

        ((Activity) getContext()).runOnUiThread(new Runnable() {
            public void run() {
                Toast.makeText(getContext(), "Selected mode: " + procModeName[procMode], Toast.LENGTH_LONG).show();
            }
        });
    }

    @Override
    public void onCameraViewStarted(int width, int height) {
        ((Activity) getContext()).runOnUiThread(new Runnable() {
            public void run() {
                Toast.makeText(getContext(), "onCameraViewStarted", Toast.LENGTH_SHORT).show();
            }
        });
        if (NativePart.builtWithOpenCL())
            NativePart.initCL();
        frameCounter = 0;
        lastNanoTime = System.nanoTime();
    }

    @Override
    public void onCameraViewStopped() {
        ((Activity) getContext()).runOnUiThread(new Runnable() {
            public void run() {
                Toast.makeText(getContext(), "onCameraViewStopped", Toast.LENGTH_SHORT).show();
            }
        });
    }

    @Override
    public boolean onCameraTexture(int texIn, int texOut, int width, int height) {
        // FPS counter, updated every 30 frames
        frameCounter++;
        if(frameCounter >= 30) {
            final int fps = (int) (frameCounter * 1e9 / (System.nanoTime() - lastNanoTime));
            Log.i(LOGTAG, "drawFrame() FPS: " + fps);
            if(mFpsText != null) {
                Runnable fpsUpdater = new Runnable() {
                    public void run() {
                        mFpsText.setText("FPS: " + fps);
                    }
                };
                new Handler(Looper.getMainLooper()).post(fpsUpdater);
            } else {
                Log.d(LOGTAG, "mFpsText == null");
                mFpsText = (TextView)((Activity) getContext()).findViewById(R.id.fps_text_view);
            }
            frameCounter = 0;
            lastNanoTime = System.nanoTime();
        }

        if(procMode == NativePart.PROCESSING_MODE_NO_PROCESSING)
            return false;

        NativePart.processFrame(texIn, texOut, width, height, procMode);
        return true;
    }
}
Note
we use two renderer classes: one for the legacy Camera API and another for the modern Camera2 API.

A minimal Renderer class can be implemented in Java (OpenGL ES 2.0 is available in Java), but since we are going to modify the preview texture with OpenCL, let's move the OpenGL stuff to JNI. Here is a simple Java wrapper for our JNI stuff:

public class NativePart {
    static
    {
        System.loadLibrary("opencv_java5");
        System.loadLibrary("JNIpart");
    }

    public static final int PROCESSING_MODE_NO_PROCESSING = 0;
    public static final int PROCESSING_MODE_CPU = 1;
    public static final int PROCESSING_MODE_OCL_DIRECT = 2;
    public static final int PROCESSING_MODE_OCL_OCV = 3;

    public static native boolean builtWithOpenCL();
    public static native int initCL();
    public static native void closeCL();
    public static native void processFrame(int tex1, int tex2, int w, int h, int mode);
}
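
On the native side, these methods resolve to exported functions via the standard JNI name mangling. The following declarations are only a sketch of that mapping, assuming the sample's package name is org.opencv.samples.tutorial4 (bodies omitted; see the sample's JNI sources for the real implementations):

#include <jni.h>

extern "C" {
// JNI function names encode the Java package, class and method names
JNIEXPORT jboolean JNICALL Java_org_opencv_samples_tutorial4_NativePart_builtWithOpenCL(JNIEnv* env, jclass cls);
JNIEXPORT jint     JNICALL Java_org_opencv_samples_tutorial4_NativePart_initCL(JNIEnv* env, jclass cls);
JNIEXPORT void     JNICALL Java_org_opencv_samples_tutorial4_NativePart_closeCL(JNIEnv* env, jclass cls);
JNIEXPORT void     JNICALL Java_org_opencv_samples_tutorial4_NativePart_processFrame(JNIEnv* env, jclass cls,
        jint texIn, jint texOut, jint w, jint h, jint mode);
}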

Since the Camera and Camera2 APIs differ significantly in camera setup and control, let's create a base class for the two corresponding renderers:

public abstract class MyGLRendererBase implements GLSurfaceView.Renderer, SurfaceTexture.OnFrameAvailableListener {
    protected final String LOGTAG = "MyGLRendererBase";

    protected SurfaceTexture mSTex;
    protected MyGLSurfaceView mView;

    protected boolean mGLInit = false;
    protected boolean mTexUpdate = false;

    MyGLRendererBase(MyGLSurfaceView view) {
        mView = view;
    }

    protected abstract void openCamera();
    protected abstract void closeCamera();
    protected abstract void setCameraPreviewSize(int width, int height);

    public void onResume() {
        Log.i(LOGTAG, "onResume");
    }

    public void onPause() {
        Log.i(LOGTAG, "onPause");
        mGLInit = false;
        mTexUpdate = false;
        closeCamera();
        if(mSTex != null) {
            mSTex.release();
            mSTex = null;
            NativeGLRenderer.closeGL();
        }
    }

    @Override
    public synchronized void onFrameAvailable(SurfaceTexture surfaceTexture) {
        //Log.i(LOGTAG, "onFrameAvailable");
        mTexUpdate = true;
        mView.requestRender();
    }

    @Override
    public void onDrawFrame(GL10 gl) {
        //Log.i(LOGTAG, "onDrawFrame");
        if (!mGLInit)
            return;

        synchronized (this) {
            if (mTexUpdate) {
                mSTex.updateTexImage();
                mTexUpdate = false;
            }
        }
        NativeGLRenderer.drawFrame();
    }

    @Override
    public void onSurfaceChanged(GL10 gl, int surfaceWidth, int surfaceHeight) {
        Log.i(LOGTAG, "onSurfaceChanged("+surfaceWidth+"x"+surfaceHeight+")");
        NativeGLRenderer.changeSize(surfaceWidth, surfaceHeight);
        setCameraPreviewSize(surfaceWidth, surfaceHeight);
    }

    @Override
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        Log.i(LOGTAG, "onSurfaceCreated");
        String strGLVersion = GLES20.glGetString(GLES20.GL_VERSION);
        if (strGLVersion != null)
            Log.i(LOGTAG, "OpenGL ES version: " + strGLVersion);

        int hTex = NativeGLRenderer.initGL();
        mSTex = new SurfaceTexture(hTex);
        mSTex.setOnFrameAvailableListener(this);
        openCamera();
        mGLInit = true;
    }
}

As you can see, inheritors for the Camera and Camera2 APIs should implement the following abstract methods:

protected abstract void openCamera();
protected abstract void closeCamera();
protected abstract void setCameraPreviewSize(int width, int height);

Let's leave the details of their implementation outside the scope of this tutorial; please refer to the source code to see them.

Preview frames modification

The details of OpenGL ES 2.0 initialization are quite straightforward and too noisy to quote here; the important point is that the OpenGL texture used as the target for the camera preview must be of type GL_TEXTURE_EXTERNAL_OES (not GL_TEXTURE_2D), since internally it keeps picture data in YUV format. That makes it impossible to share the texture via CL-GL interop (cl_khr_gl_sharing) or to access its pixel data from C/C++ code. To overcome this restriction we have to perform an OpenGL rendering from this texture to another, regular GL_TEXTURE_2D texture, using a FrameBuffer Object (aka FBO).
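
A minimal sketch of that redirection (illustrative, not the sample's exact code), assuming width and height are known and the drawing of the full-screen quad is handled elsewhere:

#include <GLES2/gl2.h>

GLuint fbo = 0, tex2D = 0;

// regular RGBA texture that will receive the rendered preview frame
glGenTextures(1, &tex2D);
glBindTexture(GL_TEXTURE_2D, tex2D);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

// FBO that redirects rendering into tex2D
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex2D, 0);

// Now draw a full-screen quad sampling the GL_TEXTURE_EXTERNAL_OES camera
// texture with a fragment shader that declares
// "#extension GL_OES_EGL_image_external : require"; the converted RGBA
// frame lands in tex2D, which can be shared with OpenCL.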

C/C++ code

After that we can read (copy) the pixel data from C/C++ via glReadPixels() and, after modification, write them back to the texture via glTexSubImage2D().
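
A rough sketch of this CPU branch, assuming the FBO from the previous section is bound, the textures are RGBA, and width, height and texOut come from the caller (the names here are illustrative):

#include <GLES2/gl2.h>
#include <opencv2/imgproc.hpp>
#include <vector>

// read the rendered preview frame back to the CPU (the FBO must be bound)
std::vector<unsigned char> pixels(width * height * 4);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());

// wrap the buffer with a cv::Mat header (no copy) and process it
cv::Mat frame(height, width, CV_8UC4, pixels.data()), result;
cv::Laplacian(frame, result, CV_8U);

// upload the modified pixels into the output texture
glBindTexture(GL_TEXTURE_2D, texOut);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, result.data);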

Direct OpenCL calls

Alternatively, that GL_TEXTURE_2D texture can be shared with OpenCL without copying, but we have to create the OpenCL context in a special way for that:

int initCL()
{
    dumpCLinfo();
    LOGE("initCL: start initCL");

    EGLDisplay mEglDisplay = eglGetCurrentDisplay();
    if (mEglDisplay == EGL_NO_DISPLAY)
        LOGE("initCL: eglGetCurrentDisplay() returned 'EGL_NO_DISPLAY', error = %x", eglGetError());

    EGLContext mEglContext = eglGetCurrentContext();
    if (mEglContext == EGL_NO_CONTEXT)
        LOGE("initCL: eglGetCurrentContext() returned 'EGL_NO_CONTEXT', error = %x", eglGetError());

    cl_context_properties props[] =
    {   CL_GL_CONTEXT_KHR,   (cl_context_properties) mEglContext,
        CL_EGL_DISPLAY_KHR,  (cl_context_properties) mEglDisplay,
        CL_CONTEXT_PLATFORM, 0,
        0 };

    try
    {
        haveOpenCL = false;
        cl::Platform p = cl::Platform::getDefault();
        std::string ext = p.getInfo<CL_PLATFORM_EXTENSIONS>();
        if(ext.find("cl_khr_gl_sharing") == std::string::npos)
            LOGE("Warning: CL-GL sharing isn't supported by PLATFORM");
        props[5] = (cl_context_properties) p();

        theContext = cl::Context(CL_DEVICE_TYPE_GPU, props);
        std::vector<cl::Device> devs = theContext.getInfo<CL_CONTEXT_DEVICES>();
        LOGD("Context returned %d devices, taking the 1st one", devs.size());
        ext = devs[0].getInfo<CL_DEVICE_EXTENSIONS>();
        if(ext.find("cl_khr_gl_sharing") == std::string::npos)
            LOGE("Warning: CL-GL sharing isn't supported by DEVICE");

        theQueue = cl::CommandQueue(theContext, devs[0]);

        cl::Program::Sources src(1, std::make_pair(oclProgI2I, sizeof(oclProgI2I)));
        theProgI2I = cl::Program(theContext, src);
        theProgI2I.build(devs);

        cv::ocl::attachContext(p.getInfo<CL_PLATFORM_NAME>(), p(), theContext(), devs[0]());
        if( cv::ocl::useOpenCL() )
            LOGD("OpenCV+OpenCL works OK!");
        else
            LOGE("Can't init OpenCV with OpenCL TAPI");
        haveOpenCL = true;
    }
    catch(const cl::Error& e)
    {
        LOGE("cl::Error: %s (%d)", e.what(), e.err());
        return 1;
    }
    catch(const std::exception& e)
    {
        LOGE("std::exception: %s", e.what());
        return 2;
    }
    catch(...)
    {
        LOGE( "OpenCL info: unknown error while initializing OpenCL stuff" );
        return 3;
    }
    LOGD("initCL completed");

    if (haveOpenCL)
        return 0;
    else
        return 4;
}

Then the texture can be wrapped by a cl::ImageGL object and processed via OpenCL calls:

cl::ImageGL imgIn (theContext, CL_MEM_READ_ONLY, GL_TEXTURE_2D, 0, texIn);
cl::ImageGL imgOut(theContext, CL_MEM_WRITE_ONLY, GL_TEXTURE_2D, 0, texOut);
std::vector < cl::Memory > images;
images.push_back(imgIn);
images.push_back(imgOut);
int64_t t = getTimeMs();
theQueue.enqueueAcquireGLObjects(&images);
theQueue.finish();
LOGD("enqueueAcquireGLObjects() costs %d ms", getTimeInterval(t));
t = getTimeMs();
cl::Kernel Laplacian(theProgI2I, "Laplacian"); //TODO: may be done once
Laplacian.setArg(0, imgIn);
Laplacian.setArg(1, imgOut);
theQueue.finish();
LOGD("Kernel() costs %d ms", getTimeInterval(t));
t = getTimeMs();
theQueue.enqueueNDRangeKernel(Laplacian, cl::NullRange, cl::NDRange(w, h), cl::NullRange);
theQueue.finish();
LOGD("enqueueNDRangeKernel() costs %d ms", getTimeInterval(t));
t = getTimeMs();
theQueue.enqueueReleaseGLObjects(&images);
theQueue.finish();
LOGD("enqueueReleaseGLObjects() costs %d ms", getTimeInterval(t));
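
The Laplacian kernel itself comes from the oclProgI2I source string built in initCL(). The sample embeds its own kernel source in the JNI code; the following is only a sketch of what such an image-to-image Laplacian kernel can look like, not the sample's actual kernel:

// OpenCL C source embedded as a C string; initCL() builds theProgI2I from it
const char oclProgI2I[] =
    "__kernel void Laplacian(__read_only image2d_t imgIn, __write_only image2d_t imgOut) {\n"
    "    const sampler_t sam = CLK_NORMALIZED_COORDS_FALSE | CLK_ADDRESS_CLAMP_TO_EDGE | CLK_FILTER_NEAREST;\n"
    "    int2 p = (int2)(get_global_id(0), get_global_id(1));\n"
    "    // 4-connected Laplacian: sum of neighbors minus 4x center\n"
    "    float4 v = read_imagef(imgIn, sam, p + (int2)( 1, 0))\n"
    "             + read_imagef(imgIn, sam, p + (int2)(-1, 0))\n"
    "             + read_imagef(imgIn, sam, p + (int2)( 0, 1))\n"
    "             + read_imagef(imgIn, sam, p + (int2)( 0,-1))\n"
    "             - 4.0f * read_imagef(imgIn, sam, p);\n"
    "    write_imagef(imgOut, p, clamp(v, 0.0f, 1.0f));\n"
    "}\n";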

OpenCV T-API

But instead of writing OpenCL code yourself, you may want to use OpenCV T-API, which calls OpenCL implicitly. All you need is to pass the created OpenCL context to OpenCV (via cv::ocl::attachContext()) and somehow wrap the OpenGL texture with a cv::UMat. Unfortunately, UMat keeps an OpenCL buffer internally, and that buffer can't be wrapped over either an OpenGL texture or an OpenCL image, so we have to copy the image data here:

int64_t t = getTimeMs();
cl::ImageGL imgIn (theContext, CL_MEM_READ_ONLY, GL_TEXTURE_2D, 0, texIn);
std::vector < cl::Memory > images(1, imgIn);
theQueue.enqueueAcquireGLObjects(&images);
theQueue.finish();
cv::UMat uIn, uOut, uTmp;
cv::ocl::convertFromImage(imgIn(), uIn);
LOGD("loading texture data to OpenCV UMat costs %d ms", getTimeInterval(t));
theQueue.enqueueReleaseGLObjects(&images);
t = getTimeMs();
//cv::blur(uIn, uOut, cv::Size(5, 5));
cv::Laplacian(uIn, uTmp, CV_8U);
cv::multiply(uTmp, 10, uOut);
LOGD("OpenCV processing costs %d ms", getTimeInterval(t));
t = getTimeMs();
cl::ImageGL imgOut(theContext, CL_MEM_WRITE_ONLY, GL_TEXTURE_2D, 0, texOut);
images.clear();
images.push_back(imgOut);
theQueue.enqueueAcquireGLObjects(&images);
cl_mem clBuffer = (cl_mem)uOut.handle(cv::ACCESS_READ);
cl_command_queue q = (cl_command_queue)cv::ocl::Queue::getDefault().ptr();
size_t offset = 0;
size_t origin[3] = { 0, 0, 0 };
size_t region[3] = { (size_t)w, (size_t)h, 1 };
CV_Assert(clEnqueueCopyBufferToImage (q, clBuffer, imgOut(), offset, origin, region, 0, NULL, NULL) == CL_SUCCESS);
theQueue.enqueueReleaseGLObjects(&images);
LOGD("uploading results to texture costs %d ms", getTimeInterval(t));
Note
We have to make one more image data copy when placing the modified image back into the original OpenGL texture via the OpenCL image wrapper.

Performance notes

To compare performance, we measured the FPS of the same preview frame modification (Laplacian) done by C/C++ code (a call to cv::Laplacian with cv::Mat), by direct OpenCL calls (using OpenCL images for input and output), and by OpenCV T-API (a call to cv::Laplacian with cv::UMat) on a Sony Xperia Z3 at 720p camera resolution:

  • the C/C++ version shows 3-4 fps
  • direct OpenCL calls show 25-27 fps
  • OpenCV T-API shows 11-13 fps (due to the extra copying from cl_image to cl_buffer and back)