Recently I have been considering whether to change the technical approach of my XFace project, and after some research I arrived at the following results.
This article introduces the differences between OpenCV, JavaCV and OpenCV for Android (hereafter OpenCV4Android), and uses a face recognition Android application as an example to walk through the implementation approaches you can adopt.
OpenCV: http://docs.opencv.org/index.html
OpenCV4Android: OpenCV4Android_SDK.html
JavaCV: https://github.com/bytedeco/javacv
OpenCV is the open-source computer vision library itself, written in C++. JavaCV is a Java wrapper around OpenCV whose development team has no connection with the OpenCV team. OpenCV4Android is also a wrapper around OpenCV, built so that it can be used on the Android platform, and its developers are part of the OpenCV team. In other words, OpenCV4Android and JavaCV have nothing to do with each other!
Reference: https://groups.google.com/forum/#!topic/javacv/qJmBLvpV7cM
android-opencv has no relation to JavaCV, so you should ask somewhere else for questions about it. The philosophy of android-opencv (and of the OpenCV team as general) is to make OpenCV run on Android, which forces them to use Java, but otherwise they prefer to use C++ or Python. With JavaCV, my hope is to have it run on as many platforms as possible, including Android, since it supports (some sort of) Java, so we can use sane(r) and more efficient languages such as the Java and Scala languages. Take your pick!
Most of the time the two perform about the same; some OpenCV functions can be parallelized while the JavaCV equivalents cannot, but JavaCV also bundles bindings to quite a few other image-processing libraries, so it is powerful enough.
Reference: http://stackoverflow.com/questions/21207755/opencv-javacv-vs-opencv-c-c-interfaces
I'd like to add a couple of things to @ejbs's answer.
First of all, you concerned 2 separate issues:
Java vs. C++ performance
OpenCV vs JavaCV
Java vs. C++ performance is a long, long story. On one hand, C++ programs are compiled to a highly optimized native code. They start quickly and run fast all the time without pausing for garbage collection or other VM duties (as Java do). On other hand, once compiled, program in C++ can't change, no matter on what machine they are run, while Java bytecode is compiled "just-in-time" and is always optimized for processor architecture they run on. In modern world, with so many different devices (and processor architectures) this may be really significant. Moreover, some JVMs (e.g. Oracle Hotspot) can optimize even the code that is already compiled to native code! VM collect data about program execution and from time to time tries to rewrite code in such a way that it is optimized for this specific execution. So in such complicated circumstances the only real way to compare performance of implementations in different programming languages is to just run them and see the result.
OpenCV vs. JavaCV is another story. First you need to understand stack of technologies behind these libraries.
OpenCV was originally created in 1999 in Intel research labs and was written in C. Since that time, it changed the maintainer several times, became open source and reached 3rd version (upcoming release). At the moment, core of the library is written in C++ with popular interface in Python and a number of wrappers in other programming languages.
JavaCV is one of such wrappers. So in most cases when you run program with JavaCV you actually use OpenCV too, just call it via another interface. But JavaCV provides more than just one-to-one wrapper around OpenCV. In fact, it bundles the whole number of image processing libraries, including FFmpeg, OpenKinect and others. (Note, that in C++ you can bind these libraries too).
So, in general it doesn't matter what you are using - OpenCV or JavaCV, you will get just about same performance. It more depends on your main task - is it Java or C++ which is better suited for your needs.
There's one more important point about performance. Using OpenCV (directly or via wrapper) you will sometimes find that OpenCV functions overcome other implementations by several orders. This is because of heavy use of low-level optimizations in its core. For example, OpenCV's filter2D function is SIMD-accelerated and thus can process several sets of data in parallel. And when it comes to computer vision, such optimizations of common functions may easily lead to significant speedup.
At the time of writing, the latest OpenCV release is 2.4.10, OpenCV4Android is at 2.4.9, and JavaCV is at 0.9.
OpenCV supports face recognition algorithms out of the box; see here for a detailed tutorial.
OpenCV4Android does not support them yet, but you can get there by writing a thin wrapper layer; see here for the wrapping approach.
JavaCV already supports the face recognition algorithms; a sample, OpenCVFaceRecognizer.java, can be found in its Samples.
Since this is a mobile application, being able to get hold of the camera data from the device is the key! And this is precisely an important factor this kind of application has to consider, because it directly determines which technical approach your application needs to use!
I already covered camera usage in detail in my earlier post Android Ndk and Opencv Development 3; I quote part of it here, and if you want to know more it is worth reading that post first. [The OpenCV library mentioned below is part of the OpenCV4Android SDK.]
[There is actually one more way to get the camera data: drive the camera directly in the native layer. The OpenCV4Android SDK samples include an example of this, native-activity. This approach is strongly discouraged: on the one hand the code is hard to write and awkward to work with; on the other hand this part of the API is said to change frequently, which makes it hard to maintain.]
(1) How to do camera development with OpenCV
Without the OpenCV library, i.e. if we use the Android Camera API directly, the preview frames we get are in YUV format, and we usually have to convert them to RGB(A) before we can process them.
With the OpenCV library, camera development becomes much simpler; see the three OpenCV for Android tutorials (CameraPreview, MixingProcessing and CameraControl), whose source code is under the samples directory of the OpenCV-Android SDK. In short, the OpenCV library provides two kinds of camera view: a Java camera, org.opencv.android.JavaCameraView, and a native camera, org.opencv.android.NativeCameraView (run the CameraPreview project to see the difference between them; they behave almost the same). Both extend the abstract class CameraBridgeViewBase, but JavaCamera uses the Camera from the Android SDK, while NativeCamera uses OpenCV's VideoCapture.
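To make the Java camera concrete, here is a minimal sketch of wiring a JavaCameraView into an activity, following the CameraPreview tutorial. The class name PreviewActivity and the layout/view ids (R.layout.activity_main, R.id.camera_view) are placeholders of my own, not names from the SDK.

import org.opencv.android.BaseLoaderCallback;
import org.opencv.android.CameraBridgeViewBase;
import org.opencv.android.CameraBridgeViewBase.CvCameraViewFrame;
import org.opencv.android.CameraBridgeViewBase.CvCameraViewListener2;
import org.opencv.android.LoaderCallbackInterface;
import org.opencv.android.OpenCVLoader;
import org.opencv.core.Mat;

import android.app.Activity;
import android.os.Bundle;
import android.view.SurfaceView;

public class PreviewActivity extends Activity implements CvCameraViewListener2 {

    private CameraBridgeViewBase mCameraView;

    // enableView() must wait until the OpenCV Manager (or static init) has loaded the native libs.
    private final BaseLoaderCallback mLoaderCallback = new BaseLoaderCallback(this) {
        @Override
        public void onManagerConnected(int status) {
            if (status == LoaderCallbackInterface.SUCCESS) {
                mCameraView.enableView();
            } else {
                super.onManagerConnected(status);
            }
        }
    };

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);          // layout contains a JavaCameraView
        mCameraView = (CameraBridgeViewBase) findViewById(R.id.camera_view);
        mCameraView.setVisibility(SurfaceView.VISIBLE);
        mCameraView.setCvCameraViewListener(this);
    }

    @Override
    protected void onResume() {
        super.onResume();
        OpenCVLoader.initAsync(OpenCVLoader.OPENCV_VERSION_2_4_9, this, mLoaderCallback);
    }

    @Override
    protected void onPause() {
        super.onPause();
        if (mCameraView != null) mCameraView.disableView();
    }

    @Override
    public void onCameraViewStarted(int width, int height) {}

    @Override
    public void onCameraViewStopped() {}

    @Override
    public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
        return inputFrame.rgba();    // hand the RGBA preview frame back unmodified
    }
}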
(2) How to pass the camera preview data to the native layer
This is important! I have tried quite a few approaches; the main ideas are:
① Pass an image file path: this is the worst option. I have used it; it is slow and far from real-time, and it is mainly useful in early development for testing whether calls between the Java layer and the native layer work at all.
② Pass the preview frame's byte array to the native layer and convert it there into RGB or RGBA [which of the two depends on whether your image-processing functions can handle RGBA; if they can, converting to RGBA is recommended, because what you hand back is also in RGBA]. Plenty of articles online discuss how to do the conversion: one way is a hand-written color-conversion function (you can find one by searching, for example in the article Camera image->NDK->OpenGL texture); another is to use OpenCV's Mat and the cvtColor function. You then call your image-processing function, store the result in an int array (which is really just the RGB or RGBA image data), and finally convert it into a bitmap with one of Bitmap's methods and return it. This method is also rather slow, but considerably faster than the first one. For a concrete implementation, see the recommended book Mastering OpenCV with Practical Computer Vision Projects, whose first chapter, Cartoonifier and Skin Changer for Android, is a complete Android example.
③ Use one of the OpenCV cameras: either JavaCamera or NativeCamera will do. The benefit is that a great deal is already wrapped for you: you can pass the preview frame's Mat straight to the native layer by passing the Mat's memory address (a long), and the native layer only has to wrap that address back into a Mat before processing it. On top of that, the callback's return value is also a Mat, which is very convenient! This approach is fairly fast. For the details, see Tutorial2-MixedProcessing in the OpenCV-Android SDK samples, and the sketch right after this list.
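As a concrete illustration of ③, here is a minimal sketch of handing the frame's native address to a JNI function, in the spirit of Tutorial2-MixedProcessing. The method name processFrame and the C++ body sketched in the comment are my own illustrative assumptions, not the tutorial's exact code.

// Replace onCameraFrame() in the skeleton shown earlier with something like this:
@Override
public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
    Mat rgba = inputFrame.rgba();
    Mat gray = inputFrame.gray();
    // Pass the raw native addresses; the C++ side wraps them back into cv::Mat, e.g.:
    //   void Java_..._processFrame(JNIEnv*, jobject, jlong grayAddr, jlong rgbaAddr) {
    //       cv::Mat& mGray = *(cv::Mat*) grayAddr;
    //       cv::Mat& mRgba = *(cv::Mat*) rgbaAddr;
    //       // detect faces in mGray, draw the results into mRgba in place ...
    //   }
    processFrame(gray.getNativeObjAddr(), rgba.getNativeObjAddr());
    return rgba;    // whatever Mat is returned here is what gets displayed
}

// Declared in Java, implemented in the native library loaded via System.loadLibrary().
private native void processFrame(long matAddrGray, long matAddrRgba);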
To sum up: if we want to build a face recognition Android application, roughly what technical approaches are there?
(1) Use the plain Android Camera API, pass the YUV data down to the native layer, convert it to RGB(A) there, run the OpenCV face recognition algorithm, and finally return the RGB(A) result to the Java layer. The advantages are few external dependencies and great flexibility; developers can even modify the internals of the algorithm. The obvious drawback is that it demands a lot from the developer, who needs to be fluent in both OpenCV and Android NDK development; tested on a Samsung Galaxy I9000 it is rather slow, with noticeable stutter and latency.
This approach can follow the implementation in the first chapter, Cartoonifier and Skin Changer for Android, of Mastering OpenCV with Practical Computer Vision Projects. >>> download the source code I have tested and verified
(2) Use the plain Android Camera API, convert the YUV data to RGB(A) directly in the Java layer, hand it straight to the JavaCV face recognition algorithm, and return the recognition result. The advantage is that the only dependency is JavaCV; the drawback is that porting the OpenCV algorithm to a JavaCV implementation takes some work.
I have not tried this approach myself; the conversion can be done as described here, and a sketch of the Java-side conversion follows below. [I will try it as soon as I can, and if it works I will publish the code.]
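For reference, the Java-layer conversion itself needs no extra library; the widely circulated decodeYUV420SP() helper from the Android sample code does the NV21-to-ARGB conversion in plain Java, roughly as follows (shown only as a sketch):

public final class YuvUtils {

    // Standard NV21 ("YUV420sp") to ARGB_8888 conversion, as it appears in the Android
    // sample code that most of the articles mentioned above are based on.
    public static void decodeYUV420SP(int[] argb, byte[] yuv420sp, int width, int height) {
        final int frameSize = width * height;
        for (int j = 0, yp = 0; j < height; j++) {
            int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
            for (int i = 0; i < width; i++, yp++) {
                int y = (0xff & yuv420sp[yp]) - 16;
                if (y < 0) y = 0;
                if ((i & 1) == 0) {                      // each U/V pair covers two pixels
                    v = (0xff & yuv420sp[uvp++]) - 128;
                    u = (0xff & yuv420sp[uvp++]) - 128;
                }
                int y1192 = 1192 * y;
                int r = y1192 + 1634 * v;
                int g = y1192 - 833 * v - 400 * u;
                int b = y1192 + 2066 * u;
                if (r < 0) r = 0; else if (r > 262143) r = 262143;
                if (g < 0) g = 0; else if (g > 262143) g = 262143;
                if (b < 0) b = 0; else if (b > 262143) b = 262143;
                // Pack into an ARGB_8888 pixel that Bitmap.setPixels() understands.
                argb[yp] = 0xff000000 | ((r << 6) & 0xff0000) | ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
            }
        }
    }
}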
(3) Use the OpenCV4Android library's camera, pass the memory address of the preview Mat to the native layer, rebuild the Mat from that address in the native layer, run the OpenCV face recognition algorithm, and finally return the RGB(A) result to the Java layer. The advantage is flexibility; the drawback is that it depends on both the OpenCV4Android library and OpenCV, so you still need to master OpenCV and Android NDK development. Tested on the Samsung Galaxy I9000 it performs reasonably well; if the algorithm itself is slow, the result comes back with a delay of roughly 1-3 s.
This approach can follow Tutorial2-MixedProcessing in the OpenCV-Android SDK samples. [My open-source project XFace uses exactly this approach.]
(4) Use the OpenCV4Android library's camera, write a thin native wrapper around the OpenCV face recognition classes, then feed the camera's Mat straight into the OpenCV4Android face recognition algorithm and return the recognition result. The advantage is that the dependencies stay modest and very little native code has to be written.
I have tried this approach, using the wrapping method mentioned earlier; see here for details. Note that, just as in the answer's example, you must load the opencv_java library before loading the facerec library! >>> download the source code I have tested and verified
(5) Use the OpenCV4Android library's camera and hand its Mat straight to JavaCV's face recognition algorithm, then return the recognition result. The appeal is obvious: you only have to write Java code, and on the native side you probably only need to drop a few *.so files into the jniLibs directory. The drawback is that the dependencies pile up!
This approach can follow this project on GitHub. >>> download the source code I have tested and verified
Each approach has its pros and cons: on the one hand you have to ask whether it is technically feasible, on the other whether it is convenient to develop with. Ah, the life of a programmer is a hard one!
Supplementary notes
The following assumes you created the project the way I described in my previous article, Android NDK and OpenCV Development With Android Studio.
(1) Part of the code for approach 1
The native-layer code that converts the YUV data into RGBA data:
#include <jni.h>
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

using namespace cv;

extern "C" {

// Just show the plain camera image without modifying it.
JNIEXPORT void JNICALL Java_com_Cartoonifier_CartoonifierView_ShowPreview(
        JNIEnv* env, jobject, jint width, jint height, jbyteArray yuv, jintArray bgra)
{
    // Get native access to the given Java arrays.
    jbyte* _yuv  = env->GetByteArrayElements(yuv, 0);
    jint*  _bgra = env->GetIntArrayElements(bgra, 0);

    // Prepare a cv::Mat that points to the YUV420sp data.
    Mat myuv(height + height/2, width, CV_8UC1, (uchar *)_yuv);
    // Prepare a cv::Mat that points to the BGRA output data.
    Mat mbgra(height, width, CV_8UC4, (uchar *)_bgra);

    // Convert the color format from the camera's
    // NV21 "YUV420sp" format to an Android BGRA color image.
    cvtColor(myuv, mbgra, CV_YUV420sp2BGRA);

    // OpenCV can now access/modify the BGRA image if we want ...

    // Release the native lock we placed on the Java arrays.
    env->ReleaseIntArrayElements(bgra, _bgra, 0);
    env->ReleaseByteArrayElements(yuv, _yuv, 0);
}

}
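For completeness, the matching Java side declares the native method and turns the filled int array into a Bitmap. The snippet below is a hedged sketch: the class must live where the JNI symbol above expects it (com.Cartoonifier.CartoonifierView), but the processFrame() helper and its surroundings are my own illustration rather than the book's exact code.

package com.Cartoonifier;

import android.graphics.Bitmap;

// The class and package must match the JNI symbol above
// (Java_com_Cartoonifier_CartoonifierView_ShowPreview).
public class CartoonifierView {

    // Declared in Java, implemented by the C++ function shown above.
    public native void ShowPreview(int width, int height, byte[] yuv, int[] bgra);

    // Called with a camera preview frame (NV21 byte array); returns a displayable Bitmap.
    protected Bitmap processFrame(byte[] data, int width, int height) {
        int[] bgra = new int[width * height];
        ShowPreview(width, height, data, bgra);    // native code fills 'bgra' in place
        Bitmap bmp = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
        bmp.setPixels(bgra, 0, width, 0, 0, width, height);
        return bmp;
    }
}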
(2) Part of the code for approach 4
The Android.mk file:
LOCAL_PATH := $(call my-dir)
include $(CLEAR_VARS)

#opencv
OPENCVROOT:= /Volumes/hujiawei/Users/hujiawei/Android/opencv_sdk
OPENCV_CAMERA_MODULES:=on
OPENCV_INSTALL_MODULES:=on
OPENCV_LIB_TYPE:=SHARED
include ${OPENCVROOT}/sdk/native/jni/OpenCV.mk

LOCAL_SRC_FILES := facerec.cpp
LOCAL_LDLIBS += -llog
LOCAL_MODULE := facerec

include $(BUILD_SHARED_LIBRARY)
The Application.mk file:
APP_STL := gnustl_static
APP_CPPFLAGS := -frtti -fexceptions
APP_ABI := armeabi
APP_PLATFORM := android-16
The FisherFaceRecognizer file:
package com.android.hacks.ndkdemo;

import org.opencv.contrib.FaceRecognizer;

public class FisherFaceRecognizer extends FaceRecognizer {

    static {
        System.loadLibrary("opencv_java"); // load the OpenCV Java bindings first
        System.loadLibrary("facerec");     // then the native wrapper built from facerec.cpp
    }

    private static native long createFisherFaceRecognizer0();
    private static native long createFisherFaceRecognizer1(int num_components);
    private static native long createFisherFaceRecognizer2(int num_components, double threshold);

    public FisherFaceRecognizer() {
        super(createFisherFaceRecognizer0());
    }

    public FisherFaceRecognizer(int num_components) {
        super(createFisherFaceRecognizer1(num_components));
    }

    public FisherFaceRecognizer(int num_components, double threshold) {
        super(createFisherFaceRecognizer2(num_components, threshold));
    }
}
After that you can run a quick test; of course, you can also build a complete example to check that the algorithm really works:
facerec = new FisherFaceRecognizer();
textView.setText(String.valueOf(facerec.getDouble("threshold"))); // 1.7976xxxx
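Finally, a hedged sketch of using the wrapper for actual recognition. The train()/predict() calls below follow the shape of the OpenCV 2.4.x Java bindings for org.opencv.contrib.FaceRecognizer; treat the exact signatures as assumptions and check them against your SDK version, and note that RecognizerHelper is just an illustrative name.

package com.android.hacks.ndkdemo;

import java.util.List;

import org.opencv.core.Mat;

public class RecognizerHelper {

    // trainImages: grayscale face crops of identical size;
    // labels: a CV_32SC1 Mat with one integer id per training image.
    public static int recognize(List<Mat> trainImages, Mat labels, Mat testFace) {
        FisherFaceRecognizer recognizer = new FisherFaceRecognizer();
        recognizer.train(trainImages, labels);                     // assumed: train(List<Mat>, Mat)

        int[] predictedLabel = new int[1];
        double[] confidence = new double[1];
        recognizer.predict(testFace, predictedLabel, confidence);  // assumed: predict(Mat, int[], double[])
        return predictedLabel[0];
    }
}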