Secrets of Android Camera Development You Should Know ㊙️ - A Must-Read for Beginners

Author: @鱿鱼先生. This is an original article; when reposting, please credit: juejin.im/user/5aff97…

There are already countless articles on Android camera development. Today I want to share some of the small secrets of Android camera development, along with a refresher on the basics 😄. If you have no camera development experience yet, I suggest working through Google's Camera Guide documentation first, then coming back to this article; that combination should give you twice the result for half the effort.

Here is the clone address for the reference code up front. PS: 😊 your thoughtful author has mirrored it on Gitee so readers in mainland China can fetch the code quickly.

Gitee: Camera-Android

This article covers Android Camera1; for Camera2, please wait for my follow-up :)😊

1. Opening the Camera

The typical startup boilerplate from the API docs and most articles online:

/** A safe way to get an instance of the Camera object. */
public static Camera getCameraInstance(){
    Camera c = null;
    try {
        c = Camera.open(); // attempt to get a Camera instance
    }
    catch (Exception e){
        // Camera is not available (in use or does not exist)
    }
    return c; // returns null if camera is unavailable
}

But when this helper is called to get a camera instance, it is usually invoked directly on the main thread:

@Override
protected void onCreate(Bundle savedInstanceState) {
    // ...
    Camera camera = getCameraInstance();
}

Let's look at the Android source implementation in Camera.java:

/**
 * Creates a new Camera object to access the first back-facing camera on the
 * device. If the device does not have a back-facing camera, this returns
 * null.
 * @see #open(int)
 */
public static Camera open() {
    int numberOfCameras = getNumberOfCameras();
    CameraInfo cameraInfo = new CameraInfo();
    for (int i = 0; i < numberOfCameras; i++) {
        getCameraInfo(i, cameraInfo);
        if (cameraInfo.facing == CameraInfo.CAMERA_FACING_BACK) {
            return new Camera(i);
        }
    }
    return null;
}
    
Camera(int cameraId) {
    mShutterCallback = null;
    mRawImageCallback = null;
    mJpegCallback = null;
    mPreviewCallback = null;
    mPostviewCallback = null;
    mUsingPreviewAllocation = false;
    mZoomListener = null;
    Looper looper;
    if ((looper = Looper.myLooper()) != null) {
        mEventHandler = new EventHandler(this, looper);
    } else if ((looper = Looper.getMainLooper()) != null) {
        mEventHandler = new EventHandler(this, looper);
    } else {
        mEventHandler = null;
    }
    String packageName = ActivityThread.currentPackageName();
    native_setup(new WeakReference<Camera>(this), cameraId, packageName);
}

Note mEventHandler: if the thread that opens the camera has no Looper, mEventHandler falls back to the UI thread's default Looper. From the source we can see that EventHandler is responsible for dispatching the callbacks coming up from the native layer. Normally we want all callbacks on the UI thread, which makes it convenient to drive view logic directly, but some special scenarios call for something different. Keep this detail in mind; we will come back to it later.

2. Setting the Camera 📷 Preview Mode

2.1 Previewing with a SurfaceHolder

Following the official Guide, we use a SurfaceView directly as the preview target.

@Override
protected void onCreate(Bundle savedInstanceState) {
    // ...
    SurfaceView surfaceView = findViewById(R.id.camera_surface_view);
    surfaceView.getHolder().addCallback(this);
}

@Override
public void surfaceCreated(SurfaceHolder holder) {
    // TODO: Connect Camera.
    if (null != mCamera) {
        try {
            mCamera.setPreviewDisplay(holder);
            mCamera.startPreview();
            mHolder = holder;
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

Re-run the program and I believe you can already see the preview. It probably has some orientation issues, but at least we can see the camera feed.

2.2 Previewing with a SurfaceTexture

This mode is mainly for pipelines that use OpenGL ES to run the camera preview through the GPU. The target view also changes to a GLSurfaceView. There are ⚠️ three small details to watch when using it:

  1. Basic GLSurfaceView setup
GLSurfaceView surfaceView = findViewById(R.id.gl_surfaceview);
surfaceView.setEGLContextClientVersion(2); // enable OpenGL ES 2.0 support
surfaceView.setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY); // render only on demand
surfaceView.setRenderer(this);

Point 3 below explains what render-on-demand means in detail.

  2. Create the SurfaceTexture corresponding to the texture

@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
    // Init Camera
    int[] textureIds = new int[1];
    GLES20.glGenTextures(1, textureIds, 0);
    GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, textureIds[0]);
    // Clamp texture coordinates outside the [0, 1] range to the edge
    GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
    GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
    // Filtering (mapping texels to coordinates): GL_NEAREST when minifying, GL_LINEAR when magnifying
    GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
    GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

    mSurfaceTexture = new SurfaceTexture(textureIds[0]);
    mCameraTexture = textureIds[0];

    GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, 0);

    try {
        // Use the SurfaceTexture we just created as the preview target
        mCamera.setPreviewTexture(mSurfaceTexture);
        mCamera.startPreview();
    } catch (IOException e) {
        e.printStackTrace();
    }
}

The texture created here uses a special OpenGL ES extension target, GLES11Ext.GL_TEXTURE_EXTERNAL_OES. Only when using this texture type can a developer process the camera feed in real time with their own GPU code.

  3. Data-driven refresh

Change the GLSurfaceView from its original continuous-render mode to rendering only when new data arrives.

GLSurfaceView surfaceView = findViewById(R.id.gl_surfaceview);
surfaceView.setEGLContextClientVersion(2);
surfaceView.setRenderer(this);
// Add the following line to switch to passive, on-demand GL rendering.
// Change SurfaceView render mode to RENDERMODE_WHEN_DIRTY.
surfaceView.setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY);

When new data arrives, we can trigger a redraw like this:

mSurfaceTexture.setOnFrameAvailableListener(surfaceTexture -> {
    // A new frame is available; wake the GL thread so it renders.
    mSurfaceView.requestRender();
});

Everything else can stay the same. The benefit is that the render rate now tracks the camera's frame rate instead of redrawing continuously and burning GPU power for nothing.

2.3 Previewing with YUV (NV21) Data

This section focuses on how to implement the camera preview from raw YUV data. The main real-world scenarios for this approach are face detection and other real-time computer-vision algorithms that process the frames.

2.3.1 Setting up the camera's YUV preview callback buffers

This step uses the older API Camera.setPreviewCallbackWithBuffer. One extra operation is mandatory with this function: you must add the callback data buffers to the camera yourself.

// Set the target preview size; 1280*720 is a safe choice that virtually every camera supports
parameters.setPreviewSize(previewSize.first, previewSize.second);
// Have the camera deliver NV21 frames into the user-supplied buffers
mCamera.setPreviewCallbackWithBuffer(this);
mCamera.setParameters(parameters);
// Add 4 byte[] buffer objects for the camera to fill.
mCamera.addCallbackBuffer(createPreviewBuffer(previewSize.first, previewSize.second));
mCamera.addCallbackBuffer(createPreviewBuffer(previewSize.first, previewSize.second));
mCamera.addCallbackBuffer(createPreviewBuffer(previewSize.first, previewSize.second));
mCamera.addCallbackBuffer(createPreviewBuffer(previewSize.first, previewSize.second));

Note ⚠️ that if you register the preview callback with Camera.setPreviewCallback instead, the data array passed to onPreviewFrame(byte[] data, Camera camera) is created internally by the camera.

@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    // TODO: pre-process the camera frame
    if (!bytesToByteBuffer.containsKey(data)) {
        Log.d(TAG, "Skipping frame. Could not find ByteBuffer associated with the image "
                + "data from the camera.");
    } else {
        // Because we use setPreviewCallbackWithBuffer, we must hand the buffer back
        mCamera.addCallbackBuffer(data);
    }
}

If you never call mCamera.addCallbackBuffer(byte[]), onPreviewFrame stops being triggered after 4 callbacks. Notice that this count exactly equals the number of buffers added when the camera was initialized.
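The snippets above call a createPreviewBuffer helper without showing it. Here is a minimal sketch under the assumption that it only needs to allocate a byte[] big enough for one NV21 frame (the class name and exact implementation are my own, not from the demo project):

```java
// NV21 packs 12 bits per pixel: a full-resolution Y plane followed by an
// interleaved V/U plane at quarter resolution, so one frame needs
// width * height * 3 / 2 bytes. On Android you would normally derive this
// from ImageFormat.getBitsPerPixel(ImageFormat.NV21).
class PreviewBuffers {
    static int nv21BufferSize(int width, int height) {
        return width * height * 3 / 2; // Y bytes + interleaved VU bytes
    }

    static byte[] createPreviewBuffer(int width, int height) {
        return new byte[nv21BufferSize(width, height)];
    }
}
```

At 1280*720 each buffer is 1,382,400 bytes, so the four buffers above cost roughly 5.5 MB in total.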

2.3.2 Starting the preview

Our goal is to render from the data delivered to onPreviewFrame, so the mCamera.setPreviewTexture call looks like it should be removed: we don't want the camera to keep feeding preview frames to the SurfaceTexture we set earlier, which would waste system resources.

😂 So we comment out the mCamera.setPreviewTexture(mSurfaceTexture); line:

try {
    // mCamera.setPreviewTexture(mSurfaceTexture);
    mCamera.startPreview();
} catch (Exception e) {
    e.printStackTrace();
}

Testing shows that onPreviewFrame surprisingly stops working. A quick look at the documentation turns up the following:

/**
 * Starts capturing and drawing preview frames to the screen.
 * Preview will not actually start until a surface is supplied
 * with {@link #setPreviewDisplay(SurfaceHolder)} or
 * {@link #setPreviewTexture(SurfaceTexture)}.
 *
 * <p>If {@link #setPreviewCallback(Camera.PreviewCallback)},
 * {@link #setOneShotPreviewCallback(Camera.PreviewCallback)}, or
 * {@link #setPreviewCallbackWithBuffer(Camera.PreviewCallback)} were
 * called, {@link Camera.PreviewCallback#onPreviewFrame(byte[], Camera)}
 * will be called when preview data becomes available.
 *
 * @throws RuntimeException if starting preview fails; usually this would be
 * because of a hardware or other low-level error, or because release()
 * has been called on this Camera instance.
 */
public native final void startPreview();

The camera starts previewing correctly only after it has been given a corresponding Surface resource.

Now for the magic trick:

/**
 * The dummy surface texture must be assigned a chosen name. Since we never use an OpenGL context,
 * we can choose any ID we want here. The dummy surface texture is not a crazy hack - it is
 * actually how the camera team recommends using the camera without a preview.
 */
private static final int DUMMY_TEXTURE_NAME = 100;


@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
    // ... codes
	SurfaceTexture dummySurfaceTexture = new SurfaceTexture(DUMMY_TEXTURE_NAME);
    mCamera.setPreviewTexture(dummySurfaceTexture);
    // ... codes
}

After this change, onPreviewFrame starts firing again. The dummy SurfaceTexture is enough to get the camera going. And if we additionally set:

dummySurfaceTexture.setOnFrameAvailableListener(surfaceTexture -> {
    Log.d(TAG, "dummySurfaceTexture working.");
});

we find that the system can tell on its own whether the SurfaceTexture is real: onFrameAvailable never fires.

2.3.3 Rendering the YUV data onto the SurfaceView

Android's default preview YUV format is NV21, and OpenGL can only draw in RGB, so a shader has to do the format conversion. For the full conversion see nv21_to_rgba_fs.glsl:

#ifdef GL_ES
precision highp float;
#endif
varying vec2 v_texCoord;
uniform sampler2D y_texture;
uniform sampler2D uv_texture;

void main (void) {
    float r, g, b, y, u, v;
    //We had put the Y values of each pixel to the R,G,B components by
    //GL_LUMINANCE, that's why we're pulling it from the R component,
    //we could also use G or B
    y = texture2D(y_texture, v_texCoord).r;
    //We had put the U and V values of each pixel to the A and R,G,B
    //components of the texture respectively using GL_LUMINANCE_ALPHA.
    //Since U,V bytes are interspread in the texture, this is probably
    //the fastest way to use them in the shader
    u = texture2D(uv_texture, v_texCoord).a - 0.5;
    v = texture2D(uv_texture, v_texCoord).r - 0.5;
    //The numbers are just YUV to RGB conversion constants
    r = y + 1.13983*v;
    g = y - 0.39465*u - 0.58060*v;
    b = y + 2.03211*u;
    //We finally set the RGB color of our pixel
    gl_FragColor = vec4(r, g, b, 1.0);
}
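The shader reads Y from one texture and the interleaved VU bytes from another. The NV21 byte layout it relies on, a full Y plane followed by a half-size interleaved V/U plane, can be sketched in plain Java (the helper names are illustrative, not part of the demo):

```java
import java.util.Arrays;

// Splits an NV21 frame into its two planes: width*height Y bytes, then
// width*height/2 interleaved V/U bytes (V comes first in NV21).
class Nv21Planes {
    static byte[] yPlane(byte[] nv21, int width, int height) {
        return Arrays.copyOfRange(nv21, 0, width * height);
    }

    static byte[] vuPlane(byte[] nv21, int width, int height) {
        int ySize = width * height;
        return Arrays.copyOfRange(nv21, ySize, ySize + ySize / 2);
    }
}
```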

The main idea is to split the NV21 data directly into two textures, then compute the color conversion back to RGBA per pixel in the fragment shader.

mYTexture = new Texture();
created = mYTexture.create(mYuvBufferWidth, mYuvBufferHeight, GLES10.GL_LUMINANCE);
if (!created) {
    throw new RuntimeException("Create Y texture fail.");
}

mUVTexture = new Texture();
// UV has two channels, so GL_LUMINANCE_ALPHA is the right data format
created = mUVTexture.create(mYuvBufferWidth / 2, mYuvBufferHeight / 2, GLES10.GL_LUMINANCE_ALPHA);
if (!created) {
    throw new RuntimeException("Create UV texture fail.");
}

// ... some logic omitted

// Copy the Y channel of the image into its buffer, the first (width*height) bytes are the Y channel
yBuffer.put(data.array(), 0, mPreviewSize.first * mPreviewSize.second);
yBuffer.position(0);

// Copy the UV channels of the image into their buffer, the following (width*height/2) bytes are the UV channel; the U and V bytes are interspread
uvBuffer.put(data.array(), mPreviewSize.first * mPreviewSize.second, (mPreviewSize.first * mPreviewSize.second) / 2);
uvBuffer.position(0);

mYTexture.load(yBuffer);
mUVTexture.load(uvBuffer);
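To sanity-check the shader, the same conversion constants can be mirrored in plain Java on the CPU (this class exists only for verification; it is not part of the demo project):

```java
// CPU-side mirror of the fragment shader's YUV-to-RGB constants. Inputs match
// the shader: y in [0, 1], u and v already centered around 0 (i.e. raw
// byte / 255.0 - 0.5). Output channels are clamped to [0, 1].
class YuvToRgb {
    static float[] toRgb(float y, float u, float v) {
        float r = y + 1.13983f * v;
        float g = y - 0.39465f * u - 0.58060f * v;
        float b = y + 2.03211f * u;
        return new float[] { clamp(r), clamp(g), clamp(b) };
    }

    private static float clamp(float x) {
        return Math.max(0f, Math.min(1f, x));
    }
}
```

With u = v = 0 the output is a pure gray equal to y, which is a quick way to confirm the plane split is wired up correctly: an all-gray preview usually means the UV texture is effectively empty.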

2.3.4 Performance optimization

The rate at which the camera delivers YUV frames and the rate at which OpenGL ES renders the preview are not necessarily matched, so there is room to optimize. Since this is a live preview, the frame we render must always be the latest one. We can use a single shared pendingFrameData reference to synchronize the render thread and the camera callback thread, which guarantees the picture stays fresh.

synchronized (lock) {
    if (pendingFrameData != null) { // frame data that has not been processed; just return it to the camera
        camera.addCallbackBuffer(pendingFrameData.array());
        pendingFrameData = null;
    }

    pendingFrameData = bytesToByteBuffer.get(data);
    // Notify the processor thread if it is waiting on the next frame (see below).
    // In the demo this wakes the GL render thread if it is parked waiting.
    lock.notifyAll();
}

// Tell the GLSurfaceView it may redraw now
mSurfaceView.requestRender();

One last little optimization tip ㊙️, which ties back to the Handler discussion in Opening the Camera. If we call Camera.open() on the Android main thread, or on a child thread without a Looper, the end result is the same: every camera callback is dispatched through Looper.getMainLooper(). If the UI thread is busy with heavy work, that will inevitably hurt the preview frame rate, so the best approach is to open the camera from a dedicated worker thread.

final ConditionVariable startDone = new ConditionVariable();

new Thread() {
    @Override
    public void run() {
        Log.v(TAG, "start loopRun");
        // Set up a looper to be used by camera.
        Looper.prepare();
        // Save the looper so that we can terminate this thread
        // after we are done with it.
        mLooper = Looper.myLooper();
        mCamera = Camera.open(cameraId);
        Log.v(TAG, "camera is opened");
        startDone.open();
        Looper.loop(); // Blocks forever until Looper.quit() is called.
        if (LOGV) Log.v(TAG, "initializeMessageLooper: quit.");
    }
}.start();

Log.v(TAG, "start waiting for looper");

if (!startDone.block(WAIT_FOR_COMMAND_TO_COMPLETE)) {
    Log.v(TAG, "initializeMessageLooper: start timeout");
    fail("initializeMessageLooper: start timeout");
}

3. Camera Orientation

The orientation of the preview data depends on how the camera sensor is physically mounted on the device. That topic deserves an article of its own, so here I'll go straight to the code.

private void setRotation(Camera camera, Camera.Parameters parameters, int cameraId) {
    WindowManager windowManager = (WindowManager) getSystemService(Context.WINDOW_SERVICE);
    int degrees = 0;
    int rotation = windowManager.getDefaultDisplay().getRotation();
    switch (rotation) {
        case Surface.ROTATION_0:
            degrees = 0;
            break;
        case Surface.ROTATION_90:
            degrees = 90;
            break;
        case Surface.ROTATION_180:
            degrees = 180;
            break;
        case Surface.ROTATION_270:
            degrees = 270;
            break;
        default:
            Log.e(TAG, "Bad rotation value: " + rotation);
    }

    Camera.CameraInfo cameraInfo = new Camera.CameraInfo();
    Camera.getCameraInfo(cameraId, cameraInfo);

    int angle;
    int displayAngle;
    if (cameraInfo.facing == Camera.CameraInfo.CAMERA_FACING_FRONT) {
        angle = (cameraInfo.orientation + degrees) % 360;
        displayAngle = (360 - angle) % 360; // compensate for it being mirrored
    } else { // back-facing
        angle = (cameraInfo.orientation - degrees + 360) % 360;
        displayAngle = angle;
    }

    // This corresponds to the rotation constants.
    mRotation = angle;

    camera.setDisplayOrientation(displayAngle);
    parameters.setRotation(angle);
}
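The angle arithmetic inside setRotation is easy to get wrong, so it can help to extract it into a pure helper that is testable without any Android classes (a sketch; the class and method names are my own):

```java
// The orientation math from setRotation as pure functions.
// sensorOrientation comes from CameraInfo.orientation; displayDegrees is the
// current display rotation (0, 90, 180 or 270).
class CameraAngles {
    // Front-facing sensors are mirrored, so the display angle compensates.
    static int frontDisplayAngle(int sensorOrientation, int displayDegrees) {
        int angle = (sensorOrientation + displayDegrees) % 360;
        return (360 - angle) % 360;
    }

    static int backDisplayAngle(int sensorOrientation, int displayDegrees) {
        return (sensorOrientation - displayDegrees + 360) % 360;
    }
}
```

For the common case of a back sensor mounted at 90 degrees on a portrait phone (display rotation 0), this yields a display angle of 90.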

But in testing you'll find this has no effect in the YUV preview mode, because the rotation parameters do not affect the data returned through PreviewCallback#onPreviewFrame. Reading the source comments makes us even more certain of this:

/**
 * Set the clockwise rotation of preview display in degrees. This affects
 * the preview frames and the picture displayed after snapshot. This method
 * is useful for portrait mode applications. Note that preview display of
 * front-facing cameras is flipped horizontally before the rotation, that
 * is, the image is reflected along the central vertical axis of the camera
 * sensor. So the users can see themselves as looking into a mirror.
 *
 * <p>This does not affect the order of byte array passed in {@link
 * PreviewCallback#onPreviewFrame}, JPEG pictures, or recorded videos. This
 * method is not allowed to be called during preview.
 *
 * <p>If you want to make the camera image show in the same orientation as
 * the display, you can use the following code.
 * <pre>
 * public static void setCameraDisplayOrientation(Activity activity,
 *         int cameraId, android.hardware.Camera camera) {
 *     android.hardware.Camera.CameraInfo info =
 *             new android.hardware.Camera.CameraInfo();
 *     android.hardware.Camera.getCameraInfo(cameraId, info);
 *     int rotation = activity.getWindowManager().getDefaultDisplay()
 *             .getRotation();
 *     int degrees = 0;
 *     switch (rotation) {
 *         case Surface.ROTATION_0: degrees = 0; break;
 *         case Surface.ROTATION_90: degrees = 90; break;
 *         case Surface.ROTATION_180: degrees = 180; break;
 *         case Surface.ROTATION_270: degrees = 270; break;
 *     }
 *
 *     int result;
 *     if (info.facing == Camera.CameraInfo.CAMERA_FACING_FRONT) {
 *         result = (info.orientation + degrees) % 360;
 *         result = (360 - result) % 360;  // compensate the mirror
 *     } else {  // back-facing
 *         result = (info.orientation - degrees + 360) % 360;
 *     }
 *     camera.setDisplayOrientation(result);
 * }
 * </pre>
 *
 * <p>Starting from API level 14, this method can be called when preview is
 * active.
 *
 * <p><b>Note: </b>Before API level 24, the default value for orientation is 0. Starting in
 * API level 24, the default orientation will be such that applications in forced-landscape mode
 * will have correct preview orientation, which may be either a default of 0 or
 * 180. Applications that operate in portrait mode or allow for changing orientation must still
 * call this method after each orientation change to ensure correct preview display in all
 * cases.</p>
 *
 * @param degrees the angle that the picture will be rotated clockwise.
 *                Valid values are 0, 90, 180, and 270.
 * @throws RuntimeException if setting orientation fails; usually this would
 *         be because of a hardware or other low-level error, or because
 *         release() has been called on this Camera instance.
 * @see #setPreviewDisplay(SurfaceHolder)
 */
public native final void setDisplayOrientation(int degrees);

To get the correct orientation, we need to change the coordinates used when rendering the YUV data. I took a rather brute-force approach here and adjusted the texture coordinates directly:

private static final float FULL_RECTANGLE_COORDS[] = {
        -1.0f, -1.0f,   // 0 bottom left
         1.0f, -1.0f,   // 1 bottom right
        -1.0f,  1.0f,   // 2 top left
         1.0f,  1.0f,   // 3 top right
};

// FIXME: to draw at the correct angle, the texture coordinates are computed
// for a 90-degree rotation, which also includes one mirroring of the texture data
private static final float FULL_RECTANGLE_TEX_COORDS[] = {
        1.0f, 1.0f,     // 0 bottom left
        1.0f, 0.0f,     // 1 bottom right
        0.0f, 1.0f,     // 2 top left
        0.0f, 0.0f      // 3 top right
};

Restart the program: perfect, done.

Summary

Android camera development is, in a word, a journey through pitfalls. If you are learning this area, I strongly suggest studying the material in the references below together with the camera source code; you will get a lot out of it. I also hope this write-up of my experience helps you on your way. 🍻🍻🍻

References

  1. Grafika
  2. Firebase Quickstart Samples
  3. Android Camera CTS