Camera Preview with SurfaceView, TextureView and GLSurfaceView (Part 3)


Today we look at how to use GLSurfaceView. GLSurfaceView lives in the android.opengl package, which tells you it is an OpenGL class. It can also preview the camera, and compared with SurfaceView it has one distinctive advantage: it decouples the data from the display, so a device without a screen can still run a live camera preview, for example for real-time streaming. The example below comes from a project that takes the camera preview data and encodes it into a video stream, but this article only covers how to preview with GLSurfaceView; encoding the preview data will be covered in a later article.

import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

import android.content.Context;
import android.graphics.SurfaceTexture;
import android.opengl.GLES11Ext;
import android.opengl.GLES20;
import android.opengl.GLSurfaceView;
import android.opengl.GLSurfaceView.Renderer;
import android.util.AttributeSet;
import android.util.Log;

public class MyGLSurfaceView extends GLSurfaceView implements
                                    Renderer, SurfaceTexture.OnFrameAvailableListener {
    private static final String TAG = "MyGLSurfaceView";
    private Context mContext;
    private SurfaceTexture mSurface;
    private int mTextureID = -1;
    private CameraDrawer mCameraDrawer;
    
    public MyGLSurfaceView(Context context, AttributeSet attrs) {
        super(context, attrs);
        mContext = context;
        setEGLContextClientVersion(2);        // mandatory: render with OpenGL ES 2.0
        setRenderer(this);                    // this view acts as its own Renderer
        setRenderMode(RENDERMODE_WHEN_DIRTY); // only render when requestRender() is called
    }
    
    @Override
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        Log.i(TAG, "onSurfaceCreated...");
        mTextureID = createTextureID();
        // Wrap the OES texture in a SurfaceTexture so the camera can feed it
        mSurface = new SurfaceTexture(mTextureID);
        mSurface.setOnFrameAvailableListener(this);
        mCameraDrawer = new CameraDrawer(mTextureID);
        CameraWrapper.getInstance().doOpenCamera(null);
    }
    @Override
    public void onSurfaceChanged(GL10 gl, int width, int height) {
        Log.i(TAG, "onSurfaceChanged..." + width + "/" + height);
        GLES20.glViewport(0, 0, width, height);
        if (!CameraWrapper.getInstance().isPreviewing()) {
            CameraWrapper.getInstance().doStartPreview(mSurface);
        }
    }
    
    @Override
    public void onDrawFrame(GL10 gl) {
        Log.i(TAG, "onDrawFrame...");
        GLES20.glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
        // Latch the most recent camera frame into the OES texture
        mSurface.updateTexImage();
        float[] mtx = new float[16];
        mSurface.getTransformMatrix(mtx);
        mCameraDrawer.drawSelf(mtx);
    }
    
    @Override
    public void onPause() {
        super.onPause();
        CameraWrapper.getInstance().doStopCamera();
    }
    
    private int createTextureID() {
        int[] texture = new int[1];

        GLES20.glGenTextures(1, texture, 0);
        GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, texture[0]);
        GLES20.glTexParameterf(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
                GL10.GL_TEXTURE_MIN_FILTER,GL10.GL_LINEAR);        
        GLES20.glTexParameterf(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
                GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
        GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
                GL10.GL_TEXTURE_WRAP_S, GL10.GL_CLAMP_TO_EDGE);
        GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
                GL10.GL_TEXTURE_WRAP_T, GL10.GL_CLAMP_TO_EDGE);
        return texture[0];
    }
    
    public SurfaceTexture getSurfaceTexture() {
        return mSurface;
    }
    
    @Override
    public void onFrameAvailable(SurfaceTexture surfaceTexture) {
        Log.i(TAG, "onFrameAvailable...");
        // A new camera frame arrived; schedule a render pass (RENDERMODE_WHEN_DIRTY)
        this.requestRender();
    }
}
MyGLSurfaceView extends GLSurfaceView and implements the Renderer interface. Let's look at what the constructor does:

1. setEGLContextClientVersion sets the OpenGL ES version. This is mandatory: if you don't set it, the system doesn't know which API version to render with and nothing will show up on screen.

2. setRenderer installs a renderer.

3. setRenderMode sets the render mode. Two modes are supported: RENDERMODE_CONTINUOUSLY renders continuously and is relatively expensive; RENDERMODE_WHEN_DIRTY renders when the Surface is created or when new data arrives, i.e. when requestRender() is called. The second mode is the better fit for camera preview.

Once the constructor has run, the canvas is ready and the renderer is initialized. How does the data actually get rendered? That's where the Renderer interface comes in.

It implements three callbacks: onSurfaceCreated(), onSurfaceChanged() and onDrawFrame().

1. onSurfaceCreated creates a texture and binds it to an ID, then registers a SurfaceTexture frame-available listener that notifies the Renderer when there is data to render. It also constructs mCameraDrawer; CameraDrawer, introduced below, is the important class that does the actual drawing.

2. onSurfaceChanged starts the camera preview whenever the Surface changes (see the CameraWrapper sketch after this list).

3. onDrawFrame calls updateTexImage and draws the current frame onto the canvas.
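
The code above leans on a CameraWrapper singleton that this article never shows. Below is a minimal sketch of what it could look like, assuming the old android.hardware.Camera API this series is built on; the method names (doOpenCamera, doStartPreview, doStopCamera, isPreviewing) come from the calls above, but the bodies are my assumption, not the original implementation.

import java.io.IOException;

import android.graphics.SurfaceTexture;
import android.hardware.Camera;

public class CameraWrapper {
    private static CameraWrapper sInstance;
    private Camera mCamera;
    private boolean mIsPreviewing = false;

    public static synchronized CameraWrapper getInstance() {
        if (sInstance == null) {
            sInstance = new CameraWrapper();
        }
        return sInstance;
    }

    // The original passes a callback here; it is unused in this sketch.
    public void doOpenCamera(Object callback) {
        if (mCamera == null) {
            mCamera = Camera.open(); // opens the first back-facing camera
        }
    }

    public void doStartPreview(SurfaceTexture surface) {
        if (mCamera == null || mIsPreviewing) {
            return;
        }
        try {
            // Route camera frames into the OES texture behind the SurfaceTexture
            mCamera.setPreviewTexture(surface);
            mCamera.startPreview();
            mIsPreviewing = true;
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public boolean isPreviewing() {
        return mIsPreviewing;
    }

    public void doStopCamera() {
        if (mCamera != null) {
            mCamera.stopPreview();
            mCamera.release();
            mCamera = null;
            mIsPreviewing = false;
        }
    }
}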

Now it's time for our drawing master, CameraDrawer, to take the stage.

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import java.nio.ShortBuffer;

import android.opengl.GLES11Ext;
import android.opengl.GLES20;
import android.opengl.Matrix;

public class CameraDrawer {
    private final String vertexShaderCode =
            "attribute vec4 vPosition;" +
            "attribute vec2 inputTextureCoordinate;" +
            "varying vec2 textureCoordinate;" +
            "void main()" +
            "{"+
                "gl_Position = vPosition;"+
                "textureCoordinate = inputTextureCoordinate;" +
            "}";

    private final String fragmentShaderCode =
            "#extension GL_OES_EGL_image_external : require\n"+
            "precision mediump float;" +
            "varying vec2 textureCoordinate;\n" +
            "uniform samplerExternalOES s_texture;\n" +
            "void main() {" +
            "  gl_FragColor = texture2D( s_texture, textureCoordinate );\n" +
            "}";

    private FloatBuffer vertexBuffer, textureVerticesBuffer;
    private ShortBuffer drawListBuffer;
    private final int mProgram;
    private int mPositionHandle;
    private int mTextureCoordHandle;

    private short drawOrder[] = { 0, 1, 2, 0, 2, 3 }; // order to draw vertices

    // number of coordinates per vertex in this array
    private static final int COORDS_PER_VERTEX = 2;

    private final int vertexStride = COORDS_PER_VERTEX * 4; // 4 bytes per float coordinate

    static float squareCoords[] = {
       -1.0f,  1.0f,
       -1.0f, -1.0f,
        1.0f, -1.0f,
        1.0f,  1.0f,
    };

    static float textureVertices[] = {
        0.0f, 1.0f,
        1.0f, 1.0f,
        1.0f, 0.0f,
        0.0f, 0.0f,
    };

    private int texture;

    public CameraDrawer(int texture)
    {
        this.texture = texture;
        // initialize vertex byte buffer for shape coordinates
        ByteBuffer bb = ByteBuffer.allocateDirect(squareCoords.length * 4);
        bb.order(ByteOrder.nativeOrder());
        vertexBuffer = bb.asFloatBuffer();
        vertexBuffer.put(squareCoords);
        vertexBuffer.position(0);

        // initialize byte buffer for the draw list
        ByteBuffer dlb = ByteBuffer.allocateDirect(drawOrder.length * 2);
        dlb.order(ByteOrder.nativeOrder());
        drawListBuffer = dlb.asShortBuffer();
        drawListBuffer.put(drawOrder);
        drawListBuffer.position(0);

        ByteBuffer bb2 = ByteBuffer.allocateDirect(textureVertices.length * 4);
        bb2.order(ByteOrder.nativeOrder());
        textureVerticesBuffer = bb2.asFloatBuffer();
        textureVerticesBuffer.put(textureVertices);
        textureVerticesBuffer.position(0);

        int vertexShader    = loadShader(GLES20.GL_VERTEX_SHADER, vertexShaderCode);
        int fragmentShader  = loadShader(GLES20.GL_FRAGMENT_SHADER, fragmentShaderCode);

        mProgram = GLES20.glCreateProgram();             // create empty OpenGL ES Program
        GLES20.glAttachShader(mProgram, vertexShader);   // add the vertex shader to program
        GLES20.glAttachShader(mProgram, fragmentShader); // add the fragment shader to program
        GLES20.glLinkProgram(mProgram);                  // creates OpenGL ES program executables
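        // NOTE: production code should verify the link result with
        // GLES20.glGetProgramiv(mProgram, GLES20.GL_LINK_STATUS, ...)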
    }

    public void drawSelf(float[] mtx)
    {
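        // NOTE: mtx (the SurfaceTexture transform matrix) is received but never
        // applied in the original code; see the note after this class.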
        GLES20.glUseProgram(mProgram);

        GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
        GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, texture);

        // get handle to vertex shader's vPosition member
        mPositionHandle = GLES20.glGetAttribLocation(mProgram, "vPosition");

        // Enable a handle to the triangle vertices
        GLES20.glEnableVertexAttribArray(mPositionHandle);

        // Prepare the square's coordinate data
        GLES20.glVertexAttribPointer(mPositionHandle, COORDS_PER_VERTEX, GLES20.GL_FLOAT, false, vertexStride, vertexBuffer);

        mTextureCoordHandle = GLES20.glGetAttribLocation(mProgram, "inputTextureCoordinate");
        GLES20.glEnableVertexAttribArray(mTextureCoordHandle);

        GLES20.glVertexAttribPointer(mTextureCoordHandle, COORDS_PER_VERTEX, GLES20.GL_FLOAT, false, vertexStride, textureVerticesBuffer);
        GLES20.glDrawElements(GLES20.GL_TRIANGLES, drawOrder.length, GLES20.GL_UNSIGNED_SHORT, drawListBuffer);

        // Disable vertex array
        GLES20.glDisableVertexAttribArray(mPositionHandle);
        GLES20.glDisableVertexAttribArray(mTextureCoordHandle);
    }
    
    private  int loadShader(int type, String shaderCode){

        // create a vertex shader type (GLES20.GL_VERTEX_SHADER)
        // or a fragment shader type (GLES20.GL_FRAGMENT_SHADER)
        int shader = GLES20.glCreateShader(type);

        // add the source code to the shader and compile it
        GLES20.glShaderSource(shader, shaderCode);
        GLES20.glCompileShader(shader);
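        // NOTE: production code should check the compile result with
        // GLES20.glGetShaderiv(shader, GLES20.GL_COMPILE_STATUS, ...)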

        return shader;
    }
    
    private float[] transformTextureCoordinates( float[] coords, float[] matrix)
    {          
       float[] result = new float[ coords.length ];        
       float[] vt = new float[4];      

       for ( int i = 0 ; i < coords.length ; i += 2 ) {
           float[] v = { coords[i], coords[i+1], 0 , 1  };
           Matrix.multiplyMV(vt, 0, matrix, 0, v, 0);
           result[i] = vt[0];
           result[i+1] = vt[1];
       }
       return result;
    }
}
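One thing to note: drawSelf() receives the transform matrix that onDrawFrame() fetched with getTransformMatrix(), but as written it never applies it, and the transformTextureCoordinates() helper above is never called. On some devices this shows up as a flipped or misaligned preview. Here is a sketch of how the helper could be wired in; this is my assumption, not part of the original code:

// Sketch: apply the SurfaceTexture transform to the texture coordinates
// at the top of drawSelf(), before the attribute pointers are set up.
public void drawSelf(float[] mtx)
{
    float[] transformed = transformTextureCoordinates(textureVertices, mtx);
    textureVerticesBuffer.clear();
    textureVerticesBuffer.put(transformed);
    textureVerticesBuffer.position(0);
    // ... the rest of drawSelf() stays unchanged ...
}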
Through the onFrameAvailable callback, the preview data is drawn frame by frame. Next, let's look at how the Activity uses this view. The difference from the earlier articles is that opening the camera and starting the preview used to be done in the Activity; now they happen inside MyGLSurfaceView.

import android.app.Activity;
import android.graphics.Point;
import android.os.Bundle;
import android.view.ViewGroup.LayoutParams;

public class CameraActivity extends Activity {
    private static final String TAG = "CameraActivity";
    MyGLSurfaceView glSurfaceView = null;
    float previewRate = -1f;
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_camera);
        initUI();
        initViewParams();
        
        // shutterBtn and BtnListeners belong to the full project source (not shown here)
        shutterBtn.setOnClickListener(new BtnListeners());
    }

    private void initUI(){
        glSurfaceView = (MyGLSurfaceView) findViewById(R.id.camera_textureview);
    }
    private void initViewParams(){
        LayoutParams params = glSurfaceView.getLayoutParams();
        Point p = DisplayUtil.getScreenMetrics(this);
        params.width = p.x;
        params.height = p.y;
        previewRate = DisplayUtil.getScreenRate(this); // default to full-screen aspect ratio
        glSurfaceView.setLayoutParams(params);

    }
    
    @Override
    protected void onResume() {
        super.onResume();
        glSurfaceView.bringToFront();
    }

    @Override
    protected void onPause() {
        super.onPause();
        glSurfaceView.onPause(); // pauses the GL render thread and stops the camera
    }
}
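
DisplayUtil is a small helper from the project that the article never shows. Here is a minimal sketch consistent with how it is called above; getScreenMetrics and getScreenRate are the article's names, but the bodies are my assumption:

import android.content.Context;
import android.graphics.Point;
import android.view.WindowManager;

public class DisplayUtil {
    // Screen size in pixels.
    public static Point getScreenMetrics(Context context) {
        WindowManager wm = (WindowManager) context.getSystemService(Context.WINDOW_SERVICE);
        Point size = new Point();
        wm.getDefaultDisplay().getSize(size);
        return size;
    }

    // Height/width ratio of the screen, used as the full-screen preview ratio.
    public static float getScreenRate(Context context) {
        Point p = getScreenMetrics(context);
        return (float) p.y / (float) p.x;
    }
}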

Below is a screenshot of the preview:


That wraps up the three camera preview approaches. In follow-up articles I will go deeper into the framework and analyze how the camera data actually gets rendered onto the GLSurfaceView, since the code above never visibly manipulates any buffers.
