I have previously written about playing video with SurfaceView / TextureView + MediaPlayer, and about decoding AVI with FFmpeg and rendering it to a SurfaceView; today's post covers playing video with OpenGL ES + MediaPlayer.
I also spent nearly a year on a camera development team back then, but unfortunately I was not in the habit of blogging at the time, so I left little written material behind for myself or for readers.
Here is a screenshot of the effect:
A black-and-white (grayscale) effect implemented with an OpenGL shader,
using the standard CRT luma weights 0.299, 0.587, 0.114, i.e. Gray = 0.299*R + 0.587*G + 0.114*B.
Let's walk through the implementation.
If you have ever rendered an image as a texture with OpenGL, this will be much easier to follow.
Unlike a still image, video has to be refreshed continuously: every time a new frame arrives we update the texture and redraw. Playing video with OpenGL essentially means mapping the video frames onto the screen as a texture.
If you are new to OpenGL, start here first: the graphics fundamentals you must know to learn OpenGL.
1. Write the vertex shader and fragment shader
Vertex shader:
attribute vec4 aPosition;//vertex position
attribute vec4 aTexCoord;//S,T texture coordinates
varying vec2 vTexCoord;
uniform mat4 uMatrix;
uniform mat4 uSTMatrix;
void main() {
vTexCoord = (uSTMatrix * aTexCoord).xy;
gl_Position = uMatrix*aPosition;
}
Fragment shader:
#extension GL_OES_EGL_image_external : require
precision mediump float;
varying vec2 vTexCoord;
uniform samplerExternalOES sTexture;
void main() {
gl_FragColor=texture2D(sTexture, vTexCoord);
}
If you are not familiar with the shading language, see: http://blog.csdn.net/king1425/article/details/71425556
samplerExternalOES replaces the sampler2D used when texturing ordinary images; it works together with SurfaceTexture to update the texture and convert the pixel format.
2. MediaPlayer output
Initialize the MediaPlayer in the constructor of GLVideoRenderer:
mediaPlayer=new MediaPlayer();
try{
mediaPlayer.setDataSource(context, Uri.parse(videoPath));
}catch (IOException e){
e.printStackTrace();
}
mediaPlayer.setAudioStreamType(AudioManager.STREAM_MUSIC);
mediaPlayer.setLooping(true);
mediaPlayer.setOnVideoSizeChangedListener(this);
In onSurfaceCreated we use a SurfaceTexture to set up the MediaPlayer's output.
We create a Surface from the SurfaceTexture and use it as the MediaPlayer's output surface.
The SurfaceTexture's job is to pull each new frame from the video (or camera) stream; the method that fetches the new frame is updateTexImage.
Note that the MediaPlayer's output is usually not RGB (it is typically YUV), while GLSurfaceView needs RGB to display correctly.
So in onSurfaceCreated we first change the texture-creation code to this:
int[] textures = new int[1];
GLES20.glGenTextures(1, textures, 0);
textureId = textures[0];
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, textureId);
ShaderUtils.checkGlError("ws-------glBindTexture mTextureID");
GLES20.glTexParameterf(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_MIN_FILTER,
GLES20.GL_NEAREST);
GLES20.glTexParameterf(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_MAG_FILTER,
GLES20.GL_LINEAR);
What is GLES11Ext.GL_TEXTURE_EXTERNAL_OES for?
As mentioned above, the decoder's output is YUV (probably YUV420sp); this extension texture target converts YUV to RGB automatically, so we don't have to write that conversion ourselves.
Then add the following at the end of onSurfaceCreated:
surfaceTexture = new SurfaceTexture(textureId);
surfaceTexture.setOnFrameAvailableListener(this);//listen for new frames arriving from the MediaPlayer
Surface surface = new Surface(surfaceTexture);
mediaPlayer.setSurface(surface);
surface.release();
if (!playerPrepared){
try {
mediaPlayer.prepare();
mediaPlayer.start();
playerPrepared=true;
} catch (IOException t) {
Log.e(TAG, "media player prepare failed");
}
}
This creates a Surface from the SurfaceTexture and sets it as the MediaPlayer's output surface.
Then, in onDrawFrame:
synchronized (this){
if (updateSurface){
surfaceTexture.updateTexImage();//fetch the latest frame into the texture
surfaceTexture.getTransformMatrix(mSTMatrix);//get the texture transform so the new frame maps correctly onto our texture coordinates
updateSurface = false;
}
}
Whenever a new frame is available we call updateTexImage to update the texture. getTransformMatrix fills mSTMatrix with the texture transform matrix so that the new frame maps correctly onto our texture coordinates; mSTMatrix is declared exactly like projectionMatrix, as a float[16].
private final float[] vertexData = {
1f,-1f,0f,
-1f,-1f,0f,
1f,1f,0f,
-1f,1f,0f
};
private final float[] textureVertexData = {
1f,0f,
0f,0f,
1f,1f,
0f,1f
};
vertexData holds the clip-space coordinates of the quad we draw; textureVertexData holds the texture coordinates of the video frame that correspond to those vertices.
Before we can feed these coordinates to the shaders, we first look up the shader attribute and uniform locations.
Do the mapping in onSurfaceCreated:
aPositionLocation= GLES20.glGetAttribLocation(programId,"aPosition");
uMatrixLocation=GLES20.glGetUniformLocation(programId,"uMatrix");
uSTMMatrixHandle = GLES20.glGetUniformLocation(programId, "uSTMatrix");
uTextureSamplerLocation=GLES20.glGetUniformLocation(programId,"sTexture");
aTextureCoordLocation=GLES20.glGetAttribLocation(programId,"aTexCoord");
Then, in onDrawFrame, pass the data in:
GLES20.glUseProgram(programId);
GLES20.glUniformMatrix4fv(uMatrixLocation,1,false,projectionMatrix,0);
GLES20.glUniformMatrix4fv(uSTMMatrixHandle, 1, false, mSTMatrix, 0);
vertexBuffer.position(0);
GLES20.glEnableVertexAttribArray(aPositionLocation);
GLES20.glVertexAttribPointer(aPositionLocation, 3, GLES20.GL_FLOAT, false,
12, vertexBuffer);
textureVertexBuffer.position(0);
GLES20.glEnableVertexAttribArray(aTextureCoordLocation);
GLES20.glVertexAttribPointer(aTextureCoordLocation,2,GLES20.GL_FLOAT,false,8,textureVertexBuffer);
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,textureId);
GLES20.glUniform1i(uTextureSamplerLocation,0);
GLES20.glViewport(0,0,screenWidth,screenHeight);
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
The full code of GLVideoRenderer:
package com.ws.openglvideoplayer;
import android.content.Context;
import android.graphics.SurfaceTexture;
import android.media.AudioManager;
import android.media.MediaPlayer;
import android.net.Uri;
import android.opengl.GLES11Ext;
import android.opengl.GLES20;
import android.opengl.GLSurfaceView;
import android.opengl.Matrix;
import android.util.Log;
import android.view.Surface;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;
/**
* Created by Shuo.Wang on 2017/3/19.
*/
public class GLVideoRenderer implements GLSurfaceView.Renderer
, SurfaceTexture.OnFrameAvailableListener, MediaPlayer.OnVideoSizeChangedListener {
private static final String TAG = "GLRenderer";
private Context context;
private int aPositionLocation;
private int programId;
private FloatBuffer vertexBuffer;
private final float[] vertexData = {
1f,-1f,0f,
-1f,-1f,0f,
1f,1f,0f,
-1f,1f,0f
};
private final float[] projectionMatrix=new float[16];
private int uMatrixLocation;
private final float[] textureVertexData = {
1f,0f,
0f,0f,
1f,1f,
0f,1f
};
private FloatBuffer textureVertexBuffer;
private int uTextureSamplerLocation;
private int aTextureCoordLocation;
private int textureId;
private SurfaceTexture surfaceTexture;
private MediaPlayer mediaPlayer;
private float[] mSTMatrix = new float[16];
private int uSTMMatrixHandle;
private boolean updateSurface;
private boolean playerPrepared;
private int screenWidth,screenHeight;
public GLVideoRenderer(Context context,String videoPath) {
this.context = context;
playerPrepared=false;
synchronized(this) {
updateSurface = false;
}
vertexBuffer = ByteBuffer.allocateDirect(vertexData.length * 4)
.order(ByteOrder.nativeOrder())
.asFloatBuffer()
.put(vertexData);
vertexBuffer.position(0);
textureVertexBuffer = ByteBuffer.allocateDirect(textureVertexData.length * 4)
.order(ByteOrder.nativeOrder())
.asFloatBuffer()
.put(textureVertexData);
textureVertexBuffer.position(0);
mediaPlayer=new MediaPlayer();
try{
mediaPlayer.setDataSource(context, Uri.parse(videoPath));
}catch (IOException e){
e.printStackTrace();
}
mediaPlayer.setAudioStreamType(AudioManager.STREAM_MUSIC);
mediaPlayer.setLooping(true);
mediaPlayer.setOnVideoSizeChangedListener(this);
}
@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
String vertexShader = ShaderUtils.readRawTextFile(context, R.raw.simple_vertex_shader);
String fragmentShader= ShaderUtils.readRawTextFile(context, R.raw.simple_fragment_shader);
programId=ShaderUtils.createProgram(vertexShader,fragmentShader);
aPositionLocation= GLES20.glGetAttribLocation(programId,"aPosition");
uMatrixLocation=GLES20.glGetUniformLocation(programId,"uMatrix");
uSTMMatrixHandle = GLES20.glGetUniformLocation(programId, "uSTMatrix");
uTextureSamplerLocation=GLES20.glGetUniformLocation(programId,"sTexture");
aTextureCoordLocation=GLES20.glGetAttribLocation(programId,"aTexCoord");
int[] textures = new int[1];
GLES20.glGenTextures(1, textures, 0);
textureId = textures[0];
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, textureId);
ShaderUtils.checkGlError("glBindTexture mTextureID");
/*What is GLES11Ext.GL_TEXTURE_EXTERNAL_OES for?
As mentioned above, the decoder's output is YUV (probably YUV420sp), and this extension texture target
converts YUV to RGB automatically, so we don't need to write that conversion ourselves.*/
GLES20.glTexParameterf(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_MIN_FILTER,
GLES20.GL_NEAREST);
GLES20.glTexParameterf(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_MAG_FILTER,
GLES20.GL_LINEAR);
surfaceTexture = new SurfaceTexture(textureId);
surfaceTexture.setOnFrameAvailableListener(this);//listen for new frames arriving from the MediaPlayer
Surface surface = new Surface(surfaceTexture);
mediaPlayer.setSurface(surface);
surface.release();
if (!playerPrepared){
try {
mediaPlayer.prepare();
mediaPlayer.start();
playerPrepared=true;
} catch (IOException t) {
Log.e(TAG, "media player prepare failed");
}
}
}
@Override
public void onSurfaceChanged(GL10 gl, int width, int height) {
Log.d(TAG, "onSurfaceChanged: "+width+" "+height);
screenWidth=width; screenHeight=height;
}
@Override
public void onDrawFrame(GL10 gl) {
GLES20.glClear( GLES20.GL_DEPTH_BUFFER_BIT | GLES20.GL_COLOR_BUFFER_BIT);
synchronized (this){
if (updateSurface){
surfaceTexture.updateTexImage();//fetch the latest frame into the texture
surfaceTexture.getTransformMatrix(mSTMatrix);//get the texture transform so the new frame maps correctly onto our texture coordinates
updateSurface = false;
}
}
GLES20.glUseProgram(programId);
GLES20.glUniformMatrix4fv(uMatrixLocation,1,false,projectionMatrix,0);
GLES20.glUniformMatrix4fv(uSTMMatrixHandle, 1, false, mSTMatrix, 0);
vertexBuffer.position(0);
GLES20.glEnableVertexAttribArray(aPositionLocation);
GLES20.glVertexAttribPointer(aPositionLocation, 3, GLES20.GL_FLOAT, false,
12, vertexBuffer);
textureVertexBuffer.position(0);
GLES20.glEnableVertexAttribArray(aTextureCoordLocation);
GLES20.glVertexAttribPointer(aTextureCoordLocation,2,GLES20.GL_FLOAT,false,8,textureVertexBuffer);
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES,textureId);
GLES20.glUniform1i(uTextureSamplerLocation,0);
GLES20.glViewport(0,0,screenWidth,screenHeight);
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
}
@Override
synchronized public void onFrameAvailable(SurfaceTexture surface) {
updateSurface = true;
}
@Override
public void onVideoSizeChanged(MediaPlayer mp, int width, int height) {
Log.d(TAG, "onVideoSizeChanged: "+width+" "+height);
updateProjection(width,height);
}
private void updateProjection(int videoWidth, int videoHeight){
float screenRatio=(float)screenWidth/screenHeight;
float videoRatio=(float)videoWidth/videoHeight;
if (videoRatio>screenRatio){
Matrix.orthoM(projectionMatrix,0,-1f,1f,-videoRatio/screenRatio,videoRatio/screenRatio,-1f,1f);
}else Matrix.orthoM(projectionMatrix,0,-screenRatio/videoRatio,screenRatio/videoRatio,-1f,1f,-1f,1f);
}
public MediaPlayer getMediaPlayer() {
return mediaPlayer;
}
}
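The renderer relies on a small ShaderUtils helper (readRawTextFile, createProgram, checkGlError) that is not shown in the article. Below is a minimal sketch of what such a helper might look like; the method names come from the calls above, but the implementation itself is an assumption, not the author's original code:
import android.content.Context;
import android.opengl.GLES20;
import android.util.Log;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
//Hypothetical ShaderUtils sketch, reconstructed from how GLVideoRenderer calls it.
public class ShaderUtils {
    //Read a raw text resource (e.g. R.raw.simple_vertex_shader) into a String.
    public static String readRawTextFile(Context context, int resId) {
        StringBuilder sb = new StringBuilder();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(context.getResources().openRawResource(resId)))) {
            String line;
            while ((line = reader.readLine()) != null) {
                sb.append(line).append('\n');
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
        return sb.toString();
    }
    //Compile one shader and return its handle, or 0 on failure.
    private static int compileShader(int type, String source) {
        int shader = GLES20.glCreateShader(type);
        GLES20.glShaderSource(shader, source);
        GLES20.glCompileShader(shader);
        int[] compiled = new int[1];
        GLES20.glGetShaderiv(shader, GLES20.GL_COMPILE_STATUS, compiled, 0);
        if (compiled[0] == 0) {
            Log.e("ShaderUtils", "compile failed: " + GLES20.glGetShaderInfoLog(shader));
            GLES20.glDeleteShader(shader);
            return 0;
        }
        return shader;
    }
    //Link a program from vertex and fragment shader source strings.
    public static int createProgram(String vertexSource, String fragmentSource) {
        int vertexShader = compileShader(GLES20.GL_VERTEX_SHADER, vertexSource);
        int fragmentShader = compileShader(GLES20.GL_FRAGMENT_SHADER, fragmentSource);
        int program = GLES20.glCreateProgram();
        GLES20.glAttachShader(program, vertexShader);
        GLES20.glAttachShader(program, fragmentShader);
        GLES20.glLinkProgram(program);
        int[] linked = new int[1];
        GLES20.glGetProgramiv(program, GLES20.GL_LINK_STATUS, linked, 0);
        if (linked[0] == 0) {
            Log.e("ShaderUtils", "link failed: " + GLES20.glGetProgramInfoLog(program));
            GLES20.glDeleteProgram(program);
            return 0;
        }
        return program;
    }
    //Log any pending GL error together with the operation that preceded it.
    public static void checkGlError(String op) {
        int error;
        while ((error = GLES20.glGetError()) != GLES20.GL_NO_ERROR) {
            Log.e("ShaderUtils", op + ": glError 0x" + Integer.toHexString(error));
        }
    }
}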
To get the filtered video effect shown in the screenshot above, we only need the CRT luma weights 0.299, 0.587, 0.114. (You can find many more effects online; this is just a starting point.)
Simply change the fragment shader:
#extension GL_OES_EGL_image_external : require
precision mediump float;
varying vec2 vTexCoord;
uniform samplerExternalOES sTexture;
void main() {
//gl_FragColor=texture2D(sTexture, vTexCoord);
vec3 centralColor = texture2D(sTexture, vTexCoord).rgb;
gl_FragColor = vec4(0.299*centralColor.r+0.587*centralColor.g+0.114*centralColor.b);
}
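For completeness, here is a sketch of how the renderer might be hooked up to a GLSurfaceView in an Activity. The article does not show this part, so the Activity class name and the video resource (R.raw.demo_video) are illustrative assumptions, not the author's code:
import android.app.Activity;
import android.opengl.GLSurfaceView;
import android.os.Bundle;
//Hypothetical wiring of GLVideoRenderer into an Activity; names and video path are examples only.
public class MainActivity extends Activity {
    private GLSurfaceView glSurfaceView;
    private GLVideoRenderer renderer;
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        glSurfaceView = new GLSurfaceView(this);
        glSurfaceView.setEGLContextClientVersion(2);//the shaders above are GLES 2.0
        String videoPath = "android.resource://" + getPackageName() + "/" + R.raw.demo_video;
        renderer = new GLVideoRenderer(this, videoPath);
        glSurfaceView.setRenderer(renderer);
        //The default RENDERMODE_CONTINUOUSLY keeps calling onDrawFrame, which picks up
        //new frames whenever onFrameAvailable has set updateSurface to true.
        setContentView(glSurfaceView);
    }
    @Override
    protected void onDestroy() {
        super.onDestroy();
        renderer.getMediaPlayer().release();//free the MediaPlayer when the Activity goes away
    }
}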
That's it: we have implemented video playback with OpenGL ES + MediaPlayer plus a filter effect. A later post will cover the principles and implementation of panoramic (360°) video; stay tuned.
Author: 小码哥_WS
Original article: https://www.jianshu.com/p/13320a8549db