The example is the TFMediaPlayer project, a live-streaming player I wrote following ijkPlayer. To run it you need to build the ffmpeg libraries; I've put a copy on the network drive, extraction code: vjce. The OpenGL ES playback code lives in the OpenGLES folder.
Working through learnOpenGL until you can use textures is enough for this.
Playing video just means showing pictures one frame after another, like a frame-by-frame animation. What you get after decoding a video frame is a chunk of memory in some format; that data holds all the color information needed for one picture, for example yuv420p. The article 图文详解YUV420数据格式 explains the format very well.
YUV and RGB are both called color spaces. My understanding is that a color space is an agreed way of arranging color values. Take RGB: the red, green and blue components are laid out one after another, each component usually taking one byte with a value of 0-255.
YUV420p stores the three components Y, U and V in three separate layers, like YYYY...UU...VV: all the Y values sit together, whereas RGB is interleaved as RGBRGBRGB. A format in which each component is stored contiguously is a **planar (Plane)** format. The 420 layout shares one U/V pair among every 4 Y samples, which saves space.
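As a minimal sketch of that layout (the function and parameter names here are just for illustration, not TFMediaPlayer code), this is how the three plane pointers of a YUV420p frame are derived:

#include <stdint.h>

// Illustrative only: split a YUV420p buffer into its three planes.
// A width x height frame stores width*height Y bytes, then (width/2)*(height/2)
// U bytes, then the same number of V bytes -- 1.5 bytes per pixel,
// versus 3 bytes per pixel for packed RGB.
static void yuv420p_planes(uint8_t *buffer, int width, int height,
                           uint8_t **y, uint8_t **u, uint8_t **v) {
    *y = buffer;                              // Y plane starts at offset 0
    *u = buffer + width * height;             // U plane follows the full Y plane
    *v = *u + (width / 2) * (height / 2);     // V plane follows the U plane
}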
To display a YUV420p image the yuv data has to be converted to rgba, because rgba is what OpenGL ultimately outputs.
The OpenGL part works the same way on every platform; readers not on iOS can skip this section.
Using a frameBuffer to display:
First, the view's backing layer needs to be a CAEAGLLayer:
+(Class)layerClass{
    return [CAEAGLLayer class];
}
-(BOOL)setupOpenGLContext{
_renderLayer = (CAEAGLLayer *)self.layer;
_renderLayer.opaque = YES;
_renderLayer.contentsScale = [UIScreen mainScreen].scale;
_renderLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:NO], kEAGLDrawablePropertyRetainedBacking,
kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat,
nil];
_context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES3];
//_context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
if (!_context) {
NSLog(@"alloc EAGLContext failed!");
return false;
}
EAGLContext *preContex = [EAGLContext currentContext];
if (![EAGLContext setCurrentContext:_context]) {
NSLog(@"set current EAGLContext failed!");
return false;
}
[self setupFrameBuffer];
[EAGLContext setCurrentContext:preContex];
return true;
}
opaque is set to YES to skip layer blending and avoid unnecessary work. contentsScale is kept the same as the main screen's scale so the view adapts on different phones. kEAGLDrawablePropertyRetainedBacking set to YES would keep the rendered contents unchanged after presenting; we don't need that, since a video frame is useless once it has been shown, so the option is turned off to save the cost. With this context created and set as the current context, the OpenGL calls made while drawing take effect in it, and their results end up where we need them.
-(void)setupFrameBuffer{
glGenFramebuffers(1, &_frameBuffer); // framebuffer names come from glGenFramebuffers, not glGenBuffers
glBindFramebuffer(GL_FRAMEBUFFER, _frameBuffer);
glGenRenderbuffers(1, &_colorBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, _colorBuffer);
[_context renderbufferStorage:GL_RENDERBUFFER fromDrawable:_renderLayer];
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, _colorBuffer);
GLint width,height;
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &width);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &height);
_bufferSize.width = width;
_bufferSize.height = height;
glViewport(0, 0, _bufferSize.width, _bufferSize.height);
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER) ;
if(status != GL_FRAMEBUFFER_COMPLETE) {
NSLog(@"failed to make complete framebuffer object %x", status);
}
}
[_context renderbufferStorage:GL_RENDERBUFFER fromDrawable:_renderLayer];
This line is the crucial one: it is what ties the renderBuffer, the context and the layer together. According to Apple's documentation, the layer responsible for display shares memory with the renderbuffer, so whatever is rendered into the renderBuffer is what the layer shows.
The drawing itself splits into two parts: data prepared once before the first draw, and the work done on every draw pass.
The logic of displaying with OpenGL is: draw a quad, turn the decoded video frame data into a texture, put that texture on the quad, and showing the texture is the whole job.
So the geometry never changes, which means the shaders and the data (VAO etc.) are fixed as well; set them up once before the first draw and nothing needs to change afterwards.
if (!_renderConfiged) {
[self configRenderData];
}
-(BOOL)configRenderData{
if (_renderConfiged) {
return true;
}
GLfloat vertices[] = {
-1.0f, 1.0f, 0.0f, 0.0f, 0.0f, //left top
-1.0f, -1.0f, 0.0f, 0.0f, 1.0f, //left bottom
1.0f, 1.0f, 0.0f, 1.0f, 0.0f, //right top
1.0f, -1.0f, 0.0f, 1.0f, 1.0f, //right bottom
};
// NSString *vertexPath = [[NSBundle mainBundle] pathForResource:@"frameDisplay" ofType:@"vs"];
// NSString *fragmentPath = [[NSBundle mainBundle] pathForResource:@"frameDisplay" ofType:@"fs"];
//_frameProgram = new TFOPGLProgram(std::string([vertexPath UTF8String]), std::string([fragmentPath UTF8String]));
_frameProgram = new TFOPGLProgram(TFVideoDisplay_common_vs, TFVideoDisplay_yuv420_fs);
glGenVertexArrays(1, &VAO);
glBindVertexArray(VAO);
glGenBuffers(1, &VBO);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 5*sizeof(GLfloat), (void*)0);
glEnableVertexAttribArray(0);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 5*sizeof(GLfloat), (void*)(3*sizeof(GLfloat)));
glEnableVertexAttribArray(1);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindVertexArray(0);
//gen textures
glGenTextures(TFMAX_TEXTURE_COUNT, textures);
for (int i = 0; i<TFMAX_TEXTURE_COUNT; i++) {
glBindTexture(GL_TEXTURE_2D, textures[i]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
}
_renderConfiged = YES;
return YES;
}
Compiling and linking the shaders is handled inside the TFOPGLProgram class. The shaders first:
const GLchar *TFVideoDisplay_common_vs = "      \n\
#version 300 es                                 \n\
                                                \n\
layout (location = 0) in highp vec3 position;   \n\
layout (location = 1) in highp vec2 inTexcoord; \n\
                                                \n\
out highp vec2 texcoord;                        \n\
                                                \n\
void main()                                     \n\
{                                               \n\
    gl_Position = vec4(position, 1.0);          \n\
    texcoord = inTexcoord;                      \n\
}                                               \n\
";
const GLchar *TFVideoDisplay_yuv420_fs = "          \n\
#version 300 es                                     \n\
precision highp float;                              \n\
                                                    \n\
in vec2 texcoord;                                   \n\
out vec4 FragColor;                                 \n\
uniform lowp sampler2D yPlaneTex;                   \n\
uniform lowp sampler2D uPlaneTex;                   \n\
uniform lowp sampler2D vPlaneTex;                   \n\
                                                    \n\
void main()                                         \n\
{                                                   \n\
    // (1) y - 16 (2) rgb * 1.164                   \n\
    vec3 yuv;                                       \n\
    yuv.x = texture(yPlaneTex, texcoord).r;         \n\
    yuv.y = texture(uPlaneTex, texcoord).r - 0.5f;  \n\
    yuv.z = texture(vPlaneTex, texcoord).r - 0.5f;  \n\
                                                    \n\
    mat3 trans = mat3(1,     1,        1,           \n\
                      0,    -0.34414,  1.772,       \n\
                      1.402, -0.71414, 0);          \n\
                                                    \n\
    FragColor = vec4(trans * yuv, 1.0);             \n\
}                                                   \n\
";
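TFOPGLProgram itself isn't listed in this post; as a rough, hypothetical sketch (not the actual class), compiling and linking those two source strings boils down to something like this:

#import <Foundation/Foundation.h>
#import <OpenGLES/ES3/gl.h>

// Hypothetical sketch of what a wrapper like TFOPGLProgram has to do with the two
// source strings above -- not the project's actual implementation.
static GLuint compileShader(GLenum type, const GLchar *source) {
    GLuint shader = glCreateShader(type);
    glShaderSource(shader, 1, &source, NULL);
    glCompileShader(shader);

    GLint status = 0;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &status);
    if (status == 0) {
        GLchar log[512];
        glGetShaderInfoLog(shader, sizeof(log), NULL, log);
        NSLog(@"shader compile failed: %s", log);
    }
    return shader;
}

static GLuint linkProgram(const GLchar *vsSource, const GLchar *fsSource) {
    GLuint vs = compileShader(GL_VERTEX_SHADER, vsSource);
    GLuint fs = compileShader(GL_FRAGMENT_SHADER, fsSource);

    GLuint program = glCreateProgram();
    glAttachShader(program, vs);
    glAttachShader(program, fs);
    glLinkProgram(program);

    glDeleteShader(vs);   // safe to delete once they are linked into the program
    glDeleteShader(fs);
    return program;
}

Using it would amount to linkProgram(TFVideoDisplay_common_vs, TFVideoDisplay_yuv420_fs) once, then glUseProgram on every draw pass.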
The vertex shader just writes gl_Position and passes the texture coordinate on to the fragment shader.
The fragment shader is the important one, because this is where the yuv-to-rgb conversion has to happen.
Because yuv420p stores the three components in separate planes, loading the whole yuv buffer as one texture would make it awkward to fetch all three components with a single texture coordinate; every fragment would have to do that index arithmetic. A yuv420p buffer looks like this:

YyYYYYYY
YYYYYYYY
uUUUvVVV

If you want the color information at coordinate (2,1), then Y is at (2,1), U is at (1,3) and V is at (5,3). The width-to-height ratio also changes the layout:

YyYYYYYY
YYYYYYYY
YyYYYYYY
YYYYYYYY
uUUUuUUU
vVVVvVVV

Now u and v are no longer in the same row.
So each component gets its own texture. The nice part is that all three can then share the same texture coordinate:
glBindTexture(GL_TEXTURE_2D, textures[0]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, width, height, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, overlay->pixels[0]);
glGenerateMipmap(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, textures[1]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, width/2, height/2, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, overlay->pixels[1]);
glGenerateMipmap(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, textures[2]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, width/2, height/2, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, overlay->pixels[2]);
glGenerateMipmap(GL_TEXTURE_2D);
overlay is just a struct used to package a video frame; pixels[0], pixels[1] and pixels[2] are the start addresses of the y, u and v planes. GL_LUMINANCE means a single color channel; following examples on the web I first wrote GL_RED, which does not work here. Finally yuv is converted to rgb using the formula:
R = Y + 1.402 (Cr-128)
G = Y - 0.34414 (Cb-128) - 0.71414 (Cr-128)
B = Y + 1.772 (Cb-128)
One more thing to note is the difference between YUV and YCrCb: YCrCb is an offset version of YUV, which is why 0.5 is subtracted in the shader (everything has been mapped into the 0-1 range, so 128 becomes 0.5). Of course I think the exact formula still depends on what was configured at encoding time, i.e. how rgb was turned into yuv when the video was shot; as long as the two ends match, it's fine.
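As a quick sanity check of that formula and of the 0.5 offset (plain C, with arbitrary sample values; not code from the project):

#include <stdio.h>

// Compare the 0-255 formula above with the 0-1 math the fragment shader does.
// The sample Y/Cb/Cr values are arbitrary, purely for illustration.
int main(void) {
    double Y = 150.0, Cb = 100.0, Cr = 200.0;

    // Formula on 0-255 values:
    double R = Y + 1.402 * (Cr - 128.0);
    double G = Y - 0.34414 * (Cb - 128.0) - 0.71414 * (Cr - 128.0);
    double B = Y + 1.772 * (Cb - 128.0);

    // The same math on values mapped to 0-1, subtracting 0.5 instead of 128,
    // exactly as the shader does:
    double y = Y / 255.0, u = Cb / 255.0 - 0.5, v = Cr / 255.0 - 0.5;
    double r = y + 1.402 * v;
    double g = y - 0.34414 * u - 0.71414 * v;
    double b = y + 1.772 * u;

    // The two results differ by less than 1, because 0.5 * 255 = 127.5 rather than 128.
    printf("0-255 : R=%.1f G=%.1f B=%.1f\n", R, G, B);
    printf("shader: R=%.1f G=%.1f B=%.1f\n", r * 255.0, g * 255.0, b * 255.0);
    return 0;
}

The per-frame draw pass then looks like this: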
glBindFramebuffer(GL_FRAMEBUFFER, self.frameBuffer);
glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);
_frameProgram->use();
_frameProgram->setTexture("yPlaneTex", GL_TEXTURE_2D, textures[0], 0);
_frameProgram->setTexture("uPlaneTex", GL_TEXTURE_2D, textures[1], 1);
_frameProgram->setTexture("vPlaneTex", GL_TEXTURE_2D, textures[2], 2);
glBindVertexArray(VAO);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glBindRenderbuffer(GL_RENDERBUFFER, self.colorBuffer);
[self.context presentRenderbuffer:GL_RENDERBUFFER];
When the app is about to go inactive, OpenGL ES must not keep drawing (rendering in the background gets the app killed), so the view listens for the notifications and skips drawing while inactive:
[[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(catchAppResignActive) name:UIApplicationWillResignActiveNotification object:nil];
[[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(catchAppBecomeActive) name:UIApplicationDidBecomeActiveNotification object:nil];
......
-(void)catchAppResignActive{
_appIsUnactive = YES;
}
-(void)catchAppBecomeActive{
_appIsUnactive = NO;
}
.......
if (self.appIsUnactive) {
return; // checked right before drawing: just skip this pass
}
Move the drawing to a background thread
On iOS all of these OpenGL ES operations can be moved onto a background thread, including the final presentRenderbuffer. The key point is that creating the context, preparing the data (VAO, textures, etc.) and rendering have to happen on the same thread. Multi-threaded setups are possible too, but for video playback there is no need; cut the unnecessary cost, and you don't even have to take locks.
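A minimal sketch of that idea, assuming a dedicated serial queue (the names _renderQueue and displayNextFrame are made up for illustration, not the project's actual API):

// Hypothetical sketch: one serial queue owns the context, the GL objects and all drawing.
_renderQueue = dispatch_queue_create("displayView.render", DISPATCH_QUEUE_SERIAL);

dispatch_async(_renderQueue, ^{
    [EAGLContext setCurrentContext:self->_context];   // the current context is a per-thread setting
    [self setupFrameBuffer];                          // framebuffer + renderbuffer
    [self configRenderData];                          // program, VAO, textures
});

// Then, for every decoded frame:
dispatch_async(_renderQueue, ^{
    if (self->_appIsUnactive) {
        return;                                       // never touch OpenGL in the background
    }
    [self displayNextFrame];                          // upload the yuv planes, draw, presentRenderbuffer
});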
Handling changes to the layer's frame
-(void)layoutSubviews{
    [super layoutSubviews];
    // If the context is set up and the layer's size has changed, realloc the renderBuffer.
    if (self.context && !CGSizeEqualToSize(self.layer.frame.size, self.bufferSize)) {
        _needReallocRenderBuffer = YES;
    }
}
...........
if (_needReallocRenderBuffer) {
    [self reallocRenderBuffer];
    _needReallocRenderBuffer = NO;
}
.........
-(void)reallocRenderBuffer{
    glBindRenderbuffer(GL_RENDERBUFFER, _colorBuffer);
    [_context renderbufferStorage:GL_RENDERBUFFER fromDrawable:_renderLayer];
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, _colorBuffer);
    ......
}
The render buffer could be reallocated right inside layoutSubviews, but that is certain to run on the main thread, so only a flag is set there and the actual reallocation happens on the render thread before drawing.
The key points, then, are how the fragment shader reads the yuv components: GL_LUMINANCE textures, with the u and v textures at half the width and height of the y texture.