1. Configure the basic camera environment (initialize the AVCaptureSession, set the delegate, start the session). This is in the sample code and is not repeated in detail here; a minimal sketch follows.
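A sketch only, assuming a BGRA output format to match the software crop below; the names and preset are placeholders, not the sample project's exact code:

```objc
AVCaptureSession *session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPreset1920x1080;

// Camera input
AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:nil];
if ([session canAddInput:input]) [session addInput:input];

// Frame output, delivered to the delegate on a serial queue
AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
output.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
[output setSampleBufferDelegate:self
                          queue:dispatch_queue_create("camera.frame.queue", DISPATCH_QUEUE_SERIAL)];
if ([session canAddOutput:output]) [session addOutput:output];

[session startRunning];
```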
2. In the AVCaptureVideoDataOutputSampleBufferDelegate callback, take the raw frame data (CMSampleBufferRef) and process it:
// Called whenever an AVCaptureVideoDataOutput instance outputs a new video frame.
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    CMSampleBufferRef cropSampleBuffer;
#warning Pick either of the two crop paths. The GPU crop performs better; the CPU crop depends on the device and tends to drop frames over long runs.
    if (self.isOpenGPU) {
        cropSampleBuffer = [self.cropView cropSampleBufferByHardware:sampleBuffer];
    } else {
        cropSampleBuffer = [self.cropView cropSampleBufferBySoftware:sampleBuffer];
    }
    // ... use cropSampleBuffer ...
    // The cropped buffer is a Core Foundation object outside ARC's scope, so it
    // must be released explicitly once you are done with it.
    if (cropSampleBuffer) CFRelease(cropSampleBuffer);
}
- (CMSampleBufferRef)cropSampleBufferBySoftware:(CMSampleBufferRef)sampleBuffer {
    OSStatus status;

    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // Lock the image buffer before touching its base address.
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    // Get information about the image.
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    NSInteger bytesPerPixel = bytesPerRow / width;

    // YUV 420 rule: the crop X coordinate must be even.
    if (_cropX % 2 != 0) _cropX += 1;
    NSInteger baseAddressStart = _cropY * bytesPerRow + bytesPerPixel * _cropX;
    static NSInteger lastAddressStart = 0;

    // pixbuffer and videoInfo only need to be rebuilt when the crop position moves,
    // the resolution changes, or the camera restarts. The demo only handles the
    // position change; add the other cases yourself as needed.
    static CVPixelBufferRef pixbuffer = NULL;
    static CMVideoFormatDescriptionRef videoInfo = NULL;

    // If x or y changed, reset pixbuffer and videoInfo.
    if (lastAddressStart != baseAddressStart) {
        if (pixbuffer != NULL) {
            CVPixelBufferRelease(pixbuffer);
            pixbuffer = NULL;
        }
        if (videoInfo != NULL) {
            CFRelease(videoInfo);
            videoInfo = NULL;
        }
    }

    if (pixbuffer == NULL) {
        NSDictionary *options = @{
            (id)kCVPixelBufferCGImageCompatibilityKey         : @YES,
            (id)kCVPixelBufferCGBitmapContextCompatibilityKey : @YES,
            (id)kCVPixelBufferWidthKey                        : @(g_width_size),
            (id)kCVPixelBufferHeightKey                       : @(g_height_size)
        };
        // Note: CVPixelBufferCreateWithBytes wraps the frame's memory at the crop
        // offset; it does not copy the pixel data.
        status = CVPixelBufferCreateWithBytes(kCFAllocatorDefault, g_width_size, g_height_size, kCVPixelFormatType_32BGRA, &baseAddress[baseAddressStart], bytesPerRow, NULL, NULL, (__bridge CFDictionaryRef)options, &pixbuffer);
        if (status != 0) {
            NSLog(@"Crop CVPixelBufferCreateWithBytes error %d", (int)status);
            return NULL;
        }
    }

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    CMSampleTimingInfo sampleTime = {
        .duration              = CMSampleBufferGetDuration(sampleBuffer),
        .presentationTimeStamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer),
        .decodeTimeStamp       = CMSampleBufferGetDecodeTimeStamp(sampleBuffer)
    };

    if (videoInfo == NULL) {
        status = CMVideoFormatDescriptionCreateForImageBuffer(kCFAllocatorDefault, pixbuffer, &videoInfo);
        if (status != 0) NSLog(@"Crop CMVideoFormatDescriptionCreateForImageBuffer error %d", (int)status);
    }

    CMSampleBufferRef cropBuffer = NULL;
    status = CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault, pixbuffer, true, NULL, NULL, videoInfo, &sampleTime, &cropBuffer);
    if (status != 0) NSLog(@"Crop CMSampleBufferCreateForImageBuffer error %d", (int)status);

    lastAddressStart = baseAddressStart;
    return cropBuffer;
}
Calculating the position

In the software crop we get the raw data of a frame and locate the exact region to crop inside it. The position can be computed with the formula below (the underlying cropping principle is covered in [YUV介绍]), and every variable the formula needs is available from the code above.
NSInteger baseAddressStart = _cropY * bytesPerRow + bytesPerPixel * _cropX;
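To make the formula concrete, here is a worked example with hypothetical numbers, assuming a 1920 × 1080 32BGRA frame (4 bytes per pixel):

```objc
// bytesPerRow  = 1920 * 4 = 7680, bytesPerPixel = 7680 / 1920 = 4
// For a crop origin of (_cropX, _cropY) = (100, 50):
NSInteger baseAddressStart = 50 * 7680 + 4 * 100; // = 384400
// i.e. the cropped region starts 384,400 bytes past the frame's base address.
```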
The hardware (GPU) crop:
// Hardware crop
- (CMSampleBufferRef)cropSampleBufferByHardware:(CMSampleBufferRef)buffer {
    // A CMSampleBuffer's CVImageBuffer of media data.
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(buffer);
    CGRect cropRect = CGRectMake(_cropX, _cropY, g_width_size, g_height_size);
    /*
     First, to render to a texture, you need an image that is compatible with the
     OpenGL texture cache. Images that were created with the camera API are already
     compatible and you can immediately map them for inputs. Suppose you want to
     create an image to render on and later read out for some other processing,
     though. You have to create the image with a special property: the attributes
     for the image must have kCVPixelBufferIOSurfacePropertiesKey as one of the keys
     of the dictionary. The step below must therefore not be omitted.
     */
    OSStatus status;

    // Only when the position or resolution changes do pixbuffer and videoInfo need
    // to be reset; keeping them static avoids redundant work per frame.
    static CVPixelBufferRef pixbuffer = NULL;
    static CMVideoFormatDescriptionRef videoInfo = NULL;

    if (pixbuffer == NULL) {
        NSDictionary *options = @{
            (id)kCVPixelBufferWidthKey               : @(g_width_size),
            (id)kCVPixelBufferHeightKey              : @(g_height_size),
            // Required for OpenGL texture-cache compatibility, as explained above.
            (id)kCVPixelBufferIOSurfacePropertiesKey : @{}
        };
        status = CVPixelBufferCreate(kCFAllocatorSystemDefault, g_width_size, g_height_size, kCVPixelFormatType_420YpCbCr8BiPlanarFullRange, (__bridge CFDictionaryRef)options, &pixbuffer);
        if (status != noErr) {
            NSLog(@"Crop CVPixelBufferCreate error %d", (int)status);
            return NULL;
        }
    }

    CIImage *ciImage = [CIImage imageWithCVImageBuffer:imageBuffer];
    ciImage = [ciImage imageByCroppingToRect:cropRect];
    // After the crop, the CIImage no longer starts at the origin, so pan it back.
    ciImage = [ciImage imageByApplyingTransform:CGAffineTransformMakeTranslation(-_cropX, -_cropY)];

    static CIContext *ciContext = nil;
    if (ciContext == nil) {
        EAGLContext *eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES3];
        ciContext = [CIContext contextWithEAGLContext:eaglContext options:nil];
    }
    [ciContext render:ciImage toCVPixelBuffer:pixbuffer];
    // Alternative: [ciContext render:ciImage toCVPixelBuffer:pixbuffer bounds:cropRect colorSpace:nil];

    CMSampleTimingInfo sampleTime = {
        .duration              = CMSampleBufferGetDuration(buffer),
        .presentationTimeStamp = CMSampleBufferGetPresentationTimeStamp(buffer),
        .decodeTimeStamp       = CMSampleBufferGetDecodeTimeStamp(buffer)
    };

    if (videoInfo == NULL) {
        status = CMVideoFormatDescriptionCreateForImageBuffer(kCFAllocatorDefault, pixbuffer, &videoInfo);
        if (status != 0) NSLog(@"Crop CMVideoFormatDescriptionCreateForImageBuffer error %d", (int)status);
    }

    CMSampleBufferRef cropBuffer = NULL;
    status = CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault, pixbuffer, true, NULL, NULL, videoInfo, &sampleTime, &cropBuffer);
    if (status != 0) NSLog(@"Crop CMSampleBufferCreateForImageBuffer error %d", (int)status);

    return cropBuffer;
}
The method above is the hardware crop: it runs on the GPU, mainly by rendering through Core Image's CIContext.
CoreImage and UIKit coordinates: at first I cropped the image using the position exactly as set, but the cropped region came out wrong. Some searching turned up an interesting fact: CoreImage and UIKit use different coordinate systems. The normal UIKit coordinate system has its origin at the top-left corner, while the CoreImage coordinate system has its origin at the bottom-left corner (in Core Image, every image's coordinate system is independent of the device). So when cropping, be careful to convert: the X position is unchanged, but Y is flipped.
If you crop with ciImage = [ciImage imageByCroppingToRect:cropRect]; as above, then render with [ciContext render:ciImage toCVPixelBuffer:pixbuffer];. The alternative is to skip the crop and pass the rect at render time instead: [ciContext render:ciImage toCVPixelBuffer:pixbuffer bounds:cropRect colorSpace:nil];.
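As a minimal sketch of the Y flip (the variable names here are hypothetical, assuming frameHeight is the source frame's pixel height):

```objc
// UIKit measures Y down from the top-left corner; Core Image measures Y up
// from the bottom-left corner. To address the same region in Core Image
// coordinates, X is unchanged and Y is flipped:
CGFloat ciY = frameHeight - uikitY - cropHeight; // hypothetical names
CGRect ciCropRect = CGRectMake(uikitX, ciY, cropWidth, cropHeight);
```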
The difference between point and pixel: this is widely documented, so only a brief summary here. A point is the device's logical unit: [UIScreen mainScreen].bounds.size.width returns the device's width in points, so points can be understood simply as the coordinate system used in iOS development to describe UI elements. A pixel is the finer physical unit: on a non-Retina screen 1 point = 1 pixel, and on a Retina screen 1 point = 2 pixels (3 on @3x devices).

Resolution must be set per device model according to the maximum it supports; for example, iPhone 6s and later can record 4K (3840 × 2160) video. The APIs we call when cropping work in pixels, so our unit of operation is the pixel, not the point. Details below.
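A quick way to inspect the point/pixel relationship at runtime (a sketch; bounds is in points and nativeBounds in pixels):

```objc
CGFloat scale = [UIScreen mainScreen].scale;             // 2.0 on @2x, 3.0 on @3x devices
CGSize points = [UIScreen mainScreen].bounds.size;       // e.g. 375 x 667 on iPhone 8
CGSize pixels = [UIScreen mainScreen].nativeBounds.size; // e.g. 750 x 1334 on iPhone 8
NSLog(@"scale %.1f, points %@, pixels %@", scale,
      NSStringFromCGSize(points), NSStringFromCGSize(pixels));
```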
Initializing the CIContext

The CIContext should be declared as a global or static variable: initializing it builds up a lot of internal state and is memory-intensive, and it is only used at render time, so there is no need to recreate it each time. Also note that under MRC the code below does not send retain to ciContext after initialization, so you must retain it manually; under ARC the system handles this automatically.
ARC:
static CIContext *ciContext = NULL;
ciContext = [CIContext contextWithOptions:nil];
MRC:
static CIContext *ciContext = NULL;
ciContext = [CIContext contextWithOptions:nil];
[ciContext retain];
##### 1. Understand the mapping between points and pixels

The CropView has to be displayed on the phone, so it lives in the UIKit coordinate system: origin at the top-left, with width and height in points per device (iPhone 8: 375 × 667, iPhone 8 Plus: 414 × 736, iPhone X: 375 × 812). What we need, though, is the CropView's position at the actual capture resolution; that is, we convert the cropView's x and y from points into the corresponding pixel position.
// Note: this computes the pixel coordinate of X. Using iPhone 8 as an example
// (375 x 667 points, 1920 x 1080 capture resolution):
_cropX = (int)(_currentResolutionW / _screenWidth * cropView.frame.origin.x);
// which, plugging in the numbers (note the float literal to avoid integer division):
_cropX = (int)(1920 / 375.0 * cropView.frame.origin.x);
##### 2. The CPU and GPU crops use different coordinate origins

- CPU: UIKit coordinate system, origin at the top-left
- GPU: Core Image coordinate system, origin at the bottom-left

So the GPU path must flip Y before scaling:
_cropY = (int)(_currentResolutionH / _screenHeight * (_screenHeight - self.frame.origin.y - self.frame.size.height));
##### 3. When the screen is not 16:9, filling it with the video introduces an offset

Note that some iPhone and iPad screens are not 16:9 (iPhone X; all iPads are 4:3). If, at 2K (1920 × 1080) or 4K (3840 × 2160) capture resolution, we set the preview to captureVideoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;, the screen sacrifices part of the video to fill the view: the captured frames are not shown in full. Since the crop works in UIKit coordinates, the view's (0, 0) is then no longer the frame's true pixel (0, 0), and compensating for that takes a lot of extra code. So for the crop feature we instead set captureVideoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspect;, which letterboxes the video so it is displayed in full. With that setting, an iPhone X (ratio greater than 16:9) shrinks the X axis and pads the sides with black bars, while an iPad (ratio less than 16:9) shrinks the Y axis and pads the top and bottom.

Continually adjusting the cropView to account for this would cost a lot of code, so I defined a videoRect property that records the video's real rect. The screen's aspect ratio is known at runtime, so the real video rect can be derived from it; all later code then simply works with videoRect's dimensions, and for the regular 16:9 phones the downstream API needs no change (see the sketch below).
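A minimal sketch of how such a videoRect could be derived (my naming, not the demo's exact code; AVMakeRectWithAspectRatioInsideRect is an AVFoundation helper that fits an aspect ratio inside a bounding rect, matching AVLayerVideoGravityResizeAspect behavior):

```objc
#import <AVFoundation/AVFoundation.h>

// For a 16:9 capture (e.g. 1920 x 1080), the letterboxed video area inside the
// preview is the largest 16:9 rect that fits the preview layer's bounds:
CGRect videoRect = AVMakeRectWithAspectRatioInsideRect(CGSizeMake(16.0, 9.0),
                                                       self.previewLayer.bounds); // previewLayer is hypothetical
// On iPhone X the rect is inset on the X axis (black bars at the sides);
// on iPad it is inset on the Y axis (black bars top and bottom).
// Downstream crop math then uses videoRect instead of the full screen bounds.
```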
##### 4. Why int

In the software crop, creating the pixelBuffer requires this API:
CV_EXPORT CVReturn CVPixelBufferCreateWithBytes(
    CFAllocatorRef CV_NULLABLE allocator,
    size_t width,
    size_t height,
    OSType pixelFormatType,
    void * CV_NONNULL baseAddress,
    size_t bytesPerRow,
    CVPixelBufferReleaseBytesCallback CV_NULLABLE releaseCallback,
    void * CV_NULLABLE releaseRefCon,
    CFDictionaryRef CV_NULLABLE pixelBufferAttributes,
    CV_RETURNS_RETAINED_PARAMETER CVPixelBufferRef CV_NULLABLE * CV_NONNULL pixelBufferOut);
With this API we fold the x and y position into baseAddress, again using the formula NSInteger baseAddressStart = _cropY * bytesPerRow + bytesPerPixel * _cropX;. But by the YUV 420 rule the X coordinate we pass in cannot be odd, hence if (_cropX % 2 != 0) _cropX += 1;. Since only integer types support the modulo operator, these coordinates are all declared as int, and the resulting sub-point error is ignored in the view display.