When using AVCaptureSession to capture video frames or photos, the data the hardware produces depends on the device's orientation, so an image or video read straight from the buffer may not display correctly. A corresponding transformation is needed.
For an explanation of the underlying concepts, see this post: http://www.cocoachina.com/ios/20150605/12021.html
Setting up the capture hardware:
- (AVCaptureVideoDataOutput *)captureOutput {
    if (!_captureOutput) {
        _captureOutput = [[AVCaptureVideoDataOutput alloc] init];
        _captureOutput.alwaysDiscardsLateVideoFrames = YES;
        dispatch_queue_t queue = dispatch_queue_create("cameraQueue", NULL);
        [_captureOutput setSampleBufferDelegate:self queue:queue];
        // Ask for BGRA frames so they can be fed straight into Core Graphics.
        NSString *key = (NSString *)kCVPixelBufferPixelFormatTypeKey;
        NSNumber *value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
        NSDictionary *videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
        [_captureOutput setVideoSettings:videoSettings];
    }
    return _captureOutput;
}

- (AVCaptureSession *)captureSession {
    if (!_captureSession) {
        AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
        if ([device lockForConfiguration:nil]) {
            // Set the frame rate (min frame duration of 2/3 s).
            device.activeVideoMinFrameDuration = CMTimeMake(2, 3);
            [device unlockForConfiguration]; // balance the lock taken above
        }
        AVCaptureDeviceInput *captureInput = [AVCaptureDeviceInput deviceInputWithDevice:device error:nil];
        _captureSession = [[AVCaptureSession alloc] init];
        [_captureSession addInput:captureInput];
        [_captureSession addOutput:self.captureOutput];
    }
    return _captureSession;
}

- (AVCaptureVideoPreviewLayer *)prevLayer {
    if (!_prevLayer) {
        _prevLayer = [AVCaptureVideoPreviewLayer layerWithSession:self.captureSession];
        _prevLayer.frame = [self rectForPreviewLayer];
        _prevLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    }
    return _prevLayer;
}
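A minimal usage sketch under the assumptions above (a view controller that owns these lazy getters; rectForPreviewLayer is the author's own frame helper): attach the preview layer and start the session.

// e.g. in viewDidLoad: show the live preview and start delivering frames.
[self.view.layer addSublayer:self.prevLayer];
[self.captureSession startRunning];

// And when leaving the screen:
[self.captureSession stopRunning];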
For example, when reading a photo from the buffer:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace,
                                                    kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);
    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);
    // In this code the device records with the Home button at the bottom,
    // so the orientation is specified when the image is created.
    UIImage *image = [UIImage imageWithCGImage:newImage scale:1.0 orientation:UIImageOrientationRight];
    CGImageRelease(newImage);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
}
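Note that this delegate callback runs on the background cameraQueue created earlier, so any UIKit use of the resulting image has to hop back to the main queue. A minimal sketch (the imageView property is hypothetical, not from the original):

dispatch_async(dispatch_get_main_queue(), ^{
    // UIKit must only be touched on the main thread.
    self.imageView.image = image;
});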
The code above merely reads the image out of the buffer and sets the orientation needed for correct display. When running rectangle or face detection, however, the image's underlying CGImage has not actually been rotated to match, so point-conversion problems can appear during detection. To avoid those fairly tedious conversions, the image can be normalized at creation time so that no later adjustment is needed.
For example:
UIImage *image = [UIImage imageWithCGImage:newImage scale:1.0 orientation:UIImageOrientationRight];
// This line guarantees that image.imageOrientation ends up as UIImageOrientationUp.
image = [image normalImage];
- (UIImage *)normalImage {
    if (self.imageOrientation == UIImageOrientationUp) return self;
    // Redraw the image into a fresh context; drawing honors the stored
    // orientation, so the result's imageOrientation is Up.
    UIGraphicsBeginImageContextWithOptions(self.size, NO, self.scale);
    [self drawInRect:(CGRect){0, 0, self.size}];
    UIImage *normalizedImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return normalizedImage;
}
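Since normalImage reads self.imageOrientation, it is naturally an instance method in a UIImage category. A minimal declaration might look like the following (the category name Normalize is illustrative, not from the original):

@interface UIImage (Normalize)
// Returns a copy redrawn so that imageOrientation is UIImageOrientationUp.
- (UIImage *)normalImage;
@end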
With the approach above, the point-conversion problems in rectangle detection and face recognition are resolved.
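As an illustration (not the author's code), a CIDetector face scan can then run directly on the normalized image. One remaining detail worth noting: Core Image uses a bottom-left origin, so feature bounds still need a vertical flip to land in UIKit coordinates.

CIImage *ciImage = [CIImage imageWithCGImage:image.CGImage];
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                          context:nil
                                          options:@{CIDetectorAccuracy : CIDetectorAccuracyHigh}];
for (CIFaceFeature *face in [detector featuresInImage:ciImage]) {
    CGRect bounds = face.bounds;
    // Flip from Core Image's bottom-left origin into UIKit's top-left origin.
    bounds.origin.y = image.size.height - CGRectGetMaxY(bounds);
    NSLog(@"Face in UIKit coordinates: %@", NSStringFromCGRect(bounds));
}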
This covers the fix for image orientation and detection with still images; the video case has not yet been tried.