Many apps include a QR code scanning feature, and if you have built one you have probably used a third-party library such as ZXing or ZBar. Since iOS 7, however, the system's native APIs can scan and decode QR codes directly. This post covers native scanning.
Also starting with iOS 7, an app must obtain the user's authorization before it can use the camera, so the first step is to check the authorization status.
AVAuthorizationStatus authorizationStatus = [AVCaptureDevice authorizationStatusForMediaType:AVMediaTypeVideo];
typedef NS_ENUM(NSInteger, AVAuthorizationStatus) {
    AVAuthorizationStatusNotDetermined = 0,
    AVAuthorizationStatusRestricted, // restricted, possibly by device-level access restrictions
    AVAuthorizationStatusDenied,
    AVAuthorizationStatusAuthorized
} NS_AVAILABLE_IOS(7_0);
[AVCaptureDevice requestAccessForMediaType:AVMediaTypeVideo completionHandler:^(BOOL granted) {
    if (granted) {
        [self startCapture]; // authorized
    } else {
        NSLog(@"%@", @"camera access denied");
    }
}];
AVAuthorizationStatus authorizationStatus = [AVCaptureDevice authorizationStatusForMediaType:AVMediaTypeVideo];
switch (authorizationStatus) {
    case AVAuthorizationStatusNotDetermined: {
        [AVCaptureDevice requestAccessForMediaType:AVMediaTypeVideo completionHandler:^(BOOL granted) {
            if (granted) {
                [self startCapture];
            } else {
                NSLog(@"%@", @"camera access denied");
            }
        }];
        break;
    }
    case AVAuthorizationStatusAuthorized: {
        [self startCapture];
        break;
    }
    case AVAuthorizationStatusRestricted:
    case AVAuthorizationStatusDenied: {
        NSLog(@"%@", @"camera access denied");
        break;
    }
    default: {
        break;
    }
}
Scanning is a pipeline from the camera (input) to a decoded string (output), coordinated by an AVCaptureSession. An AVCaptureConnection links each input to each output, and can also be used to control the data flow between them. Their relationship is shown in the figure below:
AVCaptureSession *session = [[AVCaptureSession alloc] init];
AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error;
AVCaptureDeviceInput *deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if (deviceInput) {
    [session addInput:deviceInput];

    AVCaptureMetadataOutput *metadataOutput = [[AVCaptureMetadataOutput alloc] init];
    [metadataOutput setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
    [session addOutput:metadataOutput]; // must run before setting metadataObjectTypes
    metadataOutput.metadataObjectTypes = @[AVMetadataObjectTypeQRCode];

    AVCaptureVideoPreviewLayer *previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
    previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
    previewLayer.frame = self.view.frame;
    [self.view.layer insertSublayer:previewLayer atIndex:0];

    [session startRunning];
} else {
    NSLog(@"%@", error);
}
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputMetadataObjects:(NSArray *)metadataObjects fromConnection:(AVCaptureConnection *)connection {
    AVMetadataMachineReadableCodeObject *metadataObject = metadataObjects.firstObject;
    if ([metadataObject.type isEqualToString:AVMetadataObjectTypeQRCode] && !self.isQRCodeCaptured) {
        // The system keeps scanning after a successful read; use a flag so the result is handled only once.
        self.isQRCodeCaptured = YES;
        NSLog(@"%@", metadataObject.stringValue);
    }
}
Starting with iOS 8 you can also decode a QR code from an image file, using Core Image's CIDetector.

The code is straightforward:
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeQRCode
                                          context:nil
                                          options:@{ CIDetectorAccuracy : CIDetectorAccuracyHigh }];
CIImage *image = [[CIImage alloc] initWithImage:[UIImage imageNamed:@"foobar.png"]];
NSArray *features = [detector featuresInImage:image];
for (CIQRCodeFeature *feature in features) {
    NSLog(@"%@", feature.messageString);
}
Generating a QR code uses the CIQRCodeGenerator CIFilter. It has two settable parameters, inputMessage and inputCorrectionLevel. inputMessage is an NSData object, typically an encoded string or URL. inputCorrectionLevel is a single letter (one of @"L", @"M", @"Q", @"H") indicating the error-correction level; the default is @"M".
QR codes have built-in error correction: a partially damaged code can still be read by a machine, with up to 7%–30% of the area damaged depending on the level. This is why QR codes are widely used on shipping cartons.

The trade-off is that the higher the correction level, the larger the code for the same content, so the 15% level is a common compromise.

Error-correction capacity:

Level L: 7% of codewords can be restored
Level M: 15% of codewords can be restored
Level Q: 25% of codewords can be restored
Level H: 30% of codewords can be restored

This is also the reason many QR codes remain readable with an avatar or logo placed over the middle.
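To tie the two parameters together, here is a minimal sketch of raising the correction level when you plan to overlay an image on the code's center (the URL and the choice of @"H" are illustrative, not from the original code):

```objectivec
CIFilter *filter = [CIFilter filterWithName:@"CIQRCodeGenerator"];
// inputMessage is NSData; CIQRCodeGenerator works with ISO Latin-1 encoded bytes.
NSData *message = [@"http://example.com" dataUsingEncoding:NSISOLatin1StringEncoding];
[filter setValue:message forKey:@"inputMessage"];
// Raise error correction from the default @"M" (15%) to @"H" (30%)
// so a centered avatar does not make the code unreadable.
[filter setValue:@"H" forKey:@"inputCorrectionLevel"];
CIImage *qrImage = filter.outputImage;
```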
The code is below. One thing to note: a UIImage backed only by a CIImage has no CGImageRef, and UIImagePNGRepresentation returns nil in that case, as its header comment states. That is why the code renders the CIImage into a CGImage through a CIContext before saving.

// return image as PNG. May return nil if image has no CGImageRef or invalid bitmap format
NSData * UIImagePNGRepresentation(UIImage *image);
NSString *urlString = @"http://weibo.com/u/2255024877";
NSData *data = [urlString dataUsingEncoding:NSISOLatin1StringEncoding]; // NSISOLatin1StringEncoding
CIFilter *filter = [CIFilter filterWithName:@"CIQRCodeGenerator"];
[filter setValue:data forKey:@"inputMessage"];
CIImage *outputImage = filter.outputImage;
NSLog(@"%@", NSStringFromCGSize(outputImage.extent.size)); // the raw output is tiny, so scale it up
CGAffineTransform transform = CGAffineTransformMakeScale(scale, scale); // scale: magnification factor
CIImage *transformImage = [outputImage imageByApplyingTransform:transform];

// save to disk
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef imageRef = [context createCGImage:transformImage fromRect:transformImage.extent];
UIImage *qrCodeImage = [UIImage imageWithCGImage:imageRef];
[UIImagePNGRepresentation(qrCodeImage) writeToFile:path atomically:NO];
CGImageRelease(imageRef);
When scanning, the previewLayer's scannable area is the entire visible region. Some requirements call for restricting scanning to a specific area, though I find this unnecessary myself: if the whole screen can scan, why confine it to a frame? But if you really need it, you can set the metadataOutput's rectOfInterest.
Note that rectOfInterest is not expressed in previewLayer coordinates; it is a normalized rect in the capture output's coordinate space, so convert it with metadataOutputRectOfInterestForRect::
metadataOutput.rectOfInterest = [previewLayer metadataOutputRectOfInterestForRect:CGRectMake(80, 80, 160, 160)]; // assuming the scan frame's rect is (80, 80, 160, 160)
Also, the conversion only produces correct values once the input's format description is available, i.e. after capture has started, so set it in response to AVCaptureInputPortFormatDescriptionDidChangeNotification:

[[NSNotificationCenter defaultCenter] addObserverForName:AVCaptureInputPortFormatDescriptionDidChangeNotification
                                                  object:nil
                                                   queue:[NSOperationQueue currentQueue]
                                              usingBlock:^(NSNotification *_Nonnull note) {
    metadataOutput.rectOfInterest = [previewLayer metadataOutputRectOfInterestForRect:CGRectMake(80, 80, 160, 160)];
}];
The area around the scan frame is usually dimmed with translucent black, while the frame itself stays clear.

You could add a view on each side of the scan frame, but a simpler approach is a custom view that draws the frame's path in drawRect:. The code is as follows:
CGContextRef ctx = UIGraphicsGetCurrentContext();
[[[UIColor blackColor] colorWithAlphaComponent:0.5] setFill];

CGMutablePathRef screenPath = CGPathCreateMutable();
CGPathAddRect(screenPath, NULL, self.bounds);
CGMutablePathRef scanPath = CGPathCreateMutable();
CGPathAddRect(scanPath, NULL, CGRectMake(80, 80, 160, 160));
CGMutablePathRef path = CGPathCreateMutable();
CGPathAddPath(path, NULL, screenPath);
CGPathAddPath(path, NULL, scanPath);

CGContextAddPath(ctx, path);
CGContextDrawPath(ctx, kCGPathEOFill); // even-odd fill leaves the inner scan rect unfilled

CGPathRelease(screenPath);
CGPathRelease(scanPath);
CGPathRelease(path);