The previous articles analyzed the Camera2 initialization and preview flows in detail; this article analyzes the Camera2 capture (still picture) flow.
As noted in the preview analysis, once the preview starts successfully the ShutterButton is enabled and a picture can be taken. The ShutterButton's click event is handled by the onShutterButtonClick method:
//CaptureModule.java
@Override
public void onShutterButtonClick() {
    // Camera is not open yet
    if (mCamera == null) {
        return;
    }
    int countDownDuration = mSettingsManager.getInteger(
            SettingsManager.SCOPE_GLOBAL, Keys.KEY_COUNTDOWN_DURATION);
    if (countDownDuration > 0) {
        // Start the countdown
        mAppController.getCameraAppUI().transitionToCancel();
        mAppController.getCameraAppUI().hideModeOptions();
        mUI.setCountdownFinishedListener(this);
        mUI.startCountdown(countDownDuration);
        // Will take picture later via listener callback.
    } else {
        // Take the picture immediately
        takePictureNow();
    }
}
First, the camera settings are read to decide whether a countdown (delayed capture) is required. When a countdown is configured, the picture is taken later through the countdown-finished listener (a sketch of that path follows the code below); here we analyze the non-delayed case, in which takePictureNow is called directly:
//CaptureModule.java
private void takePictureNow() {
    if (mCamera == null) {
        Log.i(TAG, "Not taking picture since Camera is closed.");
        return;
    }
    // Create and start the capture session
    CaptureSession session = createAndStartCaptureSession();
    // Get the device orientation
    int orientation = mAppController.getOrientationManager()
            .getDeviceOrientation().getDegrees();
    // Initialize the picture parameters; 'this' (the CaptureModule) is the
    // PictureCallback implementation
    PhotoCaptureParameters params = new PhotoCaptureParameters(
            session.getTitle(), orientation, session.getLocation(),
            mContext.getExternalCacheDir(), this, mPictureSaverCallback,
            mHeadingSensor.getCurrentHeading(), mZoomValue, 0);
    // Decorate the session
    decorateSessionAtCaptureTime(session);
    // Take the picture
    mCamera.takePicture(params, session);
}
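For completeness, this is roughly what the countdown path skipped above looks like: onShutterButtonClick registers the CaptureModule itself as the countdown listener (setCountdownFinishedListener(this)), and the picture is taken when the countdown finishes. The callback name below is an assumption based on that registration, not verified against the actual source:

//CaptureModule.java -- sketch of the countdown listener path (assumed callback name)
@Override
public void onCountDownFinished() {
    // Undo the UI changes made when the countdown started
    // (hedged: the exact CameraAppUI calls may differ in the real source)
    mAppController.getCameraAppUI().showModeOptions();
    // Take the picture through the same path as the non-delayed case
    takePictureNow();
}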
takePictureNow first calls createAndStartCaptureSession to create a CaptureSession and start it. It also sets up the initial parameters, for example registering the CaptureModule (passed as this) as the picture-processing callback (analyzed later):
//CaptureModule.java
private CaptureSession createAndStartCaptureSession() {
    // Session timestamp
    long sessionTime = getSessionTime();
    // Current location
    Location location = mLocationManager.getCurrentLocation();
    // Picture name
    String title = CameraUtil.instance().createJpegName(sessionTime);
    // Create the session
    CaptureSession session = getServices().getCaptureSessionManager()
            .createNewSession(title, sessionTime, location);
    // Start the session
    session.startEmpty(new CaptureStats(mHdrPlusEnabled), new Size(
            (int) mPreviewArea.width(), (int) mPreviewArea.height()));
    return session;
}
It first gathers the session parameters, including the session time, the picture name and the location, then asks the CaptureSessionManager to create the CaptureSession, and finally starts it. At this point the session has been created and started, so we return to the capture flow above: mCamera.takePicture eventually calls OneCameraImpl's takePicture method:
//OneCameraImpl.java
@Override
public void takePicture(final PhotoCaptureParameters params,
        final CaptureSession session) {
    ...
    // Broadcast a "not ready" state until this capture has completed
    broadcastReadyState(false);
    // Create a runnable that performs the capture
    mTakePictureRunnable = new Runnable() {
        @Override
        public void run() {
            // Take the picture
            takePictureNow(params, session);
        }
    };
    // Store the callback (analyzed later); it is in fact the CaptureModule,
    // which implements PictureCallback
    mLastPictureCallback = params.callback;
    mTakePictureStartMillis = SystemClock.uptimeMillis();
    // If autofocus is still actively scanning, defer the capture
    if (mLastResultAFState == AutoFocusState.ACTIVE_SCAN) {
        mTakePictureWhenLensIsStopped = true;
    } else {
        // Take the picture immediately
        takePictureNow(params, session);
    }
}
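The ACTIVE_SCAN branch only records a flag; the deferred capture has to be issued later, once autofocus reports that the lens has stopped. The following is a rough, illustrative sketch of how such a flag could be consumed inside OneCameraImpl's autofocus-state handling (not the verbatim source; the method and helper names here are assumptions):

//OneCameraImpl.java -- sketch of the deferred-capture trigger (assumed)
private void onAutoFocusStateSettled(AutoFocusState newState) {
    if (mTakePictureWhenLensIsStopped && lensIsStopped(newState)) {
        // Fire the capture that takePicture() deferred earlier
        mTakePictureWhenLensIsStopped = false;
        mTakePictureRunnable.run();   // runs takePictureNow(params, session)
    }
}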
In takePicture, a "not ready" state is broadcast first, then the picture callback is stored, and the method checks whether autofocus is still actively scanning. If it is, mTakePictureWhenLensIsStopped is set to true so that the capture fires only after the lens stops (as sketched above); otherwise OneCameraImpl's takePictureNow is called immediately to issue the capture request:
//OneCameraImpl.java
public void takePictureNow(PhotoCaptureParameters params, CaptureSession session) {
    long dt = SystemClock.uptimeMillis() - mTakePictureStartMillis;
    try {
        // Build a still-capture request for the picture
        CaptureRequest.Builder builder = mDevice.createCaptureRequest(
                CameraDevice.TEMPLATE_STILL_CAPTURE);
        builder.setTag(RequestTag.CAPTURE);
        addBaselineCaptureKeysToRequest(builder);

        // Enable lens-shading correction for even better DNGs.
        if (sCaptureImageFormat == ImageFormat.RAW_SENSOR) {
            builder.set(CaptureRequest.STATISTICS_LENS_SHADING_MAP_MODE,
                    CaptureRequest.STATISTICS_LENS_SHADING_MAP_MODE_ON);
        } else if (sCaptureImageFormat == ImageFormat.JPEG) {
            builder.set(CaptureRequest.JPEG_QUALITY, JPEG_QUALITY);
            builder.set(CaptureRequest.JPEG_ORIENTATION, CameraUtil
                    .getJpegRotation(params.orientation, mCharacteristics));
        }

        // Surface used for the preview
        builder.addTarget(mPreviewSurface);
        // Surface used to receive the captured image
        builder.addTarget(mCaptureImageReader.getSurface());
        CaptureRequest request = builder.build();

        if (DEBUG_WRITE_CAPTURE_DATA) {
            final String debugDataDir = makeDebugDir(params.debugDataFolder,
                    "normal_capture_debug");
            Log.i(TAG, "Writing capture data to: " + debugDataDir);
            CaptureDataSerializer.toFile("Normal Capture", request,
                    new File(debugDataDir, "capture.txt"));
        }

        // Issue the capture; mCaptureCallback is the result callback
        mCaptureSession.capture(request, mCaptureCallback, mCameraHandler);
    } catch (CameraAccessException e) {
        Log.e(TAG, "Could not access camera for still image capture.");
        broadcastReadyState(true);
        params.callback.onPictureTakingFailed();
        return;
    }
    synchronized (mCaptureQueue) {
        mCaptureQueue.add(new InFlightCapture(params, session));
    }
}
As with preview, communication with the camera goes through a CaptureRequest; the picture is taken via the session's capture method, with mCaptureCallback set as the capture callback:
//CameraCaptureSessionImpl.java
@Override
public synchronized int capture(CaptureRequest request, CaptureCallback callback,
        Handler handler) throws CameraAccessException {
    ...
    handler = checkHandler(handler, callback);
    return addPendingSequence(mDeviceImpl.capture(request,
            createCaptureCallbackProxy(handler, callback), mDeviceHandler));
}
The code is similar to the preview case: the request is added to the set of pending requests. Now look at the CaptureCallback:
//OneCameraImpl.java
private final CameraCaptureSession.CaptureCallback mCaptureCallback =
        new CameraCaptureSession.CaptureCallback() {
    @Override
    public void onCaptureStarted(CameraCaptureSession session, CaptureRequest request,
            long timestamp, long frameNumber) {
        // Similar to the preview case
        if (request.getTag() == RequestTag.CAPTURE && mLastPictureCallback != null) {
            mLastPictureCallback.onQuickExpose();
        }
    }
    ...
    @Override
    public void onCaptureCompleted(CameraCaptureSession session, CaptureRequest request,
            TotalCaptureResult result) {
        autofocusStateChangeDispatcher(result);
        if (result.get(CaptureResult.CONTROL_AF_STATE) == null) {
            // Check the autofocus state
            AutoFocusHelper.checkControlAfState(result);
        }
        ...
        if (request.getTag() == RequestTag.CAPTURE) {
            InFlightCapture capture = null;
            synchronized (mCaptureQueue) {
                if (mCaptureQueue.getFirst().setCaptureResult(result).isCaptureComplete()) {
                    capture = mCaptureQueue.removeFirst();
                }
            }
            if (capture != null) {
                // The capture is complete
                OneCameraImpl.this.onCaptureCompleted(capture);
            }
        }
        super.onCaptureCompleted(session, request, result);
    }
    ...
};
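The queue handling above relies on InFlightCapture pairing each request's metadata (TotalCaptureResult) with its Image; the capture only counts as complete once both have arrived. Below is a minimal sketch of such a holder, inferred from the fields used by the surrounding code (capture.image, capture.totalCaptureResult, capture.parameters, capture.session); the exact source may differ:

//OneCameraImpl.java -- sketch of the in-flight capture holder (inferred)
private static class InFlightCapture {
    final PhotoCaptureParameters parameters;
    final CaptureSession session;
    Image image;
    TotalCaptureResult totalCaptureResult;

    InFlightCapture(PhotoCaptureParameters parameters, CaptureSession session) {
        this.parameters = parameters;
        this.session = session;
    }

    // Called from the ImageReader listener once the image buffer arrives
    InFlightCapture setImage(Image image) {
        this.image = image;
        return this;
    }

    // Called from onCaptureCompleted once the metadata arrives
    InFlightCapture setCaptureResult(TotalCaptureResult result) {
        this.totalCaptureResult = result;
        return this;
    }

    // Complete only when both the image and the metadata are present
    boolean isCaptureComplete() {
        return image != null && totalCaptureResult != null;
    }
}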
While the native layer processes the request, it invokes the corresponding callbacks: when the capture starts, onCaptureStarted is called (analyzed in detail in the preview article); when the capture finishes, onCaptureCompleted is called. There the autofocus state is checked against the CaptureResult, and if the request's tag marks it as a capture action, the first entry of the queue is examined; if that in-flight capture is now complete it is removed from the queue, since the request has been fully processed, and OneCameraImpl's onCaptureCompleted is called to handle it:
//OneCameraImpl.java
private void onCaptureCompleted(InFlightCapture capture) {
    if (sCaptureImageFormat == ImageFormat.RAW_SENSOR) {
        ...
        File dngFile = new File(RAW_DIRECTORY, capture.session.getTitle() + ".dng");
        writeDngBytesAndClose(capture.image, capture.totalCaptureResult,
                mCharacteristics, dngFile);
    } else {
        // Extract the JPEG bytes from the captured image
        byte[] imageBytes = acquireJpegBytesAndClose(capture.image);
        // Save the JPEG picture
        saveJpegPicture(imageBytes, capture.parameters, capture.session,
                capture.totalCaptureResult);
    }
    broadcastReadyState(true);
    // Invoke the picture-taken callback
    capture.parameters.callback.onPictureTaken(capture.session);
}
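Note that onCaptureCompleted above only covers the metadata half of an InFlightCapture; the Image itself reaches the same queue through the ImageReader that was added as a capture target. A rough sketch of that counterpart, mirroring the queue handling already shown (the listener name and details are assumptions):

//OneCameraImpl.java -- sketch of the image-side counterpart (assumed)
private final ImageReader.OnImageAvailableListener mCaptureImageListener =
        new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        InFlightCapture capture = null;
        synchronized (mCaptureQueue) {
            // Attach the image to the oldest in-flight capture; if its
            // metadata has already arrived, the capture is complete.
            if (mCaptureQueue.getFirst().setImage(reader.acquireLatestImage())
                    .isCaptureComplete()) {
                capture = mCaptureQueue.removeFirst();
            }
        }
        if (capture != null) {
            onCaptureCompleted(capture);
        }
    }
};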
As onCaptureCompleted shows, the JPEG bytes are first extracted from the captured image, then saveJpegPicture is called to save them, and finally the callback held in the parameters (the CaptureModule, which, as noted when the parameters were initialized, implements the PictureCallback interface) is notified via onPictureTaken. So let us first look at saveJpegPicture:
//OneCameraImpl.java
private void saveJpegPicture(byte[] jpegData, final PhotoCaptureParameters captureParams,
        CaptureSession session, CaptureResult result) {
    ...
    ListenableFuture<Optional<Uri>> futureUri =
            session.saveAndFinish(jpegData, width, height, rotation, exif);
    Futures.addCallback(futureUri, new FutureCallback<Optional<Uri>>() {
        @Override
        public void onSuccess(Optional<Uri> uriOptional) {
            captureParams.callback.onPictureSaved(uriOptional.orNull());
        }

        @Override
        public void onFailure(Throwable throwable) {
            captureParams.callback.onPictureSaved(null);
        }
    });
}
The saving itself is delegated to session.saveAndFinish; once it finishes, the onPictureSaved callback is invoked with the resulting Uri (or null on failure), so we need to look at CaptureModule's onPictureSaved:
//CaptureModule.java
@Override
public void onPictureSaved(Uri uri) {
    mAppController.notifyNewMedia(uri);
}
mAppController is implemented by CameraActivity, so next we analyze its notifyNewMedia method:
//CameraActivity.java
@Override
public void notifyNewMedia(Uri uri) {
    ...
    if (FilmstripItemUtils.isMimeTypeVideo(mimeType)) {
        // A video was recorded
        sendBroadcast(new Intent(CameraUtil.ACTION_NEW_VIDEO, uri));
        newData = mVideoItemFactory.queryContentUri(uri);
        ...
    } else if (FilmstripItemUtils.isMimeTypeImage(mimeType)) {
        // A still picture was captured
        CameraUtil.broadcastNewPicture(mAppContext, uri);
        newData = mPhotoItemFactory.queryContentUri(uri);
        ...
    } else {
        return;
    }
    new AsyncTask<FilmstripItem, Void, FilmstripItem>() {
        @Override
        protected FilmstripItem doInBackground(FilmstripItem... params) {
            FilmstripItem data = params[0];
            MetadataLoader.loadMetadata(getAndroidContext(), data);
            return data;
        }
        ...
    }.execute(newData);
}
As the code shows, two kinds of media are handled here: video and image. Since we are analyzing captured picture data, the image branch is taken: the Uri passed into the callback is used with the PhotoItemFactory to query the corresponding picture item, and an asynchronous task is then started to process it, loading its metadata via MetadataLoader.loadMetadata and returning the result. With that, the analysis of the capture flow is essentially complete; a sequence diagram of the whole capture process is given below:
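As a compact recap of the framework-level interaction that the diagram summarizes, here is a minimal sketch of a still capture written directly against the public camera2 API, independent of the Camera app's wrapper classes. All names (mDevice, mSession, mJpegReader, mHandler) are illustrative assumptions; the device, session and ImageReader are assumed to have been set up during preview:

// Minimal still-capture sketch against the public android.hardware.camera2 API.
// Assumes: mDevice (CameraDevice), mSession (CameraCaptureSession),
// mJpegReader (JPEG-format ImageReader already added as a session output)
// and mHandler (background Handler) were created when the preview was set up.
private void captureStillPicture() throws CameraAccessException {
    // Deliver the JPEG bytes as soon as the buffer is available.
    mJpegReader.setOnImageAvailableListener(reader -> {
        try (Image image = reader.acquireNextImage()) {
            ByteBuffer buffer = image.getPlanes()[0].getBuffer();
            byte[] jpeg = new byte[buffer.remaining()];
            buffer.get(jpeg);
            // Write 'jpeg' to a file or MediaStore here.
        }
    }, mHandler);

    // Build and issue the still-capture request, mirroring takePictureNow above.
    CaptureRequest.Builder builder =
            mDevice.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
    builder.addTarget(mJpegReader.getSurface());
    builder.set(CaptureRequest.JPEG_QUALITY, (byte) 90);
    mSession.capture(builder.build(), new CameraCaptureSession.CaptureCallback() {
        @Override
        public void onCaptureCompleted(CameraCaptureSession session,
                CaptureRequest request, TotalCaptureResult result) {
            // Frame metadata (exposure, AF state, ...) is available here.
        }
    }, mHandler);
}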