Please credit the source when reposting this article: polobymulberry, 博客园 (cnblogs).
At the end of 【AR实验室】mulberryAR : ORBSLAM2+VVSION I went over the test results on a real iPhone 5s, where the ExtractORB function, i.e. extracting ORB features from the image, consumed a considerable share of the time. That makes it the top optimization priority right now. As input here I use the recorded image sequence added in 【AR实验室】mulberryAR : 添加连续图像做为输入. This has two benefits: first, the input is identical across runs, so the comparison between single-threaded and parallel feature extraction is credible; second, the program can run on the iOS simulator, since no camera is needed, which makes testing quite convenient and lets you pick from many device models.
At the moment I have only two ideas for optimizing the feature extraction:

1. Parallelize the feature extraction itself.
2. Reduce the number of feature points extracted.
The second approach is trivial: just change the number of feature points in the configuration file, so I won't dwell on it. This post focuses on the first approach, a first attempt at parallelizing the feature extraction.
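For reference, the feature count lives in the ORB-SLAM2 settings YAML under the `ORBextractor` keys; a typical fragment looks like this (the values shown are examples, not recommendations):

```yaml
# ORB extractor parameters (example values)
ORBextractor.nFeatures: 1000    # features per image; lower it to speed up extraction
ORBextractor.scaleFactor: 1.2   # scale factor between pyramid levels
ORBextractor.nLevels: 8         # number of pyramid levels
ORBextractor.iniThFAST: 20      # initial FAST threshold
ORBextractor.minThFAST: 7       # fallback FAST threshold when too few corners are found
```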
In ORB-SLAM2 the feature-extraction function is called ExtractORB. It is a member function of the Frame class and extracts the ORB feature points of the current Frame.
```cpp
// flag is used by stereo cameras; for monocular, flag defaults to 0
// Extracts the ORB feature points of im
void Frame::ExtractORB(int flag, const cv::Mat &im)
{
    if(flag==0)
        // mpORBextractorLeft is an ORBextractor object; because ORBextractor
        // overloads operator(), it can be invoked like a function below
        (*mpORBextractorLeft)(im,cv::Mat(),mvKeys,mDescriptors);
    else
        (*mpORBextractorRight)(im,cv::Mat(),mvKeysRight,mDescriptorsRight);
}
```
As the code above shows, ORB-SLAM2's feature extraction mainly goes through ORBextractor's overloaded operator(). I instrumented the important parts of that function to measure how long each takes.
Important note: timing code execution.
There are many ways to measure how long a piece of code takes, for example:
```cpp
clock_t begin = clock();
//...
clock_t end = clock();
cout << "execute time = " << (double)(end - begin) / CLOCKS_PER_SEC << "s" << endl;
```
However, when I previously used this approach to time multithreaded summation in 【原】C++11并行计算 — 数组求和, I found it misleading for multithreaded code: clock() measures CPU time accumulated across all threads rather than elapsed wall-clock time. Since I am on iOS here, I use iOS's own timing facilities instead; and because Foundation cannot be used directly from a C++ file, I use the corresponding CoreFoundation APIs.
```cpp
CFAbsoluteTime beginTime = CFAbsoluteTimeGetCurrent();
CFDateRef beginDate = CFDateCreate(kCFAllocatorDefault, beginTime);
// ...
CFAbsoluteTime endTime = CFAbsoluteTimeGetCurrent();
CFDateRef endDate = CFDateCreate(kCFAllocatorDefault, endTime);
CFTimeInterval timeInterval = CFDateGetTimeIntervalSinceDate(endDate, beginDate);
cout << "execute time = " << (double)(timeInterval) * 1000.0 << "ms" << endl;
```
Inserting this timing code into operator(), the function now looks as follows, with three parts timed: ComputePyramid, ComputeKeyPointsOctTree, and ComputeDescriptors:
```cpp
void ORBextractor::operator()( InputArray _image, InputArray _mask, vector<KeyPoint>& _keypoints,
                               OutputArray _descriptors)
{
    if(_image.empty())
        return;

    Mat image = _image.getMat();
    assert(image.type() == CV_8UC1 );

    // 1. Time the image-pyramid computation
    CFAbsoluteTime beginComputePyramidTime = CFAbsoluteTimeGetCurrent();
    CFDateRef computePyramidBeginDate = CFDateCreate(kCFAllocatorDefault, beginComputePyramidTime);
    // Pre-compute the scale pyramid
    ComputePyramid(image);
    CFAbsoluteTime endComputePyramidTime = CFAbsoluteTimeGetCurrent();
    CFDateRef computePyramidEndDate = CFDateCreate(kCFAllocatorDefault, endComputePyramidTime);
    CFTimeInterval computePyramidTimeInterval = CFDateGetTimeIntervalSinceDate(computePyramidEndDate, computePyramidBeginDate);
    cout << "ComputePyramid time = " << (double)(computePyramidTimeInterval) * 1000.0 << endl;

    vector < vector<KeyPoint> > allKeypoints;

    // 2. Time the keypoint computation
    CFAbsoluteTime beginComputeKeyPointsTime = CFAbsoluteTimeGetCurrent();
    CFDateRef computeKeyPointsBeginDate = CFDateCreate(kCFAllocatorDefault, beginComputeKeyPointsTime);
    ComputeKeyPointsOctTree(allKeypoints);
    //ComputeKeyPointsOld(allKeypoints);
    CFAbsoluteTime endComputeKeyPointsTime = CFAbsoluteTimeGetCurrent();
    CFDateRef computeKeyPointsEndDate = CFDateCreate(kCFAllocatorDefault, endComputeKeyPointsTime);
    CFTimeInterval computeKeyPointsTimeInterval = CFDateGetTimeIntervalSinceDate(computeKeyPointsEndDate, computeKeyPointsBeginDate);
    cout << "ComputeKeyPointsOctTree time = " << (double)(computeKeyPointsTimeInterval) * 1000.0 << endl;

    Mat descriptors;

    int nkeypoints = 0;
    for (int level = 0; level < nlevels; ++level)
        nkeypoints += (int)allKeypoints[level].size();
    if( nkeypoints == 0 )
        _descriptors.release();
    else
    {
        _descriptors.create(nkeypoints, 32, CV_8U);
        descriptors = _descriptors.getMat();
    }

    _keypoints.clear();
    _keypoints.reserve(nkeypoints);

    int offset = 0;

    // 3. Time the descriptor computation
    CFAbsoluteTime beginComputeDescriptorsTime = CFAbsoluteTimeGetCurrent();
    CFDateRef computeDescriptorsBeginDate = CFDateCreate(kCFAllocatorDefault, beginComputeDescriptorsTime);
    for (int level = 0; level < nlevels; ++level)
    {
        vector<KeyPoint>& keypoints = allKeypoints[level];
        int nkeypointsLevel = (int)keypoints.size();

        if(nkeypointsLevel==0)
            continue;

        // preprocess the resized image
        Mat workingMat = mvImagePyramid[level].clone();
        GaussianBlur(workingMat, workingMat, cv::Size(7, 7), 2, 2, BORDER_REFLECT_101);

        // Compute the descriptors
        Mat desc = descriptors.rowRange(offset, offset + nkeypointsLevel);
        computeDescriptors(workingMat, keypoints, desc, pattern);

        offset += nkeypointsLevel;

        // Scale keypoint coordinates
        if (level != 0)
        {
            float scale = mvScaleFactor[level]; //getScale(level, firstLevel, scaleFactor);
            for (vector<KeyPoint>::iterator keypoint = keypoints.begin(),
                 keypointEnd = keypoints.end(); keypoint != keypointEnd; ++keypoint)
                keypoint->pt *= scale;
        }
        // And add the keypoints to the output
        _keypoints.insert(_keypoints.end(), keypoints.begin(), keypoints.end());
    }
    CFAbsoluteTime endComputeDescriptorsTime = CFAbsoluteTimeGetCurrent();
    CFDateRef computeDescriptorsEndDate = CFDateCreate(kCFAllocatorDefault, endComputeDescriptorsTime);
    CFTimeInterval computeDescriptorsTimeInterval = CFDateGetTimeIntervalSinceDate(computeDescriptorsEndDate, computeDescriptorsBeginDate);
    cout << "ComputeDescriptors time = " << (double)(computeDescriptorsTimeInterval) * 1000.0 << endl;
}
```
Now, running mulberryAR on the iPhone 7 simulator with the image sequence I recorded earlier gives the following results (only the first three frames are shown):
Clearly the optimization should focus on ComputeKeyPointsOctTree and ComputeDescriptors.
ComputePyramid, ComputeKeyPointsOctTree, and ComputeDescriptors all perform the same operations on each level of the image pyramid, so the per-level work is a natural candidate for parallelization. Following this idea, I modified all three parts.
ComputePyramid cannot be parallelized for now: computing level n of the pyramid requires level n-1, so the levels must be built sequentially. Besides, this function accounts for only a small share of the total extraction time, so parallelizing it would not gain much anyway.
Parallelizing ComputeKeyPointsOctTree is easy: move the body of its for (int i = 0; i < nlevels; ++i) loop into a separate function and run each level in its own thread. Without further ado, the code:
```cpp
void ORBextractor::ComputeKeyPointsOctTree(vector<vector<KeyPoint> >& allKeypoints)
{
    allKeypoints.resize(nlevels);

    vector<thread> computeKeyPointsThreads;
    for (int i = 0; i < nlevels; ++i) {
        computeKeyPointsThreads.push_back(thread(&ORBextractor::ComputeKeyPointsOctTreeEveryLevel,
                                                 this, i, std::ref(allKeypoints)));
    }
    for (int i = 0; i < nlevels; ++i) {
        computeKeyPointsThreads[i].join();
    }

    // compute orientations
    vector<thread> computeOriThreads;
    for (int level = 0; level < nlevels; ++level) {
        computeOriThreads.push_back(thread(computeOrientation, mvImagePyramid[level],
                                           std::ref(allKeypoints[level]), umax));
    }
    for (int level = 0; level < nlevels; ++level) {
        computeOriThreads[level].join();
    }
}
```
The ComputeKeyPointsOctTreeEveryLevel function is as follows:
```cpp
void ORBextractor::ComputeKeyPointsOctTreeEveryLevel(int level,
        vector<vector<KeyPoint> >& allKeypoints)
{
    const float W = 30;

    const int minBorderX = EDGE_THRESHOLD-3;
    const int minBorderY = minBorderX;
    const int maxBorderX = mvImagePyramid[level].cols-EDGE_THRESHOLD+3;
    const int maxBorderY = mvImagePyramid[level].rows-EDGE_THRESHOLD+3;

    vector<cv::KeyPoint> vToDistributeKeys;
    vToDistributeKeys.reserve(nfeatures*10);

    const float width = (maxBorderX-minBorderX);
    const float height = (maxBorderY-minBorderY);

    const int nCols = width/W;
    const int nRows = height/W;
    const int wCell = ceil(width/nCols);
    const int hCell = ceil(height/nRows);

    for(int i=0; i<nRows; i++)
    {
        const float iniY = minBorderY+i*hCell;
        float maxY = iniY+hCell+6;

        if(iniY>=maxBorderY-3)
            continue;
        if(maxY>maxBorderY)
            maxY = maxBorderY;

        for(int j=0; j<nCols; j++)
        {
            const float iniX = minBorderX+j*wCell;
            float maxX = iniX+wCell+6;
            if(iniX>=maxBorderX-6)
                continue;
            if(maxX>maxBorderX)
                maxX = maxBorderX;

            vector<cv::KeyPoint> vKeysCell;
            FAST(mvImagePyramid[level].rowRange(iniY,maxY).colRange(iniX,maxX),
                 vKeysCell,iniThFAST,true);

            if(vKeysCell.empty())
            {
                FAST(mvImagePyramid[level].rowRange(iniY,maxY).colRange(iniX,maxX),
                     vKeysCell,minThFAST,true);
            }

            if(!vKeysCell.empty())
            {
                for(vector<cv::KeyPoint>::iterator vit=vKeysCell.begin(); vit!=vKeysCell.end();vit++)
                {
                    (*vit).pt.x+=j*wCell;
                    (*vit).pt.y+=i*hCell;
                    vToDistributeKeys.push_back(*vit);
                }
            }
        }
    }

    vector<KeyPoint> & keypoints = allKeypoints[level];
    keypoints.reserve(nfeatures);

    keypoints = DistributeOctTree(vToDistributeKeys, minBorderX, maxBorderX,
                                  minBorderY, maxBorderY, mnFeaturesPerLevel[level], level);

    const int scaledPatchSize = PATCH_SIZE*mvScaleFactor[level];

    // Add border to coordinates and scale information
    const int nkps = keypoints.size();
    for(int i=0; i<nkps ; i++)
    {
        keypoints[i].pt.x+=minBorderX;
        keypoints[i].pt.y+=minBorderY;
        keypoints[i].octave=level;
        keypoints[i].size = scaledPatchSize;
    }
}
```
Testing on the iPhone 7 simulator gives the following results (first 5 frames):
Parallelization speeds ComputeKeyPointsOctTree up by a factor of 2 to 3.
I call this one a "part" rather than a "function" because, compared with ComputeKeyPointsOctTree, the functions involved are more complex and touch more shared variables; their relationships have to be untangled before the code can be parallelized safely.
Again without belaboring it, here is the parallelized code:
```cpp
vector<thread> computeDescThreads;
vector<vector<KeyPoint> > keypointsEveryLevel;
keypointsEveryLevel.resize(nlevels);
// Each level's offset depends on the offsets of all previous levels,
// so it cannot be computed inside ComputeDescriptorsEveryLevel
for (int level = 0; level < nlevels; ++level) {
    computeDescThreads.push_back(thread(&ORBextractor::ComputeDescriptorsEveryLevel, this,
                                        level, std::ref(allKeypoints), descriptors, offset,
                                        std::ref(keypointsEveryLevel[level])));
    int keypointsNum = (int)allKeypoints[level].size();
    offset += keypointsNum;
}
for (int level = 0; level < nlevels; ++level) {
    computeDescThreads[level].join();
}
// _keypoints must be filled in level order, so this cannot be done
// inside ComputeDescriptorsEveryLevel either
for (int level = 0; level < nlevels; ++level) {
    _keypoints.insert(_keypoints.end(), keypointsEveryLevel[level].begin(),
                      keypointsEveryLevel[level].end());
}
```

The ComputeDescriptorsEveryLevel function is as follows:

```cpp
void ORBextractor::ComputeDescriptorsEveryLevel(int level,
        std::vector<std::vector<KeyPoint> > &allKeypoints,
        const Mat& descriptors, int offset, vector<KeyPoint>& _keypoints)
{
    vector<KeyPoint>& keypoints = allKeypoints[level];
    int nkeypointsLevel = (int)keypoints.size();

    if(nkeypointsLevel==0)
        return;

    // preprocess the resized image
    Mat workingMat = mvImagePyramid[level].clone();
    GaussianBlur(workingMat, workingMat, cv::Size(7, 7), 2, 2, BORDER_REFLECT_101);

    // Compute the descriptors
    Mat desc = descriptors.rowRange(offset, offset + nkeypointsLevel);
    computeDescriptors(workingMat, keypoints, desc, pattern);

    // offset += nkeypointsLevel;

    // Scale keypoint coordinates
    if (level != 0)
    {
        float scale = mvScaleFactor[level]; //getScale(level, firstLevel, scaleFactor);
        for (vector<KeyPoint>::iterator keypoint = keypoints.begin(),
             keypointEnd = keypoints.end(); keypoint != keypointEnd; ++keypoint)
            keypoint->pt *= scale;
    }
    // And add the keypoints to the output
    // _keypoints.insert(_keypoints.end(), keypoints.begin(), keypoints.end());
    _keypoints = keypoints;
}
```
Testing on the iPhone 7 simulator gives the following results (first 5 frames):
Parallelization speeds ComputeDescriptors up by a factor of 2 to 3.
Section 0x02 already compared the results of each optimization step; here I briefly analyze the overall outcome. Comparison over the first 5 frames on the iPhone 7 simulator:
The results show a 2-3x speedup in ORB feature extraction, and its share of the TrackMonocular time has dropped considerably, so feature extraction no longer needs to be the focus of performance work for now. Later posts will optimize other parts of ORB-SLAM2.