Last time we successfully trained a palm detector (http://www.cnblogs.com/take-fetter/p/8438747.html) and obtained detection results as shown below.
Next we need to use OpenCV to extract the palm and remove the background. This involves masks and ROIs (regions of interest); I won't go over the concepts here, since there is plenty of material about them online.
First, crop the palm from the image using the bounding box drawn by the previous program (of course, taking and saving a screenshot yourself works too), as shown below.
Algorithm idea: starting from the black-and-white image, use the distance transform to find the palm center, then draw the palm's inscribed circle with the maximum radius, as shown below.
The code is as follows:
# distanceTransform calculates, for each pixel of the source image,
# the distance to the closest zero pixel.
distance = cv2.distanceTransform(black_and_white, cv2.DIST_L2, 5)
maxdist = 0
# rows, cols = img.shape
for i in range(distance.shape[0]):
    for j in range(distance.shape[1]):
        dist = distance[i][j]
        if maxdist < dist:
            x = j
            y = i
            maxdist = dist
cv2.circle(original, (x, y), int(maxdist), (255, 100, 255), 1, 8, 0)  # radius must be an int
Now that we know the circle's radius and center, we can use an ROI to extract the inscribed square (the inscribed square does discard quite a bit of information, but I haven't thought of a better approach yet). The square is drawn as shown below.
The code to draw the square and extract the ROI:
final_img = original.copy()
# cv2.circle() was drawn above
half_slide = maxdist * math.cos(math.pi / 4)   # requires: import math
(left, right, top, bottom) = ((x - half_slide), (x + half_slide),
                              (y - half_slide), (y + half_slide))
p1 = (int(left), int(top))
p2 = (int(right), int(bottom))
cv2.rectangle(original, p1, p2, (77, 255, 9), 1, 1)
final_img = final_img[int(top):int(bottom), int(left):int(right)]
Screenshot of the result:
You can see a gray area appears that, in principle, should not be there. Saving with cv2.imwrite shows no problem at all, as shown below.
My guess is that cv2.imshow imposes some minimum size on the displayed image and pads it automatically, or that it uses gray as a default background color for a window larger than our extracted image.
Full code: https://github.com/takefetter/Get_PalmPrint/blob/master/process_palm.py
References:
1. https://github.com/dev-td7/Automatic-Hand-Detection-using-Wrist-localisation — this repo's skin-color-based extraction and approximate-ellipse fitting gave me a lot of inspiration (even though I ended up not using the second half at all...).
2. http://answers.opencv.org/question/180668/how-to-find-the-center-of-one-palm-in-the-picture/ — the distance-transform approach comes from the answer here; this post also ends up fulfilling the asker's original request.
Please credit the source when reposting: http://www.cnblogs.com/take-fetter/p/8453589.html