In the project we are studying, the model has two main modes: inference mode, used for prediction, and training mode, used for training. Inference mode means we hand an already-trained model an image and the network computes its analysis results for it.
In this section we start from demo.ipynb and take a look at how a trained Mask R-CNN model runs inference on a single input image and extracts the relevant information, i.e., how inference mode works.
First comes the configuration. All settings are collected in the Config class; to create your own settings, you just subclass Config and override the relevant attributes. The demo uses the COCO pre-trained model, so we reuse its settings, but since we want to detect a single image, a few count-related settings need updating:
# The class below inherits from coco.CocoConfig (itself a Config subclass);
# its whole purpose is to record the configuration, overriding a few attributes.
class InferenceConfig(coco.CocoConfig):
    # Set batch size to 1 since we'll be running inference on
    # one image at a time. Batch size = GPU_COUNT * IMAGES_PER_GPU
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1

config = InferenceConfig()
config.display()
The printed configuration looks like this:
Configurations:
BACKBONE                       resnet101
BACKBONE_STRIDES               [4, 8, 16, 32, 64]
BATCH_SIZE                     1
BBOX_STD_DEV                   [ 0.1 0.1 0.2 0.2]
COMPUTE_BACKBONE_SHAPE         None
DETECTION_MAX_INSTANCES        100
DETECTION_MIN_CONFIDENCE       0.7
DETECTION_NMS_THRESHOLD        0.3
FPN_CLASSIF_FC_LAYERS_SIZE     1024
GPU_COUNT                      1
GRADIENT_CLIP_NORM             5.0
IMAGES_PER_GPU                 1
IMAGE_CHANNEL_COUNT            3
IMAGE_MAX_DIM                  1024
IMAGE_META_SIZE                93
IMAGE_MIN_DIM                  800
IMAGE_MIN_SCALE                0
IMAGE_RESIZE_MODE              square
IMAGE_SHAPE                    [1024 1024 3]
LEARNING_MOMENTUM              0.9
LEARNING_RATE                  0.001
LOSS_WEIGHTS                   {'rpn_class_loss': 1.0, 'rpn_bbox_loss': 1.0, 'mrcnn_class_loss': 1.0, 'mrcnn_bbox_loss': 1.0, 'mrcnn_mask_loss': 1.0}
MASK_POOL_SIZE                 14
MASK_SHAPE                     [28, 28]
MAX_GT_INSTANCES               100
MEAN_PIXEL                     [ 123.7 116.8 103.9]
MINI_MASK_SHAPE                (56, 56)
NAME                           coco
NUM_CLASSES                    81
POOL_SIZE                      7
POST_NMS_ROIS_INFERENCE        1000
POST_NMS_ROIS_TRAINING         2000
PRE_NMS_LIMIT                  6000
ROI_POSITIVE_RATIO             0.33
RPN_ANCHOR_RATIOS              [0.5, 1, 2]
RPN_ANCHOR_SCALES              (32, 64, 128, 256, 512)
RPN_ANCHOR_STRIDE              1
RPN_BBOX_STD_DEV               [ 0.1 0.1 0.2 0.2]
RPN_NMS_THRESHOLD              0.7
RPN_TRAIN_ANCHORS_PER_IMAGE    256
STEPS_PER_EPOCH                1000
TOP_DOWN_PYRAMID_SIZE          256
TRAIN_BN                       False
TRAIN_ROIS_PER_IMAGE           200
USE_MINI_MASK                  True
USE_RPN_ROIS                   True
VALIDATION_STEPS               50
WEIGHT_DECAY                   0.0001
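For reference, the Config mechanism works roughly like the minimal sketch below (an illustration of the pattern, not the full matterport implementation): defaults live in class attributes, __init__ derives BATCH_SIZE = GPU_COUNT * IMAGES_PER_GPU, and display() prints every uppercase attribute, producing a dump like the one above.

# A minimal sketch of the Config pattern -- illustration only,
# not the full matterport/Mask_RCNN Config class.
class Config:
    # Defaults live in class attributes; subclasses override them.
    NAME = None
    GPU_COUNT = 2
    IMAGES_PER_GPU = 2

    def __init__(self):
        # The effective batch size is derived from the two settings,
        # which is why the demo sets both to 1 for single-image inference.
        self.BATCH_SIZE = self.IMAGES_PER_GPU * self.GPU_COUNT

    def display(self):
        # Print every non-callable, non-dunder attribute, i.e. every setting.
        print("\nConfigurations:")
        for a in dir(self):
            if not a.startswith("__") and not callable(getattr(self, a)):
                print("{:30} {}".format(a, getattr(self, a)))
        print()

class InferenceConfig(Config):
    NAME = "coco"
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1

InferenceConfig().display()   # BATCH_SIZE prints as 1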
We first initialize the model and then load the pre-trained weights file. At the end I visualized the model, but the summary really is too long, so it is commented out. The computation graph is built in the first initialization step according to the mode argument; the inference network covered in this section is the graph built when mode is set to "inference".
# Create model object in inference mode.
model = modellib.MaskRCNN(mode="inference", model_dir=MODEL_DIR, config=config)

# Load weights trained on MS-COCO
model.load_weights(COCO_MODEL_PATH, by_name=True)

# model.keras_model.summary()
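COCO_MODEL_PATH points at the mask_rcnn_coco.h5 weights file; earlier cells of demo.ipynb set it up and download it on first run, roughly as below (ROOT_DIR and utils come from those earlier cells). Note that load_weights(by_name=True) is standard Keras behavior: weights are matched to layers by layer name, so a weights file can be loaded into a model whose architecture differs from the one it was saved from.

# Local path to trained weights file
COCO_MODEL_PATH = os.path.join(ROOT_DIR, "mask_rcnn_coco.h5")
# Download COCO trained weights from Releases if needed
if not os.path.exists(COCO_MODEL_PATH):
    utils.download_trained_weights(COCO_MODEL_PATH)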
# Load a random image from the images folder
# os.walk returns an iterator; calling next() yields (dirpath, dirnames, filenames)
file_names = next(os.walk(IMAGE_DIR))[2]
image = skimage.io.imread(os.path.join(IMAGE_DIR, random.choice(file_names)))
print(image.shape)

# Run detection
results = model.detect([image], verbose=1)

# Visualize results
r = results[0]
visualize.display_instances(image, r['rois'], r['masks'], r['class_ids'],
                            class_names, r['scores'])
We read an image, call the model's detect method to get the output, and finally visualize the results with a helper function.
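detect() returns one result dict per input image. Continuing from the cell above, the contents look like this (shapes as documented in MaskRCNN.detect; N is the number of detected instances):

# Inspect the per-image result dict returned by detect()
r = results[0]
print(r['rois'].shape)        # [N, (y1, x1, y2, x2)] boxes in image coordinates
print(r['class_ids'].shape)   # [N] integer class IDs
print(r['scores'].shape)      # [N] confidence scores
print(r['masks'].shape)       # [image_height, image_width, N] boolean instance masks

# For example, count the detected "person" instances (COCO class id 1):
num_persons = (r['class_ids'] == 1).sum()
print(num_persons)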
The forward logic of inference is shown in the post's flow diagram; let's take a quick look at how the computation proceeds. The key intermediate tensors are as follows (a sketch of how the bbox deltas are decoded comes after this list):
rpn_class: [batch, num_anchors, 2] (background/foreground score per anchor)
rpn_bbox: [batch, num_anchors, (dy, dx, log(dh), log(dw))]
rpn_rois: [batch, num_rois, (y1, x1, y2, x2)]
mrcnn_class_logits: [batch, num_rois, NUM_CLASSES] classifier logits (before softmax)
mrcnn_class: [batch, num_rois, NUM_CLASSES] classifier probabilities
mrcnn_bbox(deltas): [batch, num_rois, NUM_CLASSES, (dy, dx, log(dh), log(dw))]
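The (dy, dx, log(dh), log(dw)) entries are box refinements: shift the box center by a fraction of the box size, then rescale height and width exponentially (hence the log encoding). A minimal numpy sketch of the decoding, the same idea as the repo's apply_box_deltas utility (in the real code, the network's deltas are first multiplied by BBOX_STD_DEV):

import numpy as np

def apply_box_deltas(boxes, deltas):
    """Decode (dy, dx, log(dh), log(dw)) refinements against boxes.
    boxes: [N, (y1, x1, y2, x2)], deltas: [N, 4]."""
    height = boxes[:, 2] - boxes[:, 0]
    width = boxes[:, 3] - boxes[:, 1]
    center_y = boxes[:, 0] + 0.5 * height
    center_x = boxes[:, 1] + 0.5 * width
    # Shift the center by a fraction of the box size...
    center_y = center_y + deltas[:, 0] * height
    center_x = center_x + deltas[:, 1] * width
    # ...and rescale height/width exponentially.
    height = height * np.exp(deltas[:, 2])
    width = width * np.exp(deltas[:, 3])
    y1 = center_y - 0.5 * height
    x1 = center_x - 0.5 * width
    return np.stack([y1, x1, y1 + height, x1 + width], axis=1)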
Finally, we want the network to output the tensors below:
# num_anchors, the number of anchors generated per image
# num_rois, the number of proposal regions kept from the anchors per image,
#           set by POST_NMS_ROIS_TRAINING or POST_NMS_ROIS_INFERENCE
# num_detections, the number of final detection boxes per image,
#           capped by DETECTION_MAX_INSTANCES
# detections, [batch, num_detections, (y1, x1, y2, x2, class_id, score)]
# mrcnn_class, [batch, num_rois, NUM_CLASSES] classifier probabilities
# mrcnn_bbox, [batch, num_rois, NUM_CLASSES, (dy, dx, log(dh), log(dw))]
# mrcnn_mask, [batch, num_detections, MASK_POOL_SIZE, MASK_POOL_SIZE, NUM_CLASSES]
# rpn_rois, [batch, num_rois, (y1, x1, y2, x2)]
# rpn_class, [batch, num_anchors, 2]
# rpn_bbox [batch, num_anchors, 4]
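To make the detections tensor concrete: each row packs a box, a class id, and a score, so slicing recovers the parts. A sketch under the [batch, num_detections, 6] layout listed above (unpack_detections is a hypothetical helper; in the repo this stripping happens inside unmold_detections, where padded rows have class_id == 0, i.e. background):

import numpy as np

def unpack_detections(detections_one_image):
    """Split one image's [num_detections, 6] detections into parts."""
    d = detections_one_image
    keep = np.where(d[:, 4] > 0)[0]          # drop zero-padded rows
    boxes = d[keep, :4]                      # (y1, x1, y2, x2)
    class_ids = d[keep, 4].astype(np.int32)  # class per detection
    scores = d[keep, 5]                      # confidence per detection
    return boxes, class_ids, scores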
We will go through the exact meaning of each of these tensors, one by one, in the source-code analysis.