Cloud run: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/running_pets.md
Local run: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/running_locally.md
1. Get the data: the Oxford-IIIT Pets Dataset
```shell
# From tensorflow/models/research/
wget http://www.robots.ox.ac.uk/~vgg/data/pets/data/images.tar.gz
wget http://www.robots.ox.ac.uk/~vgg/data/pets/data/annotations.tar.gz
# Extract
tar -xvf images.tar.gz
tar -xvf annotations.tar.gz
```
The resulting file structure under tensorflow/models/research/:

```
images/
annotations/
object_detection/
others
```
2. Convert the data
The TensorFlow Object Detection API expects the data in TFRecord format, so first run the create_pet_tf_record script to convert the Oxford-IIIT Pets dataset.

Note: install the required libraries beforehand, or this step will throw quite a few errors.
```shell
# From tensorflow/models/research/
python object_detection/dataset_tools/create_pet_tf_record.py \
    --label_map_path=object_detection/data/pet_label_map.pbtxt \
    --data_dir=`pwd` \
    --output_dir=`pwd`
# Ten sharded TFRecord files are generated in tensorflow/models/research/:
# pet_faces_train.record-* and pet_faces_val.record-*
cp pet_faces_train.record-* object_detection/data/
cp pet_faces_val.record-* object_detection/data/
cp object_detection/data/pet_label_map.pbtxt ${YOUR_DIRECTORY}/data/pet_label_map.pbtxt
```
Final result: two TFRecord files are generated under tensorflow/models/research/, named pet_train_with_mask.record and pet_val_with_mask.record (different from the names given in the example).
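To sanity-check the generated .record files, you can count the records they contain. The TFRecord on-disk framing is documented (uint64 payload length, 4-byte length CRC, payload, 4-byte payload CRC), so a pure-Python counter works without importing TensorFlow. The dummy writer below (with zeroed CRCs) exists only to make the sketch self-contained; TensorFlow itself would reject such a file, but the counter also works on real TFRecord files since it skips CRC validation:

```python
import struct

def count_tfrecords(path):
    """Count records in a TFRecord file by walking its framing:
    uint64 length, uint32 length-CRC, payload, uint32 payload-CRC."""
    n = 0
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if len(header) < 8:   # EOF (or truncated trailing record)
                break
            (length,) = struct.unpack("<Q", header)
            f.seek(4 + length + 4, 1)  # skip length-CRC, payload, payload-CRC
            n += 1
    return n

def write_dummy_tfrecord(path, payloads):
    """Demo-only writer with zeroed CRCs; not a valid file for TensorFlow."""
    with open(path, "wb") as f:
        for p in payloads:
            f.write(struct.pack("<Q", len(p)))
            f.write(b"\x00\x00\x00\x00")
            f.write(p)
            f.write(b"\x00\x00\x00\x00")

write_dummy_tfrecord("demo.record", [b"a", b"bb", b"ccc"])
print(count_tfrecords("demo.record"))  # → 3
```

Running it over pet_faces_train.record-* and pet_faces_val.record-* quickly shows whether the conversion actually produced data.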
Problems encountered:

- protobuf was originally version 3.6.1; switching to 3.5.1 fixed it. You can download the exe from https://github.com/google/protobuf/releases and then add its path to the system environment variables.
- A file path was mistyped, so the corresponding file was not found.
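As an aside, the pet_label_map.pbtxt referenced above is a plain text protobuf of `item { id, name }` entries. A minimal regex-based sketch for turning it into an id-to-name dict, assuming only the simple flat layout that file uses (no nested messages):

```python
import re

def parse_label_map(text):
    """Parse simple pbtxt label-map text into {id: name}.
    Regex sketch; assumes flat item { id: N  name: '...' } blocks."""
    mapping = {}
    for block in re.findall(r"item\s*\{(.*?)\}", text, re.S):
        id_m = re.search(r"id:\s*(\d+)", block)
        name_m = re.search(r"name:\s*['\"]([^'\"]+)['\"]", block)
        if id_m and name_m:
            mapping[int(id_m.group(1))] = name_m.group(1)
    return mapping

sample = """
item {
  id: 1
  name: 'Abyssinian'
}
item {
  id: 2
  name: 'american_bulldog'
}
"""
print(parse_label_map(sample))  # → {1: 'Abyssinian', 2: 'american_bulldog'}
```

This is handy for mapping the class ids in detection output back to breed names without pulling in the protobuf runtime.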
3. Download a pre-trained COCO model

Download the trained model and put it in the data directory:
```shell
wget http://storage.googleapis.com/download.tensorflow.org/models/object_detection/faster_rcnn_resnet101_coco_11_06_2017.tar.gz
tar -xvf faster_rcnn_resnet101_coco_11_06_2017.tar.gz
cp faster_rcnn_resnet101_coco_11_06_2017/model.ckpt.* ${YOUR_DIRECTORY}/data/
```
4. Configure the object detection pipeline

In the TensorFlow Object Detection API, model parameters, training parameters, and evaluation parameters are all set in a single config file.

object_detection/samples/configs contains some example object_detection config files. Here, faster_rcnn_resnet101_pets.config is used as the starting point. Search the file for PATH_TO_BE_CONFIGURED and replace it, mainly with the paths where the data is stored.
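For orientation, the PATH_TO_BE_CONFIGURED fields in faster_rcnn_resnet101_pets.config end up looking roughly like this after editing. The `data/` prefix here is an assumption matching where the files were copied in the earlier steps, and the exact shard pattern may differ from your output:

```
fine_tune_checkpoint: "data/model.ckpt"
train_input_reader: {
  tf_record_input_reader {
    input_path: "data/pet_faces_train.record-?????-of-00010"
  }
  label_map_path: "data/pet_label_map.pbtxt"
}
eval_input_reader: {
  tf_record_input_reader {
    input_path: "data/pet_faces_val.record-?????-of-00010"
  }
  label_map_path: "data/pet_label_map.pbtxt"
}
```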
5. Package the object detection code

Run the .sh file; the trailing /tmp/pycocotools argument is the output directory.

What the .sh file does:

```shell
# From tensorflow/models/research/
# Download pycocotools-2.0.tar into /tmp/pycocotools
bash object_detection/dataset_tools/create_pycocotools_package.sh /tmp/pycocotools
# Then extract it under object_detection/
tar -xvf /tmp/pycocotools/pycocotools-2.0.tar -C object_detection
# Enter PythonAPI and run setup.py
python setup.py install
```
Problem: see https://blog.csdn.net/heiheiya/article/details/81128749. You can download that project and then run setup.py inside PythonAPI.
6. Start training and evaluation

To start training and evaluation, run the following from the tensorflow/models/research/ directory:

```shell
# From tensorflow/models/research/
python object_detection/model_main.py \
    --pipeline_config_path=${YOUR_DIRECTORY}/object_detection/samples/configs/faster_rcnn_resnet101_pets.config \
    --model_dir=${YOUR_DIRECTORY}/object_detection/data \
    --num_train_steps=50000 \
    --num_eval_steps=2000 \
    --alsologtostderr
```
Problems:

- In my checkout, nets lives under slim, so I just fixed the import path in the .py file.
- In post_processing.py, deleting the offending argument passed to multiclass_non_max_suppression fixed the error.
7. Monitor training with TensorBoard

```shell
tensorboard --logdir=${YOUR_DIRECTORY}/model_dir
```
8. Export the TensorFlow graph

The checkpoint files are saved in ${YOUR_DIRECTORY}/model_dir and generally include the following three files:

- model.ckpt-${CHECKPOINT_NUMBER}.data-00000-of-00001
- model.ckpt-${CHECKPOINT_NUMBER}.index
- model.ckpt-${CHECKPOINT_NUMBER}.meta

Find a checkpoint to export and run:
```shell
# From tensorflow/models/research/
cp ${YOUR_DIRECTORY}/model_dir/model.ckpt-${CHECKPOINT_NUMBER}.* .
python object_detection/export_inference_graph.py \
    --input_type image_tensor \
    --pipeline_config_path object_detection/samples/configs/faster_rcnn_resnet101_pets.config \
    --trained_checkpoint_prefix model.ckpt-${CHECKPOINT_NUMBER} \
    --output_directory exported_graphs
```
Finally, exported_graphs contains the saved model and graph.
9. A few small pitfalls