1: In a neural network, we train by minimizing a loss, so at training time the last layer is a loss layer (LOSS). At test time we evaluate the network by its accuracy, so the last layer is an accuracy layer (ACCURACY). But when we actually put the trained network to use, what we need is its output: for a classification problem we need the classification result. As in the figure on the right, the last layer then gives us probabilities, and the LOSS and ACCURACY layers used during training and testing are no longer needed.
The figures below were drawn with $CAFFE_ROOT/python/draw_net.py from $CAFFE_ROOT/models/bvlc_reference_caffenet/train_val.prototxt and $CAFFE_ROOT/models/bvlc_reference_caffenet/deploy.prototxt; they show the network structure at training time and at deployment time, respectively.
We usually put train and test in the same .prototxt, whose data layers must specify a source for the input data; the deployment .prototxt only needs to define the input image's size and channel parameters. This is shown below in the data layers of $CAFFE_ROOT/models/bvlc_reference_caffenet/train_val.prototxt and $CAFFE_ROOT/models/bvlc_reference_caffenet/deploy.prototxt.
At training time, solver.prototxt references train_val.prototxt:

./build/tools/caffe train -solver ./models/bvlc_reference_caffenet/solver.prototxt
To extract features with the network trained above, the model definition used is deploy.prototxt:

./build/tools/extract_features.bin models/bvlc_reference_caffenet.caffemodel models/bvlc_reference_caffenet/deploy.prototxt

(Note: extract_features.bin additionally expects the names of the blobs to extract, an output dataset path, the number of mini-batches, and a DB backend; running it with no arguments prints its usage.)
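The same extraction can also be done from Python by loading the deploy definition together with the trained weights. Below is a minimal pycaffe sketch; the paths and the fc7 blob name follow the reference CaffeNet layout and are assumptions to adapt to your own setup:

import numpy as np
import caffe

# Load the deploy definition plus trained weights in TEST phase.
net = caffe.Net('models/bvlc_reference_caffenet/deploy.prototxt',
                'models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel',
                caffe.TEST)

# Fill the input blob (a real application would load and preprocess an image).
net.blobs['data'].data[...] = np.random.rand(*net.blobs['data'].data.shape)
net.forward()
fc7 = net.blobs['fc7'].data.copy()  # features taken from the fc7 blob
print(fc7.shape)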
2:
(1) Introduction to the differences between the *_train_test.prototxt file and the *_deploy.prototxt file: http://blog.csdn.net/sunshine_in_moon/article/details/49472901
(2) Python code for generating the deploy file: http://www.cnblogs.com/denny402/p/5685818.html
The *_train_test.prototxt file is the training and testing network configuration file.
The post http://www.cnblogs.com/denny402/p/5685818.html gives Python source code for generating a deploy.prototxt file, but every network is different, so adapting it is somewhat tedious. Below is that post's code for generating a deploy file, using MNIST as the example; modify it to match your own network's settings. (The code below is untested.)
# -*- coding: utf-8 -*-
from caffe import layers as L, params as P, to_proto

root = '/home/xxx/'
deploy = root + 'mnist/deploy.prototxt'   # output path

def create_deploy():
    # The first (Data) layer is omitted; the input is declared in write_deploy().
    conv1 = L.Convolution(bottom='data', kernel_size=5, stride=1,
                          num_output=20, pad=0,
                          weight_filler=dict(type='xavier'))
    pool1 = L.Pooling(conv1, pool=P.Pooling.MAX, kernel_size=2, stride=2)
    conv2 = L.Convolution(pool1, kernel_size=5, stride=1, num_output=50,
                          pad=0, weight_filler=dict(type='xavier'))
    pool2 = L.Pooling(conv2, pool=P.Pooling.MAX, kernel_size=2, stride=2)
    fc3 = L.InnerProduct(pool2, num_output=500,
                         weight_filler=dict(type='xavier'))
    relu3 = L.ReLU(fc3, in_place=True)
    fc4 = L.InnerProduct(relu3, num_output=10,
                         weight_filler=dict(type='xavier'))
    # No Accuracy layer at the end, but a Softmax layer instead.
    prob = L.Softmax(fc4)
    return to_proto(prob)

def write_deploy():
    with open(deploy, 'w') as f:
        f.write('name:"Lenet"\n')
        f.write('input:"data"\n')
        f.write('input_dim:1\n')
        f.write('input_dim:3\n')   # the original post writes 3; grayscale MNIST would normally be 1
        f.write('input_dim:28\n')
        f.write('input_dim:28\n')
        f.write(str(create_deploy()))

if __name__ == '__main__':
    write_deploy()
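Since the code above is untested, a quick sanity check is to load the generated file back with pycaffe: if the prototxt is malformed, network construction fails immediately. A small sketch, reusing the output path assumed above:

import caffe

# Structure-only load: no .caffemodel needed, TEST phase.
net = caffe.Net('/home/xxx/mnist/deploy.prototxt', caffe.TEST)
for name, blob in net.blobs.items():
    print(name, blob.data.shape)  # output shape of every blob in the net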
Generating the deploy file from code is still fairly cumbersome. When building a deep-learning network we will in any case define the training/testing configuration file (*_train_test.prototxt) first, so we can produce the deploy file by editing the *_train_test.prototxt file directly. Taking CIFAR10 as the example, here is a brief overview of the differences between the two.
(1) The data layers in the deploy file are much simpler: delete the two layers that feed the training lmdb and the testing lmdb from the *_train_test.prototxt file, and replace them with
shape {
  dim: 1   # num; choose freely
  dim: 3   # number of channels (RGB)
  dim: 32  # image height and width; taken from crop_size in the data layer of *_train_test.prototxt
  dim: 32
}
(2) The weight_filler{} and bias_filler{} parameters in convolution and fully connected (InnerProduct) layers no longer need to be specified, because their values are provided by the trained model file (*.caffemodel). As shown below, delete all weight_filler and bias_filler entries from the *_train_test.prototxt file.
layer {  # delete weight_filler and bias_filler
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1  # learning-rate multiplier for the weights W
  }
  param {
    lr_mult: 2  # learning-rate multiplier for the bias b
  }
  inner_product_param {
    num_output: 10
    weight_filler { type: "gaussian" std: 0.1 }
    bias_filler { type: "constant" }
  }
}
After deletion this becomes:
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 10
  }
}
(3) Output layer.
Note that the output layer's type differs between the two files: SoftmaxWithLoss in *_train_test.prototxt versus Softmax in deploy. Also, to distinguish training output from deployment output, the training output is named loss while the deployment output is named prob.
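The relationship between the two output types fits in a few lines of NumPy: SoftmaxWithLoss is simply Softmax followed by the negative log-likelihood of the true label, which is why the training-time layer needs the label bottom while the deploy-time Softmax does not. An illustrative sketch:

import numpy as np

def softmax(z):
    # What the deploy-time Softmax layer outputs: class probabilities.
    e = np.exp(z - z.max())
    return e / e.sum()

def softmax_with_loss(z, label):
    # What the training-time SoftmaxWithLoss layer computes for one sample.
    return -np.log(softmax(z)[label])

scores = np.array([1.0, 2.0, 0.5])
print(softmax(scores))               # "prob"-style output at deployment
print(softmax_with_loss(scores, 1))  # "loss"-style output during training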
To show the differences directly, below are CIFAR10's configuration file cifar10_quick_train_test.prototxt and its deployment model file cifar10_quick.prototxt.
cifar10_quick_train_test.prototxt:
name: "CIFAR10_quick"
layer {  # remove this layer
  name: "cifar"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  transform_param {
    mean_file: "examples/cifar10/mean.binaryproto"
  }
  data_param {
    source: "examples/cifar10/cifar10_train_lmdb"
    batch_size: 100
    backend: LMDB
  }
}
layer {  # remove this layer
  name: "cifar"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TEST
  }
  transform_param {
    mean_file: "examples/cifar10/mean.binaryproto"
  }
  data_param {
    source: "examples/cifar10/cifar10_test_lmdb"
    batch_size: 100
    backend: LMDB
  }
}
layer {  # delete the weight_filler and bias_filler below
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 32
    pad: 2
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "gaussian"
      std: 0.0001
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "pool1"
  top: "pool1"
}
layer {  # delete weight_filler and bias_filler
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 32
    pad: 2
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "relu2"
  type: "ReLU"
  bottom: "conv2"
  top: "conv2"
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: AVE
    kernel_size: 3
    stride: 2
  }
}
layer {  # delete weight_filler and bias_filler
  name: "conv3"
  type: "Convolution"
  bottom: "pool2"
  top: "conv3"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 64
    pad: 2
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "gaussian"
      std: 0.01
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "relu3"
  type: "ReLU"
  bottom: "conv3"
  top: "conv3"
}
layer {
  name: "pool3"
  type: "Pooling"
  bottom: "conv3"
  top: "pool3"
  pooling_param {
    pool: AVE
    kernel_size: 3
    stride: 2
  }
}
layer {  # delete weight_filler and bias_filler
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool3"
  top: "ip1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 64
    weight_filler {
      type: "gaussian"
      std: 0.1
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {  # delete weight_filler and bias_filler
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 10
    weight_filler {
      type: "gaussian"
      std: 0.1
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {  # delete this layer
  name: "accuracy"
  type: "Accuracy"
  bottom: "ip2"
  bottom: "label"
  top: "accuracy"
  include {
    phase: TEST
  }
}
layer {  # modify
  name: "loss"             # "loss" becomes "prob"
  type: "SoftmaxWithLoss"  # SoftmaxWithLoss becomes Softmax
  bottom: "ip2"
  bottom: "label"          # remove
  top: "loss"
}
The following is cifar10_quick.prototxt:
layer {  # the two Data layers are replaced by this single Input layer
  name: "data"
  type: "Input"
  top: "data"
  input_param { shape: { dim: 1 dim: 3 dim: 32 dim: 32 } }  # note the shape values; CIFAR10's *_train_test.prototxt has no crop_size
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1  # learning-rate multiplier for the weights W
  }
  param {
    lr_mult: 2  # learning-rate multiplier for the bias b
  }
  convolution_param {
    num_output: 32
    pad: 2  # padding of 2
    kernel_size: 5
    stride: 1
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX  # max pooling
    kernel_size: 3
    stride: 2
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "pool1"
  top: "pool1"
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 32
    pad: 2
    kernel_size: 5
    stride: 1
  }
}
layer {
  name: "relu2"
  type: "ReLU"
  bottom: "conv2"
  top: "conv2"
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: AVE  # average pooling
    kernel_size: 3
    stride: 2
  }
}
layer {
  name: "conv3"
  type: "Convolution"
  bottom: "pool2"
  top: "conv3"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 64
    pad: 2
    kernel_size: 5
    stride: 1
  }
}
layer {
  name: "relu3"
  type: "ReLU"  # ReLU activation; note that bottom and top are both conv3 (in-place)
  bottom: "conv3"
  top: "conv3"
}
layer {
  name: "pool3"
  type: "Pooling"
  bottom: "conv3"
  top: "pool3"
  pooling_param {
    pool: AVE
    kernel_size: 3
    stride: 2
  }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool3"
  top: "ip1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 64
  }
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 10
  }
}
layer {
  name: "prob"
  type: "Softmax"
  bottom: "ip2"
  top: "prob"
}
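With the deploy file in place, classification is one forward pass followed by reading the prob blob. A minimal pycaffe sketch; the snapshot filename is an assumption (the CIFAR10 quick solver typically writes one like this), and a real image would need the same mean subtraction used during training:

import numpy as np
import caffe

net = caffe.Net('examples/cifar10/cifar10_quick.prototxt',
                'examples/cifar10/cifar10_quick_iter_5000.caffemodel',  # assumed snapshot path
                caffe.TEST)

# Shape (1, 3, 32, 32), matching the Input layer above.
net.blobs['data'].data[...] = np.random.rand(1, 3, 32, 32)  # stand-in image
out = net.forward()
print(out['prob'].argmax())  # index of the predicted class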
3:
Converting train_val.prototxt into deploy.prototxt:
1. Delete the input data layers (e.g. type: "Data" ... include { phase: TRAIN }) and add an input dimension description in their place.
2. Remove the final "loss" and "accuracy" layers and add a "prob" (Softmax) layer.
If the train_val file contains other preprocessing layers, things get slightly more complicated. For example, a layer that computes the mean of the input data may be inserted between the "data" layer and the "conv1" layer (with bottom: "data" / top: "conv1").
In deploy.prototxt this "mean" layer must be kept; only its input changes, and "conv1" must be updated accordingly (bottom: "mean" / top: "conv1").
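The edits above are mechanical, so they can also be applied automatically with the protobuf API instead of hand-editing. Below is a minimal, untested sketch covering the common case (Data layers out, an Input layer in, fillers stripped, SoftmaxWithLoss swapped for Softmax); it does not handle extra preprocessing layers such as the mean layer just discussed:

from caffe.proto import caffe_pb2
from google.protobuf import text_format

def train_val_to_deploy(train_val_path, deploy_path, shape=(1, 3, 32, 32)):
    net = caffe_pb2.NetParameter()
    with open(train_val_path) as f:
        text_format.Merge(f.read(), net)

    deploy = caffe_pb2.NetParameter()
    deploy.name = net.name

    # One Input layer replaces the TRAIN and TEST Data layers.
    data = deploy.layer.add()
    data.name, data.type = 'data', 'Input'
    data.top.append('data')
    data.input_param.shape.add().dim.extend(shape)

    for layer in net.layer:
        if layer.type in ('Data', 'Accuracy'):
            continue  # drop data and accuracy layers
        new = deploy.layer.add()
        new.CopyFrom(layer)
        new.ClearField('include')  # phase rules are meaningless in deploy
        # Fillers are unnecessary once weights come from a .caffemodel.
        if new.type == 'Convolution':
            new.convolution_param.ClearField('weight_filler')
            new.convolution_param.ClearField('bias_filler')
        elif new.type == 'InnerProduct':
            new.inner_product_param.ClearField('weight_filler')
            new.inner_product_param.ClearField('bias_filler')
        elif new.type == 'SoftmaxWithLoss':
            new.name, new.type = 'prob', 'Softmax'
            del new.bottom[1:]  # drop the 'label' bottom
            del new.top[:]
            new.top.append('prob')

    with open(deploy_path, 'w') as f:
        f.write(text_format.MessageToString(deploy))

if __name__ == '__main__':
    train_val_to_deploy('cifar10_quick_train_test.prototxt', 'cifar10_quick.prototxt')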