Please credit the author when reposting: 梦里风林
GitHub project: https://github.com/ahangchen/GDLnotes
Stars are welcome, and questions can be raised in the Issues section.
Official tutorial link
Video / subtitle download
Supplementary reading: TensorFlow Chinese community tutorial - official English tutorial
Code: full_connect.py
```python
def reformat(dataset, labels):
    dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)
    # Map 0 to [1.0, 0.0, 0.0 ...], 1 to [0.0, 1.0, 0.0 ...]
    labels = (np.arange(num_labels) == labels[:, None]).astype(np.float32)
    return dataset, labels
```
```python
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
```
The variables above are all Tensors, conceptually: each is a computation unit. In the Graph we set up these units and specify how they are combined, much like wiring logic gates together.
A Session executes the computation defined by the Graph, like switching the power on for those gates; inside the Session we pour data into the computation units. That's Flow.
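The session code below assumes a Graph that has already been assembled. A minimal sketch of that step, consistent with the snippets above (names such as `train_subset` and `tf_valid_dataset` follow the course notebook and are assumptions here):

```python
graph = tf.Graph()
with graph.as_default():
    # Attach the (subset of) training data to the graph as constants.
    tf_train_dataset = tf.constant(train_dataset[:train_subset, :])
    tf_train_labels = tf.constant(train_labels[:train_subset])
    tf_valid_dataset = tf.constant(valid_dataset)

    # Trainable parameters: weights from a truncated normal, biases at zero.
    weights = tf.Variable(tf.truncated_normal([image_size * image_size, num_labels]))
    biases = tf.Variable(tf.zeros([num_labels]))

    # Multinomial logistic regression: logits, cross-entropy loss, gradient descent.
    logits = tf.matmul(tf_train_dataset, weights) + biases
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
    optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

    # Prediction nodes, read out during/after training to report accuracy.
    train_prediction = tf.nn.softmax(logits)
    valid_prediction = tf.nn.softmax(tf.matmul(tf_valid_dataset, weights) + biases)
```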
```python
with tf.Session(graph=graph) as session:
    tf.initialize_all_variables().run()
    for step in range(num_steps):
        _, l, predictions = session.run([optimizer, loss, train_prediction])
```
```python
valid_prediction.eval()
```
Training this way gives an accuracy of 83.2%.
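The accuracy quoted here is typically computed by comparing the predicted class (argmax of the softmax output) with the one-hot label. A minimal sketch of such a helper (the name `accuracy` is an assumption):

```python
import numpy as np

def accuracy(predictions, labels):
    # Percentage of samples whose argmax prediction matches the one-hot label.
    return 100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1)) / predictions.shape[0]
```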
```python
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
```
```python
tf_train_dataset = tf.placeholder(tf.float32, shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
```
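Each step, the mini-batch sliced out above is pushed into these placeholders through `feed_dict`. A sketch of one training step, assuming the same `optimizer`, `loss` and `train_prediction` nodes as before:

```python
feed_dict = {tf_train_dataset: batch_data, tf_train_labels: batch_labels}
_, l, predictions = session.run([optimizer, loss, train_prediction], feed_dict=feed_dict)
```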
Accuracy improves to 86.5%, and it also climbs faster as the number of training steps increases.
Y = W2 · RELU(W1 · X + b1) + b2
[n × 10] = RELU([n × 784] · [784 × N] + [n × N]) · [N × 10] + [n × 10]
```python
weights1 = tf.Variable(
    tf.truncated_normal([image_size * image_size, hidden_node_count]))
biases1 = tf.Variable(tf.zeros([hidden_node_count]))
weights2 = tf.Variable(
    tf.truncated_normal([hidden_node_count, num_labels]))
biases2 = tf.Variable(tf.zeros([num_labels]))
```
```python
ys = tf.matmul(tf_train_dataset, weights1) + biases1
hidden = tf.nn.relu(ys)
logits = tf.matmul(hidden, weights2) + biases2
```
Code: nn_overfit.py
Add regularization to constrain the two-layer ReLU-connected neural network implemented above, using an L2 norm penalty for tuning:
In code, only the train loss in tf_sgd_relu_nn needs to be modified:
```python
l2_loss = tf.nn.l2_loss(weights1) + tf.nn.l2_loss(weights2)
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels)) + 0.001 * l2_loss
```
When there is very little training data, you can get high accuracy on the training set but low accuracy on the test set.
```python
offset_range = 1000
offset = (step * batch_size) % offset_range
```
Use Dropout to force the neural network to learn more knowledge.
See the use of dropout in aymericdamien/TensorFlow-Examples for reference.
```python
keep_prob = tf.placeholder(tf.float32)
if drop_out:
    hidden_drop = tf.nn.dropout(hidden, keep_prob)
    h_fc = hidden_drop
```
```python
if drop_out:
    hidden_drop = tf.nn.dropout(hidden, 0.5)
    h_fc = hidden_drop
```
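Dropout should only be active while training; when evaluating, every activation is kept. With the placeholder version above, this just means feeding a different `keep_prob` (a sketch, assuming the prediction nodes share the dropout path):

```python
# Training step: randomly drop half of the hidden activations.
session.run([optimizer, loss, train_prediction],
            feed_dict={tf_train_dataset: batch_data,
                       tf_train_labels: batch_labels,
                       keep_prob: 0.5})

# Evaluation: disable dropout by keeping every activation.
session.run(valid_prediction, feed_dict={keep_prob: 1.0})
```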
Automatically adjust the step size (learning rate) as the number of training steps grows.
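One way to do this with the TensorFlow API used here is `tf.train.exponential_decay`, which shrinks the learning rate as a global step counter advances. A minimal sketch (the decay interval and rate below are illustrative, not necessarily those used in nn_overfit.py):

```python
global_step = tf.Variable(0)  # incremented by the optimizer at every step
learning_rate = tf.train.exponential_decay(
    0.5,           # initial learning rate
    global_step,
    1000,          # decay every 1000 steps...
    0.7,           # ...down to 70% of the previous value
    staircase=True)
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(
    loss, global_step=global_step)
```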
Increase the number of layers in the neural network and raise the number of training steps to 20,000.
```python
# middle layers
for i in range(layer_cnt - 2):
    y1 = tf.matmul(hidden_drop, weights[i]) + biases[i]
    hidden_drop = tf.nn.relu(y1)
    if drop_out:
        keep_prob += 0.5 * i / (layer_cnt + 1)
        hidden_drop = tf.nn.dropout(hidden_drop, keep_prob)
```
```python
for i in range(layer_cnt - 2):
    if hidden_cur_cnt > 2:
        hidden_next_cnt = int(hidden_cur_cnt / 2)
    else:
        hidden_next_cnt = 2
    hidden_stddev = np.sqrt(2.0 / hidden_cur_cnt)
    weights.append(tf.Variable(
        tf.truncated_normal([hidden_cur_cnt, hidden_next_cnt], stddev=hidden_stddev)))
    biases.append(tf.Variable(tf.zeros([hidden_next_cnt])))
    hidden_cur_cnt = hidden_next_cnt
```
```python
# Scale the initial weights by sqrt(2/n), where n is the fan-in of the layer
stddev = np.sqrt(2.0 / n)
```
```python
# Let keep_prob grow for deeper layers, so later layers drop fewer activations
keep_prob += 0.5 * i / (layer_cnt + 1)
```
If you found this article helpful, how about giving it a star?
Donations are also welcome if you'd like to show support; every little bit counts: