TensorFlow tutorial notes

import tensorflow as tf
import numpy as np
import pylab as pl
from PIL import Image
from tensorflow.contrib.learn.python.learn.datasets.mnist import read_data_sets

mnist = read_data_sets("MNIST/", one_hot=True)

# Model: y = softmax(x*w + b)
x = tf.placeholder(tf.float32, [None, 784], name="x")
w = tf.Variable(tf.zeros([784, 10]), name="w")          # name belongs on the Variable, not on tf.zeros
b = tf.Variable(tf.zeros([10], tf.float32), name="b")
y = tf.nn.softmax(tf.matmul(x, w) + b)                  # predicted probability matrix
y_ = tf.placeholder(tf.float32, [None, 10], name="y_")  # one-hot label matrix

cross_entropy = -tf.reduce_sum(y_ * tf.log(y))          # cross-entropy loss
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)
init = tf.global_variables_initializer()                # initialize_all_variables() is deprecated
sess = tf.Session()
sess.run(init)

# Train
for i in range(10000):
    batch_xs, batch_ys = mnist.train.next_batch(10)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
    if i % 1000 == 0:
        print('training round', i // 1000 + 1)

correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1), name="validate")
print("correct-prediction Tensor", correct_prediction)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print("accuracy Tensor", accuracy)
np.set_printoptions(threshold=10000)                    # fixed typo: "threhold"
print(sess.run(correct_prediction, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
print("w", w, "b", b)

Notes on the functions used:

(1) tf.nn.softmax() turns raw scores into a probability distribution (a multinomial distribution). Instead of a hard one-hot vector like [0,0,0,1,0,0,0,0,0,0], every class is assigned a non-zero probability.
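The math behind softmax can be sketched in plain NumPy (a hypothetical `softmax` helper for illustration, not TensorFlow's actual implementation):

```python
import numpy as np

def softmax(z):
    # subtract the max for numerical stability, then normalize the exponentials
    e = np.exp(z - np.max(z))
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1])
probs = softmax(scores)
print(probs)        # every entry is strictly positive
print(probs.sum())  # the entries sum to 1
```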

(2) tf.reduce_sum(arg1, arg2) sums the elements of a tensor; when arg2 (the axis) is None, the sum runs over all elements.
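NumPy's `np.sum` follows the same axis convention, which may help build intuition:

```python
import numpy as np

a = np.array([[1, 2],
              [3, 4]])
print(np.sum(a))          # axis=None: sum of all elements -> 10
print(np.sum(a, axis=0))  # sum down each column -> [4 6]
print(np.sum(a, axis=1))  # sum across each row  -> [3 7]
```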

(3) tf.train.GradientDescentOptimizer(arg1) applies the gradient-descent algorithm (worth studying in depth); arg1 is the learning rate.
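A sketch of what the optimizer does internally, shown as plain gradient descent on a 1-D quadratic (the function and variable names here are illustrative, not TensorFlow internals):

```python
# minimize f(w) = (w - 3)^2 by repeatedly stepping against the gradient
learning_rate = 0.1
w = 0.0
for _ in range(100):
    grad = 2 * (w - 3)       # df/dw
    w -= learning_rate * grad
print(w)  # converges toward the minimum at w = 3
```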

(4) mnist.train.next_batch(arg1) randomly draws arg1 images for training; the returned batch_xs has shape [arg1, 784] and batch_ys has shape [arg1, 10].
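A minimal NumPy sketch of what random mini-batching looks like (an assumption about next_batch's behavior, not its actual source; the dataset here is fake):

```python
import numpy as np

images = np.random.rand(100, 784)                  # 100 fake "images"
labels = np.eye(10)[np.random.randint(0, 10, 100)]  # fake one-hot labels

def next_batch(n):
    # pick n distinct random rows from the dataset
    idx = np.random.choice(len(images), n, replace=False)
    return images[idx], labels[idx]

batch_xs, batch_ys = next_batch(10)
print(batch_xs.shape, batch_ys.shape)  # (10, 784) (10, 10)
```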

(5) tf.argmax(arg1, arg2) returns the index of the maximum value in arg1; arg2 can be 0 (max over each column) or 1 (max over each row). Since each prediction in y is one row of 10 values such as [0.6, 0.03, ..., 0.02], taking the max over the row gives the predicted digit.
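`np.argmax` uses the same axis convention, so the row-wise prediction step can be checked directly:

```python
import numpy as np

# one softmax output row: class 0 has the highest probability
y = np.array([[0.6, 0.03, 0.02, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05, 0.05]])
print(np.argmax(y, axis=1))  # [0] -> the predicted digit is 0
```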

(6) tf.equal(arg1, arg2) compares the two arguments element-wise, returning True where they match and False where they differ.

(7) tf.cast(arg1, dtype) converts the bool tensor to float32 (the conversion also works in the other direction).

(8) tf.reduce_mean(arg1) takes the mean of arg1, e.g. [1,0,1,0,0,0,0,0,0,0] gives a 20% accuracy.
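Items (6) through (8) chain together into the accuracy computation; the same pipeline in NumPy (the prediction and label arrays below are made up for illustration):

```python
import numpy as np

pred  = np.array([3, 1, 4, 1, 5, 9, 2, 6, 5, 3])  # hypothetical predicted digits
truth = np.array([3, 1, 4, 0, 5, 9, 2, 6, 5, 0])  # hypothetical true digits

correct = np.equal(pred, truth)                 # bool array, like tf.equal
accuracy = np.mean(correct.astype(np.float32))  # cast to float, then average
print(accuracy)  # 8 of 10 match -> 0.8
```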

<!--   That wraps up my understanding of the beginner TensorFlow MNIST example. -->