Author: chen_h
WeChat & QQ: 862251340
WeChat public account: coderpai
Jianshu: https://www.jianshu.com/p/d94...
This tutorial is a translation of the neural network tutorial written by Peter Roelants, who has authorized the translation. Here is the original.
The series introduces how to get started with neural networks and consists of five parts in total. You can find the complete series at the links below.
This part of the tutorial covers the logistic regression (classification) model.
In the previous tutorial we presented a very simple model with just one input and one output. In this tutorial we will build a binary classification model that takes two input variables. In statistics this model is known as logistic regression, and its network structure can be described as follows:
First, let's import the packages this tutorial needs.
```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import colorConverter, ListedColormap
from matplotlib import cm
```
In this tutorial, the target class $t$ is generated from two independent distributions. Samples with $t=1$ are shown in blue, and samples with $t=0$ are shown in red. The input $X$ is an $N \times 2$ matrix, and the target classes $t$ form an $N \times 1$ vector. The figure below gives a more intuitive picture.
```python
# Define and generate the samples
nb_of_samples_per_class = 20  # The number of samples in each class
red_mean = [-1, 0]  # The mean of the red class
blue_mean = [1, 0]  # The mean of the blue class
std_dev = 1.2  # Standard deviation of both classes
# Generate samples from both classes
x_red = np.random.randn(nb_of_samples_per_class, 2) * std_dev + red_mean
x_blue = np.random.randn(nb_of_samples_per_class, 2) * std_dev + blue_mean
# Merge samples into a set of input variables x and a corresponding set of output variables t
X = np.vstack((x_red, x_blue))
t = np.vstack((np.zeros((nb_of_samples_per_class, 1)),
               np.ones((nb_of_samples_per_class, 1))))
```
```python
# Plot both classes on the x1, x2 plane
plt.plot(x_red[:, 0], x_red[:, 1], 'ro', label='class red')
plt.plot(x_blue[:, 0], x_blue[:, 1], 'bo', label='class blue')
plt.grid()
plt.legend(loc=2)
plt.xlabel('$x_1$', fontsize=15)
plt.ylabel('$x_2$', fontsize=15)
plt.axis([-4, 4, -4, 4])
plt.title('red vs. blue classes in the input space')
plt.show()
```
The goal of the network we design is to predict the target $t$ from the input $x$. Suppose the input is $x = [x_1, x_2]$ and the weights are $w = [w_1, w_2]$. Then the probability that the predicted target is $1$, $P(t=1 \mid x, w)$, is the output $y$ of the neural network, i.e. $y = \sigma(x \cdot w^T)$, where $\sigma$ denotes the logistic function, defined as:

$$\sigma(z) = \frac{1}{1 + e^{-z}}$$
If the logistic function and its derivative are not yet clear to you, see this tutorial, which describes them in detail.
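As a quick sanity check (a small sketch, not part of the original tutorial), you can verify the identity $\sigma'(z) = \sigma(z)(1 - \sigma(z))$ numerically with a central finite difference:

```python
import numpy as np

def logistic(z):
    return 1 / (1 + np.exp(-z))

z = 0.5
eps = 1e-6
# Central finite-difference estimate of the derivative at z
numeric = (logistic(z + eps) - logistic(z - eps)) / (2 * eps)
# Analytic form: sigma(z) * (1 - sigma(z))
analytic = logistic(z) * (1 - logistic(z))
print(numeric, analytic)  # both ~0.2350
```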
To fit this classification problem, we minimize the cross-entropy error function. For each training sample $i$, the cross-entropy error is defined as:

$$\xi(t_i, y_i) = -\left[ t_i \log(y_i) + (1 - t_i)\log(1 - y_i) \right]$$
To compute the cross-entropy error over the whole training set, we simply sum the per-sample values:

$$\xi(t, y) = -\sum_{i=1}^{N} \left[ t_i \log(y_i) + (1 - t_i)\log(1 - y_i) \right]$$
A more detailed introduction to the cross-entropy error function can be found in this tutorial.
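To get a feel for how this error behaves, here is a small illustrative computation (not from the original post): confident correct predictions cost little, while confident wrong ones are penalized heavily.

```python
import numpy as np

# Cross-entropy for a single sample with target t and prediction y
def xent(t, y):
    return -(t * np.log(y) + (1 - t) * np.log(1 - y))

print(xent(1, 0.9))   # ~0.105: confident and correct -> small error
print(xent(1, 0.5))   # ~0.693: uncertain -> moderate error
print(xent(1, 0.01))  # ~4.605: confident and wrong -> large error
```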
In the code below, logistic(z) implements the logistic function, cost(y, t) implements the error function, nn(x, w) computes the output of the neural network, and nn_predict(x, w) returns the network's predicted class.
```python
# Define the logistic function
def logistic(z):
    return 1 / (1 + np.exp(-z))

# Define the neural network function y = 1 / (1 + numpy.exp(-x*w))
def nn(x, w):
    return logistic(x.dot(w.T))

# Define the neural network prediction function that only returns
# 1 or 0 depending on the predicted class
def nn_predict(x, w):
    return np.around(nn(x, w))

# Define the cost function
def cost(y, t):
    return - np.sum(np.multiply(t, np.log(y)) + np.multiply((1-t), np.log(1-y)))
```
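As a quick illustration (a sketch using an arbitrary weight vector, not from the original post), you can evaluate these functions on the X and t generated earlier:

```python
# Evaluate the untrained network on the generated samples (illustrative only)
w_init = np.asmatrix([0.5, -0.5])    # an arbitrary starting weight vector
y = nn(X, w_init)                    # predicted probabilities, shape (N, 1)
print(cost(y, t))                    # total cross-entropy over the training set
print(nn_predict(X[:5], w_init))     # hard 0/1 predictions for the first 5 samples
```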
```python
# Plot the cost as a function of the weights
# Define a vector of weights for which we want to plot the cost
nb_of_ws = 100  # compute the cost nb_of_ws times in each dimension
ws1 = np.linspace(-5, 5, num=nb_of_ws)  # weight 1
ws2 = np.linspace(-5, 5, num=nb_of_ws)  # weight 2
ws_x, ws_y = np.meshgrid(ws1, ws2)  # generate grid
cost_ws = np.zeros((nb_of_ws, nb_of_ws))  # initialize cost matrix
# Fill the cost matrix for each combination of weights
for i in range(nb_of_ws):
    for j in range(nb_of_ws):
        cost_ws[i,j] = cost(nn(X, np.asmatrix([ws_x[i,j], ws_y[i,j]])), t)
# Plot the cost function surface
plt.contourf(ws_x, ws_y, cost_ws, 20, cmap=cm.pink)
cbar = plt.colorbar()
cbar.ax.set_ylabel('$\\xi$', fontsize=15)
plt.xlabel('$w_1$', fontsize=15)
plt.ylabel('$w_2$', fontsize=15)
plt.title('Cost function surface')
plt.grid()
plt.show()
```
Gradient descent works by taking the derivative of the cost function $\xi$ with respect to each parameter and then updating the parameters in the direction of the negative gradient.

The parameter $w$ is updated along the negative gradient at a given learning rate, i.e. $w(k+1) = w(k) - \Delta w(k+1)$, where $\Delta w$ can be written as:

$$\Delta w = \mu \frac{\partial \xi}{\partial w}$$

with $\mu$ the learning rate. For each training sample $i$, $\partial \xi_i / \partial w$ is computed via the chain rule:

$$\frac{\partial \xi_i}{\partial w} = \frac{\partial \xi_i}{\partial y_i} \frac{\partial y_i}{\partial z_i} \frac{\partial z_i}{\partial w}$$

where $y_i = \sigma(z_i)$ is the logistic output of the neuron and $z_i = x_i \cdot w^T$ is its input.
Before working out the derivative of the cost with respect to the weights in detail, we first quote a few derivatives from this tutorial:

$$\frac{\partial z_i}{\partial w} = x_i, \qquad \frac{\partial y_i}{\partial z_i} = y_i (1 - y_i), \qquad \frac{\partial \xi_i}{\partial y_i} = \frac{y_i - t_i}{y_i (1 - y_i)}$$
Plugging these pieces into the chain rule above gives the full derivation:

$$\frac{\partial \xi_i}{\partial w} = \frac{y_i - t_i}{y_i (1 - y_i)} \cdot y_i (1 - y_i) \cdot x_i = (y_i - t_i)\, x_i$$
Therefore, for a single sample the update $\Delta w_j$ for each weight can be written as:

$$\Delta w_j = \mu\, (y_i - t_i)\, x_{ij}$$
In batch processing, we accumulate the gradients of all $N$ samples:

$$\Delta w_j = \mu \sum_{i=1}^{N} (y_i - t_i)\, x_{ij}$$
Before starting gradient descent, initialize the parameters with random values, then update them with gradient descent until convergence.
In the code below, gradient(w, x, t) implements the gradient $\partial \xi / \partial w$, and delta_w(w_k, x, t, learning_rate) implements $\Delta w$.
```python
# Define the gradient function
def gradient(w, x, t):
    return (nn(x, w) - t).T * x

# Define the update function delta w which returns the
# delta w for each weight in a vector
def delta_w(w_k, x, t, learning_rate):
    return learning_rate * gradient(w_k, x, t)
```
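A useful sanity check (a sketch, not in the original tutorial) is to compare this analytic gradient against finite differences of the cost function; the two should agree closely at any test point:

```python
# Compare the analytic gradient with a numerical finite-difference estimate
w_test = np.asmatrix([1.0, -1.5])  # arbitrary test point
eps = 1e-5
grad_analytic = gradient(w_test, X, t)
for j in range(2):
    w_plus, w_minus = np.copy(w_test), np.copy(w_test)
    w_plus[0, j] += eps
    w_minus[0, j] -= eps
    grad_numeric = (cost(nn(X, np.asmatrix(w_plus)), t)
                    - cost(nn(X, np.asmatrix(w_minus)), t)) / (2 * eps)
    print(grad_analytic[0, j], grad_numeric)  # should agree closely
```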
We run 10 gradient descent iterations on the training set $X$. The figure below shows the first three updates; the blue dots mark the value of $w(k)$ at iteration $k$.
```python
# Set the initial weight parameter
w = np.asmatrix([-4, -2])
# Set the learning rate
learning_rate = 0.05

# Start the gradient descent updates and plot the iterations
nb_of_iterations = 10  # Number of gradient descent updates
w_iter = [w]           # List to store the weight values over the iterations
for i in range(nb_of_iterations):
    dw = delta_w(w, X, t, learning_rate)  # Get the delta w update
    w = w - dw                            # Update the weights
    w_iter.append(w)                      # Store the weights for plotting
```
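To monitor convergence, you can also print the cost at each stored weight vector (an illustrative addition, reusing the cost and nn functions defined earlier); the cost should decrease monotonically for this learning rate:

```python
# Optional: print the cross-entropy cost at each iteration
for k, w_k in enumerate(w_iter):
    print('iteration {}: cost = {:.4f}'.format(k, cost(nn(X, w_k), t)))
```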
```python
# Plot the first weight updates on the error surface
# Plot the error surface
plt.contourf(ws_x, ws_y, cost_ws, 20, alpha=0.9, cmap=cm.pink)
cbar = plt.colorbar()
cbar.ax.set_ylabel('cost')

# Plot the updates
for i in range(1, 4):
    w1 = w_iter[i-1]
    w2 = w_iter[i]
    # Plot the weight-cost value and the line that represents the update
    plt.plot(w1[0,0], w1[0,1], 'bo')  # Plot the weight cost value
    plt.plot([w1[0,0], w2[0,0]], [w1[0,1], w2[0,1]], 'b-')
    plt.text(w1[0,0]-0.2, w1[0,1]+0.4, '$w({})$'.format(i), color='b')
w1 = w_iter[3]
# Plot the last weight
plt.plot(w1[0,0], w1[0,1], 'bo')
plt.text(w1[0,0]-0.2, w1[0,1]+0.4, '$w({})$'.format(4), color='b')
# Show figure
plt.xlabel('$w_1$', fontsize=15)
plt.ylabel('$w_2$', fontsize=15)
plt.title('Gradient descent updates on cost surface')
plt.grid()
plt.show()
```
The following code visualizes the resulting decision boundary.
```python
# Plot the resulting decision boundary
# Generate a grid over the input space to plot the color of the
# classification at that grid point
nb_of_xs = 200
xs1 = np.linspace(-4, 4, num=nb_of_xs)
xs2 = np.linspace(-4, 4, num=nb_of_xs)
xx, yy = np.meshgrid(xs1, xs2)  # create the grid
# Initialize and fill the classification plane
classification_plane = np.zeros((nb_of_xs, nb_of_xs))
for i in range(nb_of_xs):
    for j in range(nb_of_xs):
        classification_plane[i,j] = nn_predict(np.asmatrix([xx[i,j], yy[i,j]]), w)
# Create a color map to show the classification colors of each grid point
cmap = ListedColormap([
    colorConverter.to_rgba('r', alpha=0.30),
    colorConverter.to_rgba('b', alpha=0.30)])
# Plot the classification plane with decision boundary and input samples
plt.contourf(xx, yy, classification_plane, cmap=cmap)
plt.plot(x_red[:,0], x_red[:,1], 'ro', label='target red')
plt.plot(x_blue[:,0], x_blue[:,1], 'bo', label='target blue')
plt.grid()
plt.legend(loc=2)
plt.xlabel('$x_1$', fontsize=15)
plt.ylabel('$x_2$', fontsize=15)
plt.title('red vs. blue classification boundary')
plt.show()
```
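Finally, as a quick check (not part of the original post), you can compute how many training samples the trained weights classify correctly:

```python
# Fraction of training samples classified correctly by the trained weights
predictions = nn_predict(X, w)
accuracy = np.mean(predictions == t)
print('training accuracy: {:.2f}'.format(accuracy))
```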