RNN Training Algorithm: Forward Propagation


See the basic framework: https://goodgoodstudy.blog.csdn.net/article/details/109245095

Problem Statement

Consider the recurrent network model:
$$x(k) = f[Wx(k-1)] \tag{1}$$
where $x(k) \in \mathbb{R}^N$ is the state of the network nodes and $W \in \mathbb{R}^{N\times N}$ is the matrix of connection weights between nodes. The output nodes of the network are $\{x_i(k) \mid i \in O\}$, where $O$ is the index set of all output (or "observed") units.
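As a concrete illustration, the dynamics (1) can be simulated directly. Here `f = tanh` is an assumed choice for the demo, since the text leaves the activation generic:

```python
import numpy as np

# Simulate the recurrent dynamics x(k) = f(W x(k-1)) of Eq. (1).
# f = tanh is an illustrative assumption; the text leaves f generic.
def simulate(W, x0, K, f=np.tanh):
    """Return the trajectory [x(1), ..., x(K)] as a (K, N) array."""
    xs, x = [], x0
    for _ in range(K):
        x = f(W @ x)          # next state computed from the previous state
        xs.append(x)
    return np.array(xs)

rng = np.random.default_rng(0)
N, K = 3, 5
W = rng.normal(scale=0.5, size=(N, N))
x0 = rng.normal(size=N)
traj = simulate(W, x0, K)
print(traj.shape)             # one row per time step: (K, N)
```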

The goal of training is to reduce the error between the observed states and their target values, i.e. to minimize the loss function:
$$E = \frac{1}{2}\sum_{k=1}^K \sum_{i\in O} \left[x_i(k) - d_i(k)\right]^2 \tag{2}$$
where $d_i(k)$ is the target value of node $i$ at time $k$.
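A minimal sketch of the loss (2), assuming the trajectory and targets are stored as `(K, N)` arrays and `O` holds the indices of the observed units:

```python
import numpy as np

# Loss (2): half the squared error, summed over time and observed units only.
def loss(traj, targets, O):
    """traj, targets: (K, N) arrays; O: list of observed-unit indices."""
    err = traj[:, O] - targets[:, O]
    return 0.5 * np.sum(err ** 2)

traj = np.array([[1.0, 2.0], [3.0, 4.0]])
targets = np.zeros((2, 2))
print(loss(traj, targets, O=[0]))  # only unit 0 contributes: 0.5*(1+9) = 5.0
```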

$W$ is updated by gradient descent:
$$W_+ = W - \eta \frac{dE}{dW}$$

Notation

Write $W$ in terms of its rows:
$$W \equiv \begin{bmatrix} \text{---}\, w_1^T \,\text{---} \\ \vdots \\ \text{---}\, w_N^T \,\text{---} \end{bmatrix}_{N\times N}$$
Flatten the matrix $W$ into a column vector, denoted $w$:
$$w = [w_1^T, \cdots, w_N^T]^T \in \mathbb{R}^{N^2}$$
Stack the states at all times into a column vector, denoted $x$:
$$x = [x^T(1), \cdots, x^T(K)]^T \in \mathbb{R}^{NK}$$
Treat RNN training as a constrained optimization problem: equation (1) becomes the constraint
$$g(k) \equiv f[Wx(k-1)] - x(k) = 0, \quad k = 1, \ldots, K \tag{3}$$
and the constraints at all times are stacked as
$$g = [g^T(1), \ldots, g^T(K)]^T \in \mathbb{R}^{NK}$$
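By construction, any trajectory generated by (1) satisfies the constraint (3) exactly; a quick numerical check, with `tanh` again standing in for the generic `f`:

```python
import numpy as np

# Check that g(k) = f(W x(k-1)) - x(k) vanishes along a trajectory of (1).
rng = np.random.default_rng(1)
N, K = 4, 6
W = rng.normal(scale=0.3, size=(N, N))
states = [rng.normal(size=N)]                  # x(0)
for _ in range(K):
    states.append(np.tanh(W @ states[-1]))     # x(k) generated by Eq. (1)

# residuals g(1), ..., g(K) are identically zero along this trajectory
g = [np.tanh(W @ states[k - 1]) - states[k] for k in range(1, K + 1)]
print(max(np.linalg.norm(gk) for gk in g))     # 0.0 by construction
```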


Because the constraint $g(x(w), w) = 0$ holds identically in $w$, its total derivative vanishes:
$$0 = \frac{dg(x(w),w)}{dw} = \frac{\partial g(x(w),w)}{\partial x}\frac{\partial x(w)}{\partial w} + \frac{\partial g(x(w),w)}{\partial w} \tag{4}$$
Solving (4) for $\partial x/\partial w$ and substituting into $\frac{dE}{dw} = \frac{\partial E}{\partial x}\frac{\partial x}{\partial w}$ gives
$$\frac{dE}{dw} = -\frac{\partial E}{\partial x} \left(\frac{\partial g}{\partial x}\right)^{-1} \frac{\partial g}{\partial w} \tag{5}$$
The three factors in (5) are as follows:
1.
$$\frac{\partial E}{\partial x} = [e^T(1), \ldots, e^T(K)], \qquad e_i(k) = \begin{cases} x_i(k) - d_i(k), & \text{if } i \in O, \\ 0, & \text{otherwise,} \end{cases} \quad k = 1, \ldots, K.$$
2.
$$\frac{\partial g}{\partial x} = \begin{bmatrix} -I & 0 & 0 & \ldots & 0 \\ D(1)W & -I & 0 & \ldots & 0 \\ 0 & D(2)W & -I & \ldots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & D(K-1)W & -I \end{bmatrix}_{NK\times NK}$$
where
$$D(j) = \begin{bmatrix} f'(w_1^T x(j)) & & 0 \\ & \ddots & \\ 0 & & f'(w_N^T x(j)) \end{bmatrix}$$

3.
$$\frac{\partial g}{\partial w} = \begin{bmatrix} D(0)X(0) \\ D(1)X(1) \\ \vdots \\ D(K-1)X(K-1) \end{bmatrix}$$
where
$$X(k) \triangleq \begin{bmatrix} x^T(k) & & & \\ & x^T(k) & & \\ & & \ddots & \\ & & & x^T(k) \end{bmatrix}_{N\times N^2}$$
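The blocks $D(j)$ and $X(k)$ are easy to materialize; note that $X(k)$ is exactly the Kronecker product $I_N \otimes x^T(k)$. The `tanh` derivative below is an assumed stand-in for the generic $f'$:

```python
import numpy as np

# D(j): diagonal matrix of f'(w_i^T x(j)); with f = tanh, f'(u) = 1 - tanh(u)^2.
def D(W, x_j):
    return np.diag(1.0 - np.tanh(W @ x_j) ** 2)

# X(k): N x N^2 block-diagonal matrix with x(k)^T on each diagonal block,
# i.e. the Kronecker product I_N (x) x(k)^T.
def X(x_k):
    return np.kron(np.eye(x_k.size), x_k.reshape(1, -1))

x = np.array([1.0, 2.0])
print(X(x))        # [[1, 2, 0, 0], [0, 0, 1, 2]]
```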

Forward Propagation


Define
$$Y = \left(\frac{\partial g}{\partial x}\right)^{-1} \frac{\partial g}{\partial w} \in \mathbb{R}^{NK \times N^2} \tag{6}$$
and then compute
$$\frac{dE}{dw} = -\frac{\partial E}{\partial x} Y$$


Rearranging (6) gives the linear system
$$\frac{\partial g}{\partial x} Y = \frac{\partial g}{\partial w}$$

Partition $Y$ into $K$ blocks of $N$ rows each:
$$Y = \begin{bmatrix} Y(0) \\ \vdots \\ Y(K-1) \end{bmatrix}_{NK\times N^2}, \quad Y(k) \in \mathbb{R}^{N\times N^2}$$
Then
$$\begin{bmatrix} -I & 0 & 0 & \ldots & 0 \\ D(1)W & -I & 0 & \ldots & 0 \\ 0 & D(2)W & -I & \ldots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & D(K-1)W & -I \end{bmatrix}_{NK\times NK} Y = \begin{bmatrix} D(0)X(0) \\ D(1)X(1) \\ \vdots \\ D(K-1)X(K-1) \end{bmatrix}$$
Because this system is block lower triangular, it can be solved forward in time:
$$\begin{aligned} Y(0) &= -D(0)X(0), \\ Y(k) &= D(k)W\,Y(k-1) - D(k)X(k), \quad k = 1, \ldots, K-1. \end{aligned}$$
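The recursion runs forward in time alongside the state updates; a sketch with `tanh` as the assumed activation, where each $Y(k)$ has shape $N \times N^2$:

```python
import numpy as np

# Forward recursion for the sensitivity blocks:
#   Y(0) = -D(0) X(0),   Y(k) = D(k) W Y(k-1) - D(k) X(k).
# f = tanh is an assumed activation choice.
def forward_sensitivities(W, states):
    """states = [x(0), ..., x(K-1)]; returns [Y(0), ..., Y(K-1)]."""
    N = W.shape[0]
    D = lambda x: np.diag(1.0 - np.tanh(W @ x) ** 2)   # diag of f'(w_i^T x)
    X = lambda x: np.kron(np.eye(N), x.reshape(1, -1))  # I_N (x) x^T
    Ys = [-D(states[0]) @ X(states[0])]
    for k in range(1, len(states)):
        Ys.append(D(states[k]) @ W @ Ys[-1] - D(states[k]) @ X(states[k]))
    return Ys

rng = np.random.default_rng(3)
N = 3
W = rng.normal(scale=0.4, size=(N, N))
states = [rng.normal(size=N) for _ in range(4)]
Ys = forward_sensitivities(W, states)
print(len(Ys), Ys[0].shape)   # 4 (3, 9)
```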
Therefore
$$\begin{aligned} \frac{dE}{dw} &= -\frac{\partial E}{\partial x} Y \\ &= -[e^T(1), \ldots, e^T(K)] \begin{bmatrix} Y(0) \\ \vdots \\ Y(K-1) \end{bmatrix} \\ &= -\sum_{k=1}^K e^T(k)\, Y(k-1). \end{aligned}$$
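Putting the pieces together, the whole forward-propagation gradient can be sketched end to end and checked against finite differences. The choices `f = tanh` and "every unit observed" ($O$ = all indices) are assumptions for the demo, and $w$ follows the row-wise flattening convention from the notation section:

```python
import numpy as np

# End-to-end sketch: run (1), the Y recursion, and accumulate
#   dE/dw = -sum_{k=1..K} e^T(k) Y(k-1).
# Assumptions: f = tanh, all units observed, w = row-major vec of W.
def rnn_gradient(W, x0, targets):
    K, N = targets.shape
    f = np.tanh
    df = lambda u: 1.0 - np.tanh(u) ** 2
    Xb = lambda x: np.kron(np.eye(N), x.reshape(1, -1))
    xs = [x0]
    Ys = [-np.diag(df(W @ x0)) @ Xb(x0)]           # Y(0) = -D(0)X(0)
    grad = np.zeros(N * N)
    for k in range(1, K + 1):
        xs.append(f(W @ xs[-1]))                   # x(k)
        e = xs[k] - targets[k - 1]                 # e(k), all units observed
        grad -= e @ Ys[k - 1]                      # accumulate -e^T(k) Y(k-1)
        if k < K:
            Dk = np.diag(df(W @ xs[k]))
            Ys.append(Dk @ W @ Ys[-1] - Dk @ Xb(xs[k]))  # Y(k)
    return grad

rng = np.random.default_rng(2)
N, K = 3, 4
W = rng.normal(scale=0.4, size=(N, N))
x0 = rng.normal(size=N)
d = rng.normal(size=(K, N))

# finite-difference check of dE/dw against the loss (2)
def E(Wf):
    x, s = x0, 0.0
    for k in range(K):
        x = np.tanh(Wf @ x)
        s += 0.5 * np.sum((x - d[k]) ** 2)
    return s

eps = 1e-6
g_fd = np.zeros(N * N)
for i in range(N * N):
    Wp, Wm = W.flatten(), W.flatten()              # flatten() copies
    Wp[i] += eps
    Wm[i] -= eps
    g_fd[i] = (E(Wp.reshape(N, N)) - E(Wm.reshape(N, N))) / (2 * eps)

print(np.max(np.abs(rnn_gradient(W, x0, d) - g_fd)))  # agrees to FD accuracy
```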
