
RNN theory


---
layout: post
date: 2018-04-12 00:01
title: "A Theoretical Dissection of Recurrent Neural Networks (RNN)"
categories: ML
tag:
  - Machine Learning
  - deep learning
  - RNN
comment: true
---

Notes still to be organized.

A First Look at RNNs

$$ s_{t} = f(Ux_{t} + Ws_{t-1}) \label{1}\tag{1} $$

$$ o_{t} = \text{softmax}(V s_{t}) \label{2}\tag{2} $$

where $x_{t}$ is the input at time $t$; $s_{t}$ is the hidden state at time $t$, with $f$ usually taken to be tanh or ReLU; and $o_{t}$ is the output at time $t$. (A minimal code sketch of this recurrence follows the list below.)

  1. The hidden state $s_{t}$ can be viewed as the network's memory: from the memory at time $t$ alone we can compute the output $o_{t}$.
  2. An RNN shares three weight matrices across time steps, namely $U$, $V$, and $W$ above.
  3. RNNs come in several shapes, e.g. one-to-one, one-to-many, many-to-one, many-to-many, etc.; see reference [1].
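
A minimal NumPy sketch of the recurrence in equations (1) and (2). The dimensions, the random initialization, and the choice of `np.tanh` for $f$ are illustrative assumptions, not taken from the original notes:

```python
import numpy as np

def softmax(z):
    z = z - z.max()                          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Illustrative sizes: input dim 4, hidden dim 8, output dim 3.
rng = np.random.default_rng(0)
U = rng.normal(scale=0.1, size=(8, 4))       # input  -> hidden
W = rng.normal(scale=0.1, size=(8, 8))       # hidden -> hidden
V = rng.normal(scale=0.1, size=(3, 8))       # hidden -> output

def rnn_forward(xs):
    """xs: list of input vectors x_t; returns hidden states s_t and outputs o_t."""
    s = np.zeros(8)                          # the initial hidden state s_{-1}
    states, outputs = [], []
    for x in xs:
        s = np.tanh(U @ x + W @ s)           # equation (1), with f = tanh
        o = softmax(V @ s)                   # equation (2)
        states.append(s)
        outputs.append(o)
    return states, outputs

xs = [rng.normal(size=4) for _ in range(5)]  # a toy sequence of length 5
states, outputs = rnn_forward(xs)
print(outputs[-1])                           # probability distribution over 3 classes
```

Note that the same $U$, $V$, $W$ are reused at every time step, which is the weight sharing described in point 2 above.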

Backpropagation Through Time (BPTT)

Training with BPTT runs into two well-known problems:

  1. Vanishing gradients: typically mitigated by gated architectures such as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU).
  2. Exploding gradients: commonly handled by clipping the gradient norm (a minimal sketch follows this list).
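
Gradient clipping is not discussed further in the original notes; for illustration, a minimal clip-by-global-norm sketch (the `max_norm` value is an arbitrary assumption):

```python
import numpy as np

def clip_by_global_norm(grads, max_norm=5.0):
    """Rescale a list of gradient arrays so their combined L2 norm is at most max_norm."""
    total_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    if total_norm > max_norm:
        scale = max_norm / total_norm
        grads = [g * scale for g in grads]
    return grads
```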

$$ E_{t}(y_{t}, o_{t}) = -y_{t} \log o_{t} $$

$$ \begin{align} E(y, o) &= \sum_{t} E_{t}(y_{t}, o_{t}) \\ &= - \sum_{t} y_{t} \log o_{t} \end{align} $$

where $y_{t}$ is the correct output at time $t$ and $o_{t}$ is the predicted output at time $t$. (The original post shows a diagram of the error accumulated over the unrolled sequence here.)

The goal of training the RNN is to learn the three parameters $U$, $V$, and $W$, and the method used is SGD. We compute the partial derivatives:

$$ \frac{\partial E}{\partial W} = \sum_{t} \frac{\partial E_{t}}{\partial W} $$
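
Each per-timestep term itself expands by the chain rule. A sketch of the standard BPTT expansion, which is not spelled out in the original notes but follows directly from equations (1) and (2):

$$ \frac{\partial E_{t}}{\partial W} = \sum_{k=0}^{t} \frac{\partial E_{t}}{\partial o_{t}} \frac{\partial o_{t}}{\partial s_{t}} \frac{\partial s_{t}}{\partial s_{k}} \frac{\partial s_{k}}{\partial W}, \qquad \frac{\partial s_{t}}{\partial s_{k}} = \prod_{j=k+1}^{t} \frac{\partial s_{j}}{\partial s_{j-1}} $$

The repeated product of Jacobians $\partial s_{j} / \partial s_{j-1}$ is exactly what makes gradients vanish or explode over long sequences, as noted in the list above.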

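
A sketch of how these summed gradients can be accumulated in code, continuing the toy setup above (it reuses the `softmax` helper and the parameters `U`, `V`, `W` from the forward-pass sketch; this is an illustrative implementation, not the post's own code):

```python
import numpy as np

def bptt_gradients(xs, ys, U, V, W):
    """Backpropagation through time for the vanilla RNN above.
    xs: list of input vectors x_t; ys: list of one-hot target vectors y_t.
    Returns (loss, dU, dV, dW) with loss = sum_t E_t = -sum_t y_t . log o_t."""
    H = W.shape[0]
    # ---- forward pass, caching states for the backward pass ----
    s_prev = np.zeros(H)
    states, prev_states, outputs = [], [], []
    for x in xs:
        prev_states.append(s_prev)
        s_prev = np.tanh(U @ x + W @ s_prev)       # equation (1)
        states.append(s_prev)
        outputs.append(softmax(V @ s_prev))        # equation (2)
    loss = -sum(float(y @ np.log(o)) for y, o in zip(ys, outputs))

    # ---- backward pass, summing contributions over all time steps ----
    dU, dV, dW = np.zeros_like(U), np.zeros_like(V), np.zeros_like(W)
    ds_next = np.zeros(H)                          # gradient flowing from step t+1 into s_t
    for t in reversed(range(len(xs))):
        delta_o = outputs[t] - ys[t]               # dE_t/d(V s_t) for softmax + cross-entropy
        dV += np.outer(delta_o, states[t])
        ds = V.T @ delta_o + ds_next               # total gradient w.r.t. s_t
        delta_raw = (1.0 - states[t] ** 2) * ds    # back through tanh
        dU += np.outer(delta_raw, xs[t])
        dW += np.outer(delta_raw, prev_states[t])
        ds_next = W.T @ delta_raw                  # propagate to s_{t-1}
    return loss, dU, dV, dW
```

A toy SGD step using it (targets and learning rate are made up for illustration):

```python
ys = [np.eye(3)[i % 3] for i in range(5)]          # toy one-hot targets
loss, dU, dV, dW = bptt_gradients(xs, ys, U, V, W)
lr = 0.1
U -= lr * dU; V -= lr * dV; W -= lr * dW           # update the three shared parameters
```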

References

  1. http://cs231n.stanford.edu/slides/2016/winter1516_lecture10.pdf
  2. http://cs231n.stanford.edu/slides/2017/cs231n_2017_lecture10.pdf
  3. http://adventuresinmachinelearning.com/recurrent-neural-networks-lstm-tutorial-tensorflow/
  4. http://ir.hit.edu.cn/~jguo/docs/notes/bptt.pdf
  5. https://zhuanlan.zhihu.com/p/27485750
  6. http://www.wildml.com/2015/09/recurrent-neural-networks-tutorial-part-1-introduction-to-rnns/
  7. http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf
  8. http://colah.github.io/posts/2015-08-Backprop/