awesome-neat-rnn

This is a concise to-read list for recurrent neural networks (RNNs).

Long author lists are omitted; each entry starts with the year, followed by the title, the journal, and a link. Newer papers come first. There are also additional resources such as code, interesting blog posts, and cool articles.

Maintainer: [Johnny Ho](https://github.com/johnny5550822)

Contributions

This repository was originally created to keep track of resources related to recurrent neural networks. I hope everyone can contribute to it and make it better, so please submit pull requests! For any questions, contact me ([email protected])

Additional resources

This repository aims to stay concise and neat. Another repository, [awesome-rnn](https://github.com/kjw0612/awesome-rnn), also provides excellent resources (with full author lists, etc.) for recurrent neural networks.

Table of Contents

Software Package

  • python, [neon]
  • python, [chainer]
  • torch, [oxnn]
  • torch, [Element-research]
  • Deep Learning in general
    • torch, [dp], a Torch deep learning library. The examples folder is probably the most useful part, e.g., the CNN implementation there

Sample Code

Blogs

  • Torch7 blog posts (some cool explanations and code), [blog]
  • Up-to-date deep learning news from Notey, [blog]
  • What does deep learning think about your selfie? [blog]
  • The Unreasonable Effectiveness of Recurrent Neural Networks. [blog]

Review

  • 2015 Deep Learning, Nature [paper]

Tutorial

  • 2015, Recurrent Neural Networks Tutorial [link]
  • 2015, Understanding LSTMs [link]
  • 2003 A tutorial on training recurrent neural networks, covering BPPT, RTRL, EKF and the "echo state network" approach [link] (see the BPTT sketch after this list)
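
As a companion to the BPTT tutorial above, here is a minimal numpy sketch of a vanilla (Elman) RNN forward pass together with its backpropagation-through-time gradients. The layer sizes, token ids, and variable names are illustrative and not taken from any of the linked resources.

```python
import numpy as np

# Illustrative sizes: vocabulary, hidden units, sequence length
V, H, T = 10, 8, 5
rng = np.random.RandomState(0)

# Parameters of a vanilla RNN with a softmax output layer
Wxh = rng.randn(H, V) * 0.01   # input  -> hidden
Whh = rng.randn(H, H) * 0.01   # hidden -> hidden
Why = rng.randn(V, H) * 0.01   # hidden -> output
bh, by = np.zeros((H, 1)), np.zeros((V, 1))

def bptt(inputs, targets, h_prev):
    """Forward pass over one sequence, then backprop through time.

    inputs, targets: lists of T token ids; h_prev: (H, 1) initial hidden state.
    Returns the cross-entropy loss and the parameter gradients.
    """
    xs, hs, ps = {}, {-1: h_prev}, {}
    loss = 0.0
    # Forward: h_t = tanh(Wxh x_t + Whh h_{t-1} + bh), p_t = softmax(Why h_t + by)
    for t in range(T):
        xs[t] = np.zeros((V, 1)); xs[t][inputs[t]] = 1.0          # one-hot input
        hs[t] = np.tanh(Wxh @ xs[t] + Whh @ hs[t - 1] + bh)
        ys = Why @ hs[t] + by
        ps[t] = np.exp(ys - ys.max()); ps[t] /= ps[t].sum()
        loss += -np.log(ps[t][targets[t], 0])
    # Backward: accumulate gradients from t = T-1 down to 0
    dWxh, dWhh, dWhy = np.zeros_like(Wxh), np.zeros_like(Whh), np.zeros_like(Why)
    dbh, dby = np.zeros_like(bh), np.zeros_like(by)
    dh_next = np.zeros((H, 1))
    for t in reversed(range(T)):
        dy = ps[t].copy(); dy[targets[t]] -= 1.0                  # d loss / d logits
        dWhy += dy @ hs[t].T; dby += dy
        dh = Why.T @ dy + dh_next                                 # gradient into h_t
        dh_raw = (1.0 - hs[t] ** 2) * dh                          # through tanh
        dbh += dh_raw; dWxh += dh_raw @ xs[t].T; dWhh += dh_raw @ hs[t - 1].T
        dh_next = Whh.T @ dh_raw                                  # pass to h_{t-1}
    return loss, (dWxh, dWhh, dWhy, dbh, dby)

loss, grads = bptt(list(range(T)), list(range(1, T + 1)), np.zeros((H, 1)))
print("loss:", loss)
```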

Language Modeling

  • 2016 Long Short-Term Memory-Networks for Machine Reading, arXiv [paper]
  • 2015 Teaching Machines to Read and Comprehend, NIPS [paper]
  • 2015 Character-Aware Neural Language Models, arXiv [paper]

Translation

  • 2014 Sequence to Sequence Learning with Neural Networks, NIPS [paper]
  • 2014 On the Properties of Neural Machine Translation: Encoder–Decoder Approaches, arXiv [paper]
  • 2014 Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation, arXiv [paper] (see the encoder-decoder sketch after this list)
  • 2013 Recurrent Continuous Translation Models, EMNLP [paper]
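
The papers in this section share the encoder-decoder idea: one RNN reads the source sentence into a fixed-length vector, and a second RNN generates the translation from that vector. Below is a minimal, untrained numpy sketch of that data flow with greedy decoding; the vocabulary sizes, token ids (including the assumed EOS id), and weight names are illustrative, and plain tanh cells stand in for the LSTM/GRU cells used in the papers.

```python
import numpy as np

rng = np.random.RandomState(0)
V_SRC, V_TGT, H = 12, 14, 16             # illustrative vocabulary and hidden sizes
EOS = 0                                  # assumed end-of-sequence token id

def init(shape):
    return rng.randn(*shape) * 0.1

def onehot(i, n):
    v = np.zeros((n, 1)); v[i] = 1.0; return v

# Encoder and decoder parameters (randomly initialized, i.e. untrained)
enc = {"Wx": init((H, V_SRC)), "Wh": init((H, H)), "b": np.zeros((H, 1))}
dec = {"Wx": init((H, V_TGT)), "Wh": init((H, H)), "b": np.zeros((H, 1)),
       "Wy": init((V_TGT, H)), "by": np.zeros((V_TGT, 1))}

def encode(src_ids):
    """Run the encoder RNN and return its final hidden state (the sentence vector)."""
    h = np.zeros((H, 1))
    for i in src_ids:
        h = np.tanh(enc["Wx"] @ onehot(i, V_SRC) + enc["Wh"] @ h + enc["b"])
    return h

def decode(h, max_len=10):
    """Greedy decoding: feed the previous output token back in until EOS."""
    out, prev = [], EOS                  # the start token is assumed to reuse the EOS id
    for _ in range(max_len):
        h = np.tanh(dec["Wx"] @ onehot(prev, V_TGT) + dec["Wh"] @ h + dec["b"])
        prev = int(np.argmax(dec["Wy"] @ h + dec["by"]))
        if prev == EOS:
            break
        out.append(prev)
    return out

print(decode(encode([3, 5, 7, 2])))      # untrained weights, so the output is arbitrary
```

In the papers, the two RNNs are trained end-to-end with a cross-entropy loss on the target tokens; this sketch only shows the forward data flow.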

Image Generation

  • 2015 DRAW: A Recurrent Neural Network For Image Generation, arXiv [paper](http://arxiv.org/abs/1502.04623)
  • 2015 Unveiling the Dreams of Word Embeddings: Towards Language-Driven Image Generation, arXiv [paper]
  • 2015 Generative Image Modeling Using Spatial LSTMs, arXiv [paper]
  • 2014 Recurrent Models of Visual Attention, arXiv [paper]

Handwriting

  • 2013 Generating Sequences With Recurrent Neural Networks, arXiv [paper]
  • 2007 Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks, NIPS [paper]

Text Generation

  • 2011 Generating Text with Recurrent Neural Networks, ICML [paper]

Questions and Answers

  • 2015 Ask Your Neurons: A Neural-based Approach to Answering Questions about Images [paper]
  • 2015 VQA: Visual Question Answering [paper]
  • 2015 Exploring Models and Data for Image Question Answering [paper]
  • 2015 Are You Talking to a Machine? Dataset and Methods for Multilingual Image Question Answering [paper]
  • 2015 Teaching Machines to Read and Comprehend [paper]
  • 2015 Ask Me Anything: Dynamic Memory Networks for Natural Language Processing [paper]

Cell Type

  • 2014 Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling, arXiv [paper]
  • 1997 Long Short-Term Memory, Neural Computation [paper] (see the sketch below)
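
For quick reference, here is a minimal numpy sketch of a single LSTM step in its common modern form (input, forget, and output gates plus an additive cell state; the forget gate was a later addition to the 1997 design). The stacked weight layout and the sizes are illustrative, not taken from the papers above.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM step: gates computed from [h_prev; x], cell state carried additively.

    x: (X, 1) input; h_prev, c_prev: (H, 1) previous hidden and cell states;
    W: (4H, H + X) stacked gate weights; b: (4H, 1) stacked biases.
    """
    H = h_prev.shape[0]
    z = W @ np.vstack([h_prev, x]) + b
    i = sigmoid(z[0 * H:1 * H])     # input gate
    f = sigmoid(z[1 * H:2 * H])     # forget gate
    o = sigmoid(z[2 * H:3 * H])     # output gate
    g = np.tanh(z[3 * H:4 * H])     # candidate cell update
    c = f * c_prev + i * g          # additive cell update (key to long-range memory)
    h = o * np.tanh(c)              # exposed hidden state
    return h, c

# Illustrative sizes and random weights
X, H = 3, 5
rng = np.random.RandomState(0)
h, c = lstm_step(rng.randn(X, 1), np.zeros((H, 1)), np.zeros((H, 1)),
                 rng.randn(4 * H, H + X) * 0.1, np.zeros((4 * H, 1)))
print(h.shape, c.shape)
```

The GRU evaluated in the 2014 paper follows the same gating idea but merges the cell and hidden states and uses two gates instead of three.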

Regularization

  • 2016 Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations, arXiv [paper] (see the sketch below)
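
Zoneout's core idea: at each time step, each hidden unit keeps its previous value with some probability instead of taking the new update, i.e. a dropout-like mask applied to the state transition rather than the activations. A minimal numpy sketch under that reading of the paper, with made-up function and variable names:

```python
import numpy as np

def zoneout(h_prev, h_new, rate, training=True, rng=np.random):
    """Zoneout: each unit keeps its previous value with probability `rate`.

    At test time the stochastic update is replaced by its expectation.
    """
    if training:
        keep_prev = (rng.rand(*h_prev.shape) < rate).astype(h_prev.dtype)
        return keep_prev * h_prev + (1.0 - keep_prev) * h_new
    return rate * h_prev + (1.0 - rate) * h_new

# Example: apply zoneout to a toy hidden-state update
h_prev = np.zeros(6)
h_new = np.tanh(np.random.randn(6))
print(zoneout(h_prev, h_new, rate=0.3))
```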

Other

  • Hyperparameters Optimization
    • 2015 Gradient-based Hyperparameter Optimization through Reversible Learning, arXiv [paper]
  • CNN: something cool and worth sharing here
    • 2015 Spatial Transformer Networks, NIPS [paper]
    • 2015 Human-level control through deep reinforcement learning, Nature [paper]
  • 2007 Multi-Dimensional Recurrent Neural Networks [paper]
