Deep Learning: Recurrent Neural Networks in Python
Like the course I just released on Hidden Markov Models, Recurrent Neural Networks are all about learning sequences. But whereas Markov models are limited by the Markov assumption, recurrent neural networks are not; as a result, they are more expressive and more powerful, and they have driven progress on tasks that had stalled for decades.
So what’s going to be in this course and how will it build on the previous neural network courses and Hidden Markov Models?
We are going to revisit the XOR problem, but we're going to extend it so that it becomes the parity problem. You'll see that regular feedforward neural networks have trouble solving this problem, but recurrent networks succeed because the key is to treat the input as a sequence.
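To see why parity is naturally a sequence problem, note that the answer can be carried through the input one bit at a time as a single running state, which is exactly the kind of hidden state a recurrent network can learn to maintain. A minimal sketch of that recurrence (plain Python, not the course's actual network code):

```python
def parity(bits):
    """Return 1 if the number of 1s in `bits` is odd, else 0."""
    state = 0  # plays the role of the RNN's hidden state
    for b in bits:
        state ^= b  # update the state with each input, in order
    return state

print(parity([1, 0, 1, 1]))  # three 1s (odd) -> 1
print(parity([1, 0, 0, 1]))  # two 1s (even) -> 0
```

A feedforward network sees all the bits at once and must memorize exponentially many input patterns, while a recurrent network only has to learn this one simple state update.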
In the next section of the course, we are going to revisit one of the most popular applications of recurrent neural networks - language modeling.
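A language model assigns a probability to the next word given the words that came before it. As a hedged illustration of that idea (a toy bigram model over a made-up corpus, not the recurrent model the course builds):

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus; a real model would train on far more text.
corpus = "the dog ran . the dog sat . the cat sat .".split()

# Count how often each word follows each other word.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word_prob(prev, nxt):
    """Estimate P(nxt | prev) from bigram counts."""
    total = sum(counts[prev].values())
    return counts[prev][nxt] / total

print(next_word_prob("the", "dog"))  # 2 of the 3 occurrences of "the" precede "dog"
```

A bigram model only conditions on the single previous word; a recurrent network conditions on the entire history through its hidden state, which is what makes it so much more powerful for this task.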
Another popular application of neural networks for language is word vectors or word embeddings.
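Mechanically, a word embedding is just a learned row in a matrix: each word index selects a dense vector. A minimal sketch (the vocabulary, sizes, and random values here are illustrative, not from the course):

```python
import numpy as np

# Illustrative vocabulary: word -> integer index.
vocab = {"the": 0, "cat": 1, "sat": 2}

rng = np.random.default_rng(0)
# Embedding matrix of shape (vocab_size, embedding_dim); in practice
# these values are learned during training, not random.
E = rng.normal(size=(len(vocab), 4))

vec = E[vocab["cat"]]  # embedding lookup is just row selection
print(vec.shape)  # (4,)
```

Training adjusts these rows so that words used in similar contexts end up with similar vectors, which is what makes word embeddings so useful as inputs to other models.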