
Recurrent Neural Networks for timeseries analysis

The world is ever changing. As a result, many of the systems and phenomena we are interested in evolve over time, resulting in time-evolving datasets. Timeseries often display many interesting properties and levels of correlation. In this tutorial we will introduce students to the use of Recurrent Neural Networks and LSTMs to model and forecast different kinds of timeseries.

Summary

From the closing price of the stock market to the number of clicks per second on a webpage or the sequence of venues visited by a tourist exploring a new city, time series and temporal sequences of discrete events are everywhere around us. Studying and understanding them requires us to take into account the sequence of values seen in previous steps and even long-term temporal correlations.

In this tutorial we will explore how to use Recurrent Neural Networks, a technique originally developed for Natural Language Processing, to model and forecast time series. Their advantages and disadvantages with respect to more traditional approaches will be highlighted.
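To make the idea concrete, here is a minimal sketch of what "recurrence" means for forecasting: a hidden state is updated at every step of the series, and a linear readout of the final state gives the next-step prediction. All weights, sizes, and names below are illustrative assumptions (random, untrained parameters), not code from the tutorial materials.

```python
import numpy as np

rng = np.random.default_rng(0)

hidden_size = 8  # illustrative choice
Wxh = rng.normal(scale=0.1, size=(hidden_size, 1))            # input -> hidden
Whh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # hidden -> hidden (the recursion)
Why = rng.normal(scale=0.1, size=(1, hidden_size))            # hidden -> output
bh = np.zeros((hidden_size, 1))
by = np.zeros((1, 1))

def rnn_forecast(series):
    """Run a vanilla RNN over a 1-D series; return the one-step-ahead prediction."""
    h = np.zeros((hidden_size, 1))
    for x in series:
        # Recurrent update: the new state depends on the input AND the previous state.
        h = np.tanh(Wxh * x + Whh @ h + bh)
    return (Why @ h + by).item()  # linear readout of the final state

series = np.sin(np.linspace(0, 2 * np.pi, 50))  # toy timeseries
prediction = rnn_forecast(series)
print(prediction)
```

In a real forecasting setting these weights would be trained (e.g. by backpropagation through time on past windows of the series); the point here is only the shape of the computation.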

Program

  1. Recurrent Neural Networks
    • Review of Feed-forward networks
    • Introducing recursion
    • Types of Recurrent Neural Networks
    • Our first recurrent network
  2. Gated Recurrent Units
    • Advantages of recursion
    • Controlling information flow
    • Gates and internal logic
  3. Long Short-Term Memory
    • Remembering the past
    • Avoiding vanishing gradients
    • Memory cells

Slides
