Time series prediction – with deep learning

More and more often, and in more and more areas, deep learning is making its appearance in the world around us.
Many small and medium businesses, however, will probably still think: Deep Learning – that’s for Google, Facebook & co., for the guys with big data and even bigger computing power (I’m barely resisting the temptation to write “yuge power” here).

Partly, this may be true – certainly when it comes to running through immense permutations of hyperparameter settings. The question, however, is whether we can’t obtain good results at more modest scales, too – in areas where traditional methods of data science / machine learning prevail. Prevail as of today, that is.

One such area is time series prediction, with ARIMA & co. at the top of the leaderboard. Can deep learning be a serious competitor here? In what cases? Why? Exploring this is like starting out on an unknown road, fascinated by the magical things that may await us 😉
In any case, I’ve started walking down the road (not running!), in a rather take-your-time-and-explore-the-surroundings way. That means there’s much still to come, and it’s really just a beginning.
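To give the question a bit of shape: the kind of model I have in mind is, for instance, a recurrent network that learns to forecast the next value of a series from a window of past values. The slides hold the real story; here is just a minimal sketch in Python/Keras on a toy sine wave (all settings are placeholders I picked for illustration, not the setup from the slides):

```python
# A minimal sketch: an LSTM learning to predict the next value
# of a sine wave from a window of the preceding values.
import numpy as np
from tensorflow import keras

# A toy univariate series
series = np.sin(np.arange(0, 100, 0.1))

# Sliding windows: predict step t from the preceding 20 steps
window = 20
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
X = X[..., np.newaxis]  # shape: (samples, timesteps, features)

model = keras.Sequential([
    keras.layers.LSTM(32, input_shape=(window, 1)),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=10, batch_size=32, verbose=0)

# One-step-ahead forecast from the last observed window
print(model.predict(X[-1:], verbose=0))
```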

Here, anyway, is the travel report – the presentation slides, I mean: best viewed on RPubs, as RMarkdown on github, or downloadable as pdf.
Enjoy!

Deep Learning in Action (the less mathy version, this time)

On Tuesday at Hochschule München, Fakultät für Informatik und Mathematik, I again gave a guest lecture on Deep Learning (RPubs, github, pdf). This time, it was more about applications than about matrices, more about general understanding than about architecture, and just in general about getting a feel for what deep learning is used for and why. (Deep reinforcement learning also made a short appearance there. Reinforcement learning certainly is another topic to post and/or present about, another time…)

I’ve used a lot of different sources, so I’ve put them all at the end, to make the presentation more readable. (Not only have I used lots of different sources, I’ve also used a few sources a lot. In deep learning, I find myself citing the same sources over and over – be it for the concise explanations, the great visualizations, or the inspiring ideas. Mainly thinking of Chris Olah’s and Andrej Karpathy’s blogs here, of the Deep Learning book, and of several Stanford lecture notes.)

One thing that always gets lost when you publish a presentation is the demos. In this case, I had three demos:

The first two are great sites that allow you to demonstrate the very basics of neural networks directly in the browser: When do you need hidden layers? What role does the shape of the dataset play? In what cases can adding a single neuron make the difference between failing at a task and solving it? (There’s a small code sketch of the classic case below.)
The third demo is just – I think – totally fun: Would you have known that you can play around with your own convolution kernels, just like that, in GIMP? 😉
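To spell out in code the point the first two demos make: XOR is the classic example of a problem no linear model – no network without a hidden layer – can solve, while even a small hidden layer (usually) can. This is not one of the actual demos, just a minimal sketch, assuming Keras:

```python
# XOR is not linearly separable: without a hidden layer the network
# can't solve it, while even a small hidden layer (usually) can.
import numpy as np
from tensorflow import keras

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype="float32")
y = np.array([[0], [1], [1], [0]], dtype="float32")

def xor_accuracy(hidden_units):
    layers = []
    if hidden_units > 0:
        layers.append(keras.layers.Dense(hidden_units, activation="tanh"))
    layers.append(keras.layers.Dense(1, activation="sigmoid"))
    model = keras.Sequential(layers)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    model.fit(X, y, epochs=2000, verbose=0)
    return model.evaluate(X, y, verbose=0)[1]

print("no hidden layer:", xor_accuracy(0))  # a linear model stays stuck
print("4 hidden units: ", xor_accuracy(4))  # usually reaches 1.0
```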
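And as for the third demo: what GIMP lets you do interactively (via its convolution matrix filter) comes down to just a few lines of code. A sketch with NumPy/SciPy, using one of the classic hand-written kernels:

```python
# Convolving a grayscale image with a hand-written 3x3 kernel -
# the same operation GIMP's convolution matrix dialog performs.
import numpy as np
from scipy.ndimage import convolve

image = np.random.rand(8, 8)  # stand-in for a real grayscale image

# A classic edge-detection kernel
kernel = np.array([[-1., -1., -1.],
                   [-1.,  8., -1.],
                   [-1., -1., -1.]])

edges = convolve(image, kernel)
print(edges)
```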

Deep Learning in Action

On Wednesday at Hochschule München, Fakultät für Informatik und Mathematik, I gave a presentation on Deep Learning (nbviewer, github, pdf).

Mainly concepts (what’s “deep” in Deep Learning, backpropagation, how to optimize …) and architectures (Multi-Layer Perceptron, Convolutional Neural Network, Recurrent Neural Network), but also demos and code examples (mainly using TensorFlow).
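The code examples themselves live in the notebook; as a stand-in here, a minimal sketch of the simplest of the three architectures – a Multi-Layer Perceptron classifying MNIST digits, assuming Keras (none of this is the presentation’s actual code):

```python
# A minimal Multi-Layer Perceptron: one hidden layer, trained on MNIST.
from tensorflow import keras

(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255
x_test = x_test.reshape(-1, 784).astype("float32") / 255

model = keras.Sequential([
    keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="sgd", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=128,
          validation_data=(x_test, y_test))
```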

It was/is a lot of material to cover in 90 minutes, and conceptual understanding / developing intuition was the main point. Of course, there is great online material to make use of, and you’ll see my preferences in the cited sources ;-).

Next year, having covered the basics, I hope to develop use cases and practical applications showing the applicability of Deep Learning even in environments that aren’t Google-sized (or Facebook-, Baidu-, Apple-sized…).
Stay tuned!