Deep learning, concepts and frameworks: Find your way through the jungle (talk)

Today at OOP in Munich, I gave an in-depth talk on deep learning, covering applications and basic concepts as well as practical demos with TensorFlow, Keras and PyTorch.

As usual, the slides are on RPubs, split up into 2 parts because of the many images included – lossy PNG compression worked wonders, but there’s only so much you can expect 😉 – so there’s a part 1 and a part 2.

There’s also the github repository with the demo notebooks.

Thanks to everyone who attended, and thank you for the interesting questions!

Practical Deep Learning (talk)

Yesterday at IT Tage 2017, I gave an introductory-level talk on deep learning.

After giving an overview of concepts and frameworks, I zoomed in on the task of image classification using Keras, TensorFlow and PyTorch, not aiming for high classification accuracy but rather trying to convey the different “look and feel” of these frameworks.

(By sheer chance, the use case chosen happened to be about telling apart different types of endurance sports ;-))
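To give a flavour of that Keras “look and feel” (shown here via the R interface rather than the Python notebooks from the talk, and with the input size and the three-class output made up purely for illustration), a tiny image classifier might be defined like this:

```r
library(keras)

num_classes <- 3  # assumption: three sports to tell apart

model <- keras_model_sequential() %>%
  layer_conv_2d(filters = 32, kernel_size = c(3, 3), activation = "relu",
                input_shape = c(128, 128, 3)) %>%
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%
  layer_conv_2d(filters = 64, kernel_size = c(3, 3), activation = "relu") %>%
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%
  layer_flatten() %>%
  layer_dense(units = 64, activation = "relu") %>%
  layer_dense(units = num_classes, activation = "softmax")

model %>% compile(optimizer = "adam",
                  loss = "categorical_crossentropy",
                  metrics = "accuracy")
```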

Here are the slides, and here are the Jupyter notebooks.

Thanks to everyone who attended & thanks for reading!

I’m a developer, why should I care about matrices or calculus? (talk at MLConference 2017)

Yesterday at ML Conference, which took place this year for the first time, I gave a talk on cool bits of calculus and linear algebra that are useful and fun to know if you’re writing code for deep learning and/or machine learning.

Originally, the title was something like “What every interested ML/DL developer should know about matrices and calculus”, but I didn’t like the schoolmasterly tone that had; what I really wanted to convey was the fun and the fascination of it …

So, without further ado, here are the slides and the raw presentation on github.

Thanks for reading!

Deep Learning with Keras – using R (talk)

This week in Kassel, [R]Kenntnistage 2017 took place, organised by EODA. It was all about Data Science (with R, mostly, as you could guess): Speakers presented interesting applications in industry, manufacturing, ecology, journalism and other fields, including use cases such as predictive maintenance, forecasting and risk analysis.

I had the honour of giving a talk too (thanks, guys!), combining two of my favorite topics – deep learning and R. The slides are on RPubs as usual, and the source code (including complete examples) can be found on github.

Last but not least, it’s great to see data science, and R, gaining momentum like that (this is Europe, so I can still write such a sentence ;-))
If you’ll allow me a little advertisement here – if you’re wondering what insights might come out of your data: at Trivadis, we’re a (still) smallish but super motivated team of data scientists and machine learning practitioners, happy to help!

Time series shootout: ARIMA vs. LSTM (talk)

Yesterday, the Munich datageeks Data Day took place. It was a totally fun event – great to see how much is going on, data-science-wise, in and around Munich, and how many people are interested in the topic! (By the way, I think that more than half the talks were about deep learning!)

I also gave a talk, “Time series shootout: ARIMA vs. LSTM” (slides on RPubs, github).

Whatever the title suggests, it was really a systematic comparison of forecasting with ARIMA and LSTM, on synthetic as well as real datasets. I find it amazing how little is needed to get a very decent result with LSTM – how little data, how little hyperparameter tuning, how few training epochs.

Of course, it gets most interesting when we look at datasets where ARIMA has problems, such as those with multiple seasonality. There is such an example in the talk (in fact, it’s the main climax ;-)), but this is definitely also an interesting direction for further experiments.
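Not the code from the talk, but a minimal sketch of what such a comparison can look like in R – the synthetic two-period series, the window length and all hyperparameters below are assumptions chosen for brevity:

```r
library(forecast)   # auto.arima(), forecast()
library(keras)      # LSTM via the R interface to Keras

set.seed(42)

# Synthetic series with two seasonal periods (say, a daily and a weekly pattern)
n <- 500
t <- 1:n
y <- sin(2 * pi * t / 24) + 0.5 * sin(2 * pi * t / 168) + rnorm(n, sd = 0.1)

# ARIMA: auto.arima() only sees the single seasonal period encoded in the ts object
fit_arima <- auto.arima(ts(y, frequency = 24))
fc_arima  <- forecast(fit_arima, h = 48)

# LSTM: sliding windows of length 24, shaped as (samples, timesteps, features)
lag <- 24
X <- t(sapply(seq_len(n - lag), function(i) y[i:(i + lag - 1)]))
Y <- y[(lag + 1):n]
X <- array(X, dim = c(nrow(X), lag, 1))

model <- keras_model_sequential() %>%
  layer_lstm(units = 32, input_shape = c(lag, 1)) %>%
  layer_dense(units = 1)

model %>% compile(loss = "mse", optimizer = "adam")
model %>% fit(X, Y, epochs = 20, batch_size = 32, verbose = 0)
```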

Thanks for reading!

Automatic Crack Detection – with Deep Learning

On Friday at DOAG Big Data Days, I presented one possible application of deep learning: automatic crack detection – with some background theory, a Keras model trained from scratch, and the use of VGG16 pretrained on ImageNet. The amount of input data really was minimal, and the resulting accuracy, under these circumstances, was not bad at all! Here are the slides.
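The slides have the details; as a rough sketch of the transfer-learning part (not the exact model from the talk – the input size, the frozen base and the small dense head are assumptions), reusing VGG16 in R’s keras looks like this:

```r
library(keras)

# Pretrained VGG16 as a (frozen) convolutional base
conv_base <- application_vgg16(weights = "imagenet", include_top = FALSE,
                               input_shape = c(224, 224, 3))
freeze_weights(conv_base)

# Small classification head: crack vs. no crack
model <- keras_model_sequential() %>%
  conv_base %>%
  layer_flatten() %>%
  layer_dense(units = 64, activation = "relu") %>%
  layer_dense(units = 1, activation = "sigmoid")

model %>% compile(optimizer = "rmsprop",
                  loss = "binary_crossentropy",
                  metrics = "accuracy")
```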

If you’re interested, I’ll be giving a webcast on this as part of the Trivadis tricast series (registration). The talk will be in German, though, so I guess some working knowledge of German would be helpful 🙂

Thanks for reading!

Deep Learning, deeplearning4j and Outlier Detection: Talks at Trivadis Tech Event

Last weekend, another edition of Trivadis Tech Event took place. As usual, it was great fun and a great source of inspiration.
I had the opportunity to talk about deep learning twice: one talk was an intro to DL4J (deeplearning4j), zooming in on a few aspects I’ve found especially nice and useful, while trying to provide a general introduction to deep learning at the same time. The audience was great, and the framework really is fun to work with, so this was a totally pleasant experience! Here are the slides, and here’s the example code.

The second talk was a joint session with my colleague Olaf on outlier / anomaly detection. We covered both ML and DL algorithms. For DL, I focused on variational autoencoders, the special challenge being to successfully apply the algorithm to datasets other than MNIST… and especially, to datasets with a mix of categorical and continuous variables of different scales. As I say on the customary “conclusion” slide, I don’t think I’ve arrived at a conclusion yet… any comments / suggestions are very welcome! Here’s the VAE presentation on RPubs, and here it is on github.
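For the basic idea – flag as anomalous whatever the network cannot reconstruct well – here is a deliberately simplified stand-in: a plain (non-variational) autoencoder in R’s keras, with made-up layer sizes and feature count, scoring rows by reconstruction error:

```r
library(keras)

input_dim <- 30L   # assumption: number of (scaled / one-hot encoded) input features

inp <- layer_input(shape = c(input_dim))
encoded <- inp %>%
  layer_dense(units = 16, activation = "relu") %>%
  layer_dense(units = 4,  activation = "relu")
decoded <- encoded %>%
  layer_dense(units = 16, activation = "relu") %>%
  layer_dense(units = input_dim)

autoencoder <- keras_model(inp, decoded)
autoencoder %>% compile(optimizer = "adam", loss = "mse")

# Train on (mostly) normal data, then flag rows the model reconstructs poorly:
# autoencoder %>% fit(x_train, x_train, epochs = 30, batch_size = 64)
# score <- rowMeans((x_test - predict(autoencoder, x_test))^2)
```

The variational version adds a probabilistic latent space and a KL term to the loss; the scoring idea stays the same.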
Thanks for reading!

Haskell, R, and HaskellR: Combining the best of two worlds (talk at UseR! 2017)

Earlier today, I presented at UseR! 2017 about HaskellR: a great piece of software, developed by Tweag I/O, that lets you seamlessly use R from Haskell.

It was my first UseR!, and it was a great experience; if I had the time, I’d like to write a separate blog post about it, as there were things that did not quite align with my prior expectations… Food for thought, but not the topic of this post. (Mainly, this would be about how the academic talks compared to the non-academic ones.)

So, why HaskellR? If you allow me one personal note… For the ex-psychologist, ex-software-developer, ex-database-administrator, now “in over my head” data scientist and machine learning/deep learning person that I am (see this post for that story), there has always been one fixed point of interest (an ideal, you could say), and that is the elegance of functional programming. It all started with SICP, which I first read as a (Java) programmer and recently read again (in part) when preparing R 4 hackers, a talk focused in large part on the functional programming features of R.

For a database administrator, unless you’re very lucky, it’s hard to integrate a functional programming language into your work. How about deep learning and/or data science?
For deep learning, there’s Chris Olah’s wonderful blog post linking deep networks to functional programs, but the reality (of widely used frameworks) looks different: TensorFlow, Keras, PyTorch… it’s mostly Python over there, and whatever the niceties of Python (readability, list comprehensions…), writing Python certainly does not feel like writing FP code at all (much less so than writing R!).

So in practice, the connections between data science/machine learning/deep learning and functional programming are scarce. If you look for connections, you will quickly stumble upon the Tweag I/O guys’ work: they’ve not just created HaskellR, they’ve also made Haskell run on Spark, thus enabling Haskell applications to use Spark’s MLlib for large-scale machine learning.

What, then, is HaskellR? It’s a way to seamlessly mix R code and Haskell code, with full interoperability in both directions. You can do that in source files, of course, but you can also quickly play around in the interpreter, appropriately called H (no, I was not thinking of its addictive potential here ;-)), and even use Jupyter notebooks with HaskellR! In fact, that’s what I did in the demos.

If you’re interested in the technicalities of the implementation, you’ll find them documented in great detail on the HaskellR website (and in even more detail in their IFL 2014 paper); otherwise, I suggest you take a look at the demos from my talk. First, there’s a notebook showing how to use HaskellR and how to get values from Haskell to R and vice versa; then, there’s the trading app scenario notebook: suppose you have a trading app written in Haskell – it’s gotta be lightning fast and as bug-free as possible, right?
But how about nice visualizations, time series diagnostics, all kinds of sophisticated statistical and machine learning algorithms… Chances are, someone has already implemented that algorithm in R! Let’s take ARIMA – one line of code with auto.arima() from R.J. Hyndman’s forecast package! Visualization? ggplot2, of course! And last but not least, there’s an easy way to do deep learning with R’s keras package (interfacing to the Python Keras).
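To make the R side concrete (using the built-in AirPassengers series as a stand-in for actual trading data), these are the kinds of one-liners meant above – and it is this kind of R code that HaskellR lets you splice into Haskell:

```r
library(forecast)   # auto.arima(), forecast(), autoplot()
library(ggplot2)

fit <- auto.arima(AirPassengers)    # automatic ARIMA order selection
autoplot(forecast(fit, h = 24))     # 24-step-ahead forecast, plotted via ggplot2
```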

Besides the notebooks, you might also want to check out the slides, especially if you’re an R user who hasn’t had much contact with Haskell. Ever wondered why the pipe looks the way it does, or what the partial and compose functions are doing?
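In case those names don’t ring a bell: here is a tiny, self-contained illustration (mine, not from the slides) of partial application, function composition, and the pipe, using toy R definitions:

```r
library(magrittr)   # the pipe, %>%

compose <- function(f, g) function(x) f(g(x))   # right-to-left composition
partial <- function(f, ...) {                   # fix some arguments of f in advance
  fixed <- list(...)
  function(x) do.call(f, c(list(x), fixed))
}

add      <- function(x, y) x + y
add_five <- partial(add, y = 5)       # partial application
f        <- compose(sqrt, add_five)   # f(x) == sqrt(add_five(x))

f(4)                          # 3
4 %>% add_five() %>% sqrt()   # the pipe: the same computation, read left to right
```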

Last but not least, a thousand thanks to the guys over at Tweag I/O, who’ve been incredibly helpful in getting the whole setup to run (the best way to get it up and running on Fedora is using nix, which I didn’t have any prior experience with… and, at a second level of parentheses: I think I’d like to know more about nix – the package manager and the OS – now, too ;-)). This is really the great thing about open source: the cool stuff people build and how helpful they are! So thanks again, guys – I hope to be doing things “at the interface” of ML/DL and FP more often in the future!

Update:
The talk was recorded, and can be viewed here.

Time series prediction – with deep learning

More and more often, and in more and more different areas, deep learning is making its appearance in the world around us.
Many small and medium businesses, however, will probably still think: deep learning, that’s for Google, Facebook & co., for the guys with big data and even bigger computing power (barely resisting the temptation to write “yuge power” here).

Partly, this may be true – certainly when it comes to running through immense permutations of hyperparameter settings. The question, however, is whether we can’t obtain good results at more modest scales, too – in areas where traditional methods of data science / machine learning prevail. Prevail as of today, that is.

One such area is time series prediction, with ARIMA & co. at the top of the leaderboard. Can deep learning be a serious competitor here? In what cases? Why? Exploring this is like starting out on an unknown road, fascinated by the magical things that may await us 😉
In any case, I’ve started walking down the road (not running!), in a rather take-your-time-and-explore-the-surroundings way. That means there’s much still to come, and it’s really just a beginning.

Here, anyway, is the travel report – the presentation slides, I mean: best viewed on RPubs, available as RMarkdown on github, or downloadable as pdf.
Enjoy!

Deep Learning in Action (the less mathy version, this time)

On Tuesday, at Hochschule München, Fakultät für Informatik und Mathematik, I again gave a guest lecture on Deep Learning (RPubs, github, pdf). This time, it was more about applications than about matrices, more about general understanding than about architecture, and in general about getting a feel for what deep learning is used for and why. (Deep reinforcement learning also made a short appearance in there. Reinforcement learning certainly is another topic to post and/or present about, another time…)

I’ve used a lot of different sources, so I’ve put them all at the end, to make the presentation more readable. (Not only have I used lots of different sources, I’ve also used a few sources a lot. In deep learning, I find myself citing the same sources over and over – be it for the concise explanations, the great visualizations, or the inspiring ideas. I’m mainly thinking of Chris Olah’s and Andrej Karpathy’s blogs here, of the Deep Learning book, and of several Stanford lecture notes.)

One thing that always gets lost when you publish a presentation is the demos. In this case, I had three:

The first two are great sites that let you demonstrate the very basics of neural networks directly in the browser: When do you need hidden layers? What role does the shape of the dataset play? In what cases can adding a single neuron make the difference between failing at, or successfully solving, a task?
The third demo is just – I think – totally fun: did you know that you can play around with your own convolution kernels, just like that, in GIMP? 😉
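In case you’d like to see what such a kernel does without opening GIMP (where this lives under Filters > Generic > Convolution Matrix), here is a minimal hand-rolled version in plain R, applying an edge-detection kernel to a random stand-in “image”:

```r
# "Valid" 2D convolution (strictly speaking cross-correlation, as in most DL libraries)
conv2d <- function(img, kernel) {
  k   <- nrow(kernel)
  out <- matrix(0, nrow(img) - k + 1, ncol(img) - k + 1)
  for (i in seq_len(nrow(out)))
    for (j in seq_len(ncol(out)))
      out[i, j] <- sum(img[i:(i + k - 1), j:(j + k - 1)] * kernel)
  out
}

edge_kernel <- matrix(c(-1, -1, -1,
                        -1,  8, -1,
                        -1, -1, -1), nrow = 3, byrow = TRUE)

img   <- matrix(runif(64 * 64), 64, 64)   # stand-in for a grayscale image
edges <- conv2d(img, edge_kernel)         # large values where intensity changes abruptly
```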