Sentiment Analysis of Movie Reviews (3): doc2vec

This is the last – for now – installment of my mini-series on sentiment analysis of the Stanford collection of IMDB reviews.
So far, we’ve had a look at classical bag-of-words models and word vectors (word2vec).
We saw that of the classifiers used, logistic regression performed best, whether in combination with bag-of-words or word2vec.
We also saw that while the word2vec model did in fact capture semantic dimensions, it was less successful for classification than bag-of-words; we attributed this to the averaging of word vectors we had to perform to obtain input features at the review (not word) level.
So the question now is: How would distributed representations perform if we did not have to throw away information by averaging word vectors?

Document vectors: doc2vec

Shortly after word2vec, Le and Mikolov developed paragraph (document) vector models.
The basic models are

  • Distributed Memory Model of Paragraph Vectors (PV-DM) and
  • Distributed Bag of Words (PV-DBOW)

In PV-DM, in addition to the word vectors, there is a paragraph vector that keeps track of the whole document:

Fig.1: Distributed Memory Model of Paragraph Vectors (PV-DM) (from: Distributed Representations of Sentences and Documents)

With distributed bag-of-words (PV-DBOW), there aren’t even any word vectors; there’s just a paragraph vector trained to predict the context:

Fig.2: Distributed Bag of Words (PV-DBOW) (from: Distributed Representations of Sentences and Documents)

Like word2vec, doc2vec in Python is provided by the gensim library. Please see the gensim doc2vec tutorial for example usage and configuration.

doc2vec: performance on sentiment analysis task

I’ve trained 3 models, with parameter settings as in the above-mentioned doc2vec tutorial: 2 distributed memory models (with word & paragraph vectors averaged or concatenated, respectively), and one distributed bag-of-words model. Here, without further ado, are the results. I’m reporting results only for logistic regression as, again, this was the best-performing classifier:

                                                  test vectors inferred   test vectors from model
Distributed memory, vectors averaged (dm/m)               0.81                     0.87
Distributed memory, vectors concatenated (dm/c)           0.80                     0.82
Distributed bag of words (dbow)                           0.90                     0.90

Hoorah! We’ve finally beaten bag-of-words … but only by a single percentage point (0.90 vs. 0.89), and we won’t even ask if that’s significant😉
What should we conclude from that? In my opinion, there’s no reason to be sarcastic here (even if you might have thought I’d made it sound like that ;-)). With doc2vec, we’ve (at least) reached bag-of-words performance for classification, plus we now have semantic dimensions at our disposal. Speaking of which – let’s check what doc2vec thinks is similar to awesome/awful. Will the results be equivalent to those obtained with word2vec?
These are the words found most similar to awesome. (Note: the model we’re asking isn’t the one that performed best with logistic regression (PV-DBOW), as distributed bag-of-words doesn’t train word vectors; these results come from the best-performing PV-DM model instead.)

model.most_similar('awesome', topn=10)

[(u'incredible', 0.9011116027832031),
(u'excellent', 0.8860622644424438),
(u'outstanding', 0.8797732591629028),
(u'exceptional', 0.8539372682571411),
(u'awful', 0.8104138970375061),
(u'astounding', 0.7750493884086609),
(u'alright', 0.7587056159973145),
(u'astonishing', 0.7556235790252686),
(u'extraordinary', 0.743841290473938)]

So, what we see is very similar to the output of word2vec – including the presence of awful. The same goes for what’s judged similar to awful:

model.most_similar('awful', topn=10)

[(u'abysmal', 0.8371909856796265),
(u'appalling', 0.8327066898345947),
(u'atrocious', 0.8309577703475952),
(u'horrible', 0.8192445039749146),
(u'terrible', 0.8124841451644897),
(u'awesome', 0.8104138970375061),
(u'dreadful', 0.8072893023490906),
(u'horrendous', 0.7981990575790405),
(u'amazing', 0.7926105260848999), 
(u'incredible', 0.7852109670639038)]

To sum up – for now – we’ve explored how three models – bag-of-words, word2vec, and doc2vec – perform on sentiment analysis of IMDB movie reviews, in combination with different classifiers, the most successful of which was logistic regression. Very similar performance (around 10% error rate) was reached by bag-of-words and doc2vec.
From this you may of course conclude that, as of today, there’s no reason not to stick with the straightforward bag-of-words approach. But you could also view this differently. While word2vec appeared in 2013, it was succeeded by doc2vec as early as 2014. Now it’s 2016, and things have happened in the meantime, and are happening right now. It’s a fascinating field, and even if – in sentiment analysis – we don’t see impressive results yet, they are quite likely to appear sooner or later. I’m curious what we’re going to see!

Sentiment Analysis of Movie Reviews (2): word2vec

This is the continuation of my mini-series on sentiment analysis of movie reviews. Last time, we had a look at how well classical bag-of-words models worked for classification of the Stanford collection of IMDB reviews. As it turned out, the “winner” was Logistic Regression, using both unigrams and bigrams for classification. The best classification accuracy obtained was .89 – not bad at all for sentiment analysis (but see the caveat regarding what’s good or bad in the first post).

Bag-of-words: limitations

So, bag-of-words models may be surprisingly successful, but they are limited in what they can do. First and foremost, with bag-of-words models, words are encoded using one-hot encoding: each word is represented by a vector whose length is the size of the vocabulary, with exactly one bit “on” (1). Say we wanted to encode that famous quote by Hadley Wickham

Tidy datasets are all alike but every messy dataset is messy in its own way.

It would look like this:

    alike  all  but  dataset  every  in  is  its  messy  own  tidy  way
0       0    0    0        0      0   0   0    0      0    0     1    0
1       0    0    0        1      0   0   0    0      0    0     0    0
2       0    0    0        0      0   0   1    0      0    0     0    0
3       0    1    0        0      0   0   0    0      0    0     0    0
4       1    0    0        0      0   0   0    0      0    0     0    0
5       0    0    1        0      0   0   0    0      0    0     0    0
6       0    0    0        0      1   0   0    0      0    0     0    0
7       0    0    0        0      0   0   0    0      1    0     0    0
8       0    0    0        0      0   1   0    0      0    0     0    0
9       0    0    0        0      0   0   0    1      0    0     0    0
10      0    0    0        0      0   0   0    0      0    1     0    0
11      0    0    0        0      0   0   0    0      0    0     0    1
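
In code, this amounts to nothing more than an index lookup; a minimal sketch using the vocabulary from the table above:

```python
# Vocabulary and index as in the table above (stemmed, duplicates removed).
vocab = ["alike", "all", "but", "dataset", "every", "in",
         "is", "its", "messy", "own", "tidy", "way"]
index = {word: i for i, word in enumerate(vocab)}

def one_hot(word):
    """Return a vector of vocabulary length with a single 1 at the word's position."""
    vec = [0] * len(vocab)
    vec[index[word]] = 1
    return vec

one_hot("tidy")  # 1 at the "tidy" position, 0 everywhere else
```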

Now the thing is – with this representation, all words are basically equidistant from each other!
If we want to find similarities between words, we have to look at a corpus of texts, build a co-occurrence matrix and perform dimensionality reduction (using, e.g., singular value decomposition).
Let’s take a second example sentence, like this one Lev Tolstoj wrote after having read Hadley😉

Happy families are all alike; every unhappy family is unhappy in its own way.

This is what the beginning of a co-occurrence matrix would look like for the two sentences:

         tidy  dataset  is  all  alike  but  every  messy  in  its  own  way  happy  family  unhappy
tidy        0        2   2    1      1    1      1      2   1    1    1    1      0       0        0
dataset     2        0   2    1      1    1      1      2   1    1    1    1      0       0        0
is          1        2   1    1      1    1      1      2   1    1    1    1      1       2        1
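
Building such a matrix and compressing it with SVD can be sketched in a few lines of numpy (toy sentences, sentence-sized context window):

```python
import numpy as np

sentences = ["tidy dataset is all alike".split(),
             "every messy dataset is messy in its own way".split()]
vocab = sorted({w for s in sentences for w in s})
index = {w: i for i, w in enumerate(vocab)}

# Count how often two different words appear in the same sentence.
M = np.zeros((len(vocab), len(vocab)))
for s in sentences:
    for w1 in s:
        for w2 in s:
            if w1 != w2:
                M[index[w1], index[w2]] += 1

# SVD compresses each word's co-occurrence profile down to k dimensions.
U, S, Vt = np.linalg.svd(M)
k = 2
word_vectors = U[:, :k] * S[:k]
```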

As you can imagine, in reality such a co-occurrence matrix will quickly become very, very big. And after dimensionality reduction, we will have lost a lot of information about individual words.
So the question is, is there anything else we can do?

As it turns out, there is. Words do not have to be represented as one-hot vectors. Using so-called distributed representations, a word can be represented as a vector of (say 100, 200, … whatever works best) real numbers.
And as we will see, with this representation, it is possible to model semantic relationships between words!

Before I proceed, an aside: In this post, my focus is on walking through a real example, and showing real data exploration. I’ll just pick two of the most famous recent models and work with them, but this doesn’t mean that there aren’t others (there are!), and also there’s a whole history of using distributed representations for NLP. For an overview, I’d recommend Sebastian Ruder’s series on the topic, starting with the first part on the history of word embeddings.


The first model I’ll use is the famous word2vec developed by Mikolov et al. 2013 (see Efficient estimation of word representations in vector space).
word2vec arrives at word vectors by training a neural network to predict

  • a word in the center from its surroundings (continuous bag of words, CBOW), or
  • a word’s surroundings from the center word(!) (skip-gram model)

This is easiest to understand looking at figures from the Mikolov et al. article cited above:

Fig.1: Continuous Bag of Words (from: Efficient estimation of word representations in vector space)


Fig.2: Skip-Gram (from: Efficient estimation of word representations in vector space)

Now, to see what is meant by “model semantic relationships”, have a look at the following figure:

Fig. 3: Semantic-Syntactic Word Relationship test (from: Efficient estimation of word representations in vector space)

So basically what these models can do (with much higher-than-random accuracy) is answer questions like “Athens is to Greece what Oslo is to ???”. Or: “walking” is to “walked” what “swam” is to ???
Fascinating, right?
Let’s try this on our dataset.

Word embeddings for the IMDB dataset

In Python, word2vec is available through the gensim NLP library. There is a very nice tutorial on how to use word2vec written by the gensim folks, so I’ll jump right in and present the results of using word2vec on the IMDB dataset.

I’ve trained a CBOW model, with a context size of 20 and a vector size of 100. Now that we have a notion of similarity for our words, let’s explore the model and first ask: Which are the words most similar to awesome and awful, respectively? Start with awesome:

model.most_similar('awesome', topn=10)

[(u'amazing', 0.7929322123527527),
(u'incredible', 0.7127916812896729),
(u'awful', 0.7072071433067322),
(u'excellent', 0.6961393356323242),
(u'fantastic', 0.6925109624862671),
(u'alright', 0.6886886358261108),
(u'cool', 0.679090142250061),
(u'outstanding', 0.6213874816894531),
(u'astounding', 0.613292932510376),
(u'terrific', 0.6013768911361694)]

Interesting! This makes a lot of sense: So amazing is most similar, then we have words like excellent and outstanding.
But wait! What is awful doing there? I don’t really have a satisfying explanation. The most straightforward explanation would be that both occur in similar positions, and it is positions that the network learns. But then I’m still wondering why it is just awful appearing in this list, not any other positive word. Another important aspect might be emotional intensity. No doubt both words are very similar in intensity. (I’ve also checked co-occurrences, but those do not yield a simple explanation either.) Anyway, let’s keep an eye on it when further exploring the model. Of course we have to also keep in mind that for training a word2vec model, normally much bigger datasets are used.

These are the words most similar to awful:

model.most_similar('awful', topn=10)

[(u'terrible', 0.8212785124778748),
(u'horrible', 0.7955455183982849),
(u'atrocious', 0.7824822664260864),
(u'dreadful', 0.7722172737121582),
(u'appalling', 0.7244443893432617),
(u'horrendous', 0.7235419154167175),
(u'abysmal', 0.720653235912323),
(u'amazing', 0.708114743232727),
(u'awesome', 0.7072070837020874),
(u'bad', 0.6963905096054077)]

Again, this makes a lot of sense! And here we have the awesome/awful relationship again. On a closer look, there’s also amazing appearing in the “awful list”.

For curiosity, I’ve tried to “subtract out” awful from the “awesome list”. This is what I ended up with:

model.most_similar(positive=['awesome'], negative=['awful'])

[(u'jolly', 0.3947059214115143),
(u'midget', 0.38988131284713745),
(u'knight', 0.3789686858654022),
(u'spooky', 0.36937469244003296),
(u'nice', 0.3680706322193146),
(u'looney', 0.3676275610923767),
(u'ho', 0.3594890832901001),
(u'gotham', 0.35877227783203125),
(u'lookalike', 0.3579031229019165),
(u'devilish', 0.35554438829421997)]

Funny … We seem to have entered some realm of fantasy story … But I’ll refrain from interpreting this result any further😉

Now, I don’t want you to get the impression the model just did some crazy stuff and that’s all. Overall, the similar words I’ve checked make a lot of sense, and also the “what doesn’t match” is answered quite well, in the area we’re interested in (sentiment). Look at this:

model.doesnt_match("good bad awful terrible".split())
model.doesnt_match("awesome bad awful terrible".split())
model.doesnt_match("nice pleasant fine excellent".split())

In the last case, I wanted the model to sort out the outlier on the intensity dimension (excellent), and that is what it did!

Exploring word vectors is fun, but we need to get back to our classification task. Now that we have word vectors, how do we use them for sentiment analysis?
We need one vector per review, not per word. What is done most often is to average vectors, and when we input these averaged vectors to our classification algorithms (the same as in the last post), this is the result we get (listing the bag-of-words results again, too, for easy comparison):

                         Bag of words   word2vec
Logistic Regression          0.89         0.83
Support Vector Machine       0.84         0.70
Random Forest                0.84         0.80
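
(The averaging step itself is straightforward; a sketch with made-up three-dimensional vectors:)

```python
import numpy as np

# Toy stand-ins for trained word vectors (word -> 3-dimensional vector).
word_vectors = {"good": np.array([1.0, 0.0, 0.0]),
                "film": np.array([0.0, 1.0, 0.0])}

def review_vector(tokens, vectors, dim=3):
    """Average the vectors of all known words; zero vector if none are known."""
    known = [vectors[t] for t in tokens if t in vectors]
    return np.mean(known, axis=0) if known else np.zeros(dim)

review_vector("a good film".split(), word_vectors)  # mean of 'good' and 'film'
```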

As cool as our word2vec model is, it actually performs worse on classification than bag-of-words. Thinking about it, this is not so surprising: by averaging over word vectors, we lose a whole lot of information – information we spent quite some computational effort to obtain before… What if we already had vectors for a complete review, and didn’t have to average? Enter … doc2vec – document (or paragraph) vectors! But this is already the topic of the next post … stay tuned🙂

Sentiment Analysis of Movie Reviews (1): Word Count Models

Imagine I show you a book review on some website. I hide the number of stars – all you get to see is the review text. And now I’m asking you: that review, is it good or bad? Just two categories, good or bad. That’s easy, right?
Well, it should be easy for humans (although, depending on the input, there can be lots of disagreement between humans, too). But if you want to do it automatically, it turns out to be surprisingly difficult.

This is the start of a short series on sentiment analysis, based on my TechEvent presentation. My focus will be more on data exploration than on achieving the best possible accuracy; more on getting a feeling for the difficulties than on juggling with parameters. More on the Natural Language Processing (NLP) aspect than on generic classification. And even though the series will be short (for now – given time constraints ;-)), it’s definitely a topic to be pursued (looking at the rapidity of developments in the field).

Let’s jump right in. So why would one do sentiment analysis? Because out there, not every relevant text comes labeled as “good” or “bad”. Take emails, blog posts, support tickets. Is that guy actually angry at us (our service desk/team/company)? Is she disappointed by our product? We’d like to know.

So while sentiment analysis is undoubtedly useful, the quality of the results will rely on having a big enough training set – and someone will have to sit down and categorize all those tweets/reviews/whatever. (Datasets are available where labeling of the training set was done automatically, see e.g. the Stanford Sentiment140 dataset, but this approach must induce biases, to say the least.) In our case, we care more about how things work than about the actual accuracies; still, keep in mind, when looking at the accuracy numbers, that especially for the models discussed in later posts, a bigger dataset might achieve much better performance.

The data

Our dataset consists of 25,000 labeled training reviews, plus 25,000 test reviews (also labeled), available from Stanford. Of the 25,000 training / test reviews, 12,500 each have been rated positive, and 12,500 negative, by human annotators.
The dataset has originally been used in Maas et al. (2011), Learning Word Vectors for Sentiment Analysis.
Preprocessing was done after the example of the gensim doc2vec notebook (we will describe doc2vec in a later post).

Good or bad?

Let’s load the preprocessed data and have a look at the very first training review. (For better readability, I’ll try not to clutter this text with too much code, so there’ll only be code when there’s a special point in showing it. For more code, see the notebook for the original talk.)

a reasonable effort is summary for this film . a good sixties film but lacking any sense of achievement . maggie smith gave a decent performance which was believable enough but not as good as she could have given , other actors were just dreadful ! a terrible portrayal . it wasn't very funny and so it didn't really achieve its genres as it wasn't particularly funny and it wasn't dramatic . the only genre achieved to a satisfactory level was romance . target audiences were not hit and the movie sent out confusing messages . a very basic plot and a very basic storyline were not pulled off or performed at all well and people were left confused as to why the film wasn't as good and who the target audiences were etc . however maggie was quite good and the storyline was alright with moments of capability . 4 . \n

Looking at this text, we already see complexity emerging. As a human reader, I’m sure you’ll say this is a negative review, and undoubtedly there are some clearly negative words (“dreadful”, “confusing”, “terrible”). But to a high degree, negativity comes from negated positive words: “lacking achievement”, “wasn’t very funny”, “not as good as she could have given”. So clearly we cannot just look at single words in isolation, but at sequences of words – n-grams (bigrams, trigrams, …) as they say in natural language processing.


The question is though, at how many consecutive words should we look? Let’s step through an example. “Funny” (unigram) is positive, “very funny” (bigram) even more so. “Not very funny” (trigram) is negative. If it were “not so very funny” we’d need 4-grams … How about “I didn’t think it was so very funny”? And this could probably go on like that… So how many adjacent words do we need to consider? There evidently is no clear border… how can we decide? Fortunately, we won’t have to decide upfront. We’ll do that as part of our search for the optimal classification algorithm.

So in general, how can automatic sentiment analysis work? The simplest approach is via word counts. Basically, we count the positive words, giving them different weights according to how positive they are. Same with the negative words. And then, the side with the highest score “wins”.

But – no-one’s gonna sit there and categorize all those words! The algorithm has to figure that out itself. How can it do that? Via the labeled training samples. For them, we have the sentiment as well as the information on how often each word occurred, e.g., like this:

           sentiment  beautiful  bad  awful  decent  horrible  ok  awesome
review 1           0          0    1      2       1         1   0        0
review 2           1          1    0      0       0         0   0        1
review 3           0          0    0      0       1         1   0        0

From this, the algorithm can determine the words’ polarities and weights, and arrive at something like:

word    beautiful   bad  awful  decent  horrible    ok  awesome
weight        3.4  -2.9   -5.6    -0.2      -4.9  -0.1      5.2
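
With weights like these, classifying a review reduces to a weighted sum; a sketch:

```python
# The hypothetical learned weights from the table above.
weights = {"beautiful": 3.4, "bad": -2.9, "awful": -5.6, "decent": -0.2,
           "horrible": -4.9, "ok": -0.1, "awesome": 5.2}

def score(review):
    """Sum the weights of all known words; unknown words count as 0."""
    return sum(weights.get(word, 0.0) for word in review.split())

score("awful but decent")  # -5.6 + (-0.2) = -5.8, so: negative
```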

Now, what we can do is run a grid search over combinations of

  • classification algorithms,
  • parameters for those algorithms (algorithm-dependent),
  • different n-grams,

and record the combinations that work best on the test set.
Algorithms included in the search were logistic regression (with different settings for regularization), support vector machines, and random forests (with different settings for the maximum tree depth). In that way, both linear and non-linear procedures were present. All aforementioned combinations of algorithms and parameters were tried with unigrams, unigrams + bigrams, and unigrams + bigrams + trigrams as features.
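
In scikit-learn terms, such a search can be sketched like this (toy data; the real search also covered SVMs and random forests):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

# Toy reviews and labels stand in for the real training data.
texts = ["a wonderful film", "an awful film", "really great", "really bad"] * 5
labels = [1, 0, 1, 0] * 5

pipeline = Pipeline([("vec", CountVectorizer()), ("clf", LogisticRegression())])
param_grid = {"vec__ngram_range": [(1, 1), (1, 2), (1, 3)],  # which n-grams to use
              "clf__C": [0.1, 1.0, 10.0]}                    # regularization strength

search = GridSearchCV(pipeline, param_grid, cv=2).fit(texts, labels)
best = search.best_params_
```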

And so after long computations, the winner is … wait! I didn’t yet say anything about stopword removal. Without filtering, the most frequent words in this dataset are stopwords to a high degree, so we will definitely want to remove noise. The Python nltk library provides a stopword list, but this contains words like ‘not’, ‘nor’, ‘no’, ‘wasn’, ‘ain’, etc., words that we definitely do NOT want to remove when doing sentiment analysis. So I’ve used a subset of the nltk list where I’ve removed all negations / negated forms.
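
In sketch form (with a hand-written mini list standing in for the nltk one):

```python
# Illustrative subset of an English stopword list (the post uses nltk's);
# negations and negated forms are deliberately removed from the filter.
stopwords = {"the", "a", "an", "and", "it", "was", "is",
             "not", "no", "nor", "wasn", "ain"}
negations = {"not", "no", "nor", "wasn", "ain", "isn", "don", "didn"}
filter_words = stopwords - negations

def remove_stopwords(tokens):
    return [t for t in tokens if t not in filter_words]

remove_stopwords("it was not very funny".split())  # 'not' survives
```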

Here, then, are the accuracies obtained on the test set. For each classifier, I’m displaying the one with the most successful parameter settings (without detailing them here, in order not to distract from the main topic of the post) and the most successful n-gram configuration.

                         with stopword filtering
Logistic Regression              0.89
Support Vector Machine           0.84
Random Forest                    0.84

Overall, these accuracies look astonishingly good, given that in general, for sentiment analysis, something around 80% is seen as a to-be-expected value for accuracy. However, I find it difficult to talk about a to-be-expected value here: the accuracy achieved will very much depend on the dataset in question! So we really would need to know, for the exact dataset used, what accuracies have been achieved by other algorithms, and most importantly: what is the agreement between human annotators here? If humans agree 100% on whether items of a dataset are positive or negative, then 80% accuracy for a classifier sounds rather bad! But if agreement between humans is only 85%, the picture is totally different. (And then there’s a totally different angle, extremely important but not the focus of this post: say we achieve 90% accuracy where others achieve 80% and humans agree to 90%. Technically we’re doing great! But we’re still misclassifying one in ten texts! Depending on why we’re doing this at all, and what automated action we’re planning to take based on the results, getting one in ten wrong might turn out to be catastrophic!)

Having said that, I find the results interesting for two reasons. For one, logistic regression, a linear classifier, does best here. This just confirms something that is often seen in machine learning: logistic regression is a simple but very powerful algorithm. Secondly, the best logistic regression result was reached when including bigrams as features, whereas trigrams did not bring any further improvement. A great thing about logistic regression is that you can peep into the classifier’s brain and see what features it decided are important, by looking at the coefficients. Let’s inspect what words make a review positive. The most positive features, in order:

coef word
2969 0.672635 excellent
6681 0.563958 perfect
9816 0.521026 wonderful
8646 0.520818 superb
3165 0.505146 favorite
431 0.502118 amazing
5923 0.481505 must see
5214 0.461807 loved
3632 0.458645 funniest
2798 0.453481 enjoyable

Pretty much makes sense, doesn’t it? And we do see a bigram among these: “must see”. How about other bigrams contributing to the plus side?

coef word
5923 0.481505 must see
3 0.450675 10 10
6350 0.421314 one best
9701 0.389081 well worth
5452 0.371277 may not
6139 0.329485 not bad
6970 0.323805 pretty good
2259 0.307238 definitely worth
5208 0.303380 love movie
9432 0.301404 very good

These mostly make a lot of sense, too. How about words / n-grams that make a review negative? First, the overall ranking – the last one is worst:

coef word
6864 -0.564446 poor
2625 -0.565503 dull
9855 -0.575060 worse
4267 -0.588133 horrible
2439 -0.596302 disappointing
6866 -0.675187 poorly
1045 -0.681608 boring
2440 -0.688024 disappointment
702 -0.811184 awful
9607 -0.838195 waste

So we see worst of all is when it’s a waste of time. Could agree to that!
Now, this time, there are no bigrams among the 10 worst-ranked features. Let’s look at the bigrams in isolation:

coef word
6431 -0.247169 only good
3151 -0.250090 fast forward
9861 -0.264564 worst movie
6201 -0.324169 not recommend
6153 -0.332796 not even
6164 -0.333147 not funny
6217 -0.357056 not very
6169 -0.368976 not good
6421 -0.437750 one worst
9609 -0.451138 waste time

Evidently, it was worth keeping the negations! So, the way this classifier works pretty much makes sense, and we seem to have reached acceptable accuracy (I hesitate to write this because … what is acceptable depends on … see above). If we take a less simple approach – move away from basically just counting (weighted) words, where every word is a one-hot-encoded vector – can we do any better?
With that cliff hanger, I end for today … stay tuned for the continuation, where we dive into the fascinating world of word vectors …🙂 See you there!

Sentiment Analysis of Movie Reviews (talk)

Yesterday at Tech Event, which was great as always, I presented on sentiment analysis, taking as example movie reviews. I intend to write a little series of blog posts on this, but as I’m not sure when exactly I’ll get to this, here are the pdf version and a link to the notebook.

The focus was not on the classification algorithms per se (treating text as just another domain for classification), but on the difficulties emerging from this being language: Can it really work to look at just single words? Do I need bigrams? Trigrams? More? Can we tackle the complexity using word embeddings – word vectors? Or better, paragraph vectors?

I had a lot of fun exploring this topic, and I really hope to write some posts on this – stay tuned🙂

Doing Data Science

This is another untechnical post – actually, it’s even a personal one. If you’ve been to this blog lately, you may have noticed something, that is, the absence of something – the absence of posts over nearly six months …
I’ve been busy catching up, reading up, immersing myself in things I’ve been interested in for a long time – but which I never imagined could have a real relation to, let alone make part of, my professional life.
But recently, things changed. I’ll be trying to do “for real” what would have seemed just a dream a year ago: data science, machine learning, applied statistics (Bayesian statistics, preferably).

Doing Data Science … why?

Well, while this may look like quite a change of interests to a reader of this blog, it really is not. I’ve been interested in statistics, probability and “data mining” (as it was called at the time) long before I even wound up in IT. Actually, I have a diploma in psychology, and I’ve never studied computer science (which of course I’ve often regretted for having missed so many fascinating things).
Sure, at that time, in machine learning, much of the interesting stuff was there, too. Neural nets were there, of course. But that was before the age of big data and the boost distributed computing brought to machine learning, before craftsman-like “data mining” became sexy “data science”…
Those were the olden days, when statistics (in psychology, at least) was (1) ANOVA, (2) ANOVA, and (3) … you name it. Whereas today, students (if they are lucky) might be learning statistics from a book like Richard McElreath’s “Statistical Rethinking”.
That was before the advent of deep learning, which fundamentally changed not just what seems possible but also the way it is approached. Take natural language processing, for example (just check out the materials for Stanford’s Deep Learning for Natural Language Processing course for a great introduction).
While I’m at it … where some people see it as “machine learning versus statistics”, or “machine learning instead of statistics”, for me there’s no antagonism there. Perhaps that’s because of my bio. For me, some of the books I admire most – especially the very well-written, very accessible ISLR – Introduction to Statistical Learning – and its big brother, Elements of Statistical Learning, – are the perfect synthesis.
Returning to the original topic – I’ve even wondered whether I should start a new blog on machine learning and data science, to avoid people asking the above question (you know, the why data science one, above). But then, your bio is something you can never undo – all you can do is change the narrative, try to make the narrative work. The narrative works fine for me, and I hope I’ve made it plausible to the outside world, too😉 .
(BTW I’m lucky with the blog title I chose a few years ago – no need to change that.)
And probably, it doesn’t hurt for a data scientist to know how to get data from databases, how to manipulate it in various programming languages, and quite a bit about IT architectures behind.
OK, that was the justification. The other question now is …

Doing Data Science …how?

Well, luckily, I’m not isolated at all with these interests at Trivadis. We’ve already had a strong focus on big data and streaming analytics for quite some time (just see the blog of my colleague Guido, an internationally renowned expert on these topics), but now, additionally, there’s a highly motivated group of data scientists ready to turn data into insight🙂.
If you’re reading this you might be a potential customer, so I can’t finish without a sales pitch:

It’s not about the tools. It’s not about the programming languages you use (though some make it easier than others, and I decidedly like the friendly and inspiring, open source Python and R ecosystems). It’s about discovering patterns, detecting underlying structure, uncovering the unknown. About finding out what you didn’t (necessarily) hypothesize, before. And most importantly: about assessing if what you found is valid and will generalize, to the future, to different conditions. If what you found is “real”. There’s a lot more to it than looking at extrapolated forecast curves.

Before I end the sales pitch, let me say that in addition to our consulting services we also offer courses on getting started with Data Science, using either R or Python (see Data Science with Python and Advanced Analytics with R). Both courses are a perfect combination as they work with different data sets and build up their own narratives.
OK, I think that’s it, for narratives. Upcoming posts will be technical again, just this time technical will mostly mean: on machine learning and data science.

IT Myths (1): Best Practices


I don’t know about you, but I feel uncomfortable when asked about “Best Practices”. I manage to give the expected answers, but I still feel uncomfortable. Now, if you’re one of the people who liked – or retweeted – this tweet, you don’t need to be convinced that “Best Practices” are a dubious thing. Still, you might find it difficult to communicate to others, who do not share your instinctive doubts, what the problem with them is. Here, I’ll try to explain it, in a few words, as I see it.

As I see it, this juxtaposition of words cannot be interpreted in a meaningful way. First, let’s stay in the realm of IT.
Assume you’ve bought expensive software, and now you’re going to set up your system. The software comes with default parameter settings. Do you need to follow “Best Practices” in choosing parameter values?

You shouldn’t have to. You should be fully entitled to trust the vendor to ship their software with sensible parameters. The defaults should, in general, make sense. Of course there often are things you have to adapt to your environment, but in principle you should be able to rely on sensible defaults.

One (counter-)example: the Oracle initialization parameter db_block_checking. This parameter governs whether and to which extent Oracle performs logical consistency checks on database blocks. (For details, see Performance overhead of db_block_checking and db_block_checksum non-default settings.)
Still, as of current versions, the default value of this parameter is none. If it is set to medium or full, Oracle will either repair the corrupt block or – if that is not possible – at least prevent the corruption from spreading in memory. In the Reference, it is advised to set the parameter to full if the performance overhead is acceptable. Why, then, is the default none? This, in my opinion, sends the wrong signal. The database administrator now has to justify her choice of medium, because it might, depending on the workload, have a negative impact on performance. But she shouldn’t have to invoke “Best Practices”. While performance issues can be addressed in multiple ways, nobody wants corrupt data in their database. Again, the software should be shipped with defaults that make such discussions unnecessary.

Second, imagine you hire a consultant to set up your system. Do you want him to follow “Best Practices”? You surely don’t: you want him to know exactly what he is doing. It’s his job to get the information he needs to set up the system correctly, in the given environment and with the given requirements. You don’t pay him to do things that “work well on average”.

Third, if you’re an administrator or a developer, the fact that you stay informed and up to date with current developments, and that you try to understand how things “work”, means that you’re already doing more than just following “Best Practices”. You’re trying to be knowledgeable enough to make the correct decisions in given, concrete circumstances.

So that was IT, from different points of view. How about “life”? In real life, we don’t follow “Best Practices” either. (We may employ heuristics, most of the time, but that’s another topic.)
If it’s raining outside, or it is x% (fill in your own threshold here ;-)) probable that it will rain, I’m gonna take / put on my rain clothes for my commute … but I’m not going to take them every day, “just in case”. In “real life”, things are either too natural to be called “Best Practices”, or they need a little more reflection than that.

Finally, let’s end with philosophy 😉 Imagine we were ruled by a Platonic philosopher king (or queen) … we’d want him/her to do a bit more than just follow “Best Practices”, wouldn’t we? 😉

Better call Thomas – Bayes in the Database

This is my third and last post covering my presentations from last week’s Tech Event, and it’s no doubt the craziest one 🙂
Again I had the honour to be invited by my colleague Ludovico Caldara to participate in the cool “Oracle Database Lightning Talks” session. Now, there is a connection to Oracle here, but mainly, as you’ll see, it’s about a Reverend in a Casino.

So, imagine one of your customers calls you. They are going to use Oracle Transparent Data Encryption (TDE) in their main production OLTP database. They are in a bit of panic. Will performance get worse? Much worse?

Well, you can help. You perform tests using Swingbench Order Entry benchmark, and soon you can assure them: No problem. The below chart shows average response times for one of the transaction types of Order Entry, using either no encryption or TDE with one of AES-128 or AES-256.
Wait a moment – doesn’t it look as though response times are even lower with TDE!?

Swingbench Order Entry response times (Customer Registration)

Well, they might be, or they might not be … the Swingbench results include how often each transaction was performed, and the standard deviation of the response times … so how about we ask somebody who knows something about statistics?

Cool, but … as you can see, for example, in this xkcd comic, there is no such thing as an average statistician … In short, there’s frequentist statistics and there’s Bayesian, and while there is a whole lot you can say about the respective philosophical backgrounds, it mainly boils down to one thing: Bayesians take into account not just the actual data – the sample – but also the prior probability.

This is where the Reverend enters the game. The foundation of Bayesian statistics is Bayes’ theorem, named after the Reverend Thomas Bayes.

Bayes theorem

In short, Bayes’ theorem says that the probability of a hypothesis, given my measured data (POSTERIOR), is equal to the probability of the data, given the hypothesis is correct (LIKELIHOOD), times the prior probability of the hypothesis (PRIOR), divided by the overall probability of the data (EVIDENCE). (The evidence may be neglected if we’re interested in proportionality only, not equality.)
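Written out, with H the hypothesis and D the data, that is:

```latex
P(H \mid D) = \frac{P(D \mid H)\, P(H)}{P(D)}
\qquad \text{or, up to proportionality:} \qquad
P(H \mid D) \propto P(D \mid H)\, P(H)
```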

So … why should we care? Well, from what we know about how TDE works, it cannot possibly make things faster! In the best case, we service all requests from the buffer cache and so never have to decrypt the data; then we shouldn’t incur any performance loss. But I cannot imagine how TDE could cause a performance boost.

So we go to our colleague the Bayesian statistician, give him our data and tell him about our prior beliefs. (Prior belief sounds subjective? It is, but prior assumptions are out in the open, to be questioned, discussed and justified.)

Now, harmless as Bayes’ theorem looks, in practice it may be difficult to compute the posterior probability. Fortunately, there is a solution: Markov Chain Monte Carlo (MCMC). MCMC is a method to obtain the parameters of the posterior distribution not by calculating them directly, but by sampling: performing a random walk along the posterior distribution.
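To give a feeling for the random-walk idea, here is a minimal Metropolis sampler in Python. (This is just an illustration of the principle, not what JAGS does internally – JAGS uses Gibbs sampling – and the standard normal target is made up for the example.)

```python
import math
import random

def metropolis(log_post, start, step=0.5, n=10000, seed=42):
    """Random-walk Metropolis: sample from a distribution given only its
    log density, by proposing small random moves and accepting each one
    with probability min(1, posterior ratio)."""
    rng = random.Random(seed)
    x, samples = start, []
    for _ in range(n):
        prop = x + rng.gauss(0, step)
        # accept if log(u) < log p(prop) - log p(x), i.e. with prob min(1, ratio)
        if math.log(rng.random() + 1e-300) < log_post(prop) - log_post(x):
            x = prop
        samples.append(x)
    return samples

# Toy target: standard normal, log density up to an additive constant
samples = metropolis(lambda x: -0.5 * x * x, start=0.0)
mean = sum(samples) / len(samples)   # the walk hovers around the target's mean
```

The sampler never needs the evidence term: only the ratio of posterior densities enters, which is exactly why MCMC sidesteps the hard part of Bayes’ theorem.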

We assume our data is gamma distributed, the gamma distribution generally being adequate for response time data (for motivation and background, see chapter 3.5 of Analyzing Computer System Performance with Perl::PDQ by Neil Gunther).
Using R, JAGS (Just Another Gibbs Sampler), and rjags, we (oh, our colleague the statistician, I meant, of course) go to work and create a model for the likelihood and the prior.
We have two groups, TDE and “no encryption”. For both, we define a gamma likelihood function, each having its own shape and rate parameters. (Shape and rate parameters can easily be calculated from mean and standard deviation.)

model {
    for ( i in 1:Ntotal ) {
      y[i] ~ dgamma( sh[x[i]] , ra[x[i]] )
    }
    sh[1] <- pow(m[1],2) / pow(sd[1],2)
    sh[2] <- pow(m[2],2) / pow(sd[2],2)
    ra[1] <- m[1] / pow(sd[1],2)
    ra[2] <- m[2] / pow(sd[2],2)

Second, we define prior distributions on the means and standard deviations of the likelihood functions:

    m[1] ~ dgamma(priorSha1, priorRa1)
    m[2] ~ dgamma(priorSha2, priorRa2)
    sd[1] ~ dunif(0,100)
    sd[2] ~ dunif(0,100)
}

As we are convinced that TDE cannot possibly make it run faster, we assume that the means of both likelihood functions are distributed according to prior distributions with the same mean. This mean is calculated from the data, averaging over all response times from both groups. As a consequence, in the above code, priorSha1 = priorSha2 and priorRa1 = priorRa2. On the other hand, we have no prior assumptions about the likelihood functions’ standard deviations, so we model them as uniformly distributed, choosing noninformative priors.
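The mean/sd-to-shape/rate conversion used in the model, and the pooled-mean prior, can be sketched in a few lines of Python. (The response-time numbers here are invented for illustration; only the arithmetic matters.)

```python
import statistics

def gamma_shape_rate(mean, sd):
    """Convert a mean and standard deviation into gamma shape and rate
    parameters (mean = shape/rate, sd = sqrt(shape)/rate)."""
    shape = mean ** 2 / sd ** 2
    rate = mean / sd ** 2
    return shape, rate

# Hypothetical response times (ms) for the two groups
no_enc = [5.1, 4.8, 5.6, 5.0, 4.9]
tde    = [5.3, 5.0, 5.4, 5.2, 4.7]

# Both prior means use the pooled average, so priorSha1 == priorSha2
# and priorRa1 == priorRa2 (the prior sd below is an arbitrary choice).
pooled = no_enc + tde
prior_mean = statistics.mean(pooled)
prior_sd = statistics.stdev(pooled)
priorSha, priorRa = gamma_shape_rate(prior_mean, prior_sd)
```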

Now we start our sampler. Faites vos jeux … rien ne va plus … and what do we get?

Posterior distribution of means

Here we see the outcome of our random walks, the posterior distributions of means for the two conditions. Clearly, the modes are different. But what to conclude? The means of the two data sets were different too, right?

What we need to look at is the distribution of the difference between the two parameters. What we see here is that the 95% highest density interval (HDI) [3] of the posterior distribution of the difference ranges from -0.69 to +1.89 and thus includes 0. (The HDI is a concept similar to, but superior to, the classical frequentist confidence interval, as it is composed of the 95% of values with the highest probability.)
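An HDI can be computed from posterior samples by sliding a window over the sorted draws and keeping the narrowest interval that contains 95% of them. This is a common sample-based approach, sketched here in Python (R packages do this for you; the example draws below are made up):

```python
def hdi(samples, cred=0.95):
    """Narrowest interval containing `cred` of the samples: sort the
    draws, slide a window of the required size, keep the shortest one."""
    s = sorted(samples)
    n = len(s)
    k = int(round(cred * n))                # window size in samples
    widths = [(s[i + k - 1] - s[i], i) for i in range(n - k + 1)]
    _, i = min(widths)
    return s[i], s[i + k - 1]

# e.g. on draws of the difference of means, check whether 0 is inside:
lo, hi = hdi([-0.8, -0.3, 0.1, 0.4, 0.5, 0.9, 1.2, 1.5, 1.8, 2.1])
contains_zero = lo <= 0 <= hi
```

If 0 lies inside the interval, as it does for our -0.69 to +1.89 result, the posterior gives no support to a real difference between the groups.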

Posterior distribution of difference between means

From this we conclude that, statistically, our observed difference of means is not significant – what a shame. Wouldn’t it have been nice if we could have sped up our app just by using TDE 😉