Raising the Fetch Size, Good or Bad? Memory Management in Oracle JDBC 12c

Yesterday at DOAG I presented on “Raising the Fetch Size, Good or Bad? Memory Management in Oracle JDBC 12c”.
Again I’m too lazy to write a separate post (okay, let’s phrase this positively: I don’t like to be redundant ;-)) so I’ll just say what’s not in the slides …

Firstly, it all started with a story, a story about an Oracle DBA for whom, as for most Oracle DBAs I guess, fetch size was a well-known tuning knob. Most Oracle DBAs, I think, will have read some blog post on this topic, or seen a demo showing that as we raise the fetch size, throughput goes up. In fact, sometimes you can see it going up very much!
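Why throughput rises is easy to see with a toy model: each fetch costs one network round trip, so raising the fetch size divides the number of round trips. A minimal sketch (the row count is a made-up example, but the default fetch size of 10 is the Oracle JDBC default; in real code you would call `Statement.setFetchSize(...)` before executing the query):

```java
// Toy model of fetch size vs. network round trips.
// In real JDBC code the knob is: stmt.setFetchSize(80);
public class RoundTrips {

    // One fetch returns fetchSize rows, so we need ceil(rows / fetchSize) trips.
    static long roundTrips(long rows, int fetchSize) {
        return (rows + fetchSize - 1) / fetchSize; // ceiling division
    }

    public static void main(String[] args) {
        long rows = 100_000; // hypothetical result set size
        System.out.println(roundTrips(rows, 10));   // Oracle JDBC default: 10000 trips
        System.out.println(roundTrips(rows, 80));   // the DBA's value: 1250 trips
        System.out.println(roundTrips(rows, 1000)); // 100 trips
    }
}
```

Going from the default of 10 to 80 already cuts the round trips by a factor of eight, which is exactly the kind of improvement those demos show.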

And so in one company, there was a DBA who decided to run systematic experiments in order to find the optimal fetch size for the in-house application. As it turned out, the optimal value for that application lay around 80. So the DBA told the developers, the value was integrated into the code, and it was released to pre-prod. All was fine, or at least nobody said otherwise.

Then, inevitably, there came the day of the production release, and all wasn’t fine any more … The application server kept crashing with out-of-memory errors, they had to do an emergency release, and no one ever wanted to talk about it again.
What had happened? How could a value of 80 (we’re not talking 10000 here) have such a catastrophic effect?

So much for the teaser … Now, there is an Oracle whitepaper describing how memory management in the JDBC driver was fundamentally changed between 11g and 12c. But will reading this paper be enough for people who have experienced those OOM errors to be willing to try again?
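The pre-12c behavior the whitepaper describes is the key to the story: the 11g driver sized its char buffers from the columns’ declared maximum widths, not from the actual data, and such a buffer lived per cached statement. A back-of-the-envelope sketch (the column count and widths are made-up assumptions, not the company’s real schema):

```java
// Worst-case buffer arithmetic for a pre-12c Oracle JDBC driver.
// Hypothetical wide row: 20 VARCHAR2(4000) columns (assumed schema).
public class FetchBufferEstimate {
    public static void main(String[] args) {
        int varcharColumns = 20;
        int declaredChars  = 4000; // declared max length, regardless of actual data
        int bytesPerChar   = 2;    // chars are buffered as UTF-16
        int fetchSize      = 80;

        // The driver reserved room for fetchSize full-width rows at once.
        long bufferBytes = (long) fetchSize * varcharColumns * declaredChars * bytesPerChar;
        System.out.println(bufferBytes / (1024 * 1024) + " MB"); // prints "12 MB"
    }
}
```

Roughly 12 MB for a single statement. Multiply that by the statement cache size and by the number of pooled connections in an application server, and a fetch size of 80 can exhaust the heap even though the value itself looks harmless.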
My idea was to perform an experiment and see for myself. And that’s what the talk was about: how I set up the experiment and what the outcome was ;-). Here are the slides.