When you first wrote about how Hadoop is a waste of time if you don't have multiple TB of data (http://www.chrisstucchio.com/blog/2013/hadoop_hatred.html), I thought that was classic linkbait. And then I actually started seeing the real benefits of not doing map-reduce for smaller datasets (small = 1GB-1TB) & just sticking to plain old Scala (POSO as opposed to POJO :)
Similarly, this article seems like linkbait on the surface, but it makes a lot of sense if you do anything performance-intensive. I recently tried implementing a multi-layer neural net in Scala - I eventually ended up rewriting your mappers as while loops & introducing some mutable state, because at some point all this FP prettiness is nice but not very performant. It looks good as textbook code for toy examples, but takes too long to execute. I'm still a huge fan of FP, but nowadays I don't mind the occasional side effect & sprinkling some vars around tight loops. It's just much faster, & sometimes that is important too.
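To give a flavor of the kind of rewrite I mean, here's a minimal sketch (a dot product standing in for a layer's inner loop; the code & names are mine, not from the article):

    object LoopVsMap {
      // FP version: pretty, but zip allocates an intermediate array
      // of boxed tuples before map & sum even run.
      def dotFP(a: Array[Double], b: Array[Double]): Double =
        a.zip(b).map { case (x, y) => x * y }.sum

      // Imperative version: two vars & a while loop, no intermediate
      // allocations -- compiles down to a tight JVM loop.
      def dotWhile(a: Array[Double], b: Array[Double]): Double = {
        var acc = 0.0
        var i = 0
        while (i < a.length) {
          acc += a(i) * b(i)
          i += 1
        }
        acc
      }

      def main(args: Array[String]): Unit = {
        val rng = new scala.util.Random(42)
        val a = Array.fill(1000000)(rng.nextDouble())
        val b = Array.fill(1000000)(rng.nextDouble())
        println(dotFP(a, b))    // same result (same summation order)...
        println(dotWhile(a, b)) // ...but far less garbage & boxing
      }
    }

Most of the win in tight numeric loops is exactly that: the while version skips the intermediate tuple array & the boxing that zip/map introduce.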
I'd like to add that "sprinkling side effects & vars around tight loops" doesn't generalize to all languages. For instance, GHC can optimize pure code MUCH more than impure code, e.g. by fusing away intermediate lists entirely.
Meaning that with purity you often get better performance, not worse.