Absolutely. This, along with DRY, allows you to write in just as much complexity/flexibility as is appropriate at that moment. I think the corollary is that you need to become adept at recognizing when additional abstractions and/or consolidations are necessary... in other words, learn when You Really Do Need It.
It would seem that popular software practices are trending away from certain types of "premature flexibilization." Conventional Java coding practices - things you might have seen everywhere a decade ago - might call for a number of interfaces, class hierarchies, and perfunctory design patterns (getters, setters, perhaps factories) just to declare a SINGLE concrete class.
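To make that concrete, here's a sketch of the kind of ceremony I mean - all names invented for illustration, and squeezed into one file for brevity:

    // Old-school Java ceremony: an interface, a factory, and getters/setters,
    // all to hold a single string. (Hypothetical names; in a real project each
    // top-level type would typically get its own file.)
    interface Greeter {
        String getGreeting();
        void setGreeting(String greeting);
    }

    class SimpleGreeter implements Greeter {
        private String greeting;

        public String getGreeting() { return greeting; }
        public void setGreeting(String greeting) { this.greeting = greeting; }
    }

    class GreeterFactory {
        static Greeter create() {
            return new SimpleGreeter();
        }
    }

In most dynamic languages, all of that collapses into an object with one attribute.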
These days we're much more accustomed to succinct code in dynamic languages and looser structural rules. Maybe some of this strict enforcement has shifted off into test suites.
The author's observation that anticipating changes / future requirements in software is often misguided and ill-informed fits right into this trend toward simplicity. Concise, readable code that's easy to shuffle around is much better suited for adaptation than a big, bulky "framework" that tries to anticipate every contingency.
I suppose anything "premature" is evil. By its very nature, the word "premature" has a negative connotation.
I won't argue for or against "flexibilization". But I do think separation of concerns is never premature (any isolated piece of code should have as few concerns as possible). And code with properly separated concerns should inherently be flexible.
I think I've heard a similar idea described as "worse is better".
I think part of the problem is languages that force early decisions. In general, the more things can be changed at runtime, the less planning required for flexibility later.
There may be some need to balance that flexibility with discouraging stupid behavior. I've seen a number of complaints about Ruby programmers modifying built-in classes in ways that break other code. I'm not sure that's a fundamental problem with the idea of extreme dynamism, though; it may simply be a problem in single-dispatch, single-inheritance OO languages.
I'd really like to see a language with generic functions where every function is generic. Clojure-style custom dispatch functions and hierarchies would be good too.
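For what it's worth, the core of that idea is small enough to sketch even in Java. Here's a toy "multimethod" that dispatches on the result of an arbitrary key function rather than on the receiver's class - my own illustrative sketch, not Clojure's actual machinery, and it omits hierarchies entirely:

    import java.util.HashMap;
    import java.util.Map;
    import java.util.function.Function;

    // Toy multimethod: dispatch on an arbitrary key function, not on type.
    class MultiMethod<T, K, R> {
        private final Function<T, K> dispatchFn;
        private final Map<K, Function<T, R>> impls = new HashMap<>();

        MultiMethod(Function<T, K> dispatchFn) {
            this.dispatchFn = dispatchFn;
        }

        void define(K key, Function<T, R> impl) {
            impls.put(key, impl);
        }

        R call(T arg) {
            Function<T, R> impl = impls.get(dispatchFn.apply(arg));
            if (impl == null) {
                throw new IllegalArgumentException("No method for " + arg);
            }
            return impl.apply(arg);
        }
    }

    // Usage: dispatch on parity rather than on class.
    //   MultiMethod<Integer, String, String> describe =
    //       new MultiMethod<>(n -> n % 2 == 0 ? "even" : "odd");
    //   describe.define("even", n -> n + " is even");
    //   describe.define("odd",  n -> n + " is odd");
    //   describe.call(4);  // => "4 is even"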
This is tough. I want to both passionately agree and disagree with the idea; I have real-life experiences that could serve as arguments on either side.
The easy thing to do would be to nitpick some small aspect of the article that doesn't have anything to do with the author's main point ("bad example case," "maybe a misspelling," "OMG how dare you judge the holy patterns book!", "lol you totally ripped off the patterns book," etc.). Instead I want to try to express how I feel about this and how I think it's both correct and incorrect.
I'm a big proponent of keeping code small. Small code means there should be less that can go wrong. I get a headache if I run across a situation where I can't see a reasonably straightforward solution built of components I already have. I dislike making a whole new class if I don't have to, but on the other hand I prefer objects that are smart about their purpose over stupid/generic objects that have no specific knowledge of what they contain. It's a delicate balance. Do you make a class to represent a product identifier - or do you just use a string? There are often good arguments either way. How to choose?
When in doubt, simplicity should win - but that's a nebulous rule, too. (Ironic that simplicity isn't a simple thing, no?) What does it mean to be simple? The author touched on this in the final paragraph, but I think that deciding what simplicity means is, in fact, the entire crux of the problem he was describing. If you're in a certain frame of mind, building a complex, interconnected framework may be the simplest solution. The same situation, viewed differently, may appear solvable with a handful of procedural functions in 1/10 the code - but usually that point of view comes with the benefit of hindsight. How do you know for sure that your simplicity isn't just a complex problem waiting to happen?
Let's say the product identifier was originally specified as a string, but as you develop you find that you often need the product name and image in addition to the identifier at a large number of points in your program. It was simpler to use a string at first, but now perhaps some kind of structure representing all of a product's information together would be nice. It might mean the difference between methods taking 3 arguments of product information vs. taking just one product object. Is that simpler or more complex? You've now introduced another class to your codebase. If you had added that on day one, anticipating this situation, you may have been labeled a "premature flexibilizationist." Why is it okay to add it now when you knew you'd need it from the start? How much old code will you be able to simplify now? How much time did you waste working around the old limitations?
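Concretely (the names here are hypothetical), the before and after might look like:

    // Before: product facts travel as three loose arguments at every call site.
    class Listing {
        void render(String productId, String productName, String imageUrl) {
            System.out.println(productName + " (" + productId + "): " + imageUrl);
        }
    }

    // After: the same facts consolidated into one object.
    class Product {
        final String id;
        final String name;
        final String imageUrl;

        Product(String id, String name, String imageUrl) {
            this.id = id;
            this.name = name;
            this.imageUrl = imageUrl;
        }
    }

    class ProductListing {
        void render(Product p) {
            System.out.println(p.name + " (" + p.id + "): " + p.imageUrl);
        }
    }

Every call site that was juggling three strings now passes a single value; the day-one question was whether that consolidation was worth a class before any duplication existed.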
To me, the truth of a design choice becomes clear the moment you start having to design around or in spite of it. The moment that happens - and I mean the VERY MOMENT you think, "hey, all I need is a very special-purpose method here that takes an extra argument and then..." you've got to stop everything and throw something away. Something has gone wrong somewhere.
If you ever write a line or two of logic that you've written somewhere else in the codebase, that very same reaction should immediately occur. Never. Repeat. Logic.
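In code terms, even a trivial, made-up case shows why: two inline copies of a tax rule will eventually disagree.

    class Invoice {
        static final double TAX_RATE = 0.08; // hypothetical rate

        // The repeated line of logic, extracted and named exactly once.
        static double withTax(double amount) {
            return amount + amount * TAX_RATE;
        }

        static double total(double subtotal, double freight) {
            // Before extraction, both of these repeated "x + x * 0.08" inline.
            return withTax(subtotal) + withTax(freight);
        }
    }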
So yes, I'm in favor of simplicity - but no, I'm not in favor of blatantly ignoring the truth of your situation. Experience, logic, common sense, etc. exist for a reason. You can't just blindly write the simplest thing that will work when the simplest thing that works NOW will have to be deleted tomorrow to make room for the new simplest thing that you knew was going to come anyway. Should you plan for every. possible. contingency? No, of course not - just plan for what you need, but if you find later that you didn't need something - GET RID OF IT QUICK!
I think the problem with keeping things too simple at first is that somewhere, someone has to deal with the complexity that's there but being ignored. The problems just get pushed higher and higher until eventually the user of the software is exposed to them. Carrying that idea down to the API level: when you design a class, your "user" might just be yourself later on, in another part of the code - so why put yourself through a complex object interface when you could just make it clean here and now? I often see objects with convoluted usage patterns - you have to initialize this before that; if you call method X, then method Y will not give correct results; etc. That kind of stuff is insane! Oh sure, it may have been simpler, easier, and quicker to implement the object that way at the time - and by golly, isn't that what the agile extreme programming guru said to do - the simplest thing that works, after all! The thing is, an interface like that is broken, IMO. It's crap. It's too complicated. Developers sometimes forget that complexity is a larger-scope problem than the text of the code itself, and that's an important thing to remember when deciding whether something is needlessly complex.
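A made-up Java illustration of the kind of interface I mean - and the fix, which is to make the wrong order of operations impossible rather than merely documented:

    // Convoluted: the caller must remember the ritual.
    class FlakyParser {
        private String input;
        private boolean initialized;

        void init(String input) {          // must be called first...
            this.input = input;
            this.initialized = true;
        }

        int tokenCount() {                 // ...or this quietly misbehaves
            if (!initialized) return -1;   // "wrong results," not even an error
            return input.split("\\s+").length;
        }
    }

    // Clean: construction is initialization, so there is no wrong order.
    class Parser {
        private final String input;

        Parser(String input) {
            this.input = input;
        }

        int tokenCount() {
            return input.split("\\s+").length;
        }
    }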
http://en.wikipedia.org/wiki/You_Ain%27t_Gonna_Need_It
http://c2.com/xp/YouArentGonnaNeedIt.html