Everyone has their own definition of complexity. I make no secret of the fact that a decade's worth of developing Basecamp is where I draw my primary experience from.
That system is small by web-scale standards -- only 70 million requests/day, 1.5 terabytes of DB data, half a petabyte of file storage, two data centers, and about 100 physical machines -- but probably still larger than 97% of all Rails apps.
Also, plenty of data stores (memcached, Redis, multiple MySQLs, Solr), many third-party libraries, job servers, integrations, and more.
So no, it's no Facebook or Yahoo or Google. But it also isn't a toy system, except in the sense that we're still having so much fun playing with it.
I am not dismissing Basecamp. I am just saying that in a large portion of the software world, applications are waaaay more complex than a normal Rails app, and in that context TDD makes sense if only to manage complexity. Even if you are not Facebook but, say, Airbnb: if their tests aren't fast enough, and they can't trust them enough to make decisions, they can't deploy in a reasonable time. And when slow tests lead to infrequent deployments, that's when the real problems begin. (Airbnb is an arbitrary choice off the top of my head, not anything specific.)
My gut feeling is that >50% of software development happens in those complex apps, not in typical Rails apps. So dismissing TDD is just yet another extreme viewpoint, one that many people will unfortunately take for granted.
I think the counter-argument may be that TDD actually adds complexity to a system by destroying the architecture. So I am interested in what particular arguments you have for how TDD manages complexity instead of increasing it.
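To make the "managing complexity" side of that concrete, here is a minimal, hypothetical sketch in plain Minitest (the `Invoice` class and its behavior are invented for illustration, not anything from Basecamp): a small, fast, isolated test pins down behavior so the internals can be refactored without fear, which is the kind of leverage people usually mean when they say tests keep complexity in check.

```ruby
require "minitest/autorun"

# Hypothetical domain object, used only to illustrate the point.
class Invoice
  def initialize(line_items)
    @line_items = line_items
  end

  # Totals in cents to avoid floating-point rounding surprises.
  def total_cents
    @line_items.sum { |item| item[:unit_cents] * item[:quantity] }
  end
end

class InvoiceTest < Minitest::Test
  def test_total_sums_all_line_items
    invoice = Invoice.new([
      { unit_cents: 1_000, quantity: 2 },
      { unit_cents: 250,   quantity: 4 },
    ])
    assert_equal 3_000, invoice.total_cents
  end

  def test_total_is_zero_for_empty_invoice
    assert_equal 0, Invoice.new([]).total_cents
  end
end
```

Whether a suite full of tests like this manages complexity or just adds a second codebase to maintain is, of course, exactly the question being argued here.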
Here's my nagging question, since TDD malaise has definitely crept into the broader dev consciousness, in no small part due to your recent shots across the bow.
Have you distilled out broader guidelines for system development and valuable testing? Your focus seems to be on your own experience and community, which isn't getting picked up so well outside of it.
This post was moderately Rails-centric, and the wider conversation is coming from more varied groups. Is the Ruby+Rails ecosystem fundamentally different in ways that mean outside groups should weigh your perspective before sharpening their pitchforks?
The TDD drag on development seems different for different folks. The TDD ramp-up tells devs they are following a best practice that limits human error in their implementations. But humans are building the tests too, and humans also commit errors of focus.
For the devs who can piece together awesome, fast test suites to run against their awesomely structured and implemented code, will they find lower value in all that test-building time?
For devs who have trouble implementing but can piece together test suites that help them along, will they find higher value in their tests?
You have some devs who don't need to test wasting time and marring otherwise shippable code. You have others guarding against egg on their face spending that much more time, but valuable time.
Is there a dev efficiency divide opening up? Are there differences in the value and importance of TDD across all the various categories of languages, tools, and developers that just can't be summed up in blog posts and retorts? We demand cargo to build a cult around!