I was ready to crap all over this--I've seen so many of these kinds of posts--but this was (IMHO) quite good. There's a ton of (as the kids say) alpha in each of the bullet points.
I can't say that I practice all or most of these habits, but the points about "calling your shots" and "concrete hypotheses" resonate. For example, when I add a debugging printf/log, I always ask myself, "will this output invalidate one or more hypotheses?" If not, then I need to rethink the problem.
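To make that concrete, here's a minimal sketch of the kind of log line I mean (hypothetical names, C just for illustration). The hypothesis is that items get dropped because the buffer is already full; the log prints exactly the numbers that can refute it:

    /* Hypothesis to test: enqueue() silently drops items because count has
     * already hit capacity by the time we call it. This one line prints
     * both numbers at the suspected point, so the output can confirm or
     * refute that specific claim. */
    #include <stdio.h>

    #define CAPACITY 8

    static int buffer[CAPACITY];
    static int count = 0;

    static int enqueue(int value) {
        fprintf(stderr, "enqueue: count=%d capacity=%d value=%d\n",
                count, CAPACITY, value);            /* the shot being called */
        if (count >= CAPACITY)
            return -1;                              /* drop */
        buffer[count++] = value;
        return 0;
    }

    int main(void) {
        for (int i = 0; i < 10; i++)
            if (enqueue(i) != 0)
                fprintf(stderr, "dropped %d\n", i);
        return 0;
    }

If the output shows count well below capacity and items still vanish, the hypothesis is dead and the next printf goes somewhere else.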
The flow of 80/15/5 is what true seniority looks like in my opinion.
Do a lot of the heavy lifting on a goal, explore valuable (and sometimes promising but more of a stretch) avenues around that goal, and then be able to document and articulate it in a way that another person can grow into it while you venture forth into the next challenge.
I see it is written by Kent Beck - so look him up, because he is not your average blogspam wannabe but actually someone you could list as a software development master.
On the other hand, I'm not sure he succeeded on that front when communicating Agile or TDD. Arguably there are no works misconstrued more in the realm of computing than those. It may just be luck in this case – or perhaps a skill that has been honed over the years?
I generally don’t meta comment, but it's a bit surprising to me that the bulk of this discussion has been flagged dead. I think the discussion and criticisms there were valid.
If you have enough karma, you can “vouch” a dead comment back to life by clicking the timestamp.
At the time of this writing, only one comment was dead, and it was rude and content-free. The rest of its tree was arguing about Beck in general rather than the article, so everything’s working as intended IMO.
I haven’t heard of C3 before, but I’m a big fan of reading about software project failures, and I’m not a fan of every aspect of XP, so I was certainly curious about this.
That said, the Wikipedia page neither supports nor refutes your assertion, and Fowler himself discusses C3’s failure here: https://martinfowler.com/bliki/C3.html
Fowler refers to notes that don’t seem to be in the Wikipedia entry any more: “In particular the entry in Wikipedia is misleading and incomplete, much of its comments seem to be based on a paper from a determined XP critic whose sources are unclear. Certainly its comments on performance are a misleading interpretation of material in my Refactoring book.”
Do you have any other links to this project? The fact that it went live and then reverted to the COBOL version is interesting.
I believe the characterization of C3 as a "failure" is because it wasn't able to deliver the goal (the goal was paying 87000 people – it only reached about 9000), and it was later discontinued for multiple reasons (some unrelated, like people leaving and the merger with Daimler). The claim that "XP was banned" there seems overblown; it seems it's just that people at DaimlerChrysler stopped using terms like Smalltalk, OOP and XP (per the link above).
C3 was, by any measure, an abject failure. It got only the very basics working and then died when it ran into the vast number of unspecified exceptional cases (gee, where have we heard that before ...) that needed to be handled. And then got cancelled and completely reverted.
To then use such a failure as a marquee project demonstrating the supposed "superiority" of XP is unabashed chutzpah.
Now, large IT projects generally fail. So, XP is not wholly to blame.
However, the proponents of XP pushed it as a superior silver bullet for navigating both the political and technical waters of software projects. The fact that C3 was such a spectacular failure simply demonstrates that XP really wasn't any different from any other methodology being pushed by people with an agenda.
I was trying not to make any judgement call, but I agree.
To me there are worse things in the story, though: they tried to make User Stories and even customer-driven tests and ended up burning out the only customer that was able to do it.
It's not only underwhelming compared to the silver bullet they were selling in conferences and books, but it required some unicorn customer that they couldn't replace.
For years I saw people trying to make poor customers and PMs write Cucumber tests and man...
> To me there are worse things in the story, though: they tried to make User Stories and even customer-driven tests and ended up burning out the only customer that was able to do it.
This is still a legitimate concern today with the "product owner" role that a lot of popular Agile processes rely on. In effect the whole premise of having a PO embedded within the team as the authority on requirements that are expected to change at any time means the entire software development process is built around a single human point of failure.
I think I draw the line at asking them to produce user stories; to me that's already too much. Asking them to use Cucumber or BDD rituals is probably against the Geneva convention.
There really is no silver bullet for writing software. You gotta keep a short feedback loop with users, and not overwhelming them is important.
A lot of the failure of XP came from an assumption that all developers develop software the same way, and that XP’s tenets are optimal for all software teams.
Practices like pair programming and TDD work in some instances and are absolutely terrible in others. The arrogance of the original XP folks was a hard-core belief that they had found the silver bullet of software development, which they then marketed ruthlessly.
> It got only the very basics working and then died when it ran into the vast number of unspecified exceptional cases (gee, where have we heard that before ...) that needed to be handled. And then got cancelled and completely reverted.
> To then use such a failure as a marquee project demonstrating the supposed "superiority" of XP is unabashed chutzpah.
> Now, large IT projects generally fail. So, XP is not wholly to blame.
One could argue that XP achieved a significantly better outcome than the typical project of that size. They didn't cause any big outages, and reached the end result of being cancelled and reverted much more quickly and cheaply than usual.
The article by Fowler cited a few comments up says something along the lines of "the cancellation of C3 proves that XP is no guarantee of success". Regardless of whether XP works for everyone or not, that's pretty far from claiming it's a silver bullet.
I suspect I have some cached references, but I would have to go dig a presentation out of my backups.
Unfortunately, all parties involved in the C3 project would rather that it be forgotten. As such, it seems that it is going down the memory hole even faster than most Internet things. :(
Thank you so much for sharing this. I finally have some ammo. As a .Net dev, I constantly encounter too many people who think Fowler is a programming prophet.
> Near as I can tell the fundamental problem was that the GoldOwner and GoalDonor weren't the same. The customer feeding stories to the team didn't care about the same things as the managers evaluating the team's performance. This is bad, and we know it's bad, but it was masked early because we happened to have a customer who was precisely aligned with the IT managers. The new customers who came on wanted tweaks to the existing system more than they wanted to turn off the next mainframe payroll system. IT management wanted to turn off the next mainframe payroll system. Game over. Or not, we'll see... -- KentBeck
> So, I'm curious - does this represent a failure of XP? -- AnonymousCoward
> Sensitivity, certainly. But if the people who tell you what to do don't agree with the people who evaluate what you are doing, you're stuffed, XP or no XP. -- KentBeck
OOP is a failure ... looked at through the lens of today.
Back then, OOP originally solved a very real problem--optimizing memory usage for bunches of objects that share mostly common behavior with just a few tweaks between them. It did pretty well at that, at the expense of introducing some extraneous coupling and complexity.
And then memory got big and disk became SSD.
Now, programmers would rather burn extra memory, avoid pointer chasing (expensive on modern microprocessors), and ditch the extraneous coupling that introduces unnecessary complexity.
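As a rough illustration of that trade-off (my own toy example, not anything from the thread): the first loop chases a pointer per element, the second walks contiguous memory that the prefetcher handles cheaply at the cost of one flat up-front allocation.

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct { float x, y, vx, vy; } Particle;

    /* OOP-ish layout: an array of heap-allocated objects, one pointer
     * dereference (and likely one cache miss) per element. */
    static float sum_x_indirect(Particle **ps, size_t n) {
        float sum = 0.0f;
        for (size_t i = 0; i < n; i++)
            sum += ps[i]->x;
        return sum;
    }

    /* Flat layout: "burn" one contiguous allocation up front and iterate
     * sequentially, which the hardware prefetcher handles cheaply. */
    static float sum_x_flat(const Particle *ps, size_t n) {
        float sum = 0.0f;
        for (size_t i = 0; i < n; i++)
            sum += ps[i].x;
        return sum;
    }

    int main(void) {
        enum { N = 4 };
        Particle flat[N] = {{1,0,0,0},{2,0,0,0},{3,0,0,0},{4,0,0,0}};
        Particle *indirect[N];
        for (size_t i = 0; i < N; i++) {
            indirect[i] = malloc(sizeof *indirect[i]);
            *indirect[i] = flat[i];
        }
        printf("%.0f %.0f\n", sum_x_indirect(indirect, N), sum_x_flat(flat, N));
        for (size_t i = 0; i < N; i++)
            free(indirect[i]);
        return 0;
    }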
YAGNI is a conscious and iterative task prioritization process. From a Pareto perspective, it just means to focus on the 20% of functionality that provides 80% of the value first.
That's not to say the other 80% of requests should be ignored. They should instead be well documented and groomed in a backlog.
In practice, YAGNI works out exactly how C3 ended up. Your architecture and design end up myopic and short-sighted, and get overwhelmed by deferred complexity that could have been dealt with adequately early on; it is much harder to retrofit it onto an existing code base later.
Fowler means it literally. He published examples on Artima.com years ago where he hard codes things left and right and adds better support “only when you absolutely need it”.
To be clear, I am not arguing for big design up front. I am arguing to keep an eyeball on your roadmap, and that it is not only OK to anticipate the future but maybe also worth doing some small amount of work to make the future work easier.
The fallacy of “You Ain’t Gonna Need It” is that you so very often do, and the developers down the road are cursing out the devs who ignored the future.
If you subscribe to "You Are Gonna Need It", how do you ever get around to shipping software if you are always implementing the things that are fun, but unnecessary, and not focusing on the things that are needed to progress?
The "premature optimization" thing warned against making code hard to read/debug for the sake of performance in areas where performance is unlikely to ever be a concern. If you don't ship software, it is understandable this is isn't much of a problem.
Although I'm not sure how applicable that really is today anyway. The tools have changed dramatically. Often you want to make your code as readable/debuggable as possible as that also gives the best chance for the compiler to find the optimizations. These days, if you try to get fancy, you'll probably make the performance worse.
> If you subscribe to "You Are Gonna Need It", how do you ever get around to shipping software if you are always implementing the things that are fun, but unnecessary, and not focusing on the things that are needed to progress?
The point is that you aren't always implementing things that are unnecessary. You probably have a roadmap where you're pretty sure what you're going to be doing for the next few days and weeks, and less sure as you look further ahead. Obviously you don't want to spend months building some over-engineered, over-architected monstrosity. But there are plenty of people out there who take YAGNI very literally and argue that you shouldn't implement anything you don't need right now. That's absurdly inefficient if you can guess with 80-90% accuracy what you're also going to need a month from now. You can save a lot of effort by implementing everything mostly right the first time instead of repeatedly reworking code you've only just written, only to end up at the same place anyway.
YAGNI is about not implementing features until they are needed. It’s not about ignoring the complexities of the domain. You can adhere to the principle while still designing a system that acknowledges the complexities but defers implementation until it is needed.
I have no horse in this race but I believe what the other poster was saying is that you might not need it but you should still think about it and decide if you should factor it into your designs.
Developers aren't great at sharing nomenclature, but by what seems to be the most common definition, YAGNI refers to not going off and implementing something that is more fun to implement, but isn't needed right away (if ever). Focus on what you actually need to get your project to a desirable state.
It doesn't say you should not consider future considerations in your design. In fact, it suggests that you should design your software to be as accommodating as possible, most notably by ensuring testing is core to your design to assist you when the time for change comes. The other poster you refer to and YAGNI seem to be in alignment.
YAGNI is more about design than implementation. It addresses the temptation to worry about future concerns rather than what you know you need right now.
The truth is in the middle: sometimes you need to design for the future and sometimes you don't. Often designing for the future just means making sure you haven't designed yourself into a corner rather than being able to fully deal with the future, but that's a nuance most people miss.
Not as it seems to be usually defined, but I agree that programmers are bad at sharing a common nomenclature. We can't even agree on what something as simple as an enum is. So, no doubt there are camps who hold that perspective. enterprise_cog clearly comes from the "don't implement", not "don't design", definition, though.
While I accept your definition, because, hey, programmers can't agree on definitions at the best of times, that is not the definition that was used earlier. And it is not the definition presented by the XP gang.
John Carmack of Doom fame seems to share your definition, suggesting that attempts to plan architecture in advance will only come to bite you, but I'm not sure he is an XP subscriber and he certainly wasn't involved in C3.
oh yeah, you're right, what the XP guys meant was "spend all your time building the architecture with all kinds of extension points just don't implement them... YAGNI!".
It's a mis-interpretation and it's not a reasonable one either.
Pretty much. Central to the YAGNI message is ensuring that your design brings test coverage to help you build the things you need when you need them. Kent Beck is credited with having invented TDD, so of course that's at the heart of their approach.
You need those extension points to keep your tests sane. They come naturally as part of the testing process.
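A tiny sketch of how that tends to happen in practice (hypothetical names, nothing from the thread): to make a time-dependent function testable you end up passing the clock in, and that injected clock is exactly the kind of extension point being talked about.

    #include <stdio.h>
    #include <time.h>

    typedef time_t (*clock_fn)(void);

    /* greeting() can't call time() directly or the test's expected output
     * would change over the day, so the clock comes in as a parameter.
     * That parameter is the extension point: production passes the real
     * clock, the test passes a fixed one, later callers can pass anything. */
    static const char *greeting(clock_fn now) {
        time_t t = now();
        struct tm *utc = gmtime(&t);
        return utc->tm_hour < 12 ? "Good morning" : "Good afternoon";
    }

    static time_t real_clock(void)    { return time(NULL); }
    static time_t fixed_morning(void) { return 8 * 3600; }  /* 08:00 UTC, 1 Jan 1970 */

    int main(void) {
        /* "Test": the fixed clock makes the expectation deterministic. */
        printf("test: %s\n", greeting(fixed_morning));   /* expect "Good morning" */
        /* Production: the real clock plugs into the same seam. */
        printf("now:  %s\n", greeting(real_clock));
        return 0;
    }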
It's always strange when someone on a discussion forum defers to someone else for their thoughts. Why even bother?
It's even stranger when the deferral isn't even relevant.
Stranger still, the premise appears to be made up. I can find no mention of "Code that's hard to test in isolation is poorly designed" anywhere on the internet other than that blog post and another blog that cites DHH.
Beck's insight with TDD, over the testing that came before it, was really just that if you write the test first then you have certainty that the test can actually fail. If you are writing tests for code that you have already written, there is no way to know whether your test passes because your code is conformant or because there is a bug in your test.
But, whether you write your tests first or last, you do need extension points to keep your tests sane. Rails is no exception. In fact, I would argue Rails' success was directly related to it leaning into testing and showing developers which extension points are most relevant for web applications as a result. Before Rails came along, throwing MySQL calls haphazardly into the middle of the HTML was the norm.
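For what it's worth, the "watch it fail first" point fits in a few lines (hypothetical example, not Beck's): write the test against a stub, run it once to prove it can fail, then fill in the real behavior.

    /* Step 1 (red): major_version() is still a stub, so this prints FAIL --
     * which is the point: the test has been shown capable of detecting the
     * missing behavior, so a later PASS actually means something.
     * Step 2 (green): replace the stub with real parsing and rerun. */
    #include <stdio.h>

    static int major_version(const char *version) {
        (void)version;
        return -1;   /* stub: deliberately wrong */
    }

    static void check(const char *name, int got, int want) {
        printf("%s: %s (got %d, want %d)\n",
               name, got == want ? "PASS" : "FAIL", got, want);
    }

    int main(void) {
        check("major of 1.2.3", major_version("1.2.3"), 1);
        check("major of 10.0",  major_version("10.0"), 10);
        return 0;
    }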
> this is all just a defensive mechanism to try and dismiss the point.
I figured.
> What's going to happen is that over time you'll eventually have the experience to understand what's being described here.
I've been around long enough to be aware of the surface level motivation – accepting the point without defensive dismissal would cause harm to your ego. I just don't get the appeal. Who cares about the ego? If you have no thoughts to share, why post anything? Surely if DHH wants to share his thoughts here, he can come here himself?
> Because it turns out there's a lot of things that young people find weird that they later understand.
Curiously, older people almost universally state that getting older means caring about the ego less and less. Maybe what you are trying to say here is that you are still too young to understand that? Fair enough.
It'd be akin to someone arguing with a 10 year old about how they should feel losing someone they've been married to for over 40 years. That 10 year old just flat out doesn't have the tools to understand, so why bother?
But the question asked why you would bother, not why someone who has been married for 40 years would bother. You don't need to know anything about anyone else to answer the question.
It is recognized that you did answer the question. You said it was a defence mechanism. That is understood. But the follow-up is asking on a deeper level: What needs to be defended against, exactly? What is it that you think your ego is going to do to you if you don't put up these defences?
The last coherent thought you had was an assertion that YAGNI may include not implementing extension points. But then DHH, while admittedly starting with a made up premise that made no sense, came along and obliterated that notion by the end, detailing how extension points – what he calls model, integration, and system – are necessary to keep tests sane.
And, since YAGNI emphasizes testing, we can be assured it actually does include implementing integration points.
Say some gigantic function has 600 lines of code that does logic this, logic that, and logic this and that. Suppose one day your PM comes into your office and requests that a certain special case be added to this feature.
You do not further increase the complexity of this function by adding one or two variables at the front and several if-elses in the middle, not to mention a couple of gotos.
You move these special cases and the most relevant logic into another function, document it, and thereby ensure that changes to that 'special logic' do not mess with the rest of the feature.
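Something like this, as a minimal sketch with made-up names:

    /* Before: the special case would be another branch inside the giant
     * calculate_pay() body. After: it lives in its own small, documented
     * function, so changes to the "special logic" can't disturb the rest. */
    #include <stdio.h>

    /* Special case requested by the PM: employees in region 7 get a fixed
     * hazard bonus. All knowledge about that rule lives here. */
    static double hazard_bonus(int region) {
        return region == 7 ? 150.0 : 0.0;
    }

    /* Stand-in for the 600-line function; it only delegates to the
     * extracted rule rather than growing new if-elses and gotos. */
    static double calculate_pay(double base, int region) {
        double pay = base;            /* ...imagine the existing logic here... */
        pay += hazard_bonus(region);  /* the one-line touch point */
        return pay;
    }

    int main(void) {
        printf("%.2f\n", calculate_pay(1000.0, 7));   /* 1150.00 */
        printf("%.2f\n", calculate_pay(1000.0, 3));   /* 1000.00 */
        return 0;
    }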
Even if the suggestions are a bit too generic, they might click for someone some time after they've read it. It also helps validate some things that less experienced programmers might be doing but aren't sure are the best things. I for example found that some things I seem to be gravitating towards are mentioned, which will hopefully allow me to focus on them and grasp them better in the future.
I might be being misinterpreted as dismissing this article.
They are definitely good points, and it doesn't hurt to read them.
I think all the points are valid.
Maybe I was just contemplating how experts sometimes 'summarize' their knowledge, condense it, but in the process of trying to be succinct, it becomes unfathomable, generic.
I suspect that the unfathomable nature of condensed knowledge arises from the fact that there is simply no shortcut to expertise. You must earn it through experience.
Someone with a similar level of experience to the author may well have the right foundations to draw on such that a condensed expression of an idea resonates well. Others may only get a "seed pearl" to help shape how they view their past and future experiences. And some might be able to recognise that there is wisdom there, but not be able to relate it to their own understanding at all.
Without any relevant experience, it's just words devoid of much meaning.
Many of these suggestions made me remember my own experiences, some where I intuitively followed them and some where I did not. Reading this write-up made me realize their value and will hopefully remind me that I need to do these things more often.
As someone who is not an expert but tries to gain wisdom from past experiences, it helps me to see where my intuitions might have been right or wrong, even if I may not get the point right away.
What nobody really talks about is how the project was ultimately cancelled. Reasons for the flop include unclear requirements, and the customer representative resigning due to burnout. That's fine, though many XP proponents sell it as a panacea that lets you respond to any change.
It's just remarkable to me how everyone involved was able to bootstrap careers as thought leaders from a project that failed so bad that Chrysler ended up banning XP.
I had a long post written, and I've decided that it probably belongs in a blog post rather than a random Hacker News comment, so I deleted it. But I'll just say no, Twitter is absolutely not an XP success story.
There are a handful of interesting ideas wrapped in a self-help program. For example, the idea of testing can be useful, while rules like “all code should have tests,” and “you should always write tests first,” are just silly. I’m sure it’s worked for some people on some projects, but generalizing that to all software development is just charlatanism.
I think the point is that at some point, the "thought leaders" start to lose relevance when they're not day-to-day practitioners. I'm not saying it is or isn't true of Beck (I've always enjoyed listening to him speak), but it is critical thinking we should apply to anyone.
That's not critical thinking though, it's fallacious thinking. Whether or not someone is a "thought leader" has no bearing on the quality of the article.
Would the article have gotten as much traction if it had been written anonymously? If there's an appeal to authority, then an evaluation is appropriate.
Why submit it like that, other than to indicate to readers why this article is worth reading? HN guidelines say to submit the title as is, without additional qualifiers or commentary. (Fortunately, the mods updated it.)
I can't speak for the person who posted it. Seems like it could have been an honest mistake to me.
An "appeal to authority" means something more specific (e.g. the logical fallacy "argument from authority") and this is not happening here.
The whole point I'm trying to make is that we should evaluate the writing based on its content. The person I originally responded to was dismissing it solely because of the person who wrote it (which, ironically, is an ad hominem – a fallacy very similar to an appeal to authority, but in the opposite direction).
Look at any of the bullet points in the article. It's very hard to constructively criticise any of them; they are all so vapid. Typical of Beck's writings.
Well if it's so hard to criticise them, that could be an indication that there is some truth in them.
> they are all so vapid
I like that he doesn't make sweeping statements. Because in software development, to quote another great author, there is no silver bullet.
That's not a popular story to tell when you're writing a book or speaking in a conference, people like to hear simple black and white statements. But the fact of the matter is, reality is much more nuanced.
I think it's more likely that if you can't find anything constructive to say about it, then maybe you don't have a valid argument (your original post was an ad hominem attack, after all).
You are attacking the person's writing capability, implying that the article is not good because they wrote something before that you disagreed with. You're trying to argue semantics here, but it doesn't really matter because either way your argument is fallacious.
I think Martin has a lot more to answer for as far as the sorry state of software today goes. Watching his discussion with Casey Muratori on GitHub last year was great. Not many people saw it, but boy does he compare poorly to a truly capable and knowledgeable programmer - https://github.com/unclebob/cmuratori-discussion
I generally have a negative opinion of Martin, but did we read the same discussion? Martin was very gracious in letting many points slide (points where he was correct!), and was generously willing to end the conversation at a sort-of draw when it was clear that Muratori was not really prepared to discuss things at a detailed level. (It was obvious to me from the start that Muratori thought "dynamic polymorphism" just meant deep hierarchies of inheritance, a la early C++; Martin realized this later, and I think that was the first inkling that he was wasting his time.)
Muratori was even wasting his time arguing that programmer time _in general_ is less valuable than machine time? And he doesn't understand that LLVM is an extremely specialized piece of software, from which general software engineering practices should not be extracted?
> It was obvious to me from the start that Muratori thought "dynamic polymorphism" just meant deep hierarchies of inheritance
Inheritance hierarchies aren't exclusively what he meant, though. Interfaces and the whole 'prefer composition over inheritance' style of programming have the same fundamental problem Muratori is getting at: both inherently constrain a program's structure for what he argues (and I agree) is no benefit to the program's performance or the programmer's time. In fact, he argues that the constraints imposed by the use of inheritance/interfaces only slow programmers down.
His raw device driver example, in pt2 of their conversation, illustrates the advantage of procedural code over inheritance/interfaces. His API requires users to provide a function pointer that will be called whenever an event is raised. The API user is expected to switch over the enum values that they care to implement. This design is better than an interface that requires its members to implement read() and write() functions, because it is both more performant (no vtable overhead + compilers can make more aggressive optimizations) and more flexible (a new event can be added to the enum without requiring all the old code to be updated if it doesn't need to handle the new event type).
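Roughly this shape, in C (my own sketch of the idea as described, not Casey's actual API):

    /* The driver raises events through one user-supplied callback. Users
     * switch over the cases they care about; the default arm means a new
     * event type can be added to the enum later without touching old code. */
    #include <stdio.h>

    typedef enum {
        DEV_EVENT_DATA_READY,
        DEV_EVENT_WRITE_DONE
        /* DEV_EVENT_ERROR could be added later without breaking callers */
    } dev_event_type;

    typedef struct {
        dev_event_type type;
        const void *data;
        int len;
    } dev_event;

    typedef void (*dev_event_handler)(const dev_event *ev, void *user);

    /* Stand-in for the driver: raises a couple of events. */
    static void dev_poll(dev_event_handler handler, void *user) {
        dev_event ev1 = { DEV_EVENT_DATA_READY, "hello", 5 };
        dev_event ev2 = { DEV_EVENT_WRITE_DONE, NULL, 0 };
        handler(&ev1, user);
        handler(&ev2, user);
    }

    /* The API user only handles what they need. */
    static void my_handler(const dev_event *ev, void *user) {
        (void)user;
        switch (ev->type) {
        case DEV_EVENT_DATA_READY:
            printf("got %d bytes\n", ev->len);
            break;
        default:
            break;   /* ignore everything else, including future event types */
        }
    }

    int main(void) {
        dev_poll(my_handler, NULL);
        return 0;
    }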
I started reading this and honestly don’t see the part where he “compares poorly” against Muratori. And disclaimer, I know more about Casey and his work than I know about “Uncle Bob”. If anything, Bob managed to explain himself very well and defend his point of view, which is, “context matters and programmer cycles are more important than CPU cycles in the majority of contexts”. I think this is something we could all agree on, no?
> “context matters and programmer cycles are more important than CPU cycles in the majority of contexts”. I think this is something we could all agree on, no?
I don’t think people agree on this (I don’t at least). I like the story falsely attributed to Steve Jobs about how saving a user 1 second will save hundreds of years or whatever. From that perspective, programmer cycles are way less important than CPU cycles because every CPU cycle you save has a multiplicative effect depending on how many users you serve. And how true is that today when you have thousands of large business apps depending on one cloud service provider. The compounding effects of saving CPU cycles in every level of the stack has never been higher than it is today.
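Back-of-the-envelope, with made-up numbers just to show the scale of the multiplier:

    /* Purely illustrative, assumed numbers: 1 second saved per use,
     * 10 million users, one use per day. */
    #include <stdio.h>

    int main(void) {
        double seconds_saved_per_use = 1.0;   /* assumed */
        double users = 10e6;                  /* assumed */
        double uses_per_day = 1.0;            /* assumed */

        double saved_per_day  = seconds_saved_per_use * users * uses_per_day;
        double saved_per_year = saved_per_day * 365.0;

        printf("%.0f person-days of user time per day\n",
               saved_per_day / 86400.0);
        printf("%.0f person-years of user time per year\n",
               saved_per_year / (86400.0 * 365.0));
        return 0;
    }

A single saved second per user per day, across a user base in the tens of millions, is already on the order of a hundred person-years of user time every year.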
I have watched his journey from early 90s days struggling with OO and C++, to all the nonsense of the 2000’s, and where he is today.
I looked at some of the small amount of publicly available code he has written, and it was frankly horrible. An example of someone who shouts loud enough getting attention because he can shout longer than most.
Kent writes a lot about metacognitive skills that are notoriously difficult to put to language because they sound so mundane but are highly enlightening when internalized.
Meditation and insight are useful analogues here: they sound utterly mundane or obvious when written about, even in highly technical or mystical contexts like mahamudra, because they operate on behavior and schemas below language.
I think Kent’s writing is useful as a pointing method: read what he says and watch how you undermine yourself during work. It’s easier said than done too because cognitively demanding tasks undermine metacognition.
I can’t seem to find the history of it, but it essentially means something's a bit rubbish, bad, or crap.
Pants are typically underwear in Britain, we wear trousers (Jeans, chinos, suit trousers) over our pants (boxers, y-fronts, briefs) so probably not something you’d want to show off that much.
Thanks for the reference to pants. In the US, "pants" covers jeans, chinos, slacks. Shorts are, well, shorts. Underwear is our boxers, briefs, tighty-whiteys, etc.
- "adjective. British slang. Not good; total crap; nonsense; rubbish; bad
"The first half of the movie was pants but I stayed until the end and it was actually a great film.""
The art of talking about nothing. Harari has this trait. He talks about nothing for an hour straight and enjoys when people comment on the bullshit. What a character trait. Whenever you see this pattern, you know the person is a shill.