
The only people who screwed it up are the ones who monetized it. They:

a) made it rigid, based on their own definition
b) defined themselves as experts in making teams agile, based on that rigid definition
c) charged for it

PS. These same guys, once they couldn't squeeze any more blood out of the Agile stone, moved on to a new marketing term, "craftsmanship". They now charge the same clients, even more money, to teach them this "new way of doing it... right".

Plus, they make a mint on the books they hastily write and push out.

I eagerly anticipate the successor to Craftsmanship.




I disagree that craftsmanship is a replacement for agile. To me, being a software craftsman is more about how you write and structure code. Agile is about figuring out what the end product should be and how you get there.


I don't mean to imply they are the same.

I'm reflecting on how the same "agile experts that sell/sold their services" quickly abandoned selling that and moved on to selling "craftsmanship" once their Agile well dried up.

cough Bob Martin cough cough Object Mentor cough


Depends on who you're talking about here. Bob Martin, one of the craftsmanship people, is very sincere in his desire to make the field better. I'm not sure about who's cropped up lately, though.

Also, it's very, very rare for somebody to make a mint on a software book. I've talked with a number of authors, and their universal view is that writing code pays much better than writing a book. You do it because you have something to say, not because you want to get rich.


> Bob Martin, one of the craftsmanship people, is very sincere in his desire to make the field better.

Unfortunately, you can be totally sincere in your good intentions, and yet still repeatedly be wrong. When you are a high profile figure who presumes to advise others on the best ways to do their job, that makes you a liability.

It's a shame. Some of Bob Martin's earlier work exploring OO and the SOLID principles was quite decent stuff. But I think it's obvious at this point that he and several of his colleagues at Object Mentor have collectively lost the plot.


Could be. I guess I'm not aware of the repeated wrongness on Object Mentor's part. Got links?

The big problems I saw, though, came from people who weren't particularly sincere. They were happy to sell whatever large companies were buying. E.g., two-day "Scrum Master" courses and a splash of Agile holy water to bless whatever top-down idiocy a company was already engaging in.


At this point, a comprehensive critique of Object Mentor would be more a case of writing a book than posting a few links. However, in the interests of not attacking them completely without justification:

- Object Mentor are big advocates of XP. The fundamental principle of XP is that if a certain practice is good, then doing more of it must be better. There is no logic in that position at all, and it doesn't stand up to even cursory criticism. Moreover, if XP is as superior to other processes as the typical advocacy quotes and statistics imply, how come organisations using XP aren't consistently reporting dramatically better measurable results, and how come so few software development groups have chosen to adopt it? Sooner or later, people notice that the emperor has no clothes. (I suspect this is why we now have Software Craftsmanship: it's a new positive-sounding but conveniently meaningless marketing term to pitch to clients.)

- Bob Martin has repeatedly stated that anyone who doesn't do TDD is unprofessional. Yet safety-critical software is typically not developed using TDD; in fact, formal methods, BDUF, and other very much not Agile processes are often used in such fields.

- Michael Feathers redefined the term "legacy code" in terms of unit tests. There are decades of research studying what actually causes a project to decay to the point that it is difficult to maintain and update. To my knowledge, a lack of unit tests has not yet been cited as a causal factor by any paper on the subject. (FWIW, I do think Feathers' book on the subject offered some interesting and worthwhile ideas; I just don't accept his premise that having unit tests is what defines whether code is legacy or not for practical maintenance/project management purposes. I think when you try to co-opt an ill-defined but commonly understood term and give it a formal definition that is very different to the mainstream concept, you lose some credibility.)

- Brett Schuchert, a man writing a book on C++, managed to make "Hello, world" take five source files and a makefile totalling nearly 100 lines, using TDD of course.

- Ron Jeffries. Sudoku. Probably enough said. TDD is not an alternative to understanding the problem and how you're going to solve it.

- From a post on the Object Mentor blog, Brett Schuchert apparently advocates pair programming based on a 1975 study of something involving two-person teams, a ten-year-old study of university students, and a couple of links to secondary sources. The original research behind almost every primary source he appeals to, directly or indirectly, is no longer available at the cited links less than 18 months later.

- Bob Martin thinks there are no more new kinds of programming language left to find. That's roughly on par with equating Haskell and Brainfuck because they're both Turing complete, and shows a complete lack of awareness of the state of the art.

- When it comes to the amount of up-front design and formal architecture that makes sense for a project, the amount of retconning in recent comments from the TDD guys is laughable. There was a particular interview featuring Bob Martin and Jim Coplien a couple of years back that was almost painful to watch.

I could go on, but if that lot doesn't paint a clear enough picture for anyone reading this, I don't have a powerful enough Kool-Aid antidote to help them.

I do agree with you about the insincerity. That's worse in theory, but unfortunately it's probably no less damaging in practice.

Edit: Here are a few links to support some of the points above.

http://www.infoq.com/interviews/coplien-martin-tdd

http://skillsmatter.com/podcast/agile-testing/bobs-last-lang...

http://ravimohan.blogspot.co.uk/2007/04/learning-from-sudoku...

http://schuchert.wikispaces.com/Tdd.HelloWorld.Cpp

http://blog.objectmentor.com/articles/2010/11/09/info-please...


You start out with a giant misunderstanding, which makes it hard for me to take the rest of your complaints seriously. Extreme Programming is not an arbitrary desire to turn all the knobs you can find to 11. It started as a question: what happens if we take certain practices that are good and do them more intensely? E.g. if some testing is good, what if we test pretty much everything?

That team found that they really liked turning particular practices way up. But you can't turn all the knobs up, so you are implicitly turning others down. E.g., if you turn up iteration speed, then you are turning down the sort of heavyweight waterfall requirements process ubiquitous at the time.

So the "extreme" was a way to explore the space of possible processes, not any sort of fundamental principle. Teams trying XP are explicitly encouraged to experiment similarly. I sure have; the process we use is derived from XP but departs from it in a number of areas.

I think a lot of the rest of your points are similar misunderstandings along with some cherry picking. E.g., the OM blog post on pairing. He said that people sometimes asked him for basic background materials, so he posted some links. To go from that to "Brett Schuchert apparently advocates pair programming based on.." is either very poor reading comprehension or the act of somebody with an axe to grind.

As to not doing TDD being unprofessional, I'd generally agree. I tried TDD first in 2001, and have worked on a number of code bases since. For any significant code base that's meant to last and be maintainable, I think it's irresponsible to not have a good unit test suite. I also think there's no more efficient way to get a solid suite than TDD.
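
For anyone reading along who hasn't seen the mechanics, here's a minimal sketch of the red-green-refactor loop (Python with the stdlib unittest module; the slugify example is purely illustrative, not taken from anyone's book):

    import unittest

    # Red: write a failing test first, pinning down the behaviour you want.
    class TestSlugify(unittest.TestCase):
        def test_lowercases_and_hyphenates(self):
            self.assertEqual(slugify("Hello World"), "hello-world")

        def test_strips_surrounding_whitespace(self):
            self.assertEqual(slugify("  trimmed  "), "trimmed")

    # Green: write the simplest code that passes. Then refactor with the
    # tests as a safety net, and repeat for the next behaviour.
    def slugify(text):
        return "-".join(text.lower().split())

    if __name__ == "__main__":
        unittest.main()

The suite you end up with is a by-product of that loop, which is why it tends to cover exactly the behaviour the code is actually relied on for.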

If you (or anybody) wants to discuss this further, probably better to email me; that's easy to find from my profile.


> Extreme Programming is not an arbitrary desire to turn all the knobs you can find to 11. It started as a question: what happens if we take certain practices that are good and do them more intensely? E.g. if some testing is good, what if we test pretty much everything? That team found that they really liked turning particular practices way up.

Well, of course they're entitled to their opinion, but that's all it is: an opinion. An argument that if some testing is good then test-driving everything must be better, or that if code review is good then full-time review via pair programming must be better, has no basis in logic. And those kinds of arguments go right back to the original book by Kent Beck, and they have been propagated by the XP consultancy crowd from the top right on down ever since.

IMHO, if a trainer is going to go around telling people that if they don't program a certain way then they are unprofessional, then that trainer had better have rock solid empirical data to back up his position. Maybe, as you say, I do have a giant misunderstanding, and in fact Object Mentor do make their case based on robust evidence rather than the sort of illogical arguments I've mentioned. In that case, I assume you can cite plenty of examples of this evidence-based approach in their published work, so we can all see it for ourselves. Go ahead; I'll wait.

> I think a lot of the rest of your points are similar misunderstandings along with some cherry picking. E.g., the OM blog post on pairing. He said that people sometimes asked him for basic background materials, so he posted some links. To go from that to "Brett Schuchert apparently advocates pair programming based on.." is either very poor reading comprehension or the act of somebody with an axe to grind.

This is a consultant who presumes to tell others how to do their job, openly posting to ask for any source material from others to back up his predetermined position, and then claiming in almost the very next sentence to favour material based on research or experience. He says that the links he gave (the ones where much of the original research is either clearly based on flawed-at-best methodologies or simply not there at all any more) are things he often cites. And he gives no indication, either in that post or anywhere else that I have seen, of having any library of other links to reports of properly conducted studies that support his position. I don't think criticism based on this kind of post is cherry-picking at all, but of course if it is, then again you should have no difficulty citing plenty of other material from the same consultant that is of better quality and supported by more robust evidence, to demonstrate that the post I picked on was an outlier.

The same goes for any of my other points. If you think I'm cherry-picking, all you have to do to prove it is give a few other examples that refute my point and show that the case I picked on was the exception and not the rule. If you can't do that -- and whether or not you choose to continue the debate here, you know whether you can do that -- then I think you have to accept that I'm not really cherry-picking at all.

> As to not doing TDD being unprofessional, I'd generally agree. I tried TDD first in 2001, and have worked on a number of code bases since. For any significant code base that's meant to last and be maintainable, I think it's irresponsible to not have a good unit test suite. I also think there's no more efficient way to get a solid suite than TDD.

Please note that I'm not disputing that an automated unit test suite can be a useful tool. On the contrary, in many contexts I think unit testing is valuable, and I have seen plenty of research that supports such a conclusion more widely than my inevitably limited personal experience.

On the other hand, I don't accept your premise about TDD. For one thing, TDD implies a lot more than merely the creation of unit tests. Among other things, I've worked on projects where bugs really could result in very bad things happening. You don't build that sort of software by trial and error. You have a very clear statement of requirements before you start, and you have a rigorous change request process if those requirements need to be updated over time. You might have formal models of your entire system, in which case you analyse your requirements and determine how to meet them at that level before you even start writing code. At the very least, you probably have your data structures and algorithms worked out in advance, and you get them peer reviewed, possibly by several reviewers looking from different perspectives. Your quality processes probably do involve some sort of formal code review and/or active walkthrough after the code is done, too.

If you came into an environment like that, and claimed that the only "professional" thing to do was to skip all that formal specification and up-front design and systematic modelling and structured peer review, and instead to make up a few test cases as you went along and trust that your code was OK as long as it passed them all, you would be laughed out of the building five minutes later. If you suggested that working in real time with one other developer was a substitute for independent peer review at a distance, they'd just chuck you right out the window to save time.

TDD is not an alternative to understanding the underlying problem you're trying to solve and knowing how to solve it. A test suite is not a substitute for a specification. Pair programming is not a substitute for formal peer review. They never have been, and they never can be.

I haven't gone into it here, but of course there are other areas where TDD simply doesn't work either. Unit testing is at its best when you're working with pure code and discrete inputs and outputs. It's much harder to TDD an algorithm with a continuous input and/or output space. Tell me, how would you test-drive a medical rendering system, which accepts data from a scanner and is required to display a 3D visualisation of parts of a human body based on the readings? Even if this particular example weren't potentially safety-critical, how would you even start to test-drive code where the input consists of thousands of data points, the processing consists of running complex algorithms to compute many more pieces of data, and the observable output is a visualisation of that computed data that varies in real time as the operator moves their "camera" around?
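
To be concrete about the difficulty: the nearest unit testing gets with that kind of code is spot-checking a few points within a tolerance, something like the sketch below. The opacity function is a made-up stand-in for real rendering maths, and notice how little these tests say about the image that actually appears on screen.

    import math
    import unittest

    def opacity(intensity):
        # Hypothetical transfer function: a scanner reading in [0, 1]
        # mapped to an opacity in [0, 1]. A stand-in, not real rendering code.
        return 1.0 - math.exp(-4.0 * intensity)

    class TestOpacity(unittest.TestCase):
        def test_spot_checks_within_tolerance(self):
            # A handful of points from a continuous input space; the
            # tolerance papers over the fact that we can't enumerate it.
            self.assertAlmostEqual(opacity(0.0), 0.0, places=6)
            self.assertAlmostEqual(opacity(0.5), 1.0 - math.exp(-2.0), places=6)

        def test_monotonic_on_a_coarse_sample(self):
            # A property-style check: denser input should never come out
            # more transparent, sampled at only eleven points.
            samples = [opacity(i / 10.0) for i in range(11)]
            self.assertEqual(samples, sorted(samples))

    if __name__ == "__main__":
        unittest.main()

None of that drives the design of the renderer itself, and none of it tells you whether the 3D visualisation a clinician sees is correct.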

> If you (or anybody) wants to discuss this further, probably better to email me; that's easy to find from my profile.

I appreciate the offer, but I prefer to keep debates that start on a public forum out in the open. That way everyone reading can examine any evidence provided for themselves and draw their own conclusions about which positions stand up to scrutiny.


> An argument that if some testing is good then test-driving everything must be better, or that if code review is good then full-time review via pair programming must be better, has no basis in logic.

That's not the argument at all. That is, as I just said, the reason they decided to try that. Their reasons for continuing to do it and further to recommend it are entirely different.

> [...] better have rock solid empirical data [...]

You do realize that almost everything that goes on in the industry is not based on rock-solid empirical evidence, right? And also, that you're privileging an arbitrary historical accident by saying that new thing X has to have evidence when the common practice doesn't?

> If you came into an environment like that, and the only "professional" thing to do was to [...] make up a few test cases as you went along and trust that your code was OK [...]

That is not something I have ever heard any Object Mentor person say, and it's not something I said. It's so far from anything I've ever heard somebody like Bob Martin or Kent Beck say that I have a hard time believing your misunderstanding isn't willful.

> I prefer to keep debates that start on a public forum out in the open.

Well, I'm not trying to have a debate. If you'd like to have one, you'll have to do it without me.


> Their reasons for continuing to do it and further to recommend it are entirely different.

So you keep saying. The problem is, almost everything Object Mentor advocate does seem to be based on some combination of their personal experience and pure faith. I object to someone telling me that my colleagues and I are "unprofessional" because we happen to believe differently, particularly when we do have measurable data that shows our performance is significantly better than the industry average.

> You do realize that almost everything that goes on in the industry is not based on rock-solid empirical evidence, right?

That may be so, but most people in the industry aren't telling me how to do my job, and insulting me for not believing the same things they do.

> That is not something I have ever heard any Object Mentor person say, and it's not something I said.

Good for you. XP consultants have been making essentially that argument, in public, for many years. TDD without any planning ahead is inherently a trial-and-error approach, which fails spectacularly in the absence of understanding, as Jeffries so wonderfully demonstrated. Plenty of consultants -- including some of those from Object Mentor -- have given various arbitrary amounts of time they think you should spend on forward planning before you dive into TDD and writing real code, and often those periods have been as short as half an hour. You may choose not to believe that if you wish. I'm not sure even they really believe it any more, as they've backpedalled and weasel-worded and retconned that whole issue repeatedly in their recent work. But I've read the articles and watched the videos and sat in the conference presentations and heard them make an argument with literally no more substance than what I wrote there.

You keep saying that I'm misunderstanding or cherry-picking evidence. Perhaps that is so and I really am missing something important in this whole discussion. However, as far as I can see, throughout this entire thread you haven't actually provided a single counterexample or alternative interpretation of the advice that consultants like those at Object Mentor routinely and publicly give. You're just saying I'm wrong because you say so, and there's not really anything I can say to answer that.


That's because you're trying to have a debate with the Object Mentor guys rather than a discussion with me. Your problem with them isn't my problem, and neither are your misunderstandings. It is not my job to argue you into a better understanding of something you clearly can't stand.


Thanks for putting the effort into this long reply; I've enjoyed reading it.


Definitely matches my experience.

And let me just leave this here... http://jamesshore.com/Blog/The-Decline-and-Fall-of-Agile.htm...



