A manager went to the Master Programmer and showed him the requirements document for a new application. The manager asked the Master: "How long will it take to design this system if I assign five programmers to it?"
"It will take one year," said the Master promptly.
"But we need this system immediately or even sooner! How long will it take if I assign ten programmers to it?"
The Master Programmer frowned. "In that case, it will take two years."
"And what if I assign a hundred programmers to it?"
The Master Programmer shrugged. "Then the design will never be completed," he said.
It is a very humbling experience to make a multimillion-dollar mistake, but
it is also very memorable. I vividly recall the night we decided how to
organize the actual writing of external specifications for OS/360. The
manager of architecture, the manager of control program implementation, and
I were threshing out the plan, schedule, and division of responsibilities.
The architecture manager had 10 good men. He asserted that they
could write the specifications and do it right. It would take ten months,
three more than the schedule allowed.
The control program manager had 150 men. He asserted that they
could prepare the specifications, with the architecture team coordinating;
it would be well-done and practical, and he could do it on schedule.
Furthermore, if the architecture team did it, his 150 men would sit twiddling
their thumbs for ten months.
To this the architecture manager responded that if I gave the control
program team the responsibility, the result would not in fact be on time,
but would also be three months late, and of much lower quality. I did, and
it was. He was right on both counts. Moreover, the lack of conceptual
integrity made the system far more costly to build and change, and I would
estimate that it added a year to debugging time.
That sounds specious - a grass-is-greener fallacy. The evidence was that the CP team failed at the job. But where is the evidence that the architecture team would have succeeded?
The general suspicion of TMMM is that it extrapolates from failure but assumes that some untried alternative would have been better.
> That sounds specious - a grass-is-greener fallacy. The evidence was that the CP team failed at the job. But where is the evidence that the architecture team would have succeeded?
Anybody who has worked on large software projects for a couple of decades doesn't need any evidence, because they have seen this play out time and again. Brooks doubly so -- he literally wrote the book on software development timelines.
Sure, technically you're right.
But it's not like every piece of knowledge needs to come packaged in fancy LaTeX with confidence intervals and control groups. Sometimes experience alone is enough to assure us that rain is wet and that a team of 150, compared to a team of 10, will invariably fail in this way when assigned such a task.
Seems like a good way to go would be 15 isolated teams of 10. Some teams might make the schedule; pick the best project at that point. Big bonuses for the people who get it done, and extra for the final winners. It looks like a huge waste of effort, but it may be the way to go. This would be similar to what happens when a company just buys a small startup that has created what they need.
Fred Brooks probably had reasons to think the untried alternative would have been better: his experience, hindsight from later cases, and also the architecture manager's predictions, which did turn out to be correct.
When someone makes three predictions, two turn out to be correct, and the third ends up untried, we would do well to think twice before dismissing that third prediction.
Why didn't he work internally to prevent these issues? It sounds like he was pleased to let them fail, so he could be correct with no effort.
Why are we taking advice from someone who lets others fail so he and his team look better? This kind of internal competition destroyed Motorola and damages many other organizations.
> Why didn't he work internally to prevent these issues?
What makes you think he didn't?
> Why are we taking advice from someone who lets others fail so he and his team look better?
From the quote, Fred Brooks was clearly the boss of both the architecture manager and the control program manager. He wasn't part of any one team. I don't see the kind of conflict of interest you're hinting at.
> It sounds like he was pleased to let them fail, so he could be correct with no effort.
To me, it sounds like he simply made a bad call, and was saying "oops".
Let's make "better" the enemy of "good enough". Then we can blame anything on the CP team, making the architecture team look better without any effort or proof on their part.
Where is the koan where the Master tells the manager to hire better programmers? The Master doesn't, because it's bad advice to chase waterfalls, unicorns, and 10x devs.
I see what you're saying, but the alternative would be to say that you could never learn anything from real-life in-the-trenches experience, because business organizations rarely if ever do something two different ways.
Management at one of my companies actually used this metaphor (expressed as below), along with another favorite: "If you put lipstick on a pig, it is still a pig."
"If it takes nine months for a woman to deliver a baby, getting nine women will not deliver a baby in one month."
That's the famous definition of a project manager :). The person who thinks that if a woman can deliver a baby in 9 months then 9 women should be able to do it in 1. It's the "throw more men at it" strategy.
Years ago, an investor and I had about the same conversation. He was adamant about needing a spreadsheet that would allow him to model hires vs. cost vs. time.
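For what it's worth, such a spreadsheet is easy to build and usually wrong, because it assumes output scales linearly with headcount. Here's a minimal toy model (all numbers invented, not Brooks' actual data) that adds pairwise communication overhead in the spirit of TMMM's n(n-1)/2 intercommunication formula:

```python
# Toy model: a fixed pile of perfectly partitionable work, minus a
# coordination tax that grows with the number of pairs, n(n-1)/2.
def months_to_finish(n_people, work_person_months=100, overhead_per_pair=0.05):
    channels = n_people * (n_people - 1) / 2
    # Net output in person-months per calendar month.
    effective_output = n_people - overhead_per_pair * channels
    if effective_output <= 0:
        return float("inf")  # everyone's time goes to coordinating
    return work_person_months / effective_output

for n in (5, 10, 20, 100):
    print(f"{n:>3} people: {months_to_finish(n):.1f} months")
# 5 -> 22.2, 10 -> 12.9, 20 -> 9.5, 100 -> inf
```

With these made-up constants, 20 people is about the sweet spot; at 100, coordination eats all the output, which is the koan's "never completed" in spreadsheet form.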
Fortunately not exactly. It was such a massive project that everyone from my current team touched (or actually was touched by) it - including our current tech lead who had the misfortune of being assigned there briefly.
The topic came up in one of the introductory meetings and we shared our respective theories.
My take was that had this project had half the staff, it would have taken half the time to complete.
All in all there were over twenty people there doing daily standups.
Then again, the Master Programmer could have assigned 5 people to this project and the other 5 to different tasks (hello, more detailed tests, or research frameworks, or even another team doing the same task independently), finishing the whole thing in one year with those 10 people.
So maybe it is more a story about how we fail when we think about organization and negotiation. (The Master Programmer making such a sure estimate so easily suggests they won't be on time anyway.)
I like the article. I think (project) management in general could learn a lot from computer science. People working on operating systems figured out solutions to a lot of these problems, like scheduling.
Another similar penalty with humans, one that is often ignored, is the caching of skills in working memory.
For example, consider a process, like filing an expense report, that is really simple. Because it's simple, it might be tempting not to have a specialized person do it and instead have every person do it on an as-needed basis. But then, if it's done only rarely, people will have to relearn it each time, or ask somebody how to do it, spending a lot more time on it due to what are pretty much cache misses. It would be more efficient to have a specialist do it, because then he would do it every day and have all the process corner cases in working memory.
A similar caching problem happens when, say, a programmer multitasks on several different things at a time. In that case, the cache (working memory) is completely thrashed every time a task is switched.
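To push the analogy, here's a minimal sketch (the tasks, capacity, and counts are invented for illustration) that models working memory as a small LRU cache and compares a specialist against someone who rotates through many duties:

```python
from collections import OrderedDict

class WorkingMemory:
    """Toy LRU cache standing in for one person's working memory."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.slots = OrderedDict()
        self.hits = 0
        self.misses = 0

    def recall(self, skill):
        if skill in self.slots:
            self.hits += 1
            self.slots.move_to_end(skill)       # recently used stays warm
        else:
            self.misses += 1                    # relearning: the expensive path
            if len(self.slots) >= self.capacity:
                self.slots.popitem(last=False)  # evict the stalest skill
            self.slots[skill] = True

# Specialist: the same narrow task every day -> one cold start, then all hits.
specialist = WorkingMemory(capacity=3)
for _ in range(100):
    specialist.recall("expense-report")

# Generalist: cycles through five duties with room for three -> every
# recall is a miss, the LRU worst case for a cyclic access pattern.
generalist = WorkingMemory(capacity=3)
duties = ["expenses", "deploys", "reviews", "hiring", "budget"]
for day in range(100):
    generalist.recall(duties[day % len(duties)])

print("specialist:", specialist.hits, "hits,", specialist.misses, "misses")  # 99, 1
print("generalist:", generalist.hits, "hits,", generalist.misses, "misses")  # 0, 100
```

The generalist's access pattern is the classic LRU worst case: each duty is evicted just before it's needed again, so nothing ever stays cached.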
> For example, consider a process, like filing an expense report, that is really simple. Because it's simple, it might be tempting not to have a specialized person do it and instead have every person do it on an as-needed basis. But then, if it's done only rarely, people will have to relearn it each time, or ask somebody how to do it, spending a lot more time on it due to what are pretty much cache misses. It would be more efficient to have a specialist do it, because then he would do it every day and have all the process corner cases in working memory.
This is true, but it's important to remember that people aren't machines. Going too far down the division-of-labour route risks losing sight of what you're actually trying to achieve. Certainly in my case, that makes it a lot harder to perform my best, and it reduces the chance of spotting different ways to slice-and-dice the problem.
I was in an organization where they decided to cut the "indirect labor" cost, which for some reason included secretaries but not engineers. With fewer secretaries, the engineers now had to do more of the tasks the secretaries used to do for them. So instead of one secretary doing travel expense reports for a couple dozen engineers, and doing them efficiently and well, you had a couple dozen engineers who don't do it often enough to know how, doing it badly and over a much longer (and more highly compensated) time. Multiply this by many different kinds of administrative tasks.
"It will take one year," said the Master promptly.
"But we need this system immediately or even sooner! How long will it take if I assign ten programmers to it?"
The Master Programmer frowned. "In that case, it will take two years."
"And what if I assign a hundred programmers to it?"
The Master Programmer shrugged. "Then the design will never be completed," he said.