Hacker News

Gah these articles and comment sections should come with a trigger warning for armchair speculation. I swear it's as frustrating as talking to flat earthers.

Estimation is actually well understood in academic project management. There is (Nobel Prize winning!) research about what, actually, are the problems inherent to estimation, and how to produce specific and accurate estimates despite them. This academic field is almost 50 years old, and no one who complains about estimation in blog posts or comments is aware of it.

Stop navel gazing and actually go READ about the subject. I know it's hard for us engineers to take in anything that doesn't come from StackExchange, but please try, BEFORE you write about your shitty experience with estimates and generalize to the entire problem space.

Here's what the research says:

- Humans are ALL bad at time estimation. Even the ones who consider themselves good at it, estimating tasks with which they are very familiar, "only" underestimate by 30% at best.

- Humans are pretty good at estimating non-time attributes of work, even attributes with a direct correlation to time, like effort, complexity, or "cups of coffee."

- If you estimate something with a time correlation (e.g. complexity) in a consistent way and measure the average throughput over time, you can estimate time to completion very precisely and accurately. This is the Law of Large Numbers, which is how casinos stay profitable while dealing with far more randomness than exists in software projects. It also means your estimates absorb unexpected complexity, personal issues, illness, Windows updates, etc. It's a statistical law.

- The accuracy of average time estimates is proportional to the time left on the project, which runs opposite to the uncertainty of distant features. I.e. this method does not predict how much you can build in a week; at that horizon you're better off with your relatively intimate knowledge of the feature and a gut check. Rather, it predicts how much you will build over 12 weeks, with extraordinary accuracy.

- Estimates are more understandable when presented with a confidence interval, e.g. "the work as we understand it today will take 8 weeks, with 95% confidence."
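To make the throughput idea concrete, here's a minimal sketch of the method the bullets describe. Everything in it is invented for illustration: the weekly throughput history, the remaining backlog size, and the 95% normal-approximation interval are all assumptions, not anyone's real data.

```python
import statistics

# Hypothetical history: complexity points completed per week by one team.
weekly_throughput = [8, 12, 9, 14, 7, 11, 10, 13, 9, 12]

remaining_points = 120  # complexity estimate for the remaining backlog

mean = statistics.mean(weekly_throughput)   # average points per week
sd = statistics.stdev(weekly_throughput)    # week-to-week spread
n = len(weekly_throughput)
se = sd / n ** 0.5                          # standard error of the mean

# Rough 95% confidence interval on throughput (normal approximation;
# with only 10 samples a t-interval would be slightly wider).
low, high = mean - 1.96 * se, mean + 1.96 * se

print(f"Estimated completion: {remaining_points / mean:.1f} weeks "
      f"(95% CI roughly {remaining_points / high:.1f}"
      f"-{remaining_points / low:.1f} weeks)")
```

Note that the time units only appear at the end, when dividing the backlog by measured throughput; nobody ever estimated a task in hours.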

What I HAVEN'T seen in the research, but which is undoubtedly true, is that most teams violate these fundamentals and then complain that estimates are useless.

Asking your team to estimate in time units IS useless. Adding up those time estimates to create a long term plan is doubly useless. Cracking the whip on them when their estimates prove inaccurate is triply useless. And complaining about it on the Internet because you've never read any of the grown up work on the subject... well that's Hacker News.




You are incredibly overstating the efficacy of that "knowledge". It is true that we have a body of research showing that using past task performance as a guide for future estimation is better than most methods. It still has huge error bars, and much of that research wasn't on software development. Software has a rather unique property of only requiring a specific task to be done once, ever. Software development has more in common with the planning stage of other fields than it does with the actual execution of tasks in those fields.

The "Law of Large Numbers" burns people constantly, and those that rely on it fail to understand the self-similar scaling of work and the long tail of distributions.
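A quick simulation of the long-tail point (illustrative assumptions throughout: task durations drawn from a fat-tailed lognormal with arbitrary parameters): when the distribution is heavy-tailed, the average of a short history sits below the true mean most of the time, so a "velocity" computed from a few recent tasks systematically underestimates.

```python
import random

random.seed(42)

# Hypothetical heavy-tailed task durations: most tasks finish fast,
# a few blow up by an order of magnitude.
def task_days():
    return random.lognormvariate(mu=1.0, sigma=1.5)

population = [task_days() for _ in range(100_000)]
true_mean = sum(population) / len(population)

# What a short history (a team's recent "velocity") actually samples.
def short_history_mean(n=10):
    return sum(task_days() for _ in range(n)) / n

samples = [short_history_mean() for _ in range(1_000)]
below = sum(m < true_mean for m in samples) / len(samples)

print(f"True mean: {true_mean:.1f} days")
print(f"Share of 10-task averages below the true mean: {below:.0%}")
```

The mean converges eventually, as the law promises, but "eventually" can be far more samples than a software project ever produces.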

This "grown up work" is old work that has been shown to apply poorly to software development, although I agree it is better than "break things down into tasks and then use your ego to estimate time for each," which is completely useless. But that is a low bar!

Probably the best I've seen (which was built on some of the research you cite) is Three Point Estimation (https://en.wikipedia.org/wiki/Three-point_estimation). It isn't particularly great either, but it's an OK mechanism for persuasion!
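For anyone unfamiliar, the PERT flavor of three-point estimation is just a weighted average of three guesses. A minimal sketch; the task and numbers are made up:

```python
# PERT three-point estimate: optimistic (a), most likely (m), pessimistic (b).
# Expected value E = (a + 4m + b) / 6, standard deviation (b - a) / 6.
def pert(a, m, b):
    expected = (a + 4 * m + b) / 6
    sd = (b - a) / 6
    return expected, sd

# Hypothetical task: best case 2 days, most likely 5, worst case 20.
e, sd = pert(2, 5, 20)
print(f"Expected: {e:.1f} days, sigma: {sd:.1f}")  # Expected: 7.0 days, sigma: 3.0
```

The persuasion value is mostly in forcing people to name a pessimistic case at all, which surfaces the long tail that a single-number estimate hides.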


> You are incredibly overstating the efficacy of that "knowledge"

He's also agreeing with almost all of the comments on here, right after saying that all the comments are wrong.


Do you have any books that you would recommend on the subject? I'm all for people being informed but I have looked for information myself sporadically over the last 10 years and it's a very sparse landscape from my point of view.


Start with Kahneman and Tversky's work on the Planning Fallacy [1], and follow the rabbit hole to Reference Class Forecasting [2]. I don't know about popular science books on project management, though, sorry. There's some good, readable material about Agile estimation as a group of practices, but you have to avoid the dogmatic stuff from so many of the specific implementations (I'm looking at you, scrum fetishists). Any of the Agile Manifesto authors are a good bet, since they've been around to watch various implementations come up. Martin Fowler, Uncle Bob, etc. Whatever you think about their code advice, their estimation advice is worth listening to.

[1] https://en.m.wikipedia.org/wiki/Planning_fallacy

[2] https://en.m.wikipedia.org/wiki/Reference_class_forecasting

Relevant papers:

Buehler, Roger; Griffin, Dale; Ross, Michael (1994). "Exploring the 'planning fallacy': Why people underestimate their task completion times". Journal of Personality and Social Psychology. 67 (3)

Kahneman, Daniel; Tversky, Amos (1982). "Intuitive prediction: Biases and corrective procedures". In Kahneman, Daniel; Slovic, Paul; Tversky, Amos (eds.). Judgment Under Uncertainty: Heuristics and Biases. Cambridge University Press

Kahneman, Daniel; Tversky, Amos (1979). "Prospect Theory: An Analysis of Decision under Risk". Econometrica. 47 (2)

Flyvbjerg, Bent (2006). "From Nobel Prize to Project Management: Getting Risks Right". Project Management Journal. 37 (3)

Hope this helps!



