Estimating Software Projects: So you messed up. Now what? (2021) (jacobian.org)



> My suggestion: if a single week of overtime will help you meet a critical deadline, and you can promise your team it’ll be an extremely rare occurrence (like, once a year, at most), consider asking for overtime. Otherwise, just take your lumps and accept the late project.

I am of the strong opinion that there are very few projects where the benefit of a couple more hours of work per day from your employees is worth the cost to morale.

Project gets out tomorrow versus a month from now? Will anyone still care two years from now? No? Wasn’t worth it.

The human propensity for screaming “I want it now!” about things that don’t really matter never fails to frustrate me.

And please don’t get me wrong. There are certainly cases where people’s jobs or the entire company hang in the balance. That is a legitimate reason for overtime. But there are also cases like the UI redesign with no effect on the bottom line that prompted the last overtime I worked, and in my experience that kind is far more common.


> Project gets out tomorrow versus a month from now? Will anyone still care two years from now? No? Wasn’t worth it.

Depends on the industry and the circumstances.

If this is a company shipping their own software product and the deadlines are arbitrary, then of course they should be a little flexible.

But other software development is part of something bigger and/or accountable to third parties who are depending on it. If your software product must be ready for a scheduled run of a hardware build, a launch event that was planned months in advance, a date agreed upon with customers in a contract, or any other circumstance with real dependencies then getting it done on time is crucial.

> The human propensity for screaming “I want it now!” about things that don’t really matter never fails to frustrate me.

In my experience, people who have only worked in the first kind of environment (deadlines are arbitrary) are the most vocal about insisting that estimation is impossible or that deadlines shouldn’t exist. When they encounter a real deadline with real-world consequences, they struggle to track their own progress and have to do a lot of crunch mode in the last few weeks.

People who have worked in the second type of environment (deadlines are real and have consequences) become quite good at estimating and managing progress toward deadlines without crunch mode at the end. They become good at communicating issues with deadlines far in advance to avoid surprises.

I’ve worked a lot in the second environment. Eventually I learned that it’s important to screen for attitudes toward deadlines because many software devs have only worked in environments where they can deliver whenever it’s ready as opposed to working toward a target with a team. It’s a shock when they’re dropped into an environment where delivery dates actually matter, as opposed to being something a PM made up.


This was also my experience in the Nuclear Navy, wholly disconnected from software development. I was my division’s maintenance planner, among other things, and so I needed to plan when we would accomplish preventative maintenance on approximately 20 different systems. Planning was usually done a quarter at a time, but there were semi-annual, annual, and greater maintenance items you also had to take into account. Further complicating things, many of the items had specific plant criteria to be met (some are easy, like “the reactor must be shut down”), some of which would only happen once in a given quarter at best. Finally, if you wanted to maximize efficiency, you should also carefully read every maintenance item to determine where there was crossover, so you could use one to partially set up for another.

I made a huge dependency table in Microsoft Project, and generated Gantt charts from that. It worked flawlessly. A few key differences on the people side compared to software:

* No one insisted that I plan for a nominal person’s abilities. If Petty Officer Schmuckatelli happened to be a savant at a specific maintenance item, guess what, he’s always getting assigned that one. Others would assist (nuclear maintenance is always a two-person operation), but getting the job done correctly and efficiently was priority one.

* I didn’t have to get consensus from a team on how long everything should take. This one is of course quite different from software, in that a lot of software work is novel to an extent; on the other hand, my submarine was of a new class, so there wasn’t a lot of institutional knowledge to fall back on.

* Accountability is actually a thing. If you say something has been done, it’d damn well better be done. If you make a mistake, there is an all stop while you convene a meeting of god and country to hold a retrospective (though we call them critiques). There will be consequences. Not career-ending or anything – it’s quite common to cause a critique or three – but you also don’t get to just skip on out. They’re not blameful in that the point isn’t simply to say, “why do you suck,” but they’re also not oblivious to the fact that often, a person’s inattentiveness is in fact the root cause.


I truly wish that teams used retrospectives in this manner instead of the shallow conversations that are often devoid of blame. Not that every sprint needs to have a deep, impactful retro, but it's nice to talk about shortcomings, be able to accept blame, and move forward with actionable items. Everyone comes out ahead.


Assigning (and accepting) responsibility is, I think, rather different from the "blame" that such retros try to avoid.

You have to have a very safe work environment for that to happen. Otherwise, a remark like "I was blocked by person X" finding the wrong ears is a great way for person X to wind up on a PIP, if the company's management is sufficiently inept.

I do miss being able to be honest at work.

Edit: rereading my comment, I realized I could have been clearer. What I really was getting at was the motivation: how do we help each other avoid the problem in the future, versus "everyone but person X can feel okay about what happened".


Same with me. By blame, I really meant responsibility.

We missed a pretty big deadline not too long ago. In the retro, I accepted a lion's share of the responsibility. I'm a senior dev rather than a team lead, but I had my head down during a lot of the technical discussions and PRs rather than diving deep with the juniors and mid-levels. I apologized for it.

I have made myself more available since then. PRs have been under more scrutiny from everyone, and I feel like the constructive feedback and requests for comments have been flowing more openly.


Fully agree. My wife is a Montessori teacher, and her observation has been that people are afraid of failure, even in an environment where failure is an expected part of learning. There are some cultural differences, but she says that it’s a fairly accurate baseline.

There’s an excellent talk [0] that discusses why and how blameless has missed the mark. Somewhat compressed version of the same talk by the same person here [1].

[0]: https://youtu.be/gfINfi2K1lE

[1]: https://youtu.be/l2CdjZUAmb0


Good points, but sometimes it's better for the organization to assign a less productive employee to a certain task for the sake of cross training. Petty Officer Schmuckatelli won't be around forever and so you need to have a capable backup.


Yes. As I alluded to, there is cross-training; it’s just that, given the option, the more competent people will always be the lead on that job.

The model doesn’t translate well to civilian work, I think, largely because of contracts and guaranteed knowledge. In the Navy, you’re practically guaranteed that a given person will be attached to your command for at least 4 years, and their departure is planned well in advance. This gives you plenty of time to find people’s competencies and develop them.

The guaranteed knowledge comes in from the team structure – every division (team) has a Chief, a senior enlisted person who is generally assumed to have seen everything. If they haven’t, and an issue pops up, they will find how to work the problem. Intrinsic to this assumption is the fact that commands aren’t competitors (not officially, anyway), and you can easily ask a sister boat how they did X.


Estimating in the "second type of environment" is also bogus. The existence of some legal or regulatory liability doesn't really make estimates in that environment much better.

The shining example of this is the Big Dig in Boston, MA: years late and 7-fold over budget. On the software side, the initial launch of the Obamacare website was a disaster. Technically not late, but severely badly designed and deployed. I know first hand that many government research grants (NSF, SBIR, ...) are chronically late or under-delivered.

So I don't buy the assertion that some environments somehow make estimating any better. Humans are routinely bad at estimating, but mostly because of social and cultural reasons rather than technical or engineering reasons.


I won't attempt to excuse the failures on the Big Dig project, but the reality of politics and government budgeting is that project supporters are often forced to give unrealistically low estimates in order to get anything approved at all. Then, once the inevitable delays and budget overruns start, the project has already taken on a life of its own and becomes impossible to cancel. And in the end the stakeholders are happy to have the new tunnel or fighter plane or whatever, even if the final cost was way higher than they ever expected. It's a dysfunctional process, but there doesn't seem to be any other way to accomplish huge audacious goals.

I wonder how late and over budget the Great Pyramid was?


"but mostly because of social and cultural reasons rather than technical or engineering reasons"

Can you please expand on why that is the case? Also, are there ways to improve estimation skills?


Inability to say no and push back. I am guilty of it myself.

Most experienced software developers know in their gut a lower bound for a project’s timeline, which always exceeds the timeline in managers’ or PMs’ minds. When the VP asks how long a project will take, it is hard not to say something optimistic. Or when your release timeline is every six months, it is hard to say a project will take more than a release cycle. These are social issues more than technical ones. I know a mostly accurate and simple estimation approach, which is roughly the Pareto Principle: make a good effort to build a decent prototype and multiply the time it took by 4 to get an estimate. But of course, good luck selling this to your management!

My advice to improve estimations would be around:

- Building a prototype before making any estimations. Software is in a lucky position where you can build prototypes before the actual thing.

- Applying the Pareto Principle, if you can. The reason for multiplying by 3-4 is that you have to account for research, design reviews, implementation, testing, bug fixes, documentation, internationalization, etc. (see the sketch below).
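
A minimal Python sketch of that prototype-multiplier heuristic; the prototype time, the 4x factor, and the activity split are illustrative assumptions, not measurements:

    # Hypothetical numbers: a decent prototype took 2 weeks, and the
    # prototype is assumed to be ~1/4 of the real work.
    PROTOTYPE_WEEKS = 2.0
    MULTIPLIER = 4.0

    estimate_weeks = PROTOTYPE_WEEKS * MULTIPLIER

    # An assumed split of the non-prototype remainder:
    remainder = estimate_weeks - PROTOTYPE_WEEKS
    for activity, share in {
        "research & design reviews": 0.25,
        "production implementation": 0.30,
        "testing & bug fixes": 0.25,
        "docs & internationalization": 0.20,
    }.items():
        print(f"{activity}: {share * remainder:.1f} weeks")

    print(f"total estimate: {estimate_weeks:.0f} weeks")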


Financial and rigid business incentives (and, yes, regulatory requirements with teeth) are good forcing functions. I’m not sure “bureaucratic tangles” like the Big Dig would qualify. I’ll call this “tightness” in a program. Category two has tighter constraints than category one.

I’ll give an example from my current job (where I’ve been more than a decade, which is crazy for Silicon Valley), where we make small devices (e.g., tablets, smart plugs, HDMI media players) and large devices (e.g., 75” televisions) as well as software-only features and products. If software slips, we might miss some business targets, so software’s tightness is highly variable.

If we move on to small electronics: in a pinch, we can change initial shipment from ocean to air. This can buy 2-3 weeks of made-up time, but at a cost of a few dollars per unit. The initial few hundred thousand devices might cost more to deliver if we slip, so we have more schedule tightness than pure software. There are de facto baseline business constraints.

And then we move on to big devices like 75” TVs. It is cost prohibitive to air-ship TVs, so we have to get at least the things necessary for initial use and updating ready in time for ocean shipment. Additionally, since large televisions are generally made where their panels are made, and they are generally sold in brick-and-mortar stores that also serve as local warehouses (where space is planned in advance), there are commitments in manufacturing, logistics, and retail. TVs, by nature, have more schedule tightness.

In general, what I’ve observed with these sorts of constraints is that the added known downstream impacts of a program tend to call for greater scrutiny and inspection up front, leading to one of my most often cited rules of executive management: “no surprises”.

I agree that there are folks who understand schedule tightness and folks who don’t (or at least, don’t yet). I think it’s less about a specific characteristic and more about broader downstream impact.

I’ll note that I’ve seen people from completely different domains (e.g., automotive, aviation, military, software, cloud services, media) come in and be great at this or flub it, whether they had experience in our domain or not. Planning is a skill of its own.


I feel like you are cherry-picking extremely high-profile projects that you were personally not involved in, so:

(1) Sometimes estimates do end up wrong and mistakes happen, so being able to pick any example and choose only the ones that failed is disingenuous.

(2) Picking someone else’s projects means you have absolutely no insight into the process. Sometimes I will very intentionally under- or over-estimate a deadline for political (or really practical) reasons, and someone on the outside will think I messed up. The projects you picked are extremely high profile and have to contend with a lot of political forces; they are probably the worst examples to pick.


I picked those projects because they were high-profile. But I'll give you another example that I am intimately involved with.

Take the software/web accessibility remediation mess, where universities and companies are being sued left and right for ADA (Americans with Disabilities Act) violations. These organizations know the real threat of legal liability, start projects to improve the accessibility of their products (web sites, etc.), and routinely blow their project timelines, only to start new projects. So the concept of "deadlines are real and have consequences" firmly applies here. Yet NONE of these projects are completed on time. There are many reasons for these project failures, but my original argument still stands in this case: "Estimating in the second type of environment is also bogus".


Bent Flyvbjerg calls (2) the "sublime". He points out that disastrous megaprojects get authorized because folks with the money and authority simply ignore the haters.

https://en.wikipedia.org/wiki/Bent_Flyvbjerg#Megaproject_pla...


When working in an environment where there are serious consequences to being late, everyone has an easier time accepting that hard choices have to be made to get a solution out the door before the deadline. The paradigm shifts from “we want this particular solution, how long will it take?” to “we have this hard deadline, how can we solve it?”.


It cuts both ways. In my experience working in the first environment, where deadlines are arbitrary, managers and leaders do not do the work to decide what is important or what it means to “be done”. You can start out with the prior that deadlines are important and discover that there are activities that may need to be done that will blow your deadline. You can seek clarification on what should be done, but be told any of “don’t tell me the details”, “figure it out”, “I trust you to make the right decision”, etc.

In environments where deadlines really matter, managers are much more willing to make decisions about scope and control it closely and work with their colleagues to define a done state, because it actually matters.


> People who have worked in the second type of environment (deadlines are real and have consequences) become quite good at estimating and managing progress toward deadlines without crunch mode at the end.

I don't know. In my experience, estimation is one of the most difficult aspects of software engineering. A black swan event or some unknown unknown will blow apart your thoughtful estimate (especially near the end of a development cycle) - no matter how good you are at it. I'd much rather have a crystal ball at hand, and let the chips fall where they may, than engage in a herculean struggle to accurately predict the future.



> People who have worked in the second type of environment (deadlines are real and have consequences) become quite good at estimating and managing progress toward deadlines without crunch mode at the end. They become good at communicating issues with deadlines far in advance to avoid surprises.

So, incentives matter - shocking, just shocking.

Or maybe the takeaway is that if there is no incentive, someone made it up and it doesn't matter.


> if a single week of overtime will help you meet a critical deadline

In my experience, every unit of overtime is followed by a larger unit of "undertime" where the team (usually subconsciously) goes through a period of low productivity. The ratio is often 1:2 or worse, so a week of overtime can result in two weeks of undertime.

I present this as a fact when project sponsors request that the team work OT so that they're aware that they're trading future productivity at a loss for productivity today.
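
As a back-of-the-envelope Python sketch of that trade (all numbers hypothetical, assuming a ~15% productivity drag during the undertime weeks):

    # One week of overtime (+10 hours) followed by two weeks of
    # "undertime" at ~85% productivity, vs. three normal weeks.
    BASELINE = 40

    with_ot = (BASELINE + 10) + 2 * (BASELINE * 0.85)  # 50 + 68 = 118
    without_ot = 3 * BASELINE                          # 120

    print(f"{with_ot:.0f} vs {without_ot} hours")  # the OT week nets out to a loss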


Yeah, I think "If we don't complete by XX we will be in breach of Regulation YY and will have to admit this to the SEC" is probably worth a bit of extra effort.

"This milestone will go Red on my RAG report and make me look bad to my manager" - less so.

But like anything else, hopefully there is give and take. "If you finish this version on time to help me out, we'll put half the story points in the next iteration so you can leave at 5/pay down tech debt/whatever."


> Project gets out tomorrow versus a month from now? Will anyone still care two years from now? No? Wasn’t worth it.

1. TurboTax

2. A game or other software that needs to get out before Christmas

3. A dependency is being shut down (https://www.honeycode.aws/)


The "No?" portion of the hypothetical you quoted is designed to filter out your examples. A few lines further down, the comment clearly addresses the need for OT in critical situations.


Fair

Side note: I was actually at AWS when Honeycode was introduced and on a team working on a popular open source “AWS Solution”. The Honeycode team wanted us to integrate with it and I vehemently opposed the idea. I said wait until it gains traction.

Luckily, I won that fight.


It will never be "once a year" once you do it once.

Once you start doing it, mandatory overtime will become a tool for making deadlines.


Some things may be urgent, but most deadlines are bogus, created by sales and managers promising something to someone. And everything revolves around that, creating a toxic blame culture. Looking at you, Scrum!


Sometimes the choice for salespeople is between promising to deliver by a certain deadline or losing the sale. Some companies simply aren't in the financial position to turn away revenue and still survive. If you don't want to ever be in that position, then make sure to only work for companies that are already financially secure.


It depends on commercial pressure and on how overtime is compensated.

Early in my career our main customer put huge pressure on the company to fix bugs ASAP and we had to work 2 weeks flat out with overtime late every night plus over the weekend.

We managed to satisfy the customer AND I doubled my salary that month.

If it is a rare event then I have no issue with that. It has no negative effect on morale (it can even be a boost and improve team spirit), and in fact I would be happy to get the opportunity.

It also helped that we had a good leader: our Director could not help on the debugging but made a point to stay with us and took charge of making sure we were kept well fed, paid for by the company, of course.

Leadership and compensation are key.


Sadly it feels that overtime has been eliminated from the industry. Every contract I've ever signed (and this is in the supposedly more worker-friendly EU) has had text along the lines of 'Your working hours are 40h a week. Occasionally you may be required to work more. Overtime is not compensated.'

The best I've had in a workplace is an informal policy of time-off-in-lieu for extra hours worked.


Is not compensating overtime even legal anywhere in the EU?


From what the Wikipedia page on overtime says about the EU (https://en.wikipedia.org/wiki/Overtime#European_Union), it seems that the laws are very concerned with working hours, but have nothing to say about whether overtime must be paid.


It is in Denmark, my contract also states that I work 40 hours a week with uncompensated overtime. The only rules creating a cap on my working hours are the EU's, with 48 hours per week on average in a 4 month period.


I agree, and would add the following criterion to identify the few rare cases where it is worth it to meet a hard deadline and cutting scope is not an option.

Specifically, if meeting the deadline is so important that it justifies unplanned overtime or a death march, then leadership should be happy to pay the team a one-time special bonus equal to 2-5x the time spent or even more.

If, faced with this choice, leadership balks at the idea, that's a very good sign that a death march is unwarranted.


This depends. SaaS software with a sales window you need to make is way different than random software that doesn’t really need to go out at a certain time.

Certain industries may have a few month sales window, and missing it could cost many millions against competition.


I'm dealing with another common version of this right now. Another contractor (not from my company) had estimated a certain amount of time to fix everything and add some updates to code they had developed. Our mutual client didn't want to renew with them, so they asked if we could do it. But lo and behold, the deadline was two days away, so the sooner the better. I cut the estimate in half but said it wouldn't include fixing anything: since they were presumably living with the existing bugs in production, they could keep living with them if the new features were the priority.

I did say that we would have to pay the piper eventually and that this rushed timeline would mean we would miss things and that it was only possible if we had everything in place to do the development work. Things like access, correct data, properly setup environment etc.

Of course we started development and had none of those things. I did get it out on time according to my shortened estimate. But then they discovered that they absolutely did need some fixes, and that the assumptions the dev team had to make because we didn't have real data were wrong, so the whole thing was delayed by about as much time as the original estimate, with a whole bunch of hand-wringing and people thrashing about because of "slipped timelines".

And I knew this going into it. No one ever hears "but we will have problems, I just don't know what they will be." They just hear "I can get it to you in 3 weeks."


A good UI redesign should have an impact on the bottom line.


In my experience, software estimation's biggest problem is piss-poor management. A few years ago, I worked at a company that needed to move off of a legacy system. They had 5 years notice and the contract from the old legacy system had a hard stop (IE: after this date, we could no longer use the system).

This system was running our entire business and had lots of moving parts. Management dragged their feet until there was 1 year to go. Our entire team ended up working nights, weekends, and holidays from August to January. Most nights didn't end until 1am, and we were expected to come in the next day at 8am.

We would have worked Christmas day, but the director thought the company would get sued because it was a religious holiday. Management didn't have to work any of these long hours.

There also wasn't any automated testing (and the company didn't want to dedicate any resources to it), so you needed to send in these hundred-column .csv files with various pieces of information on them and hope that, after you got it all working correctly, the next change to the system didn't break everything (this happened constantly, which also contributed to the longer hours).

The deadline was overshot by about a month, but they somehow were able to get a contract extension.

Management said this wouldn't happen again, but of course, they tried to do it again with a different project. At this point, I left.


That's my experience as well.

Most deadlines are bogus. The ones that are real are missed because management didn't fund the project appropriately, or started the project way too late.

I've worked in too many organizations where I was asked to give an estimate, and then asked to give another because they didn't like the number.


In which the industry continues to miss the core issue of estimation: task complexity is the shoreline paradox. The reason people fail to communicate lateness is the same reason they make "poor" estimates to begin with. You cannot "realise you are late" any more than you can "realise your estimate is wrong" before the task is complete. The hard part is deciding at what level of internal risk/concern you feel you need to raise the issue and suggest that the deadline is likely to be missed.

The lower a task’s fractal dimension, the easier it is to automate, and the low-dimension work has largely been automated away already. Therefore, the majority of ticketed engineering work has a high fractal dimension. That is, much of its complexity is unforeseen and unknown. It follows that estimates will be inaccurate.

This is the reason people can continue to shill techniques, tools, and processes that claim to solve this problem. I propose leaning into this shoreline model: accept that your tasks have a high fractal dimension, and use a multiplicative factor on all your estimates.
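
For the mathematically inclined, a toy Python sketch of the Richardson shoreline relation this leans on; the fractal dimension and resolutions below are made-up illustrative values:

    # Richardson relation: visible "length" L(eps) = C * eps^(1 - D).
    # For D > 1, more effort becomes visible the closer you look.
    def visible_effort(resolution, d, c=1.0):
        return c * resolution ** (1 - d)

    planning = visible_effort(1.0, d=1.25)   # coarse view at estimation time
    doing = visible_effort(0.01, d=1.25)     # fine view while doing the work

    print(f"multiplier: {doing / planning:.1f}x")  # ~3.2x, the factor above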

Alas, this boring solution is both too mathematically motivated and sounds too much like an admission of failure (or of "messing up") to be accepted anywhere I've worked.


While it's accurate that task complexity is the shoreline paradox, it also has the same obvious solution as the shoreline paradox in day-to-day scenarios: Agree a resolution ahead of time and discuss the measurement in those terms.

Nobody wants to know to-the-second when a project's Jira board will be complete, in the same way nobody wants to know to the atom-width how long the shoreline of Britain is. They want to know if the project is going to finish it in-or-near the last week of Q3, they want to know how many meters the shoreline is.


The problem with the shoreline paradox isn't just about resolution. It's about the unique geological features _within_ that resolution. I live near the Forth river. At a low resolution, the distance from my location to a location on the bank of the other side of the river will look very short. But at a higher resolution, the distance is an order of magnitude larger. My original estimate at the chosen resolution was way off!

Framing the issue as one only of resolution misses my point. There is a degree to which setbacks are completely unknowable without actually performing the task. People need to learn to embrace this. I mean, ignore my warning at your peril: you can keep claiming you simply need to take better measurements, make more accurate estimates, whatever. But the next time you get it way off, remember this conversation :).


To continue with your metaphor using the Forth, if your project is to get from North Berwick to St Andrews, and you chose to estimate it "as the crow flies", your result would be about 20-30km. Whereas, if you chose to traverse the Forth up to Stirling, then return along the northern coastline, you'd be looking at 200km.

The Firth of Forth is visible with the naked eye from the ISS though. So picking a resolution that misses that geological feature and making an estimate for crossing the Forth that's out by an order of magnitude would be transparently and self-evidently incorrect... Particularly if you and your team are on the bank of the Forth when you make the estimate.

Picking an exact resolution is challenging, but picking a "good enough" resolution that minimises "out by an order of magnitude" errors really isn't for most tasks unless you're taking part in the most cutting edge of research.


I don't know what the equivalent of the ISS is for project estimation, but I sure as heck don't have access to it!


Invoking the shoreline paradox is very interesting, but not quite fitting. To continue the analogy, your task isn't to measure the shoreline length (which cannot be meaningfully done), it's to physically walk it. Walking the shoreline _can_ be done, and someone who's walked it many times in a practical way before would be able to give a reasonable estimate.


I recently inherited a project that was "late". The team and project were 6 months into execution, and had nothing to show for it. The status on my Day 1 was "estimates were missed -- now what?"

On appearances, everything should have been just fine. The SWEs and product leader were running two-week sprints, with all the ceremonies scheduled on all calendars (well, everything except the end-of-sprint demo). Every two weeks, a status powerpoint deck came from the product leader touting all the fantastic capabilities the team had recently implemented.

I needed to understand what was going on. I chose to observe everything for two sprints (and not interrupt how everyone worked) while I focused on establishing a clear and comprehensive view of The Project.

What I discovered:

- An ambiguous and unclear definition of the project. Key facts, problem statement, overall solution ... did not exist. The product leader could give you some spin of this off the top of his head, which would invariably change from time to time.

- Work stories (we used JIRA) were garbage. I'm an engineer and even I couldn't understand what work was being completed.

- No capacity for the team to build & deploy software in a reasonable manner.

- No agreement or understanding of expectations among stakeholders.

So, what did we do?

1) Defined the project, with stakeholders in agreement.

2) Devised a solution, based on the project definition.

3) Broke down the work necessary to implement and deliver the solution.

4) Determined individual lengths of time to complete the list of work.

5) Reviewed everything, finding areas of risk and potential further problems.

It sounds like BUD (big upfront design) -- but in practice, it was Big Upfront Discovery. We structured work and components to ensure we had plenty of two-way doors (only a few one-way doors, based on requirements from stakeholders.)

And with that, we made our estimate. Another 6 months. The stakeholders did not like the timeline, but agreed "as long as we make that date." We provided weekly status updates, leaving out no details. Our stakeholder meetings went from contentious and confused to understanding and brothers-in-arms. We made the date and we didn't kill the team (morale actually went up.) The stakeholders loved the resulting solution.

****************

No magic here, just a focus on attention to details and honest assessment.


Yes, ultimately if you don’t have a clear vision of the problem you’re solving and the value you’re providing then success will be impossible. And most people confuse building features with providing value or solving a problem and so most projects fail.

If you’re able to define the problem you’re solving and what counts as “solved”, then success is much easier. It’s also much clearer from the start when a problem is intractable. And it’s easy to tell if you’re not ready to start building yet.


Sounds like stakeholders are expecting waterfall guarantees without the overhead of proper planning and refinement.

Honestly, the industry should move back to waterfall.


Yes, we had some "stakeholder training" that was necessary.

The biggest complaint: "I don't understand why this takes so long."

Our stakeholders were mostly plant operations people -- users, not engineers. We re-framed the discussion to focus on WHAT they didn't understand. They didn't understand how software comes together, gets deployed or updated, etc.

I explained that it wasn't important that they understand how that occurs (or, if they do, please join the engineering team!) This literally took pressure off of them to be knowledgeable. I pulled a few of them in as "first-pass quality control", ensuring what we built would work for them and their teams.

Immediately, I had advocates for us and interested stakeholders willing to participate as partners.


I'm not a die-hard GTD or David Allen fan, but I think he got some things right. My favorite quote of his goes like this: "Things rarely get stuck because of lack of time. They get stuck because the doing of them has not been defined."


0. Humans are bad at predicting the future. Make sure you're socializing that fundamental principle, especially to the customer.

1. You also need to model your project in more than one way as a self-validation. PMI-certified PMPs especially love a single, structured approach.

2. Provide estimates in ranges and fight for that to be the organizational norm (see the sketch after this list). Once you win this battle, ensure your first few reps are on time/on budget.

3. Don't let anyone pad their estimates. Do the padding at the end. Engineers especially love to pad their estimates since they hate giving answers based on unknown information, so show them that you're padding at the END and that their job is to provide the most accurate estimate.

4. Postmortem ALL THE TIME. At the macro and the micro. Your sales engineers should be sufficiently experienced to balance the hard with the soft skills AND have previous projects to draw on experience-wise.
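
A minimal Python sketch of points 2 and 3, with hypothetical task names and pad: engineers supply unpadded ranges, and a single pad is applied at the end.

    # Unpadded per-task (low, high) ranges in days, straight from engineers.
    tasks = {
        "api design": (2, 4),
        "implementation": (5, 10),
        "integration tests": (3, 6),
    }

    low = sum(lo for lo, _ in tasks.values())
    high = sum(hi for _, hi in tasks.values())

    PAD = 1.2  # one project-level pad applied at the end, never per task
    print(f"estimate: {low * PAD:.0f}-{high * PAD:.0f} days")  # 12-24 days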


Saying that your future estimates will improve based on your last ones is like saying you have a better chance of winning the lottery if you lost some draws before.


Not at all. Estimation is not based on randomness.


I mean, unless you know all the variables in the feature scope it is pretty much random...


Critics of estimation often use a sneaky trick called the motte and bailey fallacy. They mix up two different ideas: one that's pretty reasonable and easy to defend, like "we can't know _exactly_ what's going to happen in the future," with another that's way out there, like "the future is a total mystery, so trying to learn anything about it is useless because it's all just random noise."


"Critics of estimation" usually do this because managers don't actually know the concept of "estimation" and define your worth and paycheck on this concept. So it's just easier to simplify it to "just random".


You know some. You don't need to know every variable to create an estimate.


Past experience can be used to drive good estimation. But it takes experience with the process of estimating to know when you "know" that your experience is relevant, and when it's not clear if the previous experience translates.


It doesn't matter how much experience you have with past estimates if one of the steps in the flow involves a scenario where you just can't estimate it. (Happens a lot when dealing with third parties)


I think for that scenario, there would have to be experience estimating based on the third party involved.

We have third parties building portions of our systems and for some of them I can estimate reasonably well their actual net duration because I've had a lot of experience with them, and they've been fairly consistent.

But there are others where there's a wide range, so I think it boils down to whether you can spot a pattern or not.


> That estimate turned out to be wildly incorrect. The project ended up taking a year longer than I’d originally projected. The reasons are complex; the short version is that I made some major technical assumptions that turned out to be wildly incorrect. Several large architectural changes I’d thought we could avoid turned out to be unavoidable, adding all sorts of dependencies and additional work to the project.

Honestly, I can't help but judge either the company or the author. What kind of project ends up taking more than a year to complete? Wait... it's "a year longer than I’d originally projected." So what was the original estimate? Six months? It doesn't matter how complex it was. If it was going to take that long, the project should have been scoped as a smaller piece of what was planned.

The author talks about re-estimating, communicating early, and owning up to your mistakes, which is all fine advice, but what if the project itself was too ambitious, and too broad to have a reasonable estimate?


I think some HN users have only ever worked on web or mobile software and have no concept of what happens in the broader software industry. For example, on the F-35 (JSF) program they took about 10 years to ship the first real production release with multiple large schedule slips along the way. And that was despite deferring a lot of the planned scope.


Agreed. Working in an enterprise, where you need to integrate with many existing systems, conform to guidance, etc., is a different beast than startup code written by a single team. If you haven't worked in an enterprise, it's difficult to understand why things take so long.


There is a difference between a "project" in your example and a "project" within a company (usually of smaller scope and open to modification). My opinion is that estimates are bound to be missed when there is so much room for unknown variables to creep in.


You don't need to go to that extreme.

I've whipped up a complete (extremely simple) mobile app in 3 days.

Normally I do complex solutions that start with custom hardware/OS, possibly include servers...

... for those, only the initial hardware bringup and OS setup may take months.


"Honestly, I can't help but judge either the company or the author."

Jacob includes that story under the heading "Don’t do what I did".


All of this is very true! Thanks to the author for being open about their mistakes.

If you want to avoid this scenario, your company needs good leadership. If you're at a company where the managers don't do any due diligence in terms of verifying the progress of work, you are at an either inept or toxic company. If all people do is ask you for a "status report" and they just hope it's correct, they're setting everyone up to fail.

Good management is like a teacher in school who checks if students are completing their work, and if they aren't, gives them assistance. The teacher must actually check the work, and be interested in the welfare of the student as much as improvement.


> Good management is like a teacher in school who checks if students are completing their work, and if they aren't, gives them assistance.

This requires the managers to know a lot about the technical aspects (nitty-gritties, if you will) of the work. In my experience, most line managers, and certainly the bosses above them, are so woefully clueless about the work that they are unable to navigate timelines, scopes, and challenges.

Often, the bosses’ interests are also misaligned. Rather than take a step back and rescope/reevaluate the project, they want to squeeze engineers to get "something" done. Why? Because some upper manager will lose a fraction of their bonus or stock or promotion due to optics.

The result of all this is Boeing of 2024.


Immediate managers must have some experience with the report's work, absolutely. But farther up the chain, they still need to check work. There are certain details of progress you just can't fake.

If you have 10 pieces of work, and you show each piece getting done at regular intervals, along with demos, then you can show your actual results and exactly how close you are to finished. Even someone with no insight into the product can follow along and see the results and trajectory. The more detail you give about the work, the harder it is to fake (you can fake a demo, but it's much harder to fake an entire Jira board).


Uncle Bob said that you only know a project's true estimate once you finish it.


He might have pinched it from Watts Humphrey, who pointed out that the earliest estimate is the most valuable, but the last estimate is the most accurate.


This is a great article with some valuable insights.

I also noticed that it is part of a larger series of articles on Software Estimation which in my experience is one of the most challenging aspects of development in any organization.

Worth reading: https://jacobian.org/series/estimation/


Has all of software forgotten about Gantt charts?


Even worse, all of software has forgotten about PERT charts. Those work much better than Gantt charts for projects where the task estimates are uncertain.

https://en.wikipedia.org/wiki/Program_evaluation_and_review_...
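
For reference, the three-point arithmetic at the heart of PERT is simple; a minimal Python sketch, with made-up example durations:

    # PERT three-point estimate: optimistic (o), most likely (m), and
    # pessimistic (p) durations, combined via the beta approximation.
    def pert(o, m, p):
        expected = (o + 4 * m + p) / 6
        stddev = (p - o) / 6
        return expected, stddev

    # Hypothetical task: 3 days best case, 5 typical, 12 worst case.
    e, sd = pert(3, 5, 12)
    print(f"expected {e:.1f} days, sigma {sd:.1f}")  # expected 5.8, sigma 1.5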


Yes, the entire industry thinks it's so special that nothing can be estimated to any degree.


My experience with management is that they want a single number, and expect it to be done within 10% of that estimate, when the reality is that an estimate can often be a wide range like "1-3 weeks" depending on how many yaks we have to shave along the way.

Precise estimates only work for tasks we've done many times before in an almost identical manner. I think that is pretty unique to creative (in some sense) fields like ours. Please be more specific: which fields are you comparing us against?


That's not it in my experience. If I give actual bounds on the estimate, management cannot tolerate what the Gantt chart says when the pessimistic estimate is true.


Yeah, both sides have to speak the same language. If I try to be smart and say it will take between 3 and 5 days, and management hears only “will be done in 3 days”, then next time I just give the upper bound anyway. Then they start negotiating, I give the lower one, and then they are still not happy I did not finish by the lower estimate… That’s not my problem anymore.


For better or worse, Gantt charts turned into the poster child for old-school waterfall development and pretty much got kicked to the curb when Scrum showed up.


In my experience, every scrum project I have been on became an exercise in satisfying the requirements of scrum, versus the requirements of the project.

I actually had multiple scrum masters tell the developers that if they finished their assigned stories early and there was still time left in the sprint, don't start any new stories; wait until the next sprint officially started, because it made the reporting to upper management look better.

People like that need to be fired.


My experience in writing (or performing) user stories for scrum was also annoying.

Was someone else’s story too vague, but you know what actually needs to happen? Too bad, kick it back and make them write down what you already know.

Did you write the story with such explicit requirements that there is essentially only one way to accomplish it? Don’t do that, because <reasons>.

I hate scrum. I’ve found that poorly-done Kanban is also a nightmare, unfortunately. I’ve yet to find a happy medium.


> don't start any new stories; wait until the next sprint officially started

I think there's room for this kind of thinking if the end of the sprint is tied to a release and spare capacity gets redirected to helping complete existing stories (both of those things need to be true). I've never been on a team that did that, though, so I don't know how well it would work in practice.


You can start on new work even if it doesn't get merged that sprint. Especially thorny tickets where 90% of the work happens before any code is written.

But I agree with your point about assisting others on existing stories (if help can be given).


They do need to be fired, because they don't understand Scrum. What you committed to get done is above the line. Any extra stories one completes are below the line, which means they do not count toward the sprint goals.


Why? Have Gantt charts ever been useful for any industry?



