Hacker News
Is management pressuring you to deliver unfinished code? (2020) (iism.org)
151 points by jasonbourne1901 on Sept 16, 2021 | 76 comments



One of the problems I have found is that we don't speak the same language when communicating with the business; we use terms like "Technical Debt", "Test Coverage", even "Minimum Viable Product".

In my opinion the universal variable across all these is risk, and it's easy for all to grasp what we mean when we say the word "Risk".

There is delivery risk: will we get this out in a timely fashion for the market?

There is operational risk: will this fall over if we get 10x users or if someone looks at it funny?

There is market fit risk: are we building the right thing for the customer?

Framing these conversations with the business as functions of risk analysis and management is, in my opinion, a fundamental part of leadership.


This is a common theme but I don't believe it's true. It mirrors all of those twee blog posts that come up with Yet Another Metaphor for technical debt as if people with MBAs would then just "get it".

It's not really about finding the right metaphor; it's about lacking a common, shared unit of account.

Risks can't really be factored into decision-making unless you can measure them. They're not risks otherwise; they're just black swans waiting to happen (or not).

Technical debt and code coverage "as risks" can't be factored in either. Instead of trying to cargo-cult the way "the business" talks or coming up with yet another metaphor, we should be coming up with better ways to measure these things so that they can be plugged into an Excel spreadsheet.

This is done incredibly badly right now. Most measurable code metrics which proxy things we care about are downright terrible at proxying them (e.g. test coverage). In place of working metrics most businesses (in my experience at least) rely on guesswork and trust in high level executives.


> Technical debt and code coverage "as risks" can't be factored in either.

They can be measured. Lots of companies don't, which is mind-boggling.

When a release goes sideways and has to be rolled back, you figure out why. Ah, you launched a feature that revealed an intersection of edge cases in your testing? That's n developers * m hours * p dollars of blended dev salary down the drain.

When your feature delivery slows to a crawl over time, you dig in – your programmers aren't getting worse over time... are they? No, you find that a ticket that took your average developer x hours to deliver at the beginning of your development now takes 2x or 3x. Make a value stream map and you'll find out that your test suite has become so sluggish that your developers can't iterate quickly, that QA now measures regression time in days not hours, and as a result your developers are taking on less work to compensate. X dev hours * Y blended rate in ongoing waste, plus factor in the value of missed sales because of missed features if you want to really put a point on it.
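The back-of-envelope waste calculation described above can be sketched in a few lines. All figures here are illustrative assumptions (ticket times, volume, and blended rate), not measurements from any real team:

```python
# Sketch of the "ongoing waste from slowdown" arithmetic above.
# baseline_hours: what a ticket used to take; current_hours: what it takes now.

def slowdown_waste(baseline_hours, current_hours, tickets_per_year, blended_rate):
    """Annual cost of tickets that used to take `baseline_hours` but now
    take `current_hours`, at a blended hourly developer rate."""
    extra_hours = (current_hours - baseline_hours) * tickets_per_year
    return extra_hours * blended_rate

# Hypothetical: a ticket that took 4 hours now takes 8; 500 tickets/year at $75/hr.
waste = slowdown_waste(baseline_hours=4, current_hours=8,
                       tickets_per_year=500, blended_rate=75)
print(f"${waste:,.0f} per year in ongoing waste")  # $150,000 per year
```

The point is not precision; it's that even rough inputs from a value stream map produce a dollar figure the business can compare against other line items.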

> It's not really about finding the right metaphor it's about lacking a common, shared unit of account.

Name the local currency you get paid in – dollars, Euros, pesos. That is the shared unit of account. If you don't care about it, start walking up the org chart. You won't have to go far before you realize that's what actually matters, and that engineers who can translate technical risks and inefficiencies in their world to dollar values are highly valued.


People get mad when you start counting up how much salary gets chewed up by the hours of meetings about why you aren't moving faster.

That daily status meeting that pulls in twelve people for an hour to hem and haw about nothing? That costs $600 every day. $150k a year.
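As a sketch, the meeting-cost arithmetic works out like this; the $50/hr blended rate and 250 workdays/year are assumptions you'd replace with your own numbers:

```python
# Illustrative meeting-cost calculator; all rates are assumptions.

def meeting_cost(people, hours, hourly_rate, workdays_per_year=250):
    """Return (daily cost, annual cost) of a recurring daily meeting."""
    daily = people * hours * hourly_rate
    return daily, daily * workdays_per_year

daily, annual = meeting_cost(people=12, hours=1, hourly_rate=50)
print(f"${daily:,.0f}/day, ${annual:,.0f}/year")  # $600/day, $150,000/year
```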


Only $600 for a dozen people? That's only about three people based upon the billable rates I'm familiar with.


>Name the local currency you get paid in – dollars, Euros, pesos. That is the shared unit of account.

Good luck trying to accurately measure technical debt in dollars.


Accurately is the hard part, but you don't have to be perfect here – just as accurate as next year's sales forecast is.

"I can't outrun a bear, I just have to outrun you" ;)


To take the metaphor, mangle it, and run with it - The problem is that technical debt isn't being chased by a bear, it's walking through the woods in bear country. It only becomes truly quantifiable when the bear charges you. Until then it's a "could be a problem". We probably don't even know the exact details of what sort of bear is going to come maul us, so it's hard to say "if we don't stop this a brown bear is going to come out and get us".


I don't disagree, which is why my original post talked about pricing technical debt retrospectively. You don't know whether the bear will charge tomorrow, but past bear charges can be highly instructive if you learn from them.

My prod ETL process is an unloved mess, and we've had enough maulings occur to learn from it:

* I learned the problem – data models drift in the production app and changes aren't reflected / properly tested in the ETL or the analytics environment. And we don't have enough monitoring in place to catch the problem when it happens in the wild.

* I learned the cost of inaction – my analytics environment is responsible for $10mm of CARR, and I know the impact to customers when it goes down. Heck, I know how much customer credit has been given out due to SLA breaches, so there's a quantifiable price today.

* I learned the price to fix it - we've estimated the effort and run cost of the new solution.

Now I've got something that I can work with: customer acquisition says they've got $4m in the near-term pipeline? OK, great, but if we onboard them with a broken system we risk spending an additional $x in customer credits and damaging our reputation. And since it costs $y to fix, and $x > $y, let's fix it.
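That $x-versus-$y comparison is simple enough to write down. A minimal sketch, with entirely hypothetical dollar amounts standing in for real incident data:

```python
# Minimal fix-vs-risk comparison; every figure below is hypothetical.

def should_fix(expected_credits, reputation_risk, cost_to_fix):
    """Fix when the expected cost of inaction ($x) exceeds the cost of the fix ($y)."""
    exposure = expected_credits + reputation_risk  # $x
    return exposure > cost_to_fix                  # vs $y

# e.g. $300k expected SLA credits + $200k reputation exposure vs a $350k fix
print(should_fix(expected_credits=300_000,
                 reputation_risk=200_000,
                 cost_to_fix=350_000))  # True
```

The hard part, as the post says, is pricing the inputs from past maulings; the comparison itself is trivial once you have them.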

I have plenty of other unloved systems that are in bear country but haven't mauled me (yet), but even then you can start thinking about risks. One system is small but critical for multiple products, so my exposure is "hey every single customer is getting service credits" – expensive enough to force the monitoring/refactoring work that it always needed. One of them serves logins for ~7k users but only for one customer so my financial exposure is bounded. We don't worry about bear spray when walking around there ;)


I've never really seen tech debt cause an immediate, hugely-costly single event where you could say "see, that was the risk we were taking."

I've seen it frequently slow feature development down, though.

But... I've also seen a lot of rewrites fail to improve feature development speed.

So until we, as a discipline, can quantify and predict development speed w.r.t. shitty vs good code, it's going to be a tough conversation that'll rely on persuasion and gut estimates.

We normally can't even predict how expensive (time-consuming) doing that rewrite that we want to do would be! Let alone the benefit!


I'd like to second this line of thinking.

I've built a software business over the last 15 years using this exact communications theme, and whilst my little business is a sample size of 1, we've certainly found that framing conversations with customers in terms of "risk" has kept us all on the same page.

I'd also add that by adopting this communication style, one can then look at opportunities in a new light as well. On that note, I was lucky to read a book called "IT Risk" (by Westerman & Hunter, HBS Press [0]) back when I first started the company, and it gave me an interesting perspective on risk.

In a nutshell, once you identify and minimise/eliminate all the usual risks (many of which you identified in your list), you can then reorient your business in such a way as to start actively taking risks which stand to improve your overall offering.

This in turn allows you to build a strategic moat of sorts, because whilst your competitors are still scrambling to address the usual risks, you're actively taking on opportunistic risks which at times reap tremendous rewards.

[0] https://www.amazon.com/Risk-Turning-Business-Competitive-Adv...


If you are lucky enough to work with people who are willing to explain this and take the information in objectively, that's great. Sometimes, I have been that lucky person.

On the other hand, in a non-trivial number of cases, you'll be working with people who get all the upside when things go right and successfully blame the tech side when they don't. Their interests do not align with the interests of the business either, but since the business side consists of their bros, they all instinctively align on that.

So, you keep talking about "risk", that gets portrayed to upstairs as "I am doing my best, but so and so here is throwing technical minutiae at me which is slowing us down."

They are used to being graded on a curve, where your position gets better if everyone else does worse, and to taking advantage of that system.

This is among the many reasons I never graded on a curve, but you can't opt out of that with upper management in business.

I say this as a person who believes business cost/benefit calculations trump everything else. However, decisions must be made by people who understand the tradeoffs and are accountable.


I am not versed enough in car things, but it seems to me this would be something maintenance shops nailed down decades ago. How would they convey that your car is lacking maintenance even if you can still drive it to work?

Perhaps we should go with "needs repair" or something like that?

"Risk" feels like insurance territory (which goes along with "there is always some risk"), and a lot of people beautify the notion of taking risks to get higher gains.


The risk for a shoddy car is that you end up killing someone or get written up and charged with a violation. The risk for shoddy software (in most cases) is that you have an outage and suffer some financial or reputation damage, but that won't put you out of business.

There's laws against bad cars, there's no laws against bad software.


> There's laws against bad cars, there's no laws against bad software.

There are often privacy/data protection obligations, but they seem to be impossibly difficult to get the courts to pay attention to. If the average business owner found themselves in legal shit every time an external party got access to their data (i.e. just being the victim of a ransomware attack puts you at risk of losing your home), they would probably pay more attention.


Risk-taking is very culturally dependent; in some contexts people avoid taking risks as their default option.

That said, I feel that car shops just work because, well, it's the law, especially in some countries.

In the end "needs repair" is the same as saying "mitigating accident risk".


More than "needs repair" - I think that a pretty good analogy for tech debt is deferred maintenance: if you don't change your oil or your tires, the chances of expensive/bad problems increase. If you've got dents in the body you will need to address these before a respray.

And if you have a 15-year-old car with a clapped-out motor, you are not going to achieve modern safety and fuel efficiency by swapping the worn-out motor with a NOS replacement, so there's a point where investing money in the old solution isn't going to move you into the future you need.


And then we can scathingly label the most hated parts 'beyond economical repair'!


I don't disagree, but at my last company, I was always very explicit about these things in business terms.

The real enemy I ran into was the SVP's desire to please the CEO, who wanted to please the board, who wanted to please the investment PR racket and make sure we could tell Gartner that X feature would be ready by Y date to ensure we were included in their Magic Quadrant. The answer to the bosses had to be "yes, it will be released by Y date". Saw that pattern repeated, realized "Agile" was just a word for waterfall, quit for a startup.


"How a plan becomes policy"

http://web.mnstate.edu/alm/humor/ThePlan.htm

This poem is my favorite description of this pattern, because it focuses on the bad communication that creates the problem. Regardless of the intent of the people involved, gradually filtering out important information at each level as people try to please their superiors guarantees a GIGO mess for the people at the top. As the poem says, "this ... is how shit happens".

Adopting something like the airline industry's "no-blame" culture that focuses on getting accurate reports by explicitly not focusing on blame might help avoid the natural tendency to eventually fall into this pattern.


http://www.art.net/~hopkins/Don/unix-haters/tirix/embarrassi...

> I wrote a note in sgi.bad-attitude about the "optimist effect", which I believe is mostly true. In condensed form:

> Optimists tend to be promoted, so the higher up in the organization you are, the more optimistic you tend to be. If one manager says "I can do that in 4 months", and another only promises it in 6 months, the 4 month guy gets the job. When the software is 4 months late, the overall system complexity makes it easy to assign blame elsewhere, so there's no way to judge mis-management when it's time for promotions.

> To look good to their boss, most people tend to put a positive spin on their reports. With many levels of management and increasing optimism all the way up, the information reaching the VPs is very filtered, and always filtered positively.


I agree with you, and I think it's just as frustrating when the decision makers don't document and communicate risk "down the chain", instead packaging up tasks for the implementers and expecting them to have the same sense of priorities, or to rediscover all the nuances of their design decisions during development, etc.


Businesses frequently don't understand the need to fix tech debt / testing / maintenance / security. They just think it's code for the engineers wanting to "sit around and do nothing" (this is the actual phrasing I was told) because it doesn't add new features they can sell and "contributes nothing to the company."


You forgot the most important one: the risk of running out of money.


Middle and upper management are mostly concerned about risk, because the environment they are in rewards being “done” more than trying something that could have a bigger payoff.

That is, in that environment, the downside of accepting risk tends to outweigh the upside of achieving a goal at the cost of running over schedule.


Very much this; not least because, having organizational power, they can manipulate the situation to evade the downside. As discussed in the classic Gervais Principle essay: https://www.ribbonfarm.com/2011/10/14/the-gervais-principle-...


There is another thread on the front page about every engineer having to try out consulting.

This is one of the ways to learn how to use language that business gets the point across.


Didn't quite parse your second paragraph. Missing a few words?


Rephrasing it: with consulting one is mostly on the front line and has to deal with all areas, so one grows into using language that expresses technical issues in ways that get buy-in from the business, especially if it is a one-person consulting gig.


My dad who was a quite successful artist back in the day said something I’ll never forget.

“Son, I’d like to work on that painting forever, because for me it’s never finished. But I have to stop and sell it because I have to feed the 3 of you children”.

His sole source of income was his art, and we grew up not wanting anything, because he was pragmatic enough to understand at some point he had to ship his art to survive - even if to him, it was not completed.

That’s why when I heard Steve Jobs say “real artists ship”… I immediately understood at a visceral level what he meant. And iPhones over the years have had countless bugs.


Hmmm, everything has bugs. The iPhone shipped as a great product that customers loved. The point of the article is to release something great that the customers will love. Internally, yes, ship something early and iterate on feedback.


That's a great lesson your dad taught you there. I wish I had been taught that early on in my career because it applies just as well to an organization.


Here's a question: why do you, as an engineer, care? You're just an employee; it is upper management's role to make sure your company moves forward, maintains the trust of clients, and has reliable enough solutions. If they decide to sacrifice quality for speed, that's their problem/decision. Really. You were hired to provide technical solutions within specs, and if the specs require fast completion more than quality, then that's probably what you should deliver. Then watch it blow up, or maybe it will actually be good enough.


The fastest way to cap your career growth is to think about yourself as someone whose role is to take a spec and provide a technical solution inside that box.

Upper management values – and pays accordingly – people who go beyond that. Learn some finance, learn to mentor others, learn to talk with a customer and understand what they're really after. You'll gain trust and find that you're being pulled into things far earlier in the development pipeline, which often puts you in a place to build better products.


Upper management won't value you constantly disagreeing with them on risk/reward tradeoffs like "how finished does the code need to be" or "how many bugs are OK for now." Maybe they say "yes, but if we don't get it done by the deadline, we might lose $LARGE_NUMBER in business because..." After that point, there's a difference between being seen as usefully raising a concern and being stubborn and annoying.

Or, possibly upper management is just clueless to the cost of the bugs that are shipping. But you still probably won't get far if you just say "this is too buggy" and can't show them why they're wrong and you're right about it harming the business. Maybe you can persuade them if you go get some data. Maybe not.

Ultimately, if the kind of software they want doesn't line up with the kind of software you want to build, don't expect to get rewarded for constantly complaining about it. You're probably better off finding a place where you align better in terms of what type of software they want to ship.


It's the fastest way to learn to understand your career limits, stop worrying, and start to love what is possible to love in that job.

>Upper management values – and pays accordingly – people who go beyond that

They love when many do that while they pay accordingly only to the very few in order to entice the others.


Oh, playing the ladder can be rewarding. But you don't achieve that by saying "our code will be piss if we ship too quickly" if a decision has already been made without you. Don't be clueless, you must get your seat at the table first.


I've had issues with management paying me accordingly. Lucky me, I'm obsessed with mentoring.


> If they decide to sacrifice quality for speed that's their problem/decision. [...] Then watch it blow up, or maybe it will actually be good enough.

You apparently have never been sued for your manager's decisions.

Here is how that was for me:

My manager released something I worked on despite me saying that it would need a few more months of work. Apparently that would have made it unprofitable, so he disregarded my advice and they released it anyway. I joined the company when this project was already a year late, so it was not surprising that it was close to costing more than the customer would pay.

Except for a few interns, I was the newest dev in the company, so I didn't push super hard for more work on this project. However I was the most experienced dev on the team, so I should have pushed more.

Or I should have at least gotten it in writing that they decided against my recommendation.

When it eventually crashed and caused a 30-minute outage, I was fired.

Why? Because it was my code that was the problem, of course. Never mind the fact that I told them it wasn't ready, and that the error wasn't even in code that I WROTE. Some intern wrote that part. I wasn't a lead or manager and had nothing to do with it.

They then tried to sue me for more money than they ever paid me. 30 minutes of factory outage can be very expensive, which they of course tried to recoup out of my bank account.

I was very lucky that I had a good lawyer. Actually I could only afford a very junior lawyer, but he was sick on the day of trial and asked his professor to go instead. I was probably super lucky with that one.



Because you eat shit later when it blows up or makes your life more difficult.

Also, not having agency about how you spend your energy eats away at the soul. That's more subjective, but I'll do a way better job with way less effort if I am a primary decision maker of what work I'm doing instead of an intake machine.


I'm honest about delivery dates, but I do deeply care about getting a project across the line on time. Not just because it gets me cooler, more demanding projects with larger compensation, also for its own sake. It's fun to win, especially when the timeline isn't arbitrary and you can pull it off.


There's a saying when I was in the military: "If it affects you, it's your problem".

If your code blows up, it will come back to haunt you. Now you get to fix your code, except it involves a bunch of other teams. Maybe that's what you want, but most likely not.


This _should_ be true, but isn't always, if you're at a company (or just have a manager) who will let the mess/blame fall onto your shoulders.

That being said, if you're at said company/have said manager, you should get a new job. I hear the market is hot.


True, but the bar for "good enough" will probably be raised after you deliver your first impossible project. It'll eventually catch up with you, but that's just how bad executives and bad companies manage.


If your team fails because you have incompetent leadership, then YOU are the one who suffers.

Upper management doesn't go around looking at what individual people are doing. They just see that Team X as a whole keeps having problems, so Team X must literally be sitting around doing nothing.

There's no need to talk to people and communicate. Just fire them. There's no other possible reason they're having problems other than lazy engineers not doing anything.


This advice is so very wrong. As a programmer you need to think first. The code you write is going to affect your life maintaining it, so it does affect you, or some other developer maintaining it. By actually caring you make things easier for yourself and your peers, and that helps you grow your career.


Whenever someone writes a comment like GP's, it's safe to assume they are not involved in long term maintenance of a system. Or they're like my current team, reveling in the paid OT their shoddy work lets them justify (which is now unpaid for them, but paid for me, watch your contract terms people!).


If you're the kind of developer/programmer that cares about their work, then your company probably doesn't know how lucky they are.


You care if you work in security, and in ops too, though perhaps to a lesser extent.


> The top two sources of failure in software development projects are failing to discover what is actually needed and failure to eliminate chaos in the software before delivering it to the customer.

The first one is called requirements gathering (the method, requirements engineering). I was wondering how far I was going to have to read to find someone mentioning that.

It can be as simple as writing down a list of things the software has to do. It could be one afternoon brainstorming meeting, to start. But we don't even bother. Why? Because this discipline is not engineering, not science, not manufacturing. On my darker days, I think we are just playing in the mud.

I am grateful to work on virtual machines and compilers. They at least have functional requirements--an input specification--and a well-specified target machine. The rest in the middle is a fun design exercise, but there is a crucible at the end of the day. If you can't run the programs without bugs, you have failed. That makes it easy to put functional correctness first and performance second, as it should be. We need to find more ways of translating problems into requirements in order to reduce black art to science to practice.


I agree it's a bit like playing in the mud, but the more I work the more I suspect that's just accepting reality.

Requirements gathering works wonderfully for internal projects with a fixed set of stakeholders and a well defined problem with measurable outcomes. Product development is a whole different world whose outcome is the much more vague "generate profit for the developers". Particularly in the realm of B2B software.

You need to find some local minimum that will meet the needs of thousands of organizations in order for development to be worthwhile.

You need to get many busy people who don't know you to actually talk to you.

You need to talk to multiple different stakeholders within the organization.

And even after you group all your stakeholders into some kind of user personas, you still haven't accounted for what the larger organizational behavior will be, as B2B products generally require large-scale buy-in.

And at the end of the day, no matter what people say they will do, the true test is when a company actually opens their coffers and pays your invoice.

We've arrived down here in the mud through years of experience of doing requirements gathering, building to a spec, and then finding out that we built the wrong thing and nobody will pay us for all our hard work.

Sometimes you really do have to build it to see if they will come. And the more formalized (i.e. expensive) you make the requirements engineering process, the more compelling the "iterate and start getting paying customers asap" plan becomes.


>"That makes it easy to put functional correctness first and performance second, as it should be."

That only works when performance isn't a core deliverable. In my area, they have to be co-developed, as it is too expensive to build merely functional software and make it fast later.


As I had to convince a former colleague, fast but wrong is useless. I can make a very high performance program if correctness is irrelevant. Correctness is the requirement (at least from my professional work), and performance is necessarily secondary. That doesn't remove it as a requirement, but if I'm giving wrong results quickly then planes aren't flying (worst case, they're burning up on the ground and everyone is dead). Performance is obviously important because I've mostly worked on real-time systems. But I can take a correct program and make it fast much more easily than taking a fast but incorrect program and make it correct.

I guess if your work has no consequences, correctness can be discarded or rendered a secondary criterion for your programs.


Oh, believe me, performance of a Wasm engine is a core deliverable.


This reads a lot like a lean/MVP manifesto. I would say that there is always a pressure to deliver working code, but that “unfinished” should always be in terms of project scope and not in terms of either functionality or reliability. “Code” is the wrong terminology here.

It’s OK to not have every feature implemented. But it’s not OK to ship half-baked, untested functionality, no matter what the “code” looks like.


> This reads a lot like a lean/MVP manifesto

It also reads like the Agile manifesto from 20 years ago, which caused a huge splash when it was first published - before it was misinterpreted by everybody who might have been able to actually apply it to obtain results as "do exactly what you've always been doing (especially in terms of fixing a delivery date long before you describe what the software is supposed to do), but use terminology like 'sprints' and 'standups' to do it".


Conversely, half-assed is ok if you only need half an ass.


As Dilbert says: "Our boss can't judge the quality of our work, but he knows when it's late".


And just like Scott Adams is wrong about many things he says, Dilbert is wrong here too, or at least short-sighted. While you notice immediately that something is on time, you'll also notice really quickly that something is of bad quality, unreliable, inconsistent, or poorly performing, which translates into whoever the customer is becoming very unhappy very quickly. There needs to be a well-informed balance.

I know you were just posting a funny Dilbert quote but I don't respect Scott Adams anymore so I was triggered, please accept my apologies.


I can see that you have not been exposed to the magic of bureaucratic indifference.

If you release a really shitty product, and your customer doesn't have a choice (has no competitor, made a large upfront payment, has fallen into vendor lock-in, etc.), you don't have to respond.

You can translate an unhappy customer into a compliant one by putting up barriers to the reporting and documentation of the issue. Make automated resolutions that don't quite fit the situation, put the issue into a ticketing system that never addresses it, give employees roles that either overlap with each other or don't intersect at all over the customer's issue, etc.

These are the situations that Scott Adams parodies with Dilbert.


> While you notice immediately that something is on time, you'll also notice really quickly that something is of bad quality, unreliable, inconsistent, low performant, etc

Not in my experience. Our team has a couple core perf metrics that are alarmed, like page load and missed frames, but it's easy to do really bad things that won't trigger the alarms. Or do them such that the automated tests are using different content for the pages they test than the real users who will see the commits weeks later. E.g. someone commits a change to feature x that locks up the screen, but the test user pool never uses feature X, or never puts content into it and just sees the empty state screen.

Quite common for developers here to write stuff that works for 99% of users, but falls over otherwise as well. Like today I fixed an issue where tapping a button on one screen to go to another really fast, like under 1 second, crashes because of a race condition. Testers aren't going to notice that. It just shows up in the company's overall crash rate which is spread across 4000 developers. Automatic UI tests caught it, but the responsible team had just filed the crash stack trace JIRA into their backlog and left it to sit for months. Similarly, today, we had a production issue because someone wrote some code that only works for certain users who had already accepted a certain terms of service screen.

Shipping a feature is rewarded heavily. Not screwing up the app for edge cases and perf and people who have to implement the next feature after you? Good test coverage? Not at all. If you dare to give an estimate that includes full test coverage, PMs will just take you off the project and pick a developer who doesn't do that.


Don't take project management advice from Scott Adams. His judgement is seriously questionable. He is okay for a few laughs at the expense of old corporate culture, but anything more serious is useless.


> Engineers are problem solvers, and a manager who applies pressure becomes the problem the engineers decide they need to solve. If it is really important to deliver Something™ by the date, engineers will, if enough pressure is applied, deliver unfinished code by that date. Unfinished code that is full of race conditions, performance problems, outright bugs

This is wrong. You can and should deliver "by the date", but you should also prioritize ruthlessly and ship at least the top of the most-important-features list with reasonable quality when time runs out.

Of course, this implies that both managers and engineers have to understand deeply what value each feature brings, and communicate continuously among themselves and with their client.


I loved the build up in this article, but the solution left me wanting more. My org suffers from the disease of Date Driven Development, but just telling my boss to let us work until it is done simply means we will never ship because it will never be done.

The list of new feature requests from product is literally a mile long. So where do you draw the line? And once you draw it, how can you justify pushing off everything else that needs to get done while you are hardening and making sure that what you have decided to ship is actually ready to go?

The problem statement resonates with me, but I’d love more help on the solution. What is a viable way to get out of date driven development? Please help!


Take an incremental approach.

Establish the core needs, what must be shipped as a basic system, and ship it. Then extend over a series of iterations where you incorporate more needs and wants. You won't totally avoid the deadline, but you will mitigate one of the deadline problems: all-or-nothing delivery.

If you establish a deadline 1-3 years from now, it is expected that the entire system will be finished on that date. Well, what if it's not? Establish quarterly releases (or more frequent ones, especially if you can work closely with the team deploying the system, e.g. if you're an internal dev shop) and aim for a first release in a few months that hits the essentials. Everything else is an extension; you may still take 1-3 years (or longer), but you'll be releasing and getting value from the system right away.

I've even done this with safety-critical real-time systems in my work. We built what was necessary for flight test, but not desirable for operations (we missed a few needs, a lot of wants). By the time the plane was rolling off the factory floor for release to airlines, we had all the needs covered and most of the wants. The rest of the wants were covered in the next year or so, along with some newly discovered needs. But if we'd waited to have everything done, the plane wouldn't have been delivered to customers for several more years (because flight testing would have been delayed by about 3 years).


Simple: differentiate between basic feature maturity and additional optional features. You release a working product, then you add working functionality over time. This is what successful software companies do today anyway.


> Unfinished code that is full of race conditions, performance problems, outright bugs and worst of all, poor to non-existent product/market fit.

The last one is so different from the rest. Bugs shouldn't even "ship" internally; they shouldn't often be making it out of feature branches, for some reasonable value of "often".

But back to the last one, that's something that should be decided with high confidence before the first line of code is written!


The last one is the hardest to get right in my experience. Sometimes you just won’t know until you start asking people to put their hand in their pocket.


I mean, there have been a few successful companies, like Slack, that started off with something completely different before they pivoted. Sometimes finding market fit takes some experimentation.


Not to get thrown into semantics, but... scope vs. goal: the goal defines the scope. If the goal does not change, neither does the scope. When the goal includes a date, the scope has no way to adjust, because scope refers, simply put, to a to-do list. A poorly conceived date does not change the scope, it cuts the to-do list short. How long does it take for a NASCAR team to win the Indy 500? 500 miles, as fast as humanly possible.


This article raises some great points but is still trying to close the stable door after the horse has bolted.

If your business is agile, it begins with the style of engagement between you and the customer. Really agile businesses don't deal with absolutes (such as fixed price contracts), instead they accept change at the core.

Accounting hates that, so most businesses aren't agile.


Half finished work they would never do su


Deliver something the customer wants, make the product better, or figure out why those 2 things are not the priorities. It's usually money or politics.


As a dev this looks like such a wonderful approach to me, but how do you get the customer on board with it? How do you get a nervous customer who doesn't trust you yet to enthusiastically work with you to figure out what they want?


A rational manager would scope projects with their technical leaders AND senior engineers who will ultimately execute. This scoping must happen before selling the project and also constantly throughout the lifecycle of the project.

Then, the rational manager would also manage expectations with other stakeholders, not committing to things they cannot see the end to.

But most managers are not rational and do the complete opposite: they want to appear to be amazing managers to other stakeholders while simultaneously bringing their ego into discussions with team leads and senior engineers.

Nobody is happy with such managers. I think a better org structure would be managers reporting to senior engineers instead of senior engineers reporting to managers.

Senior engineers are responsible for the implementation. Managers merely act as assistants delivering messages everywhere.



