> basically, you decided that crash or losing settings equals severe and cosmetic stuff equals not severe. Why?

Because you, as a developer, are clueless regarding business needs, and thus are unaware of why "cosmetic stuff" might be far more important than the risk of deleting someone's user profile. For example, perhaps the color scheme clashes with the one used by your client's main competitor and therefore might leave them vulnerable to lawsuits.




> Because you, as a developer, are clueless regarding business needs

Then that's the problem we should fix. Instead of creating an extra database field to capture somebody's incorrect opinion, the people who understand priority should be helping developers know enough to have useful opinions.


> Then that's the problem we should fix. Instead of creating an extra database field to capture somebody's incorrect opinion, the people who understand priority should be helping developers know enough to have useful opinions.

The priority database field is how you communicate this factor, but there is a point where reasonable people can disagree and the organization needs a way to make decisions clear.

To draw out the example even more: you could have a $600K invoice riding on customer acceptance, when the person at the customer site who signs off won't do so until the customer's logo color is correct. Meanwhile, what about that crasher when you enter a non-numeric character in a numeric field of a new feature? "We accept that the functionality meets the milestone, so we will sign off, but we don't plan to roll it out until next quarter, after we have begun training personnel on the new feature."

Sure, every good organization should want everybody, not just developers, to understand the customer's business and such, but sometimes you just need to get it shippable.


>The organization needs to make this clear

Why? What is the business value of having severity as essentially a protest field logged by people who don't understand business impact? In all your points you are basically explaining the business value of priority, which nobody ever disagreed with, and then going "and that's why we need two fields".


> Why? What is the business value of having severity as essentially a protest field logged by people who don't understand business impact?

I don't understand where you could possibly get the "protest field" idea. Severity is an objective statement regarding the known impact of a bug as verified by a developer. It summarizes the technical impact of a software defect. Stating that bug X is high-severity because it crashes is not a protest, and just because the PM decides to give priority to other, more pressing issues doesn't mean you should throw a tantrum.


What is the 'technical impact' of a defect, and how can you divorce it from the user impact? How can it be stated objectively?

Crash bugs aren't bad because crashes are inherently bad, they're bad because they have negative user impact - if the program crashes and loses user context, or data, or takes time to restart... those are bad things. If it crashes a little untidily when it receives a shutdown event from the operating system... maybe not so much.

Same goes for performance issues, scalability problems, security flaws, even badly structured code - they don't have technical impact unconnected to their user (or business, at least) impact.


> What is the 'technical impact' of a defect, and how can you divorce it from the user impact?

TFA provides a concrete definition as well as a method to classify bugs based on severity.

Severity does not divorce a bug from "the user impact". There is, however, also the problem of low-severity bugs or even tasks having low user impact but high business impact.


> low user impact but high business impact.

But that's a contradiction. Unless the users aren't important (and the business is another entity, e.g., a CxO who has clout and demands a fix for a thing that users don't care about).


It could be useful if the folk prioritizing things are dealing with non-specific complaints about the software being unreliable or not working correctly.


Databases are a very bad communications medium. So if that's the major way devs and product people are conversing about issues, it's no wonder the devs lack sufficient understanding of business context to understand what the real priorities are.

I do get that people have all sorts of adaptations to dysfunctional working conditions. So if a severity field is one of them, fine. But I don't want people to mistake that for healthy collaboration.


>>Databases are a very bad communications medium

Are they? That's how the majority of (all?) asynchronous systems work. The data to be communicated has to be persisted. I think asynchronous is a good communication method.


I am not talking about machine-to-machine API calls. I'm talking about human communication, which is clearly the topic of what I replied to.


> Then that's the problem we should fix.

There's nothing to fix. Developers assess severity but the project manager defines the priority. As the PM calls the shots, they pick which issue should be addressed first, and obviously it's expected that some low-severity issues should be given priority over some high-severity issues.

In fact, the only thing that needs fixing is any potential misunderstanding on behalf of some developers about how low-severity issues can take priority over high-severity issues.


Why does severity need to be assessed at all if we're just going to use priority instead?


Because a crash is different than a button rendering with the wrong color, and although priority assessment might change with time, a crash is still a crash.

It seems that a recurrent theme in this thread is that some developers have a hard time understanding that PMs might have solid reasons to prioritize low-severity issues over high-severity issues. It's like these developers are stuck in a mindset where the forest doesn't exist and only the trees they personally have direct contact with are worth any consideration.


Why set a severity if you're not going to use it? A crash is still a crash if you don't set the severity and just write that it's a crash in the bug description.


So the PM can triage high-severity issues quickly, because even though they may be P2 issues, they're probably worth serious consideration.


I know some people look at me like I have three heads every time I say this. But if a project is dropping so many balls that it's hard to keep track of them all, I think the real solution is to work smaller and close feedback loops faster, so the sheer number of bugs is not overwhelming.


> Why set a severity if you're not going to use it?

Your question misses the point entirely.

The point is that severity is an element that's used to classify an issue with regard to priority. Severity does not exist in a vacuum, and priority is given depending on context. Severity is an attribute, among other attributes, that's used to determine the priority.
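
To put it in sketch form (field names invented, not from TFA): the developer records severity once, and it becomes one input among several when the PM derives priority.

    from dataclasses import dataclass

    @dataclass
    class Ticket:
        title: str
        severity: int         # technical impact, recorded once by a developer
        business_weight: int  # PM-maintained context: contracts, demos, politics

    def priority(ticket: Ticket) -> int:
        # Severity anchors the default; business context can pull the final
        # priority in either direction. Lower number = more urgent.
        return max(ticket.severity - ticket.business_weight, 0)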


It's only used to determine it in a way that's divorced from the business context. If everybody understands the business context, that's no longer useful. Ditto if people are collaborating with actual discussion, rather than trying to mediate communications via a database.


> It's only used to determine it in a way that's divorced from the business context. If everybody understands the business context, that's no longer useful.

That's the point: not everyone understands the business context, nor are they expected to. That's the job of the PM; it's his main responsibility, and it's the reason PMs are hired to manage teams of developers.


I understand some organizations work that way. I'm saying it's bad.

The point of developers is to make software that people use. So if we want to do our jobs well, we have to understand how we are creating value. Product managers may manage that information flow, and they may get the final say in decisions. But if they are specifying software in enough detail that developers can be 100% ignorant, then developers can (and should!) be automated out of that work.


What extra context does "severity 0" give you on top of a bug title like "Site crashes on action X"?


I think this thread is interesting and kind of funny, because it reminds me of work, where I maintain some systems that keep track of projects for PMs. I originally thought my job was to make everything consistent. But there are a whole slew of ways to express the "closedness" or "openness" of a project, and the PMs have evolved conventions where they want to be inconsistent and resist all efforts to make it all make sense.

You have a project status, which may be in progress or closed or something else. You have a closeout milestone, which may be in progress or closed or something else. And you have a time entry code, which may be open or closed. But it turns out there is no simple way to make these consistent, because people use inconsistent combinations to express things...but it's hard to tell what.


You guys are missing the corollary that low-severity bugs being escalated to high-priority is the edge case.

The point is that severity is not ignored; it does inform the priority. Most of the time there may even be a direct correlation between severity and priority.

But other (real-world business) factors also inform the priority; while severity will never change in the absence of new information about the bug, those other factors may change frequently. It doesn't make sense for a PM to reread every single ticket and reassess each one's severity when adjusting priorities, when the developer can just determine that once and record it in the ticket from the start.
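
As a rough sketch of that workflow (structure invented): severity was written once at triage and never touched here; only the volatile business inputs get re-read when the PM adjusts priorities.

    def reprioritize(tickets, escalated_ids):
        # Severity is stable and dev-assessed; escalations are volatile and
        # PM-assessed. Lower number = more urgent.
        for t in tickets:
            boost = 2 if t["id"] in escalated_ids else 0
            t["priority"] = max(t["severity"] - boost, 0)

    bugs = [{"id": 1, "severity": 0}, {"id": 2, "severity": 3}]
    reprioritize(bugs, escalated_ids={2})  # the S3 cosmetic bug jumps the queue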


Perhaps it's a problem of language. Instead of severity, maybe it should be called technical complexity.


Complexity sounds to me like more of an implementation-level concern.

e.g. A bug might be critical severity if it wipes the entire production database, but low complexity if the fix is to delete one line of code. And maybe its priority is P1 instead of P0 because the customer said they'll remember how to avoid triggering the behavior but they really need that logo color changed asap for an important demo.
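
In sketch form (labels invented), the three axes sit independently on one record:

    # Illustrative only: three independent assessments of the same bug.
    bug = {
        "title": "Job wipes the production database",
        "severity": "critical",  # technical impact: data loss
        "complexity": "low",     # fix is a one-line change
        "priority": "P1",        # downgraded from P0 for customer reasons
    }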


The point I was trying to make is that the severity hardly changes the priority if its user impact is low. But then that means the severity isn't high either!

So what's the point of severity?


Where are you getting "hardly" from? In this example, it normally would have been P0 (release-blocker) but was downgraded to P1 (still the second-highest priority) because of a special consideration on the customer's end.

The point of severity is that it's an objective metric determined in isolation based on specific technical guidelines that any developer on the team can follow (such as https://www.chromium.org/developers/severity-guidelines). Whereas priority is a higher-level determination that isn't purely technical.
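
A sketch of what guideline-driven classification can look like; the rules below are loosely modeled on that style of guideline, not copied from Chromium's:

    def classify_severity(bug: dict) -> str:
        # Conditions and tiers are invented for illustration; real guidelines
        # (like Chromium's) spell them out in far more detail.
        if bug.get("data_loss") or bug.get("exploitable"):
            return "S0"  # every check here is objective and dev-verifiable
        if bug.get("crash") or bug.get("feature_unusable"):
            return "S1"
        if bug.get("workaround_exists"):
            return "S2"
        return "S3"      # cosmetic / polish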

It's like the difference between body fat percentage and attractiveness. Any "engineer" (doctor with a DEXA scanner) can tell you your BF%, and attractiveness will typically correlate with BF%, but ultimately your "customers" (romantic partners and targets) decide how attractive you are. Not a perfect analogy (priority is still something you'd decide internally, not your customers directly), but hope that clarifies things.


Hiring good programmers who are not too far out on the Asperger/introvert scale is an issue. So you can fix it by letting programmers worry only about the tech part and letting the PM prioritise things. I think motivation will not be as high, but it is a way to get things shipped and profitable.


Speaking as someone with Asperger's who considers himself both a good programmer and capable of navigating / leading cross-functional prioritization discussions, and who likes knowing the context behind his work: maybe you should re-evaluate your assumptions about neuroatypical people.

(...and if you're indeed in a position where you're responsible for hiring decisions or performance reviews: strike "maybe" from the preceding sentence.)


Agreed. Non-neurotypical people may or may not need to approach understanding the context differently than neurotypicals. But it's not like we're incapable of understanding the context!


I work in a niche field. Not so niche that there isn't a lot of money in it, but niche enough that we need to explain to all of our new hires what we do as a company.

For us, it is absolutely 100% necessary to hire domain experts to prioritize bugs and features. It's not a question of incompetent or dense developers, it's a question of things that are not obvious to someone who doesn't have tons of experience in the field.

It's a problem that I imagine developers working on Chrome, Call of Duty, iTunes, or Outlook don't have. You can hire recent college grads and expect them to understand what the software does, have reasonably good instincts about how to prioritize bugs, and put together the right user experience even if the description in the feature request is sparse on details.

By the way, I heartily recommend working for such a company. My company works very, very hard to retain people. Someone who's spent ten years getting used to the weird stuff our customers expect is far more valuable than someone at half the pay who needs someone to hold their hand through every single issue. Everyone has their own office, management is extremely permissive about the shit that doesn't matter, there's never deadlines or crunch time, and everyone chooses their own work/life balance. (We're hourly, and the expectation is that you can work more than forty hours a week but need manager approval if you want to work more than 60 for more than three pay periods in a row.) If we want more vacation, we can bank hours and spend them on supplemental vacation. Everything's great.


Playing devil's advocate: Why does the developer need to know why one bug is more important than another, if the priorities are clearly defined? I.e., if the backlog manager sets the priorities according to the customer's/business' needs, then the developer just needs to know that the cosmetic bug has a higher priority than the crash bug, but they don't need to know why the priorities are ordered that way to accomplish their tasks. And if they do want to know for their own knowledge, they can just ask someone who understands the needs, without the need for a more complex set of bug report attributes.


I agree with your overall point I think, but I find that people in general just work better when they have some context for their task and its priority/relevance. Absent that, they sometimes -- consciously or not -- decide "this is stupid" and either rush to just check it off or slow-walk it by allowing themselves to be distracted by other things.


The person you are responding to is saying, yes, "cosmetic stuff" might be far more important. So it's more important! Why have another dimension of assessment where we label it less important? Why not only have the dimension of assessment that actually matches the clients' needs?


Like I said in my other comment, because it makes the difference between controlled and uncontrolled.

E.g., "the color is wrong because we specified it wrong" and "the color is wrong because the app doesn't respect what we ask it to display" both end up as the same bug (wrong color) with the same priority, but not the same severity, because the second case is uncontrolled.

Severity is a dev tool, priority is a business tool.
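
In sketch form (values invented): same symptom, same priority, but different severity, because only one of the two failure modes is controlled.

    # Same visible bug and same business priority, but different severity:
    specified_wrong = {"symptom": "wrong color", "priority": "P2",
                       "severity": "S3"}  # controlled: our own spec was wrong
    renderer_broken = {"symptom": "wrong color", "priority": "P2",
                       "severity": "S1"}  # uncontrolled: the app ignores the spec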


How does setting a higher severity for one bug over the other help devs?


How would you signal the difference between:

1. We have the wrong .png asset in the database

2. Our entire rendering infrastructure is suspect


That would be extremely obvious from the bug title and description, which are presumably being read by the person who sets priority.


So instead of a severity rating, you are saying severity is encoded in the language of the description? Using descriptors a potentially non-technical PM can understand unambiguously?

I'm not saying this is the wrong approach by the way, it's just interesting how people approach this differently.


If the PM doesn't have enough expertise to understand how severe a bug is, how are they supposed to accurately assess the business impact?


It's not another dimension. It's a classification. Some issues matter more from one perspective but might not justify allocating resources to address them from other perspectives. To be able to do an adequate job prioritizing issues, you need to take their classification into consideration. You're arguing about a desired outcome without paying any attention to the process that leads you to that outcome.


Because there are expressed client needs and real needs. They say they care about this cosmetic thing now, so it had better be fixed. However, you know full well that if you don't get this other thing fixed soon, internal politics at the client will mean they throw you out. Thus you fix the thing they demand be fixed now (it may only take a few minutes), but you ensure the other things are fixed in the next release, even though the client doesn't know they care yet.


Sure, but you don't need separate priority and severity scales to do that: it's just one priority scale, and you assign the priority based not entirely on the client's expressed needs but also on your own assessment of their needs.


You don't need that, but you are not everybody. When you have a large organization, having a simple way to capture this type of thing and make it clear what you are talking about matters.

Of course it does add complexity. It is the call of each organization which things are important enough to be worth the extra complexity and which are not. Maybe for yours it isn't worth the extra cost - there is nothing wrong with that - but other people have different needs and so they will have different answers.

In short, you are wrong in thinking there is a universal right answer.


Yes, but what is the point of severity _and_ priority? Why not one field that's first estimated by QA and then updated by the project manager when the client's needs are known?


So that they can be tracked independently and reviewed later. As I explain in a comment elsewhere, my company uses an app to generate severity, and it may not be adjusted outside of that. We can then track the number of low or high severity bugs in a delivery regardless of how the customer perceives the impacts of the bugs, using a more-or-less objective measure. We can compare that to the customer's view of the quality of the delivery by using the number of low or high priority bugs.
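
A sketch of the kind of tracking that enables (records invented; in practice they come out of the bug tracker): the two tallies answer different questions about the same delivery.

    from collections import Counter

    delivery = [
        {"severity": "S1", "priority": "P3"},  # bad crash the customer defers
        {"severity": "S3", "priority": "P1"},  # cosmetic issue they need now
        {"severity": "S2", "priority": "P2"},
    ]

    by_severity = Counter(b["severity"] for b in delivery)  # our quality view
    by_priority = Counter(b["priority"] for b in delivery)  # the customer's view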


Makes total sense and my team does this as well. I think calling the value completely perpendicular to fix priority is hyperbolic. Fix priority should be some combination of severity, frequency, effort and stakeholder desire.


What benefits do this measurement and comparison provide?


We have dozens of customers worldwide for our software packages, and each package is highly customised for each customer's business. The severity measure lets us compare release quality across customers using an objective measurement defined and managed by us. The priority measure lets us refine that comparison per customised package. Generally, a release with a lot of high-severity issues will have a lot of high-priority issues (since by default an S1 is a P1, an S2 is a P2, etc.), but some customers have different requirements, different custom features, and some are just more fussy.

If a base release that was customised for multiple customers has the expected number of P1, P2, P3, and P4 issues for most customers but a high number of P1 and P2 issues for one particular customer, off the same number of issues in the base release as measured by severity, then that will stand out in our measurements and we'll dive deeper into that customisation to see what's going on.

(Edited mid-stream... accidentally submitted too early.)
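
A sketch of that comparison (names and numbers invented): flag the customisation whose priority profile diverges from the shared severity baseline.

    # Same base release everywhere, so the severity-derived expectation is shared.
    expected_high_priority = 7  # P1 + P2 count implied by the severity mix
    per_customer = {"acme": 7, "globex": 8, "initech": 21}

    for name, count in per_customer.items():
        if count > expected_high_priority + 3:  # threshold is arbitrary here
            print(f"dig into the {name} customisation: {count} high-priority issues")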


FTA, severity reflects "a bug’s impact to the functionality of the system", while priority reflects the "client’s business and product requirements".

The point of this system is that high-severity bugs might have lower priority than low-severity bugs if you take business requirements into consideration. Yet, this does not mean that severity should be ignored.


You nailed it. Stick with priority and the right stuff will get fixed. Severity just encourages more debate about what needs to be fixed vs. deferred. Inevitably you will end up with a list of defects which will never be fixed.


Developer knowledge of business needs is rarely on the low end of the spectrum.

For example, in the teams I lead I make sure developers participate in PO + stakeholder meetings as observers.

This way when devs fix something or develop a new feature they know first-hand what the business expects.

A nice bonus is that the team often gets personal praise from our clients.


That's a strong argument for having a single priority... Even (generously, IMO) assuming severity represents a tangible thing like a threat to code quality or system stability/debugging, a developer should not be the one trying to balance those internal demands against a customer's priorities. The important thing here is that devs know what needs to be done. Distributing that arbitrarily across two fields obscures that.



