
And just as Scott Adams is wrong about so many of the things he says, Dilbert is wrong here too, or at least short-sighted. While you notice immediately that something is on time, you'll also notice very quickly that something is of bad quality, unreliable, inconsistent, poorly performing, etc., which makes whoever the customer is very unhappy very quickly. There needs to be a well-informed balance.

I know you were just posting a funny Dilbert quote, but I don't respect Scott Adams anymore, so I was triggered. Please accept my apologies.

I can see that you have not been exposed to the magic of bureaucratic indifference.

If you release a really shitty product and your customer doesn't have a choice (there's no competitor, they made a large upfront payment, they've fallen into vendor lock-in, etc.), you don't have to respond.

You can turn an unhappy customer into a compliant one by putting up barriers to the reporting and documentation of the issue. Offer automated resolutions that don't quite fit the situation, put the issue into a ticketing system that never addresses it, give employees roles that either overlap with each other or don't intersect at all over the customer's issue, etc.

These are the situations that Scott Adams parodies with Dilbert.


> While you notice immediately that something is on time, you'll also notice very quickly that something is of bad quality, unreliable, inconsistent, poorly performing, etc.

Not in my experience. Our team has a couple of core perf metrics with alarms on them, like page load and missed frames, but it's easy to do really bad things that won't trigger the alarms, or to do them in a way where the automated tests use different content on the pages they test than the real users who will see the commit weeks later. E.g. someone commits a change to feature X that locks up the screen, but the test user pool never uses feature X, or never puts content into it and only ever sees the empty-state screen.
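To make that blind spot concrete, here's a minimal Kotlin sketch (all names and numbers hypothetical, nothing like our actual code): the perf alarm stays quiet because the test pool only ever renders the empty state, while real users hit the expensive path.

    // Hypothetical sketch of an alarmed perf metric that never fires in tests.
    // The regression is only expensive when feature X actually has content,
    // and the test user pool never puts content into it.
    fun loadFeatureXMillis(items: List<String>): Long {
        val start = System.nanoTime()
        if (items.isNotEmpty()) {
            // Regression: accidentally quadratic rendering work that
            // locks up the screen, but only with real content.
            for (a in items) for (b in items) a.compareTo(b)
        } // empty state: cheap static screen, loads instantly
        return (System.nanoTime() - start) / 1_000_000
    }

    fun main() {
        val testPoolItems = emptyList<String>()          // what the automated tests render
        val realUserItems = List(20_000) { "item $it" }  // what real users render

        println("test pool: ${loadFeatureXMillis(testPoolItems)} ms")  // under the alarm threshold
        println("real user: ${loadFeatureXMillis(realUserItems)} ms")  // way over, weeks later
    }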

It's quite common for developers here to write stuff that works for 99% of users but falls over otherwise, too. Like today I fixed an issue where tapping a button on one screen to go to another really fast, within about a second, crashes because of a race condition. Testers aren't going to notice that; it just shows up in the company's overall crash rate, which is spread across 4,000 developers. Automated UI tests did catch it, but the responsible team had just filed the crash stack trace as a JIRA ticket in their backlog and left it to sit for months. Similarly, today we had a production issue because someone wrote code that only works for users who had already accepted a certain terms-of-service screen.
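For a sense of what that kind of race looks like, here's a stripped-down Kotlin sketch (hypothetical, not the real code): the first tap starts async teardown of the screen's state, and a second tap that lands sooner than anyone tested dereferences state that's already gone.

    import kotlin.concurrent.thread

    class Screen {
        @Volatile var rows: List<String>? = listOf("row")

        fun onNextTapped() {
            thread {
                Thread.sleep(50)  // async transition work
                rows = null       // screen state torn down
            }
            // Handler assumes the screen state still exists; a tap that
            // arrives after teardown dereferences null and crashes.
            println("navigating with ${rows!!.size} rows")
        }
    }

    fun main() {
        val screen = Screen()
        screen.onNextTapped()  // first tap: usually fine
        Thread.sleep(100)      // second tap lands "really fast", under a second later
        screen.onNextTapped()  // NullPointerException
    }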

Shipping a feature is rewarded heavily. Not screwing up the app for edge cases, perf, and the people who have to implement the next feature after you? Good test coverage? Not rewarded at all. If you dare to give an estimate that includes full test coverage, PMs will just take you off the project and pick a developer who doesn't do that.
