Programming is hard (myme.no)
121 points by ingve on Sept 20, 2023 | 114 comments



"Not so much on how we can more effectively understand requirements and translate these into code."

The first step to understanding requirements: There needs to be a requirement worthy of the name.

Because let's face facts: programmers are quite often confronted with "requirements" cooked up by some "stakeholder" that don't even describe the problem domain, much less a path to how it should be modeled.

A lot of the spaghetti that contemporary production code far too often ends up as is the result of programmers having to play a stupid game of "chase the nebulous and ever-changing 'requirements'".

Once you develop for A but then B is required as well, and the architecture didn't anticipate that, you may be able to go back and change it ... or not, because usually the requirements change but the deadlines don't, and good luck doing the same work, or more, all over again in half the time.

Yes, programming is a hard problem. It really is. But it is also quite often being made harder than it needs to be.


But all that is part of programming. It's not made harder than it needs to be, it is that hard. Because exploring the whole problem space with the stakeholders and deciding together what the software should do is a central part of the problem of programming.

And it's ignored by everyone: the stakeholders don't realize it's a problem, the programmers know but think it's not their problem, and nobody includes it in estimates or plans the required time and meetings for it to happen.


> the programmers know but think it's not their problem

I disagree. We know it's our problem. But we quite often get little or no say in the decisions made to manage that problem properly.

Because yes, it would be nice if we would "explore the whole problem space with stakeholders", and it would be great if we could "decide together".

What ends up happening quite often, however, is some "stakeholder" saying yes and thank you to whatever he-who-holds-the-money (or his career advancement) says, with little to no regard for whether the promised addon/feature/whatever is feasible and/or accommodated by the existing architecture. Suddenly, the space that needs to be explored may be galaxy-sized, or moebius-strip-shaped.

What also tends to happen: the "exploration of the space" is outsourced to some consultant or architecture astronaut, who won't write a single line of code, and certainly won't have to deal with any unforeseen difficulties.

It would also be nice if, once the "exploration of the space" is done, devs got to implement based on what was explored. Which, given the aforementioned unforeseen properties of the problem domain, is laden with unforeseen change as it is. What tends to happen quite often is that, in addition to all that, the requirements, and thus the problem domain, change mid-project ... sometimes multiple times.


Congratulations, you just rediscovered the pains that led to agile.

The key insight is that it is incredibly hard to capture a system that has the flexibility of handling every past, present, and future edge case while providing consistency for the happy path and preventing "bad" things from being done.

XP and the proto-agilists said, as such, that it was futile to try to capture the whole problem, because it turns into contract negotiation and pits stakeholders against each other. Further, they found that what stakeholders said and what they really needed ended up being not-quite-right, and this wasn't exposed until people actually used the software.

(Why? Because the stakeholders are representing a team, and sometimes they themselves are playing a game of telephone. And sometimes they describe a problem in a way that doesn't capture the real problem.)

So, the idea was to have the business representative in the same room. The problem was that those representatives still had to do their jobs, so they ended up burning out and quitting.

And that's how we got all of these product owners lying around - because we just punted the problem and went back to business analysts and gave them a slightly different mission. And that's why people talk about "dark agile." But I think it's easy to understand why the misalignment exists but very very hard to fix.

For internal development, I think the real answer is that the development team and any analysts should be required to do the job as if they were interns. If there are multiple stakeholders, they have to do all of the jobs, at least for a few weeks, and meet the people they are building for. Then they can at least understand the full complexity and why all of these contradictory requirements exist.


An awful lot of programmers seem to come out of college thinking that their problem is to find the fastest algorithm that solves some well-defined spec.

Requirements gathering gets short shrift, if it's mentioned at all. But really, it's the overwhelming majority of the job.


That's something that continues to baffle me whenever I interview people fresh from college. So many of them think their leetcode scores actually help them understand what's required in this job.

It doesn't, and it always baffles them when I don't even give them a whiteboard problem to solve. I don't need people who have memorized all the graph search algorithms, and I don't need people who can code them up in 6 languages without looking anything up. People who can google competently (or use a textbook's index) can do the former just as well, and LLMs can do the latter.

What I need are people who can take requirements, anticipate as much as humanly possible what else the client will need over time, and find a maintainable architecture that implements those requirements and maybe is resilient in the face of further changes down the road.

And that is a HARD skill, one that leetcode and most CS curricula will not teach.


> Requirements gathering gets short shrift, if it's mentioned at all. But really, it's the overwhelming majority of the job.

Tell that to managers - oh wait - they hire on algorithms and do performance reviews on BS such as commit count.


> But all that is part of programming.

Nah, it's systems analysis, a field which is largely either forgotten (in the “cross-functional means everyone is a nondifferentiated developer” school of Agile teams) or replaced with “business analysis”, whose practitioners have less relevant skills (particularly, systems analysts were experienced programmers with additional process-engineering skills, while BAs are nonprogrammers in a role that is often structured more as stenography than engineering), and with “software architects” (who have a closer skill set to systems analysts, but have a role with a simultaneously more abstract and narrower, implementation-focused orientation).

Systems analysis requires (or at least benefits from) an understanding of programming, but it's not the same skill.


My bias is that "systems analysis" is a fake bullshit job, but I'd be happy to be wrong on this one. My guess is that if I asked you to define what a Systems Analyst is supposed to be doing, you'd tell me what I think a Software Engineer should be doing. Whereas you may think that a Software Engineer's job is to "write code". No. Writing code is like writing English. Someone whose job is to simply type out English words is a typist. Someone whose job is to simply type out code is also a typist, albeit one that needs to understand what a "for" loop does. The actual determination of exactly what the program should be doing, and why, and how that fits into all the other things it's doing, is Software Engineering. Someone who just wants to write "for" loops all day without doing that kind of analysis is not a Software Engineer, they are dead weight. Nothing. Doing what the compiler should be doing. But some business analyst who can't code is not a Software Engineer either; they are illiterate, like a novelist who "does the thinking" to figure out what all the characters should be doing, but expects someone else to do the actual typing. They are also dead weight.

Of course I don't expect you to debate all this in one HN thread. I'll browse through "Just Enough Structured Analysis" -- thanks for the recommendation -- but that will be in order to make myself a better Software Engineer, not to become whatever a "Systems Analyst" is supposed to be. I guess if you expect a Systems Analyst to know how to write code as well as the other programmers, you are simply describing what I call a Software Engineer, and what you're thinking of as a "non-analyst programmer" is really just a typist that should be replaced by the compiler.


> My guess is that if I asked you to define what a Systems Analyst is supposed to be doing, you'd tell me what I think a Software Engineer should be doing.

Possibly: I'd certainly agree that what a Systems Analyst traditionally does is exactly software engineering, whereas whether “software engineers” do that, or are selected for the skills to do it, is rather mixed across the industry. The explosion of the Software Engineer title for programmers did correspond with the downswing in Systems Analyst as a role, but it often was not accompanied by any change in the role and expected skillset of the retitled programmers; if anything, it was accompanied by a greater focus on selecting for narrow coding skills.

> Someone whose job is to simply type out English words is a typist. Someone whose job is to simply type out code is also a typist,

A programmer who is not a systems analyst is not just typing out code someone else has written, the way a typist types out natural language that someone else has written. They are still solving problems — sometimes fairly complex ones — with code. And there are probably environments where you need many more of them than people doing the rest of what systems analysis entails; in a world where coding skills were instead devalued, that would be a problem.


Software Engineers are misnamed. They do not register with a professional body and commit to a code of ethics; nor can they be disbarred from working if the regulating Society of Software Engineers decides they have failed to work to the standards required.

Software engineers are mostly know-nothing cowboys. The fact that you haven't previously heard of "Just Enough Structured Analysis" confirms my impression that there are no standards and no regulation.

If you want to git gud, also read "Exploring Requirements: Quality Before Design" by Donald Gause and Gerald Weinberg.


There actually used to be a job description "Programmer/Analyst" or "Analyst/Programmer" - in the old days you were expected to do it all yourself.


Of course: commonly abbreviated as "Analrammer" ... on second thought, let's go with "Programalyst"


Good BAs that have domain specific knowledge are really valuable.


Sure, my issue is with the culture and idea of what is needed that has driven the structure of the workforce and the way many orgs treat the role.


Sorry - I was trying to agree with you :-)


If you're in the area, what books are good guides on this topic?


Because of when the culture shift happened that started devaluing systems analysis, a lot of the works are extremely dated in terms of the context into which they see the field fitting (written in the 1970s-1980s); Just Enough Structured Analysis (Ed Yourdon, 2006) [0] is a free mammoth PDF in which Yourdon updated his earlier influential work in the field; it is still quite dated when it discusses (e.g.) particular automated tooling, but it covers the scope of the field and a fairly comprehensive (but highly modular) approach to it in depth.

[0] http://zimmer.csufresno.edu/~sasanr/Teaching-Material/SAD/JE... (first live link besides a non-downloadable Google Drive link I could find, the primary distribution was Yourdon’s personal Structured Analysis website which became defunct not long after Yourdon passed away.)


> But all that is part of programming. It's not made harder than it needs to be, it is that hard. Because exploring the whole problem space with the stakeholders and deciding together what the software should do is a central part of the problem of programming.

strongly agree, but I've found a lot of developers want to make a 6-figure salary and not have to do that part.

Furthermore, oftentimes the stakeholders themselves don't have a concrete idea of what they want; it's nebulous in their heads because they've identified a problem and a general direction, and they are relying on conversations with the technical team to help nail down the details.

That back and forth is the meat and potatoes of software development, and it's what separates senior from junior.


> exploring the whole problem space with the stakeholders and deciding together what the software should do is a central part of the problem of programming

I think the situations that the parent comment is addressing don't really fit the description of "exploring the whole problem space with the stakeholders and deciding together what the software should do"; the programmer is often only looped in _after_ the stakeholders decided the requirements, and if there was any "exploration" done, it was deemed complete before the programmer had a chance to give input. If management actively chooses to work in a way that doesn't facilitate programming being done well, that's not programming being hard; it's just management being bad.


Sure, but this is a degenerate case of bad management. Leaving bad management and rigid role definitions aside for the moment, I think it's fair to say that getting the requirements and tradeoffs right is the hardest part of Product/Biz System engineering. It definitely requires close engagement from technical folks. No corporate environment is perfect, but if that's not happening at all, then it severely hampers your skill development and upward mobility as a programmer, and it may be time to look for greener pastures.


that's nice, but if the "stakeholders" tell you one particular set of requirements at one point, and then a different one later, that doesn't really have anything to do with exploring the problem space.


The problem space is the business, and business needs change with the market


Yep; if your state passes a law between the time you started the project and now, it doesn't matter: you still need to adapt the software to be compliant.


People are allowed to change their minds, even the ones labelled "stakeholders".


sure, but the thing introducing complexity at that point is not that you did not explore the problem space.


yeah, it's an unfortunate reality. Odd analogy, but it's like breaking up with someone and it hurting them. There are things you can do to help mitigate, but it is what it is.


Programming is a lot like design

- the customer tells you they need a design for their product, but they don't know what design they need, just what design they want. Now of course it falls back on you if you give them what they want and it doesn't work well.

- there are many ways to tackle the same problem, many of them work okay, some of them work well and look nice, some of them work well, look nice and are easy to manufacture and maintain. But it also has to be the solution they need and one where you can convince them they want it. Finding a good solution is a lot like finding the greatest common divisor of a large set of numbers.

As someone who worked in design before I can say that the most work and stress for me was bringing the clients up to speed, so I can communicate to them about the designs on one level. That meant explaining to them why their primary-color-powerpoint-comic-sans-poster communicates things they do not intend communicated.


I suspect that this sort of problem is present in a _lot_ of fields; it's not a programming problem or a design problem specifically, it's a human communication problem, and those can make just about anything more difficult.


Exactly, just think about any construction site on earth where throughout the construction they realize that they actually need windows or electricity.


I've just spent the last two days building out a system based on an API created by our client's internal dev team, only to have both the project manager and the client throw in a massive curveball they knew about but neglected to mention - the API's been rewritten and replaced with a new one with a vastly different architecture.

That's ~12 hours of work down the drain and one demoralized developer.

It baffles me how this kind of thing continuously happens everywhere. Part of it seems to be that no matter how many times you explain it to stakeholders, they don't grasp that development takes time, and that altering the plan midway through is not only incredibly distracting but comes as a mental blow to the developer.


I think it is more that they don't respect developer time.


Any project of any size with any degree of coordination ends up with considerable wasted motion and "do what I mean, not what I literally said." Maybe this wouldn't happen in an ideal world. We don't live in an ideal world. Even rigorous efforts to specify requirements (e.g. waterfall) didn't work very well, and it's pretty much the definition of iterative development that you throw a lot of stuff out, but hopefully not big chunks (which 12 hours is not). That would be spending a month on something only to find you were on the wrong track.


I think the point is that nothing about this is really that specific to programming; what we're talking about here isn't an instance of programming being hard but effective communication being hard. The parts of software development that are time consuming aren't very intuitive to non-programmers because there isn't an obvious way to show tangible progress, which makes it very hard to track externally, so it relies more heavily on verbally communicating the current state of things, and that introduces the potential for a lot of misunderstandings.


Designing in all sorts of fields involves long stages without much in the way of visible progress as people work problems out. Developers aren't nearly as special as they like to think they are.


which is why you get in contact with their development team directly.

I've seen internal teams that thought the communication channels couldn't go dev to dev and it was a nightmare.

If you're experiencing this, consider that you have options if you're willing to take them.


I feel like communication of requirements is difficult too. I'm currently a business end-user, with developer experience, who is maintaining legacy VBA technology, trying to get an official system produced by a 3rd party contractor.

The requirements started out truly quite simple - we want a database and a nodejs web server.

But Technology are gate-keepers. We're not being trusted with what we want, or with that level of control over the system. Which then starts this cat-and-mouse game between what Technology will allow and what the business wants.

I've always vouched for "Internal IT" - specialised developers who work closely with the team, even doing the jobs users do, to self-identify what technology is required and kickstart these projects.

Edit: in this particular project, I handed over numerous diagrams of how the system should be architected, and the Technology BAs came back with a proposed set of technologies (SASS) that simply wouldn't be technically feasible... It's really made me lose hope.


This actually touches on one of SV's durable advantages in the software world: the realization that if you want good software your engineers must be involved in the requirements gathering, design, and scoping phases. Critically, the business understands that engineering is the most authoritative voice on cost and since all business activities should pass through an ROI analysis filter engineering needs to be in the room.

Keeping engineers in the room then adds a positive feedback loop: as they understand the business better they start to offer up new opportunities to the business that have surprisingly low cost.

(I will caveat that for sufficiently solved problems you can generally involve software engineers less)


I think there's something to this. When I write programs to solve my own problems I find the process easy and enjoyable.


How big are the programs, how often do your requirements change, and how many years or decades do you develop and support them?

Because tiny one time programs are indeed fun for a while until they “grow up”. :)


Small, rarely, and usually for one or more decades :)


After 15 years of software development experience, I've been thinking about this a lot lately. It's like you never stop learning to be more careful. Even after 15 years, my mind is expanding. Sometimes I find code that I wrote 2 years ago which, at the time, was produced with far more care and extreme pedantic attention to detail than the average developer would have given it, yet it was still not correct for ALL POSSIBLE edge cases. It's insanely difficult to produce correct code. I'm starting to have thoughts like "Has anyone in the world (aside from Knuth haha) written a single program which is over 10K lines of code and which is 100% bug free right now? Does such a program exist?" I don't even believe that Bitcoin is bug free; it just has too many lines.


I think about this pretty often from the opposite perspective. As a frequent user of software, I can't get over how much it all sucks. Nearly every nontrivial piece of software I've ever used has had annoying and frustrating bugs and it feels to me like that's only getting worse. I agree that there's no correct software and I think the software engineering discipline needs to be blamed more for it. I don't think we deserve these salaries just to churn out broken code. I love programming but you can bet that the second there's an AI that can write perfect code, I'm advocating for it to replace us all because I'm tired of broken software.


Well, there is a massive force that works directly against shipping unbroken code: people (read: managers and stakeholders) don't want to wait. They want the value for their money now, and from their perspective a certain degree of brokenness is fine.


As long as it works "well enough", a broken product that ships now is better than a perfect product that never ships.


Yes, but this also very easily turns into shipping absolute crap, which in itself can cause a good concept to flop. There is a balance to be had, and at least some decent minimum requirements on what is good enough to ship. You want neither to overdo it, nor to ship the first draft that sort of kinda works, and that second thing happens way too often.


Fully resonates with me. The complexity and bloat of much modern software has gotten out of control, to the point that even users end up being victims, in spite of all the additional effort, workarounds and automated testing undertaken to try to shield them from it.

Developers are fallible humans and almost every single developer tends to underestimate the complexity and difficulty of what they're dealing with. Even those developers who are naturally good at coding are nowhere near where they need to be to produce merely correct code. Not outstanding, readable code, just correct code.

Modern software has become an inseparable amalgamation of people and code because the code on its own is just not good enough. DAOs which can operate on their own with minimal to no human intervention are still a pipe dream.


Hey, did you hear about the company that wrote perfect software?

No, they went out of business to some other company that wrote imperfect software with a lot more features a lot faster!

Especially in the '90s I remember hearing the "Why does everyone use X, it sucks and it crashes", and yet whatever X was tended to have a broad range of features clients needed.


I recently watched a video about craftspeople who spent a lifetime mastering their particular craft and one was a Japanese calligraphy brush maker who had been working on a particular brush for 25 years. He called it “no compromise craftsmanship”.

It made me wonder what a “no compromise craftsmanship” computer or software would look like. For one, it would never truly get done, there’s always more to do, but then again I guess that’s the same with the 25 year calligraphy brush.


> It made me wonder what a “no compromise craftsmanship” computer or software would look like.

Dwarf Fortress fulfills each point of your criteria, and some more that I could add. It's an epic achievement.

I don't play it. I've never invested the brain cycles it would take to learn the interface, and I know that if I did I'd fall into a deep, deep hole that'd compromise every other priority in my life.


Oh, yes! DF is an excellent example.

I don’t play it either; I tried it once or twice and it was just too much for me. I might try the new graphical Steam version they released last year sometime; it looks a little more accessible.


What software are you using, and what were the last 10 frustrating bugs you encountered?

I find it fascinating, because my use of software seems bug-free in the last 5-10 years. It is hard for me to run into a frustrating bug nowadays.

When I was in high school there were just loads of bugs and everything seemed broken - this was 2005ish and before. Windows crashed now and again, I had to reinstall frequently, and software crashed taking my work with it, so I still have a compulsive ctrl-s tic.


I couldn't tell you the last 10 bugs I saw (they are too frequent and varied to memorize), but here are the most common bugs I run into:

- Firefox randomly freezes up on Android (Google Pixel 6a), and then sometimes this crashes my phone and causes it to spontaneously restart. This usually happens at least once a day, often more, despite being very widely reported for a very long time.

- Assorted Adobe software develops strange behavior after it has been open for a while (selected items stop showing as selected, some tool or editing technique spontaneously stops working, exported images are borked, pressing ctrl+S reports that it has saved the file but it actually has not, etc). I have probably seen 50+ unique failure states over the last couple of years. I am affected at least a couple times a week, and I have learned to restart my software several times a day as a preventative measure.

- On desktop (Windows), some website will fail to load something or it may display bizarre behavior (erratic scrolling long after the page has loaded, back button stops working, links stop working, video playback stops working, etc.). Usually reloading the webpage fixes it, but sometimes the tab itself has broken or the browser has to be restarted. Other times, it's a consistent bug in the website and there is nothing to do but wait a couple days for them to fix it. I usually see website/browser issues like these about once a week or two.

It's better than the 90s, but it's still pretty bad and could be way better (as proven by all the software/systems/sites out there that don't have so many bugs).


> I find it fascinating, because my use of software seems bug-free in the last 5-10 years.

It doesn't have to have an explicit bug in order to be shit.

There are too many instances where something is doable, but only painfully so. Copy/paste over the last 10-15 years is a perfect example of something that isn't bugged but is complete shit in most software nowadays.

search is another one, how often have you been on a website where ctrl+f doesn't work right because subsequent data only loads after you scroll below the fold? no bugs, but still shit software.


> What software are you using, and what were the last 10 frustrating bugs you encountered?

I actually tried writing a section of my blog called "Everything is Broken" to cover some of the ways software breaks (as well as some issues in general): https://blog.kronis.dev/everything%20is%20broken

Unfortunately, there are so many bugs out there that I only write about the things that are annoying enough to force my hand. Some of the last things that happened to me:

  * Ubuntu LTS updates broke my GPU drivers, preventing the server from starting up (among other things)
  * Debian updates made some unnecessary piece of software start listening on port 25, thus breaking my mailserver in Docker
  * OpenProject packaged PostgreSQL inside of a single container, except they didn't bother setting up automatic data migration for new major versions
  * Vector maps are somehow way more laggy than raster tile based maps, horrible user experience on mobile for me (video included)
  * a trailing comma in Visual Studio Code configuration file made fonts end up being non-monospaced
  * Kdenlive (probably due to ffmpeg) failed to correctly apply text titles on top of a video at the correct times
  * containers break due to different file system permission models on Windows and Linux, bind mounted files have incorrect line endings
  * Nginx crashes the entire web server if something like 1 out of 20 reverse proxied domains doesn't have a successful DNS lookup on startup
  * Grav (blogging software) can have changes be made to the install without admin authorization, my blog got vandalized
  * Oracle database randomly dropped connections, some relation to a generated column
I've also had issues with GRUB, disk management in Linux, Nextcloud, Libre Office, OpenCV, Docker Swarm and pretty much every class of software (desktop, web, CLI tools, GUI stuff, OS components) out there - sometimes in pretty exciting ways of them breaking!

I also have a folder on my computer where I collect screenshots of software breaking out in the wild; currently it has 143 images, though perhaps that's a bit too much for blog posts. The last 10 there are:

  * Mozilla Observatory failing to analyze my site due to their own CSP being broken (script-src)
  * PHP install failing because the ondrej/ubuntu PPA failed to work like it should
  * my bank login page having an untranslated text string for one of the options: #xml-dtd.LOGIN-TEXTS.LOGINEPARAKSTS
  * my mobile service provider telling me that I've used 212166% of my mobile data
  * me being unable to modify a Visual Studio Build Tools install locally because they want to update their installer but I'm offline
  * Windows 11 Explorer breaking the file path bar and showing me two overlapping text strings for some reason
  * Oracle not giving me my OCA cert because my name contains non-ASCII symbols
  * the War Thunder game telling me that I have 2147483647 crew available for my tank
  * OpenProject functionality to move multiple issues failing... despite them having a dialog exactly for that
  * installing a Ruby Gem following official SQLite instructions results in UDS:Trojan-Downloader.Win64.Alien.eg/ef being detected
Aside from that, perhaps my favorite bug is this: https://i.imgur.com/97wu8j5.jpeg


I see here a bunch of:

- not really a bug, the system just was not built to support that scenario

- not really a bug, the user does not understand the system and thinks it should work differently

- not really a bug, a combination of 2 or more pieces of software that have no way of knowing about each other

- not really a bug, because it does not affect me in any meaningful way

There are some I can agree are bugs of course. Though I see I am getting older and much more humble and lenient.

On the humble side: if I expect some software to work some way and it does not, I step back and really go back to understanding the documentation and what it all means. I don't call things out as "failing to work like they should" (which of course might be my issue, and the parent poster might be right to call it out).

As for lenient, I just don't care about labels, text misalignment, etc. The same goes for expecting non-ASCII symbols to be supported: yes, we have had UTF for quite some time, but it is also not trivial to switch to, and the company most likely has loads of other work to be done.

In the end, the world is not about me; there are tons of users who have other issues, and my edge cases are not affecting enough people. The world is much bigger, so to say. If it does not affect me in any meaningful way, I just move on.


That's a very fair point, that's why I had to mention the whole "(as well as some issues in general)" thing as well.

For example, if software works in a configuration that was set up correctly (according to their own instructions) in version #1, and after an update version #2 makes the setup no longer work, is that a bug? I'd probably say it is, unless the patch notes clearly instruct that something needs to be done, though even that is debatable. What about the case where you don't even consent to installing some package that will take up a port on your OS and break something else, but it silently gets pulled in through some update? There's probably nuance to it all.

At the end of the day, as a user, I might subjectively not even care about the internal workings behind some abstraction and the many reasons behind it, but rather about how it should work in a reasonable configuration - if a web server refuses to start if it cannot resolve a domain that it's supposed to reverse proxy when starting up, even though the other 19 out of 20 could be proxied, is that a bug? Probably not, but it sure seems pretty darn broken, since those 19 could be served and the faulty domain simply checked later.

Of course, the software itself doesn't care about the user either, so at the end of the day it is the user who needs to learn patience and to trudge through the docs, posts online and other sources to hopefully resolve the issues, even if it's a bunch of weird workarounds or concessions, no matter how frustrating they might be. But hey, there's nothing wrong with documenting said frustrations or issues.

I am glad that you've embraced a more neutral mindset, though!


> I am glad that you've embraced a more neutral mindset, though!

Putting on the proverbial blinders and declaring software "better now" is not really something I'd consider neutral, to be honest. Sounds like expectations dropping over time, more than anything. Note how there are plenty of real examples of outright bugs that are left unaddressed here and no acknowledgement that yes, the world of software is full of them.

I don't particularly care whether someone has a rosy outlook in general, but it's pretty odd to declare that software is somehow so good when it's at the very least as bad as it was in the early noughties.

It's understandable that people don't like overly pessimistic people dooming and glooming too much, but I find the inverse equally if not more annoying. If you factor in how much slower most software experiences are versus what our hardware should achieve, plus all of the bugs that are at least as plentiful if not more so, our general software output is much, much worse than it ever was. We're not even producing it much faster, because of galactic-scale overcomplication in software stacks.


Some things I could come up with off the top of my head. I wrote down every bug I found in programs for a week or so at one point and the list was way, way longer than I thought it would be. It highlighted that despite my fairly pessimistic view on software there's a lot that went unnoticed because I just don't even expect it to work very well anymore.

I would encourage you to write down bugs when you see/experience them but in all honesty I don't know that it will give you much; it will probably only completely destroy your positive outlook on things that you seem to have retained.

- On YouTube sometimes the interactive elements of a video (quality settings, etc.) will also scroll my page down about 15% when I interact with them. If I click on the settings wheel to change the speed of the video that's 3 clicks and I'll end up ~50% scrolled down from the video I'm watching.

- YouTube randomly resets my settings for inline playback when I disable it. With inline playback "Add to watch later" is not available as a thumbnail hover option.

- When I was running Windows it would randomly set my bluetooth headset volume to close to zero out of nowhere without any changes made on my end, but only on a reconnect, so I would always have to check every time I connected whether it had done it.

- Sometimes when my bluetooth headset connects on Linux I'll just plain have no sound at all despite it saying it's connected. The workaround is to just reconnect.

- During a routine package upgrade for some reason something installed `xdg-desktop-portal-gnome` and this made every application that uses GTK (I think) start about 10 seconds slower. Uninstalling this package solved it.

- Thunderbird will leave an empty e-mail subject line sometimes when I delete an e-mail I've received, but the body it shows when I click it is another e-mail I deleted.

- World of Warcraft will randomly lose connection despite there being zero connection issues and rejoining will immediately work. This happened maybe 3-4 times per week in the 3 week period I played recently.

- Discord sometimes just removes my key binding for toggling push-to-talk.

- Elixir (via `mix`) refuses to compile dependencies sometimes and solutions range from having to remove folders, update deps and all manner of things. There's usually no real rhyme or reason to it and we all just nuke a folder or two and move on.

- Same as above but with `node_modules`.

- We had a package from `npm` that would sporadically not type check with our usage and we could never figure out what it was caused by. Every 20th or so build `tsc` would report a type error where none had existed before. This was with pinned versions.

- When I used VSCode I would regularly end up in a situation where VSCode suddenly started saving files super slowly and took forever to auto-format code. This happened with at least 3 different languages so it was not language or extension-specific. I've also seen it happen live to at least 3 other people.

- Plex will die when playing video files of a certain quality on my TV despite playing them fine for ~15-20 minutes.


Bug-free code isn't pragmatic at all.

For any system that is reasonably complex, one that solves a real-world issue, there's simply no way to handle all the edge cases.

One thing that I've found myself doing in recent years is to simply let the program blow up and have the end user experience a 500 error. I know this isn't ideal for so many orgs but I've come to the conclusion that the worst thing you can have happen is a feature that breaks but returns a "200".
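To make that concrete, here's a minimal sketch of the two styles in plain Python (the handler names and the process() function are hypothetical, purely for illustration):

    def process(request):
        ...  # business logic; raises an exception on failure

    # Anti-pattern: swallow the failure and report success anyway, so
    # monitoring only ever sees 200s and nobody notices the breakage.
    def handle_swallowing(request):
        try:
            return 200, process(request)
        except Exception:
            return 200, {"ok": False}

    # Blowing up instead: let the exception propagate, so the framework
    # turns it into a 500 and the failure shows up in logs and alerting.
    def handle_loudly(request):
        return 200, process(request)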


> One thing that I've found myself doing in recent years is to simply let the program blow up and have the end user experience a 500 error. I know this isn't ideal for so many orgs but I've come to the conclusion that the worst thing you can have happen is a feature that breaks but returns a "200".

Acceptable for webapps. I can't really think of anywhere else that it becomes acceptable to simply say "We had an oopsie. Try again later".

Mostly what you want to do is give the user an actionable error message - i.e. "This is what you can do to fix this error".

"cd /somedir" gives errors of the form "permission denied" (i.e. change your perms level, or ask the owner of the dir to change its permissions), "not found" (i.e. that dir does not exist, so create it if you need it), "i/o error" (i.e. your disk may be bad, run fsck), "Not a directory" (i.e. specify a directory, not a file), etc ...

With most programs written in HLLs these days, all I get is something along the lines of "Operation could not be completed", or, worse, an exception handler that has absolutely no clue what is needed to fix the error (a real one I got last week: "expected integer, got string". I have no idea where in the 3k lines of input that field actually is, or what it is called).

The only good error messages I see are written in programs that don't have exception handlers, so the error is printed out where and when it happens, and the programmer who wrote the code that generated the error is also forced to handle it, and so has all the context to generate a reasonable error message.
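As a rough sketch of what that looks like when the error is handled at the point of failure, where the context still exists (plain Python, with hypothetical record/field names):

    # Instead of a bare "expected integer, got string", report which
    # record and which field were bad, so the user can find it in 3k lines.
    def parse_age(record_no, raw):
        try:
            return int(raw["age"])
        except (KeyError, ValueError, TypeError) as e:
            raise ValueError(
                f"record {record_no}, field 'age': expected integer, "
                f"got {raw.get('age')!r}"
            ) from e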


OP here. This is one of the exact things that, in my mind, makes programming hard. It probably wasn't covered in the post, but striking the right balance between "bug free" and "pragmatism" towards accepting lack of functionality and error tolerance, is crucial.

Within certain domains like health care (my area these days), space travel, and the military, edge-case management plays a significant role in the evaluation of where to place your effort (although that doesn't exclude the ability to fail fast and hard during development cycles). In other domains, exposing users to these hard failures is considered perfectly fine. It depends...

It's just one other aspect - when thinking beyond just the code - that makes the decisions and considerations I have to make every day "hard".


This is so true. I see a lot of bad APIs, and returning something like a 200 response + a message body saying something like "Psych! Actually it's broken" is a surprisingly common anti-pattern.

Reminds me of Richard Gabriel's Worse Is Better [0], one of the insights of which is that it's often better to fail on edge cases than to try to handle every single one.

[0] https://dreamsongs.com/WorseIsBetter.html


This particular thing is a matter of opinion; there are pros and cons to each approach.

The question is what constitutes an error, and whether web semantics and application semantics should align.

Should I return a 404 on a missing record? And if I do, how can I differentiate between that and a routing error?

My point is that you're confidently stating something where both sides are neither right nor wrong; they simply value different things.


We had to do this in order to show the user a relevant error message, because our system is behind Apache and it doesn't forward the content body from WSGI for (IIRC) whole classes of error codes.


Another almost worst thing is having production environments with significantly different stacks to what you are using locally, with branches in the code to account for this.

This message brought to you by a day of commits dealing with edge cases that only manifest in the cloud environment which uses a "strategic" database technology that is impossible to run locally.

And it deploys from one of our old crappy Jenkins pipelines that decided 30 minutes ago to start having a failing volume.

Should have just used GitHub Actions + Postgres.


>Another almost worst thing is having production environments with significantly different stacks to what you are using locally,

Everyone has a test environment; some people also have a separate production environment :D

Database/data-size issues are typically the worst here. Dev/UAT environments are typically considered a cost center, and you'll commonly see them with some small amount of RAM and compute resources. Which is fine when you're testing 'small data'. But if you have enterprise-style clients that have hundreds of gigs/terabytes of data, your dev environment behavior is generally going to be very different.

Ran into a case where an enterprise client hit massive db performance problems. We found an unindexed table scan that was killing performance. When doing a post-mortem on it, it turned out that 95% of our customers only had around 5 entries in the table. No reason to add an index for that. But this customer had found a neat and interesting workflow that had added thousands of entries to that table.


>This message brought to you by a day of commits dealing with edge cases that only manifest in the cloud environment which uses a "strategic" database technology that is impossible to run locally.

Did you consider running a second test instance of the DB in the cloud?


> Did you consider running a second test instance of the DB in the cloud?

This is using the test instance :) The issue replicates both on prod (cloud) and test (cloud).

But not locally. So a nice 30 minute deployment pipeline run to test if your fix worked, since we are firewalled from direct DB connections from local dev machines, even for test instances.


I've got a 10 min lag. Now I don't feel so bad.

SOP though..


> For any system that is reasonably complex, one that solves a real-world issue, there's simply no way to handle all the edge cases.

There's a spectrum between 'all the edge cases' and 'most of the edge cases'. A web app for a blog is not exactly like an automaton onboard the James Webb telescope...


As long as you're fixing and preventing the 500 in the future, this is how you build stable software.

It's not possible to predict everything, so instrument, let your software be loud, and listen to it.


Bug-free is more about how narrow the spec is.

Can your GUI app work on Windows 95 with Polish as the system language while using a Chinese input method, and the user holding Alt all the time? Probably not. But the question we should ask isn't how could we make it work, but why the hell do we need to support this use case?


> But the question we should ask isn't how could we make it work, but why the hell do we need to support this use case?

I don't even think it is necessary to ask that question. The only question that should be asked for something like this is "How do we handle all unsupported use-cases?"

Because then it becomes simpler, when an edge-case causes the software to freeze, to detect that edge-case and hand it off to the "not supported" handler.

In the best case scenario, we simply detect whether the use-case is supported and then either proceed or hand control to the not-supported-handler.

In the worst case scenario, when the bug report comes in, we don't even bother trying to reproduce it, we just add a general check for it and move on to the not-supported-handler.

In this particular case, the detection can do any one of the following:

1. Check for English, and if not, display "unsupported language".

2. Check that the input language and system language match, or display "unsupported".

3. Check that the OS is supported, and display "unsupported OS" if not.

4. Check that the input is valid (i.e. no special codes, whether from holding down Alt or otherwise), or display "unsupported input".

Whether the spec is narrow or wide, the developer needs to do the work to ensure that the user is running within spec.

Not checking results in more "bugs" that are, as you said, use-cases which shouldn't be supported anyway.
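A minimal sketch of that detect-and-hand-off shape (Python; the Env type and the individual checks are made up for illustration, each mirroring one of the numbered items above):

    from dataclasses import dataclass

    @dataclass
    class Env:
        language: str
        input_language: str
        os: str

    def configuration_supported(env: Env) -> bool:
        return (
            env.language == "en"                    # else "unsupported language"
            and env.input_language == env.language  # else "unsupported input"
            and env.os in {"win10", "win11"}        # else "unsupported OS"
        )

    def run(env: Env) -> None:
        # Every unsupported configuration funnels into one handler.
        if not configuration_supported(env):
            print("Sorry, this configuration is not supported.")
            return
        print("running main program")  # stand-in for the real work

    run(Env(language="en", input_language="en", os="win10"))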


>"How do we handle all unsupported use-cases?"

Until you find out that your operating system handles 95% of unsupported cases in one way, then uses a hidden API to deal with the weirdness of other languages, and the unsupported case isn't detected in that last 5% of languages. Of course that never came up in testing, because who uses a Hebrew/Japanese combination? And now the error message being returned by your application has nothing to do with the actual problem.

Your worst-case scenario is just the common edge-case bug handling that support/QA/devs deal with every day.


I don’t know about bug free, but Code Complete claims that the space shuttle project shipped a million lines of code with no bugs in production.


Somewhere, Edward Kmett has a great post about how he realised he was just writing buggy code that would have to be thrown away again and again, and that was the point where he switched to writing Haskell libraries that were perfect and could live forever.


> which is 100% bug free

define "bug free". In Python, I've been reimplementing monads recently. It's such a shame that Python still doesn't have Null/None propagation. After each obtained result and before using it in the next chain, I have to run a series of sanity checks (is it Null or not, etc.) which should be resolved automatically by the language. Alas, most people don't care.

Something like this would be nice:

    result =? foo(bar :? str)
meaning that foo should receive a str bar, but maybe bar is not str. Then foo should return something, but maybe it returns None (the =? part).
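For reference, here's a rough sketch of that kind of None propagation as a Maybe-style wrapper in today's Python (illustrative only, not a full monad implementation):

    from typing import Callable, Generic, Optional, TypeVar

    T = TypeVar("T")
    U = TypeVar("U")

    class Maybe(Generic[T]):
        def __init__(self, value: Optional[T]) -> None:
            self.value = value

        def bind(self, fn: Callable[[T], Optional[U]]) -> "Maybe[U]":
            # Short-circuit: skip fn entirely once we're carrying None.
            if self.value is None:
                return Maybe(None)
            return Maybe(fn(self.value))

    # Each step may return None; the chain propagates it automatically.
    result = (
        Maybe("42")
        .bind(lambda s: int(s) if s.isdigit() else None)
        .bind(lambda n: n * 2)
        .value
    )  # 84 here, or None if any step had failed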


This is an interesting idea - I'm not sure I understand it fully though. Do you have much luck with static type analysis? I find in Python nothing's ever certain, but mypy with good type hints solves a lot of these kinds of None-based shenanigans.


I have heard about mypy but haven't seriously used it. Type hinting has helped a lot (basically you'll end up writing Python like it's Rust), but something like the Maybe monad (my example above) would reduce a lot of boilerplate.


Something like that would fail code review at any large organization. Why? Because the ternary operator is hard to maintain, so the reviewer would simply ask you to use if..else syntax instead. I gather you never worked in such an organization.


Not the OP and not a fan of ternary operations, but if programming were left to large organizations we would still be writing .asp pages.

Which is to say that innovation happens only when you try to do stuff just a little differently each time, so I'd say that code review as currently implemented at those big organizations works against long-term programming-related innovation.


That is not the ternary operator.


But following the setting of the gp comment, that operator would fail code review as well. Because the book of sacred rituals says so.


The goal isn't to produce perfect code; the goal is to evolve toward perfect code.

This gets missed by so many developers; once you change your mindset on this, you'll find you write better software.

One of the questions to ask when writing code is "how long will it take to discover if this goes wrong?". Make it quick to find, and let someone else in the future deal with the imperfections.

I'm not saying don't try for perfect code; I'm saying that not having written perfect code isn't a problem and it isn't a failing. Code evolves to be stable; it is not born that way.
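One concrete flavor of "quick to find", as a small sketch (the function is hypothetical): validate assumptions right where they are made, so a mistake surfaces immediately instead of three modules later:

    # Fails loudly and locally on bad input instead of silently
    # producing a wrong price somewhere far downstream.
    def apply_discount(price: float, percent: float) -> float:
        if not 0.0 <= percent <= 100.0:
            raise ValueError(f"discount percent out of range: {percent}")
        return price * (1 - percent / 100.0)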


Programming is hard mainly because it's open-ended. It's not a well-defined, finite problem. The domain of application keeps expanding, and the logic required to organize information flow keeps getting more complex and varied.

It all started as the serial, offline processing of numbers emulating some aspect of the world. Since then you have the major but still largely incomplete revolutions with real time, interactive and concurrent applications.

The metaphors have long stopped being numbers. LLMs pretend to encode human language itself. Our entire semantic universe, to the degree it is externalized, is slowly codified and mapped into bits.

You also have the deepening of the stack. It started with simple machine code, but now you can have like four layers of abstraction to help our poor brains manage the torrents of information flowing between very complex pieces of hardware.

So yeah, programming is hard because we are pushing hard to find the limits of what can be done with digital devices.

I don't think we are even halfway there yet. It's going to get much harder :-)


Author here. Thank you for this; from what I can tell, you're nailing my frame of mind and where I was coming from with this.

Writing valid computer programs - "Hello, World!" and the like - is, it's safe to say, "easy". What programming has turned into - and what a large percentage of professional programmers have to deal with on a daily basis - is not.

So it's an article sure to stir up some comments, because we're all coming at it from different angles, and people considering themselves programmers fill a wide variety of roles in their companies and in the profession. If people feel they're having an easy time, good on them! But feel no shame either when the impostor syndrome starts creeping in on you, or when the mountain you have to climb feels too tall.


For some reason programming is not getting any easier; I can even make an argument that it's getting worse despite all the improved tooling and andragogy investments we have made.

Programming is in some sort of local minimum and we keep going in circles. A few years ago I watched this talk by Gerald Sussman, "We really don't know how to compute" - https://www.youtube.com/watch?v=HB5TrK7A4pI - and it made me question my views on what I think programming is.

I don't know if it's because people are afraid to try new things, or because there is no funding to try new things, but we are kind of stuck. People think "don't reinvent the wheel" and for some reason believe we have built a wheel (as Casey Muratori says: https://www.youtube.com/watch?v=fQeqsn7JJWA). We have a square at best. There are so many problems in the world, and a lot of them need robust and resilient programs to be solved, e.g. self-cleaning public toilets or a worm farm that you feed bio garbage and it produces protein flour.

The challenge is exactly the same as it was 80 years ago: manage complexity in chaotic and emergent systems - chaos, complexity and entropy, as the poet said. And to be honest, I think we have gotten worse at managing all of those; most bugs that we encounter now are like a murder mystery from Detective Conan, and it takes Shinichi Kudo-level deductive reasoning to figure out what is going on.

Programming will remain hard if you keep thinking it is just a translation problem from a "business requirement" to a "computer language".


> I can even make an argument its getting worse

I agree and disagree.

We can do a lot more with a lot fewer people, but the issue is that developer ergonomics have taken a straight-up nose-dive due to the cloud.


That's why programming is more of a challenge and more exciting than ordinary old computer games.

Games only hold a challenge until the game-writer's secrets are broken.

When I first started out in computers, I thought the games would be thrilling. They weren't. But programming .. trying to pit my wits against the very fast moron who insisted on deliberately doing no more and no less than I told him to ... that was thrilling.


> Games only hold a challenge until the game-writer's secrets are broken.

Games ultimately are code themselves, and you can manipulate that code to some extent, creating other infinitely complex interactions (source: advancements in speedrunning, or various game challenges like the A-press challenge [1]).

[1] https://www.youtube.com/watch?v=yXbJe-rUNP8


Nice observation; I never thought about programming that way myself. This compares nicely with my experience with sudoku: it was fun until I found and implemented an algorithm to solve the puzzles.


The end user is the final boss, and your job is to keep them alive.


Programming was hard in the past; nowadays, with all the high-level languages and advanced monitoring/debugging tools, programming is easy.

Today, the real challenge is correctly managing and delivering changes and integration with other programs or external systems.


> At the end of the day code has no obligation or enforced requirement to resemble the problem it’s supposed to solve

Unless you do language oriented programming (in Racket) or create DSLs.


Compared to learning other skills, programming is easy: it is optimized for human use, legibility and learning, and it has by far the most learning resources. Yes, you might not be able to create a perfect and complex computer program, but this has more to do with the nature of the world and complexity than with programming.


Learning computer programming is easy ... mastering it is a life-long journey.


Your definition of programming is too narrow. It's like saying writing is easy, every 6-year-old can do it.


You assume I mean just learning syntax, I don't, I mean the actual skill of developing programs in production.


I've seen it said somewhere that computer programs are the most complex objects that humanity has created. Nothing we made before computers can compare: clockwork, cathedrals, electronic circuits, math proofs, novels, etc. None of these have as many interlocking "moving parts" as medium-sized software projects.

And in fact, programming is all about managing complexity. Really, that is all there is to it. The operations that computers perform are simple; anyone who can operate a light switch can understand them in minutes. But piecing millions of them together to do something useful takes a lifetime, and it involves layers of tooling, custom or written by others, among them things like compilers and text editors, libraries, OSes, etc.


The biggest challenge in programming is dealing with stakeholders and management that expects things to be done yesterday - as in, they can't accept the concept of unknown unknowns - mostly because they have promised ambitious timelines to others.


I thought I was going to find this far more entertaining post when I read the title: http://www.stilldrinking.org/programming-sucks


It is hard. And it isn't. How hard is it? In number of years / hours? Relative to what?

> “When you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind.”

-- Lord Kelvin


So Walt Whitman would express his mastery of poetry by what, the number of poems he's written?

It sounds like something worth considering only in some contexts but is phrased as an absolute.


Perfectly reasonable and expected response. I think if you asked Lord Kelvin, he would say he meant it only in a particular context.

If you said Whitman was more prolific than, say, Frost, but didn't have the numbers, then it is an empty assertion.


Yep I agree.


How can you assign a number to an unbounded problem?


> Having to translate real world requirements into abstract constructs that – when evaluated by a computer – model our problem domain to a satisfactory degree.

I tend to think of it as starting from abstract constructs and matching them to a known real-world problem to a satisfactory degree. Seen that way, it doesn't seem or feel so hard.

In mathematical theory, it starts by defining 0 and 1, and the whole thing comes together shortly after and can be used to model the real world to an amazing degree. Software engineering, in comparison, is still quite straightforward even at its highest complexity.


No, programming is complex.


Don't really like the trend of articles "quoting" themselves. If people want to quote you, they will.


It is not hard.

It depends on what you plan to use programming for. Creative and sophisticated programming is hard, but CRUD programming isn't.


"Programming is hard" ... let's go shopping instead?


Hard as in NP-Hard?


I once bought chessishard.com. Not good marketing value.





