[flagged] Tell HN: Modern software engineering is ridiculous
47 points by grumblingdev 61 days ago | 22 comments
My job would be so easy if I could just debug and trace things.

But modern software engineering doesn't give a shit about debugging.

There are just endless layers of abstraction, and the gluing together of hundreds of packages of compiled or transpiled code. To debug anything properly, I have to compile all the intermediary packages and set up the full toolchains they need.

I've been watching this for 20 years, and it's getting worse and worse.

New programming languages come along every few years, and the debugging experience starts from scratch.

I just want to be able to easily trace code in a debugger with minimal abstraction.

Everything is so cobbled together, and no one seems to get it.

This obsession with the "stack" of things named like bloop, blip, blaz... oh, we're moving to blap now, it's really great, I have to rewrite everything in it, and now they have an annual conference, and I found a job that lets me write blap, and it's venture funded.

The problem is no one has the right principles. It's a fashion show, and there is just so much money to cover all the inefficiencies of developers fetishizing the latest crap.

Just let me debug things anywhere in the stack with ease!




Yep. Ongoing problem for decades. And now with “AI” to make it more ridiculous. I look forward to retiring soon and never looking at code again.


lol

But in theory, AI should be able to write directly in assembler. The 'stacks' would be merely layers of understanding in the AI, and it should be able to compress these down to tight assembly code that does the thing required, rather than using actual software layers. In theory.


In theory the AI can just hallucinate the output of the program, cutting out the programming altogether. That will work for companies that already use spreadsheets and McKinsey consultants.


I hope to retire in about 4 months and I'm counting the days!


Amen! The funny thing is this sounds like a “get off my lawn!” rant, and yet it’s so true if you’ve been around long enough to have seen a few tech cycles. But this also only seems to matter if you are NOT in a “move fast and break stuff” environment.


I agree. I also think the "move fast and break stuff" mindset is one of the worst things to happen in the industry.


> "move fast and break stuff"

But it makes me wonder how people have time to invent all this shit when they need to move fast. Learning a new stack every 2 years doesn't seem like moving as fast as you could.


Ironically, the more you strip out the unnecessary layers of abstraction and frameworks, the less likely you are to need a debugger to trace code: once you know a bug exists, you can just see the cause in the source code. Bugs you haven't encountered yet are still completely hidden, though :D


Abstractions would be fine, if they were solid code. Code that I don't have to write, that I don't have to debug, that I don't have to maintain, is wonderful - if it works.

The problem isn't abstractions. The problem is that too many of the abstractions don't work - they have bugs. (Yes, I know, all code has bugs. I even found a bug in the STL once. Still, that's one bug I have seen in it in 30 years. There are other abstractions that are... less solid.)


I think that a lot of this is related to another thing that's gone wrong in the industry: releasing buggy software has become not just tolerable, but expected. The usual excuse is that it's no big deal because an update can just be pushed out later.


Everyone knows it, yet nobody will waste five minutes of their time writing a new function if they can import it instead, along with another 50 dependencies.


As a personal anecdote: the spaghetti code problem has been thoroughly fixed. It's been replaced with much worse spaghetti microservices.


The market figured out it's cheaper to hire more developers and pile on more crap than to make existing systems more observable.


With respect, I cannot fathom how anything is cheaper done this way. The easier explanation is that it's lazier, and that too many VC-funded firms have so much cash to burn that they simply don't give a shit what it costs. That's the only explanation for giving every programmer they onboard a four-thousand-dollar MacBook Pro to run friggin' text editors, plus all the server space they can buy/rent to run six layers of abstraction and 5 GB of bullshit frameworks just to link a web UI to a server-side data store and accomplish whatever task.

Like, I'm sorry, some offense intended: if you need something like Node or React to render a static web page, you are an embarrassment to my profession and I wish you'd get a different job.


I still yearn for the days when I could run the whole application in my IDE and step through it with a debugger, line by line.

Or even just insert print("foo") like the barbarian I am.

Now I can't attach the IDE because everything is 50 microservices. I can't print-debug because the logging system requires fully formed JSON output, and it's clustered and distributed and billed by the line, and I need to use a web UI to see my own application's logs, ffs.

Get off my lawn.


Couldn't agree more. And you didn't even touch on the pain and misery caused when an amateur-hour shop decides to do microservices like big tech.


This screed reads like someone who doesn't do testing. Or who only does end-to-end tests.

With both decent testing and decent monitoring/logging/o11y at each layer, no one should ever have to debug the whole stack at once; they can focus on the layer with the problem instead.

I wish we could put the old days of having a tightly-coupled stack of untested cruft behind us.


> This screed reads like someone who doesn't do testing. Or who only does end-to-end tests.

this response reads like someone who's never worked in an organization of any size

even with testing, I've seen all sorts of organizations mess it up; the tests become a net negative because they're not even used properly, just optimized for "coverage"


I find it a bit amusing to think of the evolution of things, in basic terms... within a program, we have these things called "functions", which have "prototypes" / definitions that define the number and types of arguments (using the C terminology).

We didn't always have those things... in the bad old days, there were just "subroutines" and gotos (without a call stack), and functions in C (pre-C89) with no prototype that you could just call with whatever arguments you felt like... and if the caller and callee agreed on the types, great; if not, maybe it worked, or maybe it crashed, who knows, depending on the nature of the argument mismatch (too many vs. too few, the types involved, etc.).
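
A minimal sketch of what that looked like (hypothetical file names, pre-C89 style, compiled and linked as two translation units):

    /* sum.c -- K&R-style definition; no prototype exists anywhere */
    double sum(a, b)
        double a, b;
    {
        return a + b;
    }

    /* main.c -- calls sum() with no prototype in scope, so the
       compiler silently assumes "int sum()" and checks nothing:
       wrong argument count, wrong argument type, wrong return
       type -- yet it compiles, links, and reads garbage off the
       stack at runtime */
    int main(void)
    {
        return sum(1);
    }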

So prototypes and functions with a call stack were invented to solve a problem. They meant that if the program compiled and linked, then at least all the callers and callees agreed on the prototypes and argument types, and things might work; if not, there would be a core dump or a stack trace where we could look at the stack, and that would tell us quite a lot about the program's execution state.
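
And the fix, roughly: once a prototype is shared, the mismatch above becomes a compile-time error instead of a runtime mystery (same hypothetical files):

    /* sum.h -- the contract both caller and callee must agree on */
    double sum(double a, double b);

    /* main.c */
    #include "sum.h"

    int main(void)
    {
        /* sum(1) is now rejected at compile time ("too few
           arguments"), and sum(1, 2) converts both ints to
           double automatically */
        return (int) sum(1, 2);
    }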

Fast forward to today... now we throw a lot of those inventions out the window and use "microservices" or "REST" or JSON, etc., where there is no prototype that the caller and callee agree on; it's all just unstructured or semi-structured (like the bad old, pre-prototype days in C...). If the caller and callee don't agree on some vague notion of what the parameters are, chaos ensues, and there is no one place to look to debug (like a core or a stack), because the system state is now spread across many machines, so there are lots of logs to correlate (at best).
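
The modern version of the same mismatch, sketched in C for symmetry (hand-rolled check, hypothetical field names). Nothing in either service's toolchain can catch that the producer writes "user_id" while the consumer looks for "userId":

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* what some upstream service actually sent */
        const char *request = "{\"user_id\": 42}";

        /* this consumer's vague notion of the contract */
        if (strstr(request, "\"userId\"") == NULL)
            printf("missing field: fails at runtime, on one of N machines\n");
        return 0;
    }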

A lot of people even describe this as a selling point... "yeah, it's great that we have microservices and loose coupling so we can upgrade the different parts separately!" If a strict schema is in use (like XSD, WSDL, protobufs, etc.), then it can almost work: as long as everybody agrees on the schema, sure, the individual parts can be upgraded separately. Oh, but... then how do we change the schema? Oops, now everything needs to be re-released all at once, and we're back to where we started.

If the schema/protocol never changes, then it can work... and indeed that's what IP, HTTP, etc. are; they've been set in stone for decades, so the clients/servers can change and that's fine. But if you have an elaborate distributed system with either a loose schema or a strict (and therefore frequently changing) schema, which are the popular choices, then you're screwed.

At least in the classical analogy with C/C++/Java/whatever, we admit to ourselves that if the function prototypes, arguments, etc. need to change, then that's a recompile, relink, and restart of the whole system, not just some parts of it.


Some people say “monolith” like it’s a bad thing.


Stop noticing things!


Fact, brother.



