I get the frustration, but I think this is just the cost of the incredible volume of software that we have.
As software engineers we pride ourselves on our systemic thinking, so it's tempting to look at the situation and say "we can and should do better". But it's an illusion to think of software ecosystems as being designed. The reality is individuals, teams and even huge companies are constrained by ecosystems which are too big to really control and optimize in a systemic fashion.
Useful software today beats perfect software tomorrow. That's why software is not well-optimized: because less optimized software beat it to market at multiple layers of the stack. That's why web apps slowly killed native apps: because shipping binaries to multiple platforms is way harder than loading a URL in a browser (or thin wrapper thereof).
One thing I've learned the hard way in life is to focus on what one can control. We may not like the tradeoffs others have made and the systemic outcomes they led to, but we can't control them. A better tactic is to pick one very specific area where you think you can make a dent and focus on that.
> A better tactic is to pick one very specific area where you think you can make a dent and focus on that.
Amen. Best practices taught me "don't break extra windows, and repair broken ones as you find them", but I'm finally at a point in my career where I can say, with confidence, that some broken windows aren't worth fixing; you might have enough expertise to know that it should be fixed, while simultaneously lacking the expertise to do so in a cost-effective way.
Case in point: in the iOS app I'm currently working on for a client, my team discovered a bug that was, very clearly, an interest payment on the technical debt the client has been accruing. The "correct" bug fix would entail rewriting a lot of existing functionality (i.e. paying down a big chunk of the principal, on the technical debt). This was not cost effective for us, since we were being paid to fix the bug as written. So we fixed the bug, while introducing functionality that could, in the future, be used to pay down the tech debt principal, should the client desire to do so.
Spoilers: the client will never desire to do so, and therefore we've actually just added cruft to the codebase. But this conclusion is not allowed, in the client-provider relationship; the bug is fixed, therefore contract conditions have been met, therefore the work is done.
The cause of bad software is, simply, that the incentive structures and scheduling required for making good software don't usually exist.
> the bug is fixed, therefore contract conditions have been met, therefore the work is done
Agile-type workflows also push toward this kind of prioritization— focus on story points, requested features, value delivered to the "customer", etc etc. It's obviously important, especially if the task at hand is to rein in a team consumed with grand rewrites and architecture astronaut stuff.
But at a certain point, the pendulum has swung too far the other way, and you do need competent technical oversight— not just someone who assigns you a sprint or two to "pay down debt", but who engages meaningfully in understanding what the debt is, how it came about, and what the ongoing cost is of carrying it; and who can actually set milestones on the cleanup effort and communicate to the larger organization how it was valuable work to have done.
> But at a certain point, the pendulum has swung too far the other way, and you do need competent technical oversight— not just someone who assigns you a sprint or two to "pay down debt"
Yep, if you are truly agile, you will be continuously improving the whole system you are working on; otherwise it gets harder and slower to "deliver stories" to your customers... it's a balance, to be sure.
Oh definitely, but that requires everyone to be on the same page about it: that every feature has a built-in overhead of some percentage of time allotted to general maintenance. Otherwise it's just a prisoner's dilemma where the one guy who doesn't do any of that work will be at the top of the list for promotion, because he gets 30% more done than anyone else, as measured in story points.
> but I think this is just the cost of the incredible volume of software that we have.
But it's not. The author gives specific modern examples of individual people absolutely destroying our concepts of how fast an app should be:
>> Work Martin Thompson has been doing (LMAX Disruptor, SBE, Aeron) is impressive, refreshingly simple and efficient. Xi editor by Raph Levien seems to be built with the right principles in mind. Jonathan Blow has a language he alone develops for his game that can compile 500k lines per second on his laptop. That’s cold compile, no intermediate caching, no incremental builds.
Jonathan Blow's language compiles 500K lines of code in 1 second! It takes me 20 minutes to do a full build of our monorepo, which is nowhere near 500K lines of code. And it's not even compiling and optimizing and linking, it's just transpiling typescript to javascript. It's ridiculous. It's ridiculous to think we can't do better.
Another case in point, modern games can render millions of polygons now (with UE5) at 60fps. And a modern web browser can hardly render some of these static web pages on my PC with an RTX 2080 and 48GB of RAM at 60FPS. Something is seriously wrong with our ideas of "performant code" if we think this is acceptable.
I worked on both games and browsers, and I’ve seen this comparison being made before.
Browsers are way more complex than what meets the eye. There are requirements for high quality text, page consistency, lots of different content types, fine 2D graphics, network ops, with low power consumption requirements multiplied on top. WebRender is somewhat of a game engine.
Real games are happy to cut all these corners for the sake of being impressive, and they mostly rely on hardware-accelerated functions.
I think web browsers could get framerates on typical webpages to 180 fps if it were a priority. I have found some sites that should be renderable at super high framerates that struggled to keep an average of 60 fps.
One reason that browsers aren't super fast is that it will always be possible to make a super slow site. Speeding up sites like HN is really not very important to battery life or user experience.
Also the necessary complexities of the browser make it much harder to get 100% out of your hardware. I would guess that browsers are 3x slower than they could be on most CPU/GPU bound tasks.
If there was an incentive for web devs to develop faster-loading and more responsive web pages then they would, e.g. if Google search weighted those pages higher in search ranking.
(they won't, because then Google would have to rank their own web apps lower)
> I think web browsers could get framerates on typical webpages to 180 fps if it were a priority. I have found some sites that should be renderable at super high framerates that struggled to keep an average of 60 fps.
Wouldn't it be better if most sites were only redrawn upon user interaction, remaining static most of the time apart from that? Think of the battery life savings!
> Browsers are way more complex than what meets the eye. There are requirements for high quality text, page consistency, lots of different content types, fine 2D graphics, network ops, with low power consumption requirements multiplied on top. WebRender is somewhat of a game engine.
Even in computer graphics you had standards like the full OpenGL and OpenGL ES, a more cut down version that was optimized for mobile devices and low power graphics processing.
Over the years, in regards to the web, we had Wireless Application Protocol (https://en.wikipedia.org/wiki/Wireless_Application_Protocol) and more recently Google AMP (though it didn't quite succeed for a variety of reasons), but apart from that we still push sites that do way too much to devices that don't have much processing power at all.
I'm not saying that browsers are the problem here, but rather the way we develop web solutions as an industry is completely insane! Most sites out there could look way more minimalist and use some safe subset of HTML/CSS/JS that would be optimized for better battery use, minimal interaction while retaining a professional feel and accessible, well engineered UX.
And yet, that's not what we have. I can't have 20'000 tabs open in my browser, grouped by month/year/site/arbitrary group, with full text search across all of them, retrieving the full and usable contents of them from HDD/SSD in seconds. On mobile, if I don't have a modern device and a browser like Opera, I can't even reliably open 100 tabs without everything slowing down to a crawl, especially without stuff to block trackers, ads, autoplaying videos, garbage scripts etc.
> And it's not even compiling and optimizing and linking, it's just transpiling typescript to javascript. It's ridiculous. It's ridiculous to think we can't do better.
I don't do (much) frontend development, so I haven't had a chance to use it, but it looks like esbuild does do better: https://esbuild.github.io/ -- are you guys able to use that?
I agree with the sentiment, though! And it's part of the reason I like Go: build speeds are fast, especially incremental builds (which is almost all the time, given how it caches compiled packages).
I prefer web apps over native for most things. I only trust a handful of native applications on my computer (neovim, Postgres, compiler tooling, stock Linux utilities, etc)
Corporate applications are highly suspect to me. I’m not sure why.
I much prefer running things in a sandbox like the browser. I also find that my resource usage is lower when I run Zoom, Slack, Reddit, etc in a browser vs their resource-hungry native applications.
> but I think this is just the cost of the incredible volume of software that we have
This may obviously be a personal stance, but I will never accept the excuse of scale for anything. If you can't do it at scale, then you can't do it. Quit scaling and making everything worse. This applies to software, population, consumerism, consumption, etc.
Useful today is a matter of time preference, and as all software is now essentially free, the cost is computer time you already have available. It may not be a law like a law of physics, but it is an observed law of behavior.
Meanwhile, I'm using a 13 year old desktop running Debian Unstable and it feels even faster than when it was new. It's subjectively faster than an M1 Mac Mini, despite the Mac beating it in benchmarks. I attribute this to two main causes: ruthlessly deleting/disabling everything I consider bloat (e.g. animations, text anti-aliasing, gradient fills, rounded corners, transparency, compositing, Javascript, anything wireless etc.), and using a low-latency gaming keyboard/mouse/monitor. Performance is available if you want it. But I think most people prefer aesthetics.
I think the biggest factor is animations. Switching from an old Android phone (with animations turned off) to a brand new iOS (at least at the time) felt like a downgrade at first, because every interaction with the UI was so much slower than I was used to. Clicking a button didn't actually make the thing I wanted to appear immediately, but instead it faded in over 200ms or something, making the phone that had 10x better specs feel much slower.
To be honest, I doubt disabling anti-aliasing makes any perceptible performance difference itself (although it gives me sharp text without needing 4K, which probably does). But your comment shows the importance of customization. I disabled things I personally considered bloat. Other people will consider different things to be bloat. Software should give you that freedom (and Free Software gives you the ultimate freedom to customize).
Actually, I would be interested to learn how much time you've invested in customization, especially if there were points when the system became temporarily unusable.
Almost certainly more time than the customizations have saved me. I still consider it worth doing, because I enjoy doing it, and it makes the computer more pleasant to use. I have made the computer temporarily unusable several times, but I have an old netbook too, so I'm never stuck without a usable computer while I fix it.
I don’t disagree with the sentiment of the article. But it really does seem to be dependent on the type of work you’re doing. In 2000, I started writing plugins for Final Cut Pro. You could write “real time” plugins for it! That meant you were working in YCbCr 4:2:2 at 320x240 and 30 frames per second (so quarter-sized frame, with 2/3rds the data that an RGB frame would carry and half as often). 22 years later, I’m still writing such plugins, but on the top-end systems you get multiple streams of 8K uncompressed RGB video at 60fps and you can play back in real time. Some things really have gotten significantly faster. But yeah, using a web browser to do normal tasks like display emails is insanity and very very wasteful.
"Back in the days" I used to think 256KB of RAM on a modified Speccy was already plenty, and dreamt about getting one of those 256MB(!) PCs, which would open a _universe_ of opportunities.
Heh, that was so naive - not because of it being possible in principle, but mostly because of all the bloat described in this article, plus misaligned business incentives.
These days, writing any software that would fit within mere dozens of KBs like the famous .kkrieger (an entire better-than-original-Doom-looking FPS game!), seems almost out of reach. Although it's motivational to dream once again through projects like Redbean that everyone is posting about.
But at the same time, I feel like unless you've lived through those times, it's sort of a "lost art" and is only appreciated by a vast minority these days.
In other words, the whole idea of minimalism in software is not even thought about as something that existed/could exist (you can't imagine a yellow Jeep if you've never even heard of a Jeep..)
Agree with the gist, but some of the examples are so poorly informed that the whole article is tainted.
> Google’s keyboard app routinely eats 150 MB. Is an app that draws 30 keys on a screen really five times more complex than the whole Windows 95?
Calling it 30 keys is silly, and undermines their argument because it certainly isn’t 30 keys.
How many MB is reasonable for features I use like the swipe feature? For Chinese entry? For spelling support?
> Google Play Services, which I do not use (I don’t buy books, music or videos there)—300 MB that just sit there and which I’m unable to delete.
The writer is showing their absolute ignorance of what Google Play Services does - a better name would be Android System Services. Maybe question the need for 300MB, but use some intelligence rather than just making crappy off-the-cuff strawman arguments.
Interesting article. I have thought a lot about this frustration on the job.
However, I’m on the opposite side of the fence.
In summary, there are two fundamental ideas in software engineering that people constantly grapple with, especially new grads:
1) If it works, it’s not broken.
Or
2) It’s broken until it’s not.
( also can be seen as “right” vs. “right now” )
Let me explain further:
- The first point refers to the idea that if the provided constraints are met, there are no further tasks to be done.
- The second point refers to the idea that there is a correct way to do something.
I’m not going to take long, but I strongly stand in the first camp. There is no value in building kingdoms in details that aren’t required. Could we have it run in 10ms vs 250ms? Could we make this look better? Could we do this in a different way? Etc. None of this matters if it works.
Now; you might ask the following questions:
- what about scalability? What if I’ve done this before and I know pitfalls of xyz?
And my answer is, politely, iterative improvements after a working model if those improvements become necessary. Those improvements are not de facto necessary. And when I say necessary, I mean customers are returning the product, not an engineer can’t sleep at night because of his/her neurosis.
In summary, the world you live in was at-best designed with an acceptable tolerance for error. I call that, “good enough”. There is no other measure than “good enough” when it comes to acceptance criteria of anything.
It sounds like you just don't include performance in your acceptance criteria. You're using a lot of words to basically say that you don't care about performance.
I don't find it acceptable to merge something that I know could be 10X faster with a bit more work. Part of my acceptance criteria is "has reasonable performance".
Performance optimization is contentious because everyone has to draw the line somewhere. You can sink an almost unlimited amount of time making a complex program faster. But I've seen so many devs just ignore it altogether, where half a day of work literally makes the thing they made 10x faster.
> Performance criteria should only be measured by whether or not you lose money/customers from it.
While I agree that performance optimization for no tangible benefit isn't generally useful, I find it quite cynical to think of the loss of customers as the only measure that could or should matter.
If a product's user experience is measurably or subjectively worse, but not enough to drive people away, it's still a worse experience.
That may or may not matter to the owners of the company, and of course putting too much effort into details that don't affect the business should be avoided. It's also a reasonable view that one needn't do better than is necessary for the bottom line.
Some people like to take pride in building good products, though, and care about the experiences of their users. It sounds rather cynical to think that one should refrain from ever making anything better, or that it's somehow wrong to care about users' experience, if it isn't enough to immediately affect the bottom line.
(Also, perceived quality could affect user opinions in the long run and, when compounded with other things, could also affect the bottom line even if the effects aren't immediate. Trying to build products of high perceived quality may be a reasonable strategy, and a part of that might be to use good quality, possibly including performance, as a heuristic. But that's a bit of a different matter.)
My problem with that argument is that the business requirement is based on what we can convince the user to buy. Users won't pay more for a better product if they don't understand it's better.
And one may say that "if they can't tell the difference, then the bad products are fine after all", but I think that's shortsighted. Continuously favouring the worse but cheaper design is what has brought the industry to where it is today: "better write bullshit that ships now than a good product that users won't buy anyway, because they bought the bullshit from the competitor already".
> iterative improvements after a working model if those improvements become necessary.
This doesn't actually work if you've shotgunned the bloat uniformly across your system.
My personal experience was trying to find a 10% performance degradation that turned out to be a worthless branch in memory allocation that ended up being inlined prolifically throughout the codebase. It didn't show up in any profile, and if the group I was with wasn't ruthlessly disciplined in maintaining performance, such that its effects were felt immediately, I strongly doubt it could have been reversed after the fact without serendipitous re-discovery.
Modern games can render 100s of thousands of polygons, run a physics simulation, run complex audio processing and mixing, run several post processing effects, and provide consistent multiplayer results in under 7ms (144FPS) on a VR headset (ever played half life alyx?).
There's no excuse for why any algorithm should take more than 100ms unless it's communicating over a crappy network, or operating on obscene amounts of data (think 100s of GB of data). No excuse.
So as a specific example, would you say there's no excuse for not being able to decode a 1GB PNG image in less than 100ms? The state of the art algorithms, in the Rust and Wuffs programming languages, have reached around 500MB/second decode speed on a desktop x86.
If I had specified a 1GB image with a PNG-like compression ratio and allowed you define the data format along with the algorithm, then the challenge would not be so difficult. But the moment the requirement is an image format that's actually widely used...
PNG is a compressed format, so right off the bat it would be weird to have 1GB of compressed image data. According to this[0] out of 23 million analyzed images from web pages, PNGs comprised around 14.4% of those images and averaged around 4.4KB. So this scenario isn't likely to occur in the real world, and if it did, decompressing 4.4KB would happen well under 100ms.
Secondly, PNG is an unnecessarily slow algorithm, as we've found out from QOI[1]. From the results, it can encode over 300 megapixels per second in some cases. 300 megapixels means 300 million pixels, and at 4 bytes per pixel (RGBA, one byte per channel) that works out to roughly 1.2 GB/s of raw pixel data, while achieving very similar compression ratios to PNG. Oh, and by the way, this algorithm isn't even using multithreading, so I assume it can be sped up even more.
So I wouldn't insinuate that the developers who coded the implementations to decode PNG are programming slow inefficient code, but they're stuck with a bad algorithm from the outset because of the requirements, as you've already pointed out.
I think my original statement still stands.
> There's no excuse for why any algorithm should take more than 100ms
The PNG algorithm is inherently flawed imo. We could do better. The developers who are stuck with this specific compression algorithm have no choice but to do the best with what they've got. But you can always transform the PNG image into a faster lossless format like QOI and then use that. So it's still not worthwhile to give up and say we can't do better since PNG is inherently slow.
Edit: I think my original statement almost stands. Processing 100s of GBs of data in under 100ms is definitely overestimating our current computing capacity. I would probably amend that to something like processing anything under a few GBs of data (3-5 GBs) is definitely possible in under 100ms.
> The developers who are stuck with this specific compression algorithm have no choice but to do the best with what they've got.
Right. The point I wanted to make is that with the exception of game developers, who have a unique freedom to define their set of requirements and design a vertically integrated system to solve them, every developer faces their own version of this.
So I agree games are a useful reference point to conceptualize what's possible after an extended transition plan to migrate ecosystems to new standards. But I don't agree with the framing of "no excuses". It takes heroic efforts and wisdom to break out of ecosystem local maximums. For example, when image transcoding to newer formats has been tried, users hated how other software didn't read it correctly if they tried to save or reshare it. So with the latest attempt (JpegXL) they're aiming to gather industrywide support instead of acting unilaterally.
We should cheer orders-of-magnitude performance improvements in the rare cases when a migration can be successfully coordinated to do so, instead of being unduly negative about the normal expected case when it isn't.
The PNG example is a case of a well-established standard limiting what you can do - basically, improvement is only possible by breaking compatibility.
But this is not the case for the vast majority of slow software. Nor is there any equivalent problem there. It really is, as the article describes, a case of devs not caring to put in the time because the users tolerate the status quo.
Not sure, but I smell intolerance for craft, and micromanaging. You’re going on with this “optimize for efficiency” and “good enough” rhetoric but tell me: how are you going to make a bunch of professionals care about your widgets if you keep mortifying their interest in them?
And let’s assume they deliver your acceptance criteria, do you really think you have enough requirements to keep them busy and interested?
Are you going to fire them? No, because you might need them back!
Are you going to let them do what they’re good at? No because that’d be waste since it wasn’t vetted!
So are you going to let them play pool at the office?
if it runs once per hour, then it may be inconsequential; if it runs every 30s, then it might be an extra 1hr of battery life for your phone, but people will still keep using it and not complain. is there a problem? maybe.
Every 30s is 120 times per hour. Or, 30 seconds of difference per hour between 250ms and 0ms (instantaneous).
To get an hour of difference my phone would need a battery life of 120 hours. For a more reasonable 10 hours of battery life I would save about 5 minutes.
I see why no one will complain about that difference.
you might be right, but there's more to power consumption than CPU time. something that takes 250ms vs 10ms might wake more cores and/or prevent them from sleeping, might use high-perf cores instead of efficient cores, it might read from storage instead of from cache, etc.
i've certainly seen a 10x improvement in one bottleneck turn into a 20x-30x improvement holistically, due to reduction in contention, back-pressure, etc; there is almost always a cascading/multiplier effect, in my experience.
> So it’s our mission as engineers to show the world what’s possible with today’s computers in terms of performance, reliability, quality, usability. If we care, people will learn. And there’s nobody but us to show them that it’s very much possible. If only we care.
Software engineers by and large care, and yet we can't even get the businesses we work for to care, even though they are the stakeholders to whom performance arguably matters most. I think the average person is even more apathetic about a topic like optimizing app performance.
I have had very different experiences. The programmers don't care either. They just want to pump out features: no maintenance, no documentation, no long-term improvements.
That's not what the programmers want. It's how they're incentivized. If you have to meet your "they're not deadlines but, really, they are" sprint targets, then what can you do but minimally complete the tickets assigned to you and hope not to step on anyone's toes?
It doesn't help that good programmers get flushed out of the industry due to conscience and age, only to be replaced with cheap, shitty ones.
No, in my case that's really what they want. I think our project management would totally give us the time to improve something, if we can argue that it's needed. It's really one or two developers who block everything that isn't the absolute minimum possible amount of work or their own pet idea.
> I think our project management would totally give us the time to improve something, if we can argue that it's needed.
You shouldn't have to argue for it. It's something "project management" should care about as a top-level concern, scheduling time for it. Stability, documentation, tests, performance, security - these are all "features" just as much as "make a button dump a CSV for a user".
I mean, they do to the extent that they can. We have a story template that specifically mentions performance, monitoring, security and documentation requirements. We also have weekly meetings to discuss technical issues and a dedicated backlog where devs are supposed to put purely technical stories, but it is usually empty. When the prevailing philosophy of the dev team (or its most vocal members) is "'good enough' is good enough", then there's not much the project management can do short of forcing a minimum amount of maintenance tasks. Of course, project management is also happy if they have more capacity for feature stories, so the impulse has to come from the dev team.
Often (good) PMs assume that those tasked with making technical decisions implicitly understand all the listed concerns are, in fact, requirements. They are then surprised when the engineers explain they didn’t prioritize those things when doing the work.
"The programmers don't care either. They just want to pump out features, no maintenance, no documentation, no long-term improvements."
That's how people are taught. In most companies nobody cares about quality. Getting stuff out quickly is where the money is. Slow, deliberate work doesn't pay.
I also share your experience. Never saw management or PO complaining about developers taking time to optimise. I actually never even saw an experienced developer taking an inordinate time to optimise something. It's always refactoring and rewriting that takes too much time (and often causes performance regressions).
Developers however are quick to point out that optimisation is "not in the interest of the business", as can be seen in this very topic, or that something is "good enough" since a lot of people are using it.
It would seem this is all just a matter of constraints and objectives.
It's not that we are unwilling or unable to make things faster. For instance, when presented with the task of porting Quake to the GBA, we somehow find the means to make this happen, despite the underlying CPU only operating at 16.78 MHz. Playable frames per second are achievable on that system.
Scaling "GBA Quake"-levels of optimization up, you could probably run thousands of simultaneous clients off a single AMD system today - regardless of GPU capabilities. But, I don't think the market is demanding that kind of experience anymore, especially not at that kind of scale.
The point I would like to make: High quality experiences only seem to manifest when enough adversarial pressure has been applied at design/architecture time. This is not just limited to software.
I think the biggest wave of the bloat started when SPAs became a thing. After that, frontend web developers thought they could write mobile and desktop apps, so the bloat reached mobile and desktops, backed by repurposed web technologies.
I don't blame SPAs (not that I think you are), but it does feel like there is a correlation between SPAs' rise in usage and the frontend becoming worse.
Not that it's the tool, but the wielder of the tool.
I'll explain: I think that basically a lot of frontend engineers started using SPAs for too many different use cases... Like using a SPA instead of a static page for a really simple, basic website... to me this is like when a meeting could have been an email.
Or adding in so much tooling to their site which doesn't really solve any problems except make new problems, so now there's more tooling for that (read: state management, client-side routers).
I remember at a <very popular SPA> conf I spoke to the two guys who made a very popular framework for building that popular SPA. They asked me how my server-side project used express.js's router... Like the questions they had were along the lines of "how can you do (insert basic routing thing here) in express?". The only exposure to express they had was a catch-all route to their app's main client-side router.
I was stunned... They were aware there was this tool, and ignored it... instead deciding that they could do better.
You acknowledge that it's the wielder of the tool, but I feel a need to represent the SPA evangelism strike force because they're just so freaking cool.
The web is a UI framework with features that are actually pretty hard to find elsewhere. It has native support for: tabs, forward and backward, zoom, and coolest of all, links, so you can send the current UI view of an application to someone who's never heard of it before and they can open it up in seconds. All of those apply to regular websites too, but SPAs allow you to write actual apps in a way that doesn't make you want to pull your hair out. And it's often possible for a well-designed SPA to be indistinguishable from a native app.
I maintain my personal website as a SPA. When I posted a monad tutorial on my site [0] to HN, not one person complained that my site is slow (it isn't) or that it doesn't work with JS disabled (it does [1]). Plus, when you hover over a link, it uses some SPA magic to prefetch the page you're going to visit, so the transition is even faster than it would be on a typical site. This is the archetypical case of "using a SPA instead of a static page for a really simple basic website" and I think it works really really well.
There's definitely some room for improvement. I'd love to be able to make it a bit smaller by rewriting it in Rust through WebAssembly, but it's not quite there yet.
FWIW these features are not unique to the web. WPF offered a page-centric app navigation model for desktop apps, complete with forward/backward, history etc, since 2006 [1]. I haven't seen it actually used much though. And, to be honest, I don't think I'd want a desktop app to use it, unless it does something that is naturally represented in that paradigm (most use cases are not).
There are plenty of desktop apps that are bloated and slow and plenty of web based apps that are fast and light. Thinking of Figma (web based) vs Adobe XD. The former "installs" much quicker, does cooler things, is more useful, more responsive.
The solution to the problems mentioned in the article is not "use technology X", it is about shifting focus on quality.
There's definitely a correlation, but I think it's the inverse. I think companies thought that web developers can program mobile/desktop apps now, and web devs were happy to accept the better-paying jobs.
At this point newer "native" frameworks (Flutter/Compose Multiplatform/friends) are programmed like video games: a 3D rendering surface with loads of custom logic. Except video games can reach hundreds of frames per second, while advanced text boxes struggle to animate correctly at 60fps.
One of the signs of an apocalypse/bloat is latency. Watching the industry from the intel 386 era, I’m only seeing it increasing. Every little action takes longer to start. It feels like we (in a larger sense) focused too much on throughput:) The perfect latency device I held was Nintendo DS.
I did the Portuguese translation of this article for Nikita back then and it’s interesting to read this article again in 2022. While things are still bloated, I am very hopeful for the future after seeing many cross-platform initiatives written in Go and Rust. I hope to read this article again in 2026 and see faster, less resource-consuming apps running on our machines.
Yeah, there has been some great efficient software coming from the Rust and Go communities: ripgrep, nushell, lazygit, and helix, to name a few I use every day.
Dear Nikita, you have my signature under (almost) every word of yours. )
You describe the reason I became disgusted with programming and IT trends in general and quit being a programmer.
I am a big fan of single header libraries. And libraries that I not just link to, but which I am allowed to reuse and compile myself. Then I gut that library and remove everything I don't need. Less code, less space, less runtime and fewer potential bugs, hopefully.
For example there is a single header library HTML renderer and it's fast, but does not run JavaScript.
And there is for example Chromium, which is the complete browser experience, but it's 10GB of source code; no sense in trying to simplify any of that.
> For example there is a single header library HTML renderer and it's fast, but does not run JavaScript.
What library is this and does it support CSS? I'm using RML UI right now, and I'm pretty happy with it but it's pretty bloated imo. I'd love to switch if I could :)
> On each keystroke, all you have to do is update a tiny rectangular region and modern text editors can’t do that in 16ms. It’s a lot of time. A LOT. A 3D game can fill the whole screen with hundreds of thousands (!!!) of polygons in the same 16 ms.
Yeah. Just update a tiny rectangle.
And check Unicode validity.
And render the Unicode character.
And re-do highlight.
And spawn a box with autocomplete. Or not.
And run the AST analysis.
And depending on language check if there is an extension dealing with it.
List goes on.
I dislike the simplification that the article glosses over. Games are fast because they do one or few things that GPUs are great at. Text editors aren't as fast because they do a bunch of things that aren't as easily ported to a GPU.
Unicode validation runs at gigabytes per second. Text rendering is a tiny, tiny subset of game rendering. Not sure about highlighting, sounds like a mix of parsing and game rendering, so that's likely again to be one process that can run at gigabytes per second and another process that is a tiny subset of a process that often runs at 240Hz these days. Autocomplete, no clue, probably this could also be fast? You're like doing a lookup into an inverted index or a hash table or a range tree or something? So it takes like less than 3 microseconds to decide what the contents of the box should be? Oh, it doesn't, because you decided the way this should work is that several different processes should communicate with each other using JSON-RPC just to figure out what goes in the autocomplete box? Yeah, okay, sounds like a problem with the design then. Notably World of Warcraft does not send json over sockets between multiple local processes to decide how wide the nonempty part of each mob's health bar should be every single frame, and maybe one day we could try to figure out whether there's a reason for that.
You sure about that? I remember UTF8 validation being a major source of slowdown when parsing XML in Rust.
Branches kill CPU perf, and UTF-8 has a branch on every non-ASCII char (see the rough sketch below).
Nvm. Those methods use SIMD tricks, which while neat are hard to generalize.
Also not sure what JSON-RPC is referring to? LSP? Are you arguing that JSON is suboptimal? Like maybe, but it's close enough to rival binary formats. Plus you can parse GB/s of JSON.
Even text editors that forgo LSP have speed issues, unrelated to JSON.
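To make the branch-per-byte point concrete, here is a deliberately simplified scalar checker (a sketch, not a real validator: it only verifies lead/continuation-byte structure and skips the overlong/surrogate/range rules a conforming validator needs). Every byte above 0x7F drags the loop through several data-dependent branches, which is exactly the cost the SIMD approaches are designed to avoid.

```cpp
#include <cstddef>
#include <cstdint>

// Simplified structural UTF-8 check: ASCII takes one predictable branch per
// byte, while every non-ASCII lead byte forces extra data-dependent branches.
bool looks_like_utf8(const uint8_t* s, size_t n) {
    size_t i = 0;
    while (i < n) {
        uint8_t b = s[i++];
        size_t extra;
        if (b < 0x80)      extra = 0;      // ASCII fast path
        else if (b < 0xC2) return false;   // lone continuation byte or overlong lead
        else if (b < 0xE0) extra = 1;      // 2-byte sequence
        else if (b < 0xF0) extra = 2;      // 3-byte sequence
        else if (b < 0xF5) extra = 3;      // 4-byte sequence
        else               return false;   // outside the Unicode range
        if (n - i < extra) return false;   // truncated sequence at end of input
        for (size_t k = 0; k < extra; ++k) // one more branch per continuation byte
            if ((s[i + k] & 0xC0) != 0x80) return false;
        i += extra;
    }
    return true;
}
```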
> Nvm. Those methods use SIMD tricks, which while neat are hard to generalize.
I've been writing this sort of parser lately and I don't think people's fear of this stuff is at all proportional to the actual level of difficulty.
Looks like you can use this sort of technique on ~every CPU someone might plausibly be using as a laptop to write code on[0]. If you didn't use an off-the-shelf implementation, you would need 2 implementations of your inner loops to target ~all x86 machines + M1 Macs, or 3 if you want to go slightly faster on the vast majority of x86 machines that have AVX2 without dropping support for those x86 machines that have a bunch of SSE extensions but not AVX2.
> Also not sure what JSON-RPC is referring to? LSP? Are you arguing that JSON is suboptimal? Like maybe, but it's close enough to rival binary formats. Plus you can parse GB/s of JSON.
I'm arguing that IPC is suboptimal for use cases that do not need IPC.
> Even text editors that forgo LSP have speed issues, unrelated to JSON.
I'm arguing that this is probably not due to the essential complexity of text editors.
As an unrelated toy example of the sort of thing that I'm talking about, recently I've been playing a game in which people implement somewhat fast wordcounts[1]. The reference solution uses std::iostream to put space-delimited strings into a std::unordered_map, then gets them all out into a std::vector and std::sorts them. This implementation works at about 33MB/s for large inputs. It's generally a lot more efficient than the sort of code you'd commonly find in a larger code base, just because libstdc++ isn't as slow as lots of other things and it uses vaguely correct data structures and algorithms. It seems like someone who never looks at any assembly listings and never uses a profiler but sort of has some vague idea of what data oriented design is could easily come up with a solution that processes large inputs at 200MB/s even if they are paying some minor overhead for the JVM, the CLR, the Go runtime, or something like that.
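For reference, here is roughly the shape of that reference solution as I read the description (a sketch; the exact sort key and output format are my guesses):

```cpp
#include <algorithm>
#include <iostream>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

// Rough reconstruction of the described reference wordcount: stream
// whitespace-delimited words into a hash map, copy into a vector, sort.
// Reasonable data structures, but lots of per-word allocation and hashing.
int main() {
    std::ios::sync_with_stdio(false);
    std::unordered_map<std::string, std::size_t> counts;
    std::string word;
    while (std::cin >> word)   // operator>> splits on whitespace
        ++counts[word];

    std::vector<std::pair<std::string, std::size_t>> sorted(counts.begin(), counts.end());
    std::sort(sorted.begin(), sorted.end(),
              [](const auto& a, const auto& b) { return a.second > b.second; });

    for (const auto& [w, c] : sorted)
        std::cout << w << ' ' << c << '\n';
}
```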
> Looks like you can use this sort of technique on ~every CPU someone might plausibly be using as a laptop to write code on[0]
Problems with that:
- You still need a fallback and to deal with different archs.
- SIMD is a different way of coding. You can't take an algorithm from Wikipedia and just implement it in SIMD.
- Even if you code it to perfectly use SIMD, using those instructions, might result in higher CPU demands.
- You will still lag behind games, because those use CPU+GPU while this heavily taxes CPU. You would need some GPGPU. Which is like SIMD all over again.
> I'm arguing that this is probably not due to the essential complexity of text editors.
Today's code editors are less text editors and more databases of ASTs. They very much are complex by necessity.
> - You still need a fallback and to deal with different archs.
Yes. That's 2->3 implementations or 3->4 implementations to target everything instead of only SSE and NEON or only SSE, AVX2, and NEON. But one of them can be the implementation you already had.
> - SIMD is a different way of coding. You can't take an algorithm from Wikipedia and just implement it in SIMD.
I agree, but as I said above I think the level of difficulty is a lot lower than people expect.
> - Even if you code it to perfectly use SIMD, using those instructions, might result in higher CPU demands.
> - You will still lag behind games, because those use CPU+GPU while this heavily taxes CPU. You would need some GPGPU. Which is like SIMD all over again.
I don't think so, because games don't usually use GPGPU. They usually do all of their layout stuff and game logic including lots of matrix math on the CPU and use the GPU for deciding what colors pixels ought to be. This is something a text editor has to use a GPU for too, even if the dev doesn't end up knowing about it.
> Today's code editors are less text editors and more databases of ASTs. They very much are complex by necessity.
I posted about whether they are slow due to the essential complexity of the things ultimately being computed and used for text editor features or for some other reason.
I haven't looked into these Rust SIMD libraries much, but it sounds like they are doing worthwhile things. In 2022 you can also use Google's Highway if you are able to integrate template spaghetti into your code, but it's understandable if one cannot do this.
> It's still dropping down to assembly and having to worry about arcane details like endianess and byte alignment.
You don't have to write assembly. In Zig, you can just use binary and unary operators on vectors and get new vectors without using any intrinsics, but you'll still need to use more than 0 intrinsics for string parsing because you probably want pshufb. In the case of pshufb specifically, and this is representative of most of this stuff, the entirety of the complexity that you pay for having one AVX2 target and one SSE4.1 target is that the SSE4.1 target needs to get by with a half-as-wide pshufb. One way to do that (which might not be the best for all uses of pshufb on the SSE4.1 target) would be to write the wider pshufb in terms of the narrower one. Here it is:
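A minimal sketch of that idea (the struct and helper names here are mine, and it assumes SSSE3 or later, which any SSE4.1 machine has):

```cpp
#include <immintrin.h>

// A 32-byte "pshufb" built from two 16-byte ones: the same 16-entry table
// lookup applied to each half, which is what the 256-bit vpshufb does per
// 128-bit lane on AVX2 anyway.
struct Bytes32 { __m128i lo, hi; };

static inline Bytes32 shuffle_epi8_32(__m128i table, Bytes32 indices) {
    return { _mm_shuffle_epi8(table, indices.lo),
             _mm_shuffle_epi8(table, indices.hi) };
}
```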
simdjson's structural character indexer just uses a 64-byte-wide version of this operation, so that's 4 independent 16-byte-wide pshufbs, and it will be 4 instructions for SSE4.1, 2 for AVX2, or 1 for AVX-512, but if we do what I've suggested and don't have any particular support for AVX-512, it may end up being 2 there too.
Endianness is not arcane, but everything is little endian anyway. The penalties for unaligned access on the CPUs supporting the sets of extensions I've arbitrarily chosen are modest. You probably won't end up with a slow text editor just because your inner loops are frequently paying 1 cycle of latency for unaligned SIMD loads, but you can certainly fix that if you like.
> Is it? The more you use arcane features of code, the more computation you put out, and hence more heat.
Can you explain to Daniel Lemire that he is incorrect about this topic, and in fact it is more energy efficient to implement memcpy with 1-byte loads and stores than with wider ones? Thanks.
> I didn't say they do. I said they leverage CPU+GPU. And for average program to achieve the same, you need GPGPU.
Average programs want to compute many orders of magnitude less stuff than games though.
The text resonates with me but there isn't much practical advice.
What should I do as a normal developer working on normal business products? I try not to do obviously slow and stupid stuff but it is not the main focus.
It does mention it but just to reiterate: The business value of getting things out is larger than getting things right most of the time.
The architecture is bloated microservices, the infrastructure is overcomplicated, the response times are an order of magnitude longer than they ought to be.
But there is no time to care. The next shiny feature or integration or whole business case is waiting to be implemented and shipped. We cannot afford to spend time on improving things people don't complain about over things that bring more money.
I guess that is where the disenchantment part comes in. Most companies are not in business of engineering stuff well if that is not what makes them money.
Developer time is much more expensive than CPU time, so much more CPU time is spent ("wasted") to save some amount of developer time. This includes developers' study time; that is, hiring less educated and experienced developers.
In the 1990s we had a similar effect: CPU performance and RAM sizes grew so rapidly that "poorly written" and "bloated" software became "butter-smooth" and "lightweight" in a couple of years. Now we have the cloud instead, where spinning up a dozen more instances or upgrading to a slightly more expensive plan gives the performance boost that solves many bottlenecks (much) cheaper than paying a team of 3 developers to spend 6 extra months to do things right.
The endless CPU growth gravy train stopped in mid-2000s. I wonder when the endless cloud scaling train will slow down materially.
It's not just dev time vs cpu time.
The thing the article mentions and what I was trying to echo is that software is slow and overcomplicated despite that. It would be unusable without fast hardware.
But people are used to programs being slow and bloated. People and companies are rarely willing to pay premium for efficient programs. So here we are, gluing stuff together so that we're fast to market with every next feature while collectively wasting lifetimes as our apps load and perform basic actions.
I am not even saying it's wrong. There are plenty of things I'd rather have now and slow than later/never and fast. Nor am I saying that, given the time, I'm capable of making all our software significantly leaner and faster. I am just echoing the disenchantment and some frustrations.
> People and companies are rarely willing to pay premium for efficient programs.
First of all this is just not true. Fundamentally you use a computer because it is fast and remembers things accurately.
Secondly, I think generally we're at a point where just removing/avoiding crap and starting things from scratch is more economical, sometimes even regardless of performance. And factoring it in, I think it might become a major selling point even in feature-competitive areas that are occupied by gigantic firms.
The other day I noticed (again) the shadows macOS draws around focused windows, and I thought to myself "if I could turn that off and get .003% battery life back, I would 100% do it".
I would never have thought that if I hadn't used Linux (or another FOSS *NIX), where you can customize things down to modifying the code yourself. Hell I had a window manager for a while where the way to customize it was to edit the code and recompile it.
That Alan Kay quote "...because people don’t understand what computing is about, they think they have it in the iPhone, and that illusion is as bad as the illusion that Guitar Hero is the same as a real guitar" really summarizes the thing best I think, but the whole interview [0] really fills out the sentiment. People may or may not think they're "computing" or whatever, but there's a kind of tug of war between mindlessly shuffling around spreadsheets and slide decks and trying to create something that meaningfully improves peoples' lives.
Blame whatever you want for this:
- we don't factor in pollution externalities into costs, so Electron (and web) apps are commercially viable despite their high energy use
- tech oligopolies do illegal things to squelch competition, so competing organizations are in a race to the bottom re: efficiency, dark patterns, addictive features (building "engagement"), etc.; see Writely (Google Docs) vs. MS Office, etc.
- in capitalism you solve everything with money, and money washes away all sins, so all other concerns (efficiency, civil rights, etc.) are mooted
- there's essentially no useful social safety net in the US, and one of the consequences of this is that we need full employment programs for the middle class, who subconsciously know this and put pressures on the market not to obsolete them (no need for humans to captain Excel around anymore), but to let them burrow more deeply into enterprises (make Excel more and more powerful and complicated such that only humans trained over years can use it). This is analogous to the dynamic medical companies face: treatments make you rich, cures make you bankrupt.
> we don't factor in pollution externalities into costs
Suppose this is solved and it turns out your local electricity price literally doubles from $0.10/kW-hr to $0.20. Suppose your Electron app adds 10W to what you'd otherwise use, for 10 hours/day. Your power bill just went up by 1 cent per day, almost unnoticeable to an individual.
I actually agree that to understand why the software ecosystem is screwed up in the ways it is, we should be looking for the economic/legal/otherwise systemic incentives driving the patterns. But that's different from "I can think of a way the world is bad according to my politics, that must be it."
> But that's different from "I can think of a way the world is bad according to my politics, that must be it."
It's not productive to dismiss someone's argument based on their politics, unless I guess you think I'm an extremist (I'm not). Politics is a core part of how we understand our world, our societies, our institutions, and our cultures. I'm open to having my politics changed and I love to discuss them, but I won't be dismissed because I believe climate change is a huge threat, or for my other economic political views.
---
Continuing in good faith, raw kilowatt hours diminish what a potential pollution cost externality regime might do, and I wouldn't say that a system based on them succeeds in factoring in pollution externalities.
CO2 is more brass tacks. The global CO2 budget is ~40 gigatons of CO2 a year [0]. There's ~8 billion people around, making the per-capita carbon budget 5 tons of carbon/year. You could tax this in different ways:
- Software producers have their software measured for energy use, paying a tax per N users per Wh
- Individuals are allotted 5 tons of carbon/year for free; can pay $2/lb from 5-6 tons, $4/lb from 6-7 tons, etc.
- Industrial sector participants are allotted 50 tons of carbon/year for free, etc. etc.
Don't forget about scale. You need to multiply the numbers by 100s of millions of users, times the amount of time these apps are running (365 × 10 × 10), times the amount of apps (idk, per user like 10, 20, 40?). Plus of course the overhead from extra hops over the internet because apps are being request-happy...
The proposal was that if dirty power were more realistically priced, then people would not use Electron apps.
Code that's a hot spot in a data center already does get more attention to its efficiency, and the data centers already tend to be sited for clean power. The one correct bit here is that pricing the externalities would increase that trend of data-center optimization.
(I guess I should add that I support carbon pricing.)
If developer time were really such a huge priority, we would also prioritise moving away from slow-ass compilers/transpilers, frameworks so slow that affect local development, slow IDEs and developer tools, slow CIs. And yet we don't, because inertia rules the industry.
In terms of engineering well, it's really more about doing the right thing at any cost than the wrong thing at optimal efficiency. In some sense we could say that rapid evolution of solutions is therefore focused on the right problem, much more so than navel-gazing on millisecond timers or big O estimates.
Well, in an ideal world. In the actual world, we're using really inefficient software to direct someone to your driveway with a tank of gas because you didn't feel like stopping at a gas station. But I still feel like I had a point in there somewhere.
I don't disagree. It's just that there is rarely time to revisit and improve the right things that work. Often it's left as is (in this context inefficient) and the business moves to the next 'experiment'.
The end result tends to be a portfolio of sluggish features.
Analogies comparing software to most other forms of engineering are not usually very helpful. The considerations and constraints that inform automobile design are totally different than those of a typical mobile app or CRUD web shop.
We know how to build software that’s efficient and reliable. It’s a slow and expensive process.
If anything discourages me about software engineering it’s that so many of us don’t seem to be able to understand that the nature of our discipline is different.
In the fictional universe of FF VI, there are magical creatures with magical abilities called espers.
What is interesting in this game, is that the evil faction in the game are not the espers, but humans that figured out how to enslave them, extract their life essence and turn it into power.
The empire does not care about the well-being of espers or their personality, they only care about their abilities.
There is a part of the game where you see all the espers in a scientific facility, being drained of their abilities and life, and then thrown into a dumpster when they have no magical power left.
That is the software industry.
The good faction are humans that decide to fight the empire, and discover that espers can voluntarily give their power to humans in a more effective way than what the empire can forcefully extract from espers. They use that power to defeat the empire.
There's a couple of points in the article that I don't see discussed here.
One is the comparison with games, since the article is about other categories of software. Has anyone thought of another kind of "gamification"? I sometimes think that I'd love to see some fancy gaming interfaces outside games.
The other is that, forget about solutions, frustration is a very powerful force and it has consequences. Frustration for the users and also frustration for programmers. This used to be a job considered interesting and fulfilling.
It's a bit like "shrinkflation", where you get less for the same cost because that reduces producers' expenses.
In software we get the same functionality as 20 years before on similarly priced hardware. But we don't get the speed-benefits we could and should expect.
This may be because software producers prefer making things easy for themselves by using ever larger libraries. It reduces their costs but makes software slower?
It took me 20 seconds to compile and link a program on my PDP 11 with 64k of memory (yes, 64k for the OS plus all the drivers plus user programs). It still takes me around 20 seconds today with 1,000,000 times more memory and ~10000 times faster processors.
An observation I had while working in IT is that programming involves a lot of detail which is hard or impossible to notice from the outside (project managers, testers, users). And the average programmer is quite bad and doesn't have a perfectionist mindset.
For example, let's say someone writes code which reads and parses 1 MB JSON file on every keystroke. From the outside, this is usually undetectable. Whenever someone makes something 10% slower or larger, adds a non-obvious error or implements a feature with 10x more complexity than possible, it's often barely distinguishable from a much higher-quality implementation.
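To make that concrete, here's a made-up sketch (the "settings.json" file name and the idea of it being called from a keystroke handler are illustrative, and a raw string stands in for the parsed document); from the outside, both versions "work":

```cpp
#include <filesystem>
#include <fstream>
#include <sstream>
#include <string>

// Wasteful version: reread (and, in real code, reparse) the file on every keystroke.
std::string settings_on_keystroke_slow() {
    std::ifstream f("settings.json");
    std::stringstream ss;
    ss << f.rdbuf();
    return ss.str();   // imagine a full JSON parse happening here as well
}

// Cheap fix: cache the result and only reload when the file actually changed.
const std::string& settings_on_keystroke_fast() {
    namespace fs = std::filesystem;
    static std::string cached;
    static fs::file_time_type last_seen{};
    const auto stamp = fs::last_write_time("settings.json");
    if (cached.empty() || stamp != last_seen) {
        last_seen = stamp;
        cached = settings_on_keystroke_slow();
    }
    return cached;
}
```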
Parkinson's law in software is real. As Jon Blow often says, our computers are basically supercomputers that are astoundingly bogged down by bad software development practices. The computer I'm on is at the very least several dozen times more powerful than the one I had 20 years ago, yet a lot of functionally equivalent software probably runs at about the same speed, certainly not an order of magnitude faster, let alone several dozen times faster, despite not doing several times more work.
The problem is that a lot of software developers don't even realize how bad their software design is because of prevailing norms, and the state of affairs can only be upended by certain developers showing that another way is possible when programming with the hardware rather than against it.
As someone who has to use Visual Studio and ReSharper professionally, I was absolutely blown away when I tried out the (still in alpha) 10x Editor and found it accomplishing in less than a second something that takes ReSharper several dozen seconds, during which it locks up the editor, or that it might even fail to do entirely, having succeeded only in making VS unresponsive (finding all usages of a widely overridden virtual function):
https://www.10xeditor.com/
(Disclaimer: I was impressed enough by the editor's performance to become a paid supporter.)
(Another VS extension I use is P4VS, which is slow, buggy garbage that will lock up VS if you look at it funny. In fact, P4V and the P4 extension for Windows Explorer are also utterly dismal, but Perforce just don't seem to give a damn.)
Another example is Casey Muratori making a terminal several orders of magnitude faster than the Windows one simply by designing it well, without even going through an optimization process:
https://youtu.be/hxM8QmyZXtg
It's fair to say that a huge amount of modern software is at least an order of magnitude slower than it needs to be. The standards in the software development industry are frustratingly low and there is a deplorable dearth of disruptors to push them in another direction.
It does not stop the folks that do it from calling themselves "engineers".
When faced with the potential for being regulated, large "tech" companies that lobby regulators claim they can regulate themselves. "Tech" companies also believe they can redefine what is "engineering" and hire "engineers" who have no corresponding certifications. In some cases, the companies themselves are the certification providers. History has shown that self-regulation does not work. Nor do blog posts complaining about declining software quality.
A big part of the problem is that the web is full of parasitic mechanics.
Javascript is shit, both the language itself and most of the code written in it, but the person who suffers from shitty Javascript is... you. It runs on your computer--that's the parasitic part--and not that of the entity serving it. So they don't care. Until ad dollars stop coming in, it doesn't matter.
Businesses generally think they can scale up with shitty code and inexperienced engineers and "do quality" later. Can they? Well, enough of them can that billion-dollar companies exist. I'm very skeptical that capitalism's signals indicate anything about true value (moral, creative, or otherwise) but the fact is that the "start cheap 'n' shitty" approach works, and it has worked for a long time. (Low quality software has been an issue for my entire adult life, and by HN standards I'm an elder dragon legend.) So long as it keeps working, it won't die. Markets don't punish what is bad; markets punish what is unfit by the definition set by the market.
What's funny to me today is that people tend to underestimate how fast computers are--or can be. Adding a million numbers? You can do that in a fraction of a second. People are so used to the crapware that capitalism produces that we've accepted these ridiculous delays, not for any good reason.
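Just to put a rough number on the "adding a million numbers" claim, here's a throwaway Node/TypeScript timing sketch (exact timings obviously vary by machine):

    import { performance } from "perf_hooks";

    // Sum one million numbers and time it. On any modern machine this is on
    // the order of a millisecond or less, nothing a user should ever wait for.
    const n = 1_000_000;
    const xs = new Float64Array(n);
    for (let i = 0; i < n; i++) xs[i] = i;

    const t0 = performance.now();
    let sum = 0;
    for (let i = 0; i < n; i++) sum += xs[i];
    const t1 = performance.now();

    console.log(`sum=${sum}, took ${(t1 - t0).toFixed(3)} ms`);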
The old engineering adage is: cheap, fast, reliable - pick 2. Or maybe < 2 with software.
At the end of the day, most people don't want to pay what developing fast and reliable software costs, and businesses know that. So we end up with the shoddy bloatware they are prepared to pay for.
If you don't like that, start your own business and write software that meets your standards. If you are too purist, you might find it a struggle to survive financially up against less scrupulous competitors.
One of the things this article talks about is web sites being unable to animate things close to 60Hz and wondering how that will go when 120Hz and higher displays become mainstream.
However: Doing everything over a worldwide internetwork on a CPU and OS independent platform that can be used for tasks as diverse as talking to your GPU, or interacting with real time audio/video was not part of the "original plan."
Yes, DOS is more responsive, but under DOS you can't have six image-heavy documents open while on a Webex call, watching the weather, with a YouTube video open, and playing a game at the same time.
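On the 60Hz point above: the frame budget is only about 16.7 ms (about 8.3 ms at 120Hz), and it's easy to watch a page blow through it. A rough browser-side TypeScript sketch, not tied to any particular framework:

    // Log frames that miss the ~16.7 ms budget of a 60Hz display.
    const BUDGET_MS = 1000 / 60;   // ~16.7 ms; ~8.3 ms for a 120Hz display
    let last = performance.now();

    function onFrame(now: number): void {
      const delta = now - last;
      if (delta > BUDGET_MS * 1.5) {   // allow a little jitter before complaining
        console.warn(`Dropped frame(s): ${delta.toFixed(1)} ms since last frame`);
      }
      last = now;
      requestAnimationFrame(onFrame);
    }
    requestAnimationFrame(onFrame);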
The article also talks about Windows updates. That's a legitimate concern. Microsoft needs to rip out that system and start completely over.
Look at this article from Microsoft entitled "Windows Updates using forward and reverse differentials" (https://docs.microsoft.com/en-us/windows/deployment/update/p...) - wtf? I don't even know why any of this is necessary for a system that should just overwrite files with later versions. If you wanted evidence that Windows Updates is overengineered, there you go.
I've had my game server for a popular multiplayer online web game running continuously for over two years now. Same Linux process, same cloud VM. Never crashed, never needed to reboot. Node 12, everything done in TypeScript, from the network protocol to the physics engine.
All it took was careful design and development. Not that difficult if you put your mind to it and don't cut corners.
I would switch music streaming services for a client that wasn’t buggy. I think there is an untapped market for a software company that provides high-quality software. Market competition might kick in if a company can pull it off. It's probably more complicated than it sounds, though, because software frameworks don't provide that level of quality.
We could perhaps get better software, but at what cost?
One of the nice things about engineering disciplines is that costs can be reasonably forecast. There are spectacular counterexamples (e.g. Big Dig), but bear in mind those projects were much more than "just" engineering projects.
Software costs seem hard to forecast. Software integrates with other software. Software is created via a whole pantheon of algorithms. The "better" algorithm is often context dependent. That context depends on what else gets integrated. Did I mention we probably won't know at the design phase how things will be integrated?
Other engineering disciplines integrate with things that are well-known: the ground, water flows, the atmosphere. These things don't change. This is nice.
This article makes it sound like "better" is just a matter of spending more. My read is different: spend more and be correct. Correctness _isn't_ just something we know but aren't allowed to implement. In many cases, we don't _know_ what correct is. We have to _find_ correct via exploration and experimentation. Even when we do, we might severely misjudge the effort required to get there.
This complexity isn't just accidental. The world is a complex place. Our human minds act on a small portion of that complexity, based on all sorts of heuristics we aren't even conscious of. They work most of the time. The rest of the time? We're compelled to shoehorn things into our heuristics. This ends increasingly poorly the more complex the underlying mechanism is.
This is a hard argument to make to someone who will not accept that they are not correct. Correct in their criticism. Correct in their diagnosis. Correct in the behaviors that led to our condition. Correct in asserting that avoiding those behaviors would result in no unanticipated consequences. Resentful about it because they were correct, and we didn't accept that.
Isn't this a web developer's lament? Look at native code or the wonderful world of 3D graphics: speed keeps piling on. But look at web development, with its layers of frameworks just to display a text box. The solution seems fairly obvious, I would have thought.
The irony of this is that we have all the bloat, but we also have to live with decisions that were only made because of the limitations of the past. I'm referring to things like arbitrary limits on filename lengths. And, worst of all, databases that think it's a good idea to make you delete information by default in order to "update" it, rather than keeping all the changes. Disastrous.
> Windows 95 was 30MB. Today we have web pages heavier than that! Windows 10 is 4GB, which is 133 times as big. But is it 133 times as superior?
This is an interesting question, I am not sure how I would measure this.
Stability/uptime is probably 100x better than I recall for my Windows 95 machine. They have done a ton of important security work since 95, from UAC to process isolation.
Focusing on the little unpleasantnesses distracts from seeing a bigger picture. The net software experience hasn't changed or improved much. There's a lot of fine tuning and apps getting tweaked, but the platform of computing, to users, has been locked at the same fixed, boring, low-power level for well over a decade. If you want to be disenchanted, do it from a macro perspective.
I used to feel this frustration until I accepted that the economics of software usually lead to different requirements. People are making rational choices, so no amount of attitude or culture change will make a significant difference. Different requirements can be found in different industries, or different fields all together.
I am immediately reminded of this saying:
"Anyone can build a bridge that stands, but it takes an engineer to build a bridge that barely stands". I guess Nikitionsky wants to apply the same standards to software engineering.
- It is a major part of both the UX and the utility of a program. The success of the iPhone is often attributed to design, but it was also really responsive, fast and simple in comparison to other smartphones.
- We tend to "fix" performance by layering things on top that make the whole system much more complicated and brittle. Caching is the best example here: it proliferates and adds complexity to every layer. Another one is adding more machines. Same story.
- Programmer time is valuable. But how about reducing the complexity of a system and making it reasonably fast? Fewer failure modes mean less worry and work. Fast feedback loops are absolutely _essential_ to keep a programmer in a productive flow.
- It costs less. See the two points above.
- It is enabling. There are things you don't even consider doing because everything is slow. You don't have enough currency in the form of time and cycles to do more.
- It is not that hard. As described in the article, it is often just a matter of throwing away stuff that isn't needed. Start with fewer abstractions and less "flexibility". Build something reasonable that cares somewhat about performance and you will probably get results that are orders of magnitude faster.
What hinders us?
People introduced the joke of "just wait a year or two and it's faster" because hardware kept fixing bad performance. This is over. We're drowning fast hardware with slow software, quicker than hardware can keep up. Also many of the improvements are on the level of parallelization and caching. Programs need to be aware of that at some level.
There is the famous quote "premature optimization is the root of all evil". I'm not talking about optimization here. I don't even know how to do optimization outside of trivial things. What I'm talking about is just removing or avoiding unnecessary cruft, and very basic efficiency.
Overhead is not taken seriously. When I read comments like "IO performance dominates web apps", I wonder whether they have ever run a profiler. Do you really need to make that many db calls? Have you designed your data model with respect to access patterns at all, or did you follow some "Best Practices" modelling which is disjoint from what your app is doing? Is all that indirection necessary for you to write maintainable, concise code?
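As a hedged illustration of the kind of overhead I mean (the tables, columns and db helper below are all made up), compare a page rendered with an N+1 query pattern to one query shaped around the actual access pattern:

    // `db.query` stands in for whatever client you use; names are illustrative.
    declare const db: { query(sql: string, params?: unknown[]): Promise<any[]> };

    // N+1 pattern: one query per order to fetch its customer. 51 round trips.
    async function ordersSlow(): Promise<any[]> {
      const orders = await db.query("SELECT * FROM orders LIMIT 50");
      for (const o of orders) {
        o.customer = (await db.query(
          "SELECT * FROM customers WHERE id = $1", [o.customer_id]))[0];
      }
      return orders;
    }

    // Shaped around the access pattern: one round trip with a join.
    async function ordersFast(): Promise<any[]> {
      return db.query(
        `SELECT o.*, c.name AS customer_name
           FROM orders o JOIN customers c ON c.id = o.customer_id
          LIMIT 50`);
    }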
Adding unnecessary cruft is endemic in web development and seems to be part of our culture. We're very much concerned with short-term productivity and everything keeps changing quite fast. Part of this is business decision makers wanting to think of developers as interchangeable and of development as a commodity. I disagree. I think there is a niche for web agencies that make simple, robust and performant things.
Linux will only kill processes if there is no swap left. So always have swap. You can also configure what Linux should do when there is no memory left.
Read on another link...
Voyager had 67 kB of RAM in its computers and has reached the end of the solar system.
We really need to rethink system design and implementation.
a good general-purpose multilingual text and graphics layout engine does in fact turn out to be a computationally harder problem than reaching the end of the solar system, sorry
This rings true and frustrates me as well. My entire career I get bogged down with being so disappointed with the tools that we use in the software industry, that it totally distracts from building new things.
Perhaps it is my mathematics background that spoiled me. In mathematics, you rarely are concerned with the tools that you have at your disposal. In most cases, they have been designed well, iterated on, smoothed out, etc., and above all, they just work. Aside from the most advanced, cutting-edge work, this is generally true. Of course, there's always work on foundations, different paradigms of mathematics, and more, but I think the point is clear. Maybe an even clearer point is imagining a carpenter who has to rebuild his hammer every day because it stops working, and then imagine that the instructions to build a hammer are practically undocumented scribblings on some napkins.
In software, and actually in engineering as a whole, there is a large percentage of work that goes into solving problems created by other engineers. This isn't all the work, of course, but it is a depressingly large amount.
In general, I am less concerned about all the constraints of software artifacts, e.g., size, slowness, etc. While those are a drag and can be frustrating, I am more concerned with two things: software that works and software tools. The tools that we use in this industry are nothing short of antiquated. Software engineers equate programming to text, and thus all the tooling revolves around text. Most so-called IDEs are simply advanced text editors. Tools like Git are beloved, and, in Linus' own words, I'm surprised they made it to adulthood. Even in a text-based language, I absolutely do not care about the lines that changed. I care about semantic things, like functions, modules, expression blocks, etc. I want my source-control and comparison tools to tell me what actually changed, not which characters changed. I could go on and on about this topic.
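A rough sketch of what a "semantic" comparison could look like at the level of top-level functions rather than lines, using the TypeScript compiler API (a real tool would of course go far deeper than names and raw text):

    import * as ts from "typescript";

    // Collect top-level function declarations: name -> source text.
    function topLevelFunctions(source: string): Map<string, string> {
      const sf = ts.createSourceFile("x.ts", source, ts.ScriptTarget.Latest, true);
      const fns = new Map<string, string>();
      for (const stmt of sf.statements) {
        if (ts.isFunctionDeclaration(stmt) && stmt.name) {
          fns.set(stmt.name.text, stmt.getText(sf));
        }
      }
      return fns;
    }

    // Report which functions were added, removed, or changed between two versions.
    function semanticDiff(oldSrc: string, newSrc: string): void {
      const before = topLevelFunctions(oldSrc);
      const after = topLevelFunctions(newSrc);
      for (const [name, body] of after) {
        if (!before.has(name)) console.log(`added:   ${name}`);
        else if (before.get(name) !== body) console.log(`changed: ${name}`);
      }
      for (const name of before.keys()) {
        if (!after.has(name)) console.log(`removed: ${name}`);
      }
    }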
And as one consequence of poor tools, we have software that works poorly and is just balls of mud stacked on top of more balls of mud that have dried out so much they're brittle to even touch. But that's just one component of why software doesn't work. The other is how we think about software. Thinking about software as just a thing a person tells a computer to do is not enough. Software is about three things: (1) computation, (2) communication between humans, not between human and machine, and (3) how to think about things, or how to encode a domain with software. We suck enough as it is at (1), but (2) and (3) are basically ignored at large. Almost all engineering problems in deployed systems come down to communication issues and poor understanding of the domain at hand. But not only does everyone concentrate on (1), they concentrate on very narrow aspects of (1).
It's almost no surprise that things are as bad as they are in software.