I disagree. Almost all software I use these days is absurdly slower than it should be and probably a massive drain on collective productivity. I have an SSD, fast internet, 64GB of DDR4, etc.; almost everything I do should be imperceptibly instant, but instead it takes seconds and even minutes.
Yeah, of course most of that slowness is not even because of ignorance of the low-level stuff, but because of a complete disregard for the performance aspect of software. If it's even slightly more convenient for a programmer to do something that is very stupid performance-wise, they will do it.
Sure, developer time costs money, but that cannot actually be the reason, because tooling is also ridiculously slow in exactly the same way and wastes countless hours of that costly developer time.
> I disagree. Almost all software I use these days is absurdly slower than it should be and probably a massive drain on collective productivity.
This is absolutely true, but the problem is usually a lack of due care about performance at higher levels, rather than a lack of low-level optimization. Things are usually slow because they do entirely too much of the wrong things, rather than because nobody optimized the code, or the wrong registers were used, or whatnot.
The article presents the case that writing code in a specific style - OOP compared to data-oriented - is responsible for slow performance: that code architecture is the primary cause, regardless of the actual implementation, because some architectures will never work well with the compiler.
I get that picking an O(N) algorithm over an O(N^2) one is always going to be more important, but it doesn't seem like the article is disagreeing with you at all; it's just another area where people need to consider their decisions.
But only a strict subset of problems are heavily data-oriented. For your 3-button GUI app, it doesn't matter whether you use a tightly packed AoS, or even SoA, instead of a slow linked list of pointers for that 3-element list.
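For anyone not steeped in the data-oriented jargon, a minimal sketch of the AoS vs SoA distinction being referenced (the Particle fields here are purely illustrative):

```cpp
#include <vector>

// Array of Structures (AoS): each element keeps all of its fields together.
struct ParticleAoS {
    float x, y, z;
    float velocity;
};
std::vector<ParticleAoS> particlesAoS;

// Structure of Arrays (SoA): each field lives in its own contiguous array,
// so a pass that only reads `velocity` streams through memory without
// dragging the unused position data into the cache.
struct ParticlesSoA {
    std::vector<float> x, y, z;
    std::vector<float> velocity;
};
```

For three list elements, of course, neither layout is measurably different from a linked list, which is exactly the point above.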
(Agreeing) The 3 button GUI app is much more likely to be/feel slow because the UI framework used makes it difficult to get the widgets on the screen quickly than because of data processing.
Maybe it insists on a deep hierarchy of widget inheritance, as many do. Maybe the framework wants to load and configure all the possible widgets, even though the application only uses a few. Maybe there's a bunch of images loaded from disk even though they're not used. Maybe it's just the fantastic default compositing system that puts everything a frame behind, adding up with everything else.
There certainly are applications where data structures matter, and where time spent processing data is significant to the user experience, but that's not why the whole computing experience feels slower (although maybe prettier) than 25 years ago, despite capability being so much greater for most things (as pointed out in the article, RAM is faster than it was years ago, and we certainly have more of it, but a RAM access takes a lot more CPU cycles now).
I am torn on the issue. On one hand, there is definitely useless technical debt, over-abstraction, and no-longer-useful features slowing down our programs. On the other hand, we are actually handling every language on Earth properly (or at least we very well should be already) instead of just saying that ASCII is good enough, we somewhat care about accessibility (though far from enough), etc.
Also, actual native GUI apps are quite fast, IMO. Only Electron apps are slow, but I really do think that is mostly due to the web being the wrong abstraction for GUIs, not due to other reasons (of course there are shitty web apps, as well as shitty programs everywhere else).
> Sure developer time costs, but this cannot actually be the reason.
It totally is, because we've shifted the cost of software engineering onto the hardware.
Back in the 90s, people basically had to beg for memory when writing software, while modern development is built on the theory of getting the fastest delivery speed at the cost of performance. We outsource the cost of software onto millions of customers' hardware, and it's never coming back, because going to market quickly makes you money.
The point stands anyway. Why are all my dev tools super slow? If my time is important, shouldn’t those be fast and heavily optimized? And don’t try to tell me that 4 billion cycles a second across 16 cores operating on in-memory contiguous values isn’t enough to update a watch window more than 1 time per second.
If my tools are faster than me, there’s no point in making them even faster. In fact, I’d trade some of that speed for a smaller memory footprint.
In your example of the watch window, updating it at more than 30 fps is probably overkill.
One very common mistake is optimising the wrong thing. My advice is always the same: first run a profiler with a real workload (absent that, run it over your automated tests), then look at what you need to optimise for speed. All the rest should be optimised for readability.
In the article’s case, if counting warriors is so important, keeping a counter at the AntColony class that’s updated every time you write to an Ant would be the obvious solution.
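A rough sketch of that cached-counter idea, reusing the article's AntColony/Ant names (the members here are made up for illustration):

```cpp
#include <cstddef>
#include <vector>

enum class Job { Worker, Warrior };

struct Ant {
    Job job = Job::Worker;
    // ...other per-ant state...
};

class AntColony {
public:
    explicit AntColony(std::size_t count) : ants_(count) {}

    // Update the cached count on every write, so the query is O(1)
    // instead of a full scan over all ants.
    void setJob(std::size_t index, Job job) {
        Ant& ant = ants_[index];
        if (ant.job == Job::Warrior) --warriorCount_;
        ant.job = job;
        if (job == Job::Warrior) ++warriorCount_;
    }

    std::size_t warriorCount() const { return warriorCount_; }

private:
    std::vector<Ant> ants_;
    std::size_t warriorCount_ = 0;
};
```

The trade-off is a tiny bit of bookkeeping on the write path in exchange for never paying for the scan on the read path.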
> Why are all my dev tools super slow? If my time is important,
Quantify "important" in terms of how much you actually spend on your tooling.
Aside from some key bits of development software at the IC / board-layout level, almost everything a software developer needs is free, open source, or cheap. No one pays $1000 for an IDE anymore.
How many minutes per developer per day before that level of expenditure pays for itself inside 2-3 months?
Who cares? The premise is false: you can't buy it. There are no "fast tools" you can pay for. Every decent shop got their devs an SSD at the first available opportunity because it paid for itself really quickly; it cost you money _not_ to buy them. The very definition of a false economy was "no, the devs aren't getting new shiny sports-mode SSDs for their boxes."
We all use gcc and clang (for the relevant languages), for example, because everything else you might pay for sucks a lot harder, whatever the ideology you claim or deny. Intel literally damaged their customers' products to try to make themselves look better than AMD! They're a non-starter after showing themselves willing to do that, but if you must, yes, their compilers still beat gcc, exclusively on Intel's own chips. How is Microsoft's compiler's standards-chasing going? In the 21st century yet, or /still/ not? (Oh come on, we're mostly there but for ... yeah.) Compiling template-heavy C++? Does developer time matter?
(But just quietly, terminals & cmdline tools feel a lot more productive than IDEs to me and I use both in differing circumstances).
I don't have hard math here, but here's an example: JRebel. It allows instant class reloading in the JVM. It increases one's productivity tremendously, and it's expensive as hell. They used to have a free version; they don't anymore. I can only guess that they're struggling and forced to squeeze all they can, because developers will prefer a free restricted version over a paid full-featured version. I know, I do: I don't value my own time, and I'll never pay more than $100/year for a tool (and hopefully much less). I'm already trying hard to justify my IDEA subscription, because it seems that IDEA CE contains almost everything I need.
If developers were willing to spend thousands of bucks here and there, the paid developer tools industry would flourish. But I think it has actually degraded compared to the past. I remember the paid Delphi IDE, and I remember paid components; that was a thing back then. I can't remember any popular paid React component right now, despite the fact that many more people use React than ever used Delphi.
People forget that Winamp was instant on a 60MHz machine with a spinning-rust drive. It's difficult to buy a computer that slow these days. Modern CPUs modulate their speeds by far more than that as part of their moment-by-moment power management!
When I first bought an SSD and noticed the crazy speed-up, my next thought was that developers will get used to it as the "new normal" and stop caring about IOPS which would slow down SSDs (ruining my blink-of-an-eye super-speed fun) and make HDDs near-unusable. Two to four years later, here we were on Windows, and to some extent, on Linux too.
The amount of effort involved in making sure reads were sequential back in the HDD days was immense. It’s not surprising people dropped that as soon as possible.
Because streaming music is a trivial task, computationally speaking. It could have had some performance impact in 2000, but we're in 2021 now.
And nothing else related to the functional differences between Spotify and WinAMP can explain why Spotify is slow. Done properly, it should feel snappier than WinAMP felt in 2000.
It's comparable only on an abstract level - mostly because there's hardly anybody who really wants/needs "all music in existence over the internet", most people just want to "listen to music that I like".
It really did not for me by that standard. I have access to orders of magnitude more music now that I know I like than I did during the Winamp days and I was a pretty early and savvy user back then so I had more access than a lot of people.
If we're trading music player performance for the massive increase in availability I'll make that trade.
But really, 99% of the value-add of Spotify over Winamp is stuff that happens server-side. Your Spotify client doesn’t have a database of all the songs, nor the ML models for computing recommendations. As far as playing streaming music goes, Winamp would happily play an m3u (mmmm soma.fm).
Don’t get me wrong, the discovery and search features in Spotify have brought me a ton of value. But wow is the client resource usage dramatically disproportionate for the added functionality
And the kicker is, the music client is a completely separate concern from a song database, or a recommendation engine.
This is actually a point in favor of directly comparing WinAMP and Spotify - the important "value-add" bits of Spotify all happen server side, so they should have exactly zero impact on client performance.
You could go to the shoutcast website, get the shoutcast links for the radios you were interested in, then make a playlist that could be dealt with just like any other playlist.
Yeah, this is the weirdest thing to me: Windows NT 3.51 on a Pentium was perceptibly much snappier than even my new M1 MBP (developers are already expanding into the new performance envelope).
And it didn’t render millions of polygons at 60+ fps while having many GBs of assets like textures, so I don’t see how it is relevant.
One could write those programs in a truly inefficient manner, in a not-too-performant language, and they would still run without problems. Today’s computers are really fast.
The problem is that, instead of the speed of modern computers translating to perceptible performance improvements for normal users, it translates to applications that use more resources.
Whether or not the new features make up for the general perceptual slowness of modern computers is a somewhat open question.
I would agree with you, but games are simply not a great example here. It may be questionable why we would want photorealistic games when 2D, visible pixels are good enough, but games are not known for being inefficient.
Windows Server 2019 has a much snappier UI than Windows 10, I've found (I recently spent a ton of time automating installs for both, and boy did that become noticeable).
Then you look at the Windows 10 list of crapware and it becomes a lot clearer.
Vista was a worse abomination on 2 GB of RAM. It's likely that if I installed a build of Windows 10 from two years ago on the machine in question, it would be faster than greased lightning in comparison.
> Then you look at the Windows 10 list of crapware and it becomes a lot clearer.
Is there an actual causal relationship? And which crapware specifically? I'm asking because I only have access to a bunch of Windows 10 Pro machines which already don't seem to have most of the crapware (i.e., I often see threads here where people complain about all kinds of ads and other things I never even knew existed), with the rest disabled (as far as I can tell), and still they're less snappy than this one Windows Server instance on a comparable machine. (And sadly no modern OS feels as snappy as even Windows XP SP2 did, for a similar level of functionality, on not-so-recent hardware.)
The difference is the list of running background services. The kernel and drivers are the same, the libraries are the same, but Server doesn't have (e.g.) the AllJoyn Router Service for smart-things control, the 4G LTE Push Notifications Service, the WAP Push Message Service, the Fax Service, Windows Image Acquisition, and so on. Candy Crush doesn't run on startup; these do.
The subjectively fastest, which is to say the most responsive or lowest input latency, computer I've ever used was a Macintosh 512ke upgraded to 2.5 megabytes of memory, running the system software and apps off of a 1.5 megabyte RAM-disk. When you double-clicked, say, MacPaint in the Finder, it was loaded before you finished the physical mouseup. This was when that hardware and software was still current.
Spotify runs on all of my devices. In the morning I can ask my smart speaker to play songs that Spotify presumes I like, continue from my car with a voice command, see what my friends are playing or play their playlists. I'll gladly wait for the ~5 seconds it takes for an app to start if it delivers all this.
Of course IRC is much faster; it's also a few orders of magnitude simpler than Slack. For work-related communications I prefer Slack to IRC, but for chatting with friends IRC does just fine. One simple protocol vs hundreds of APIs that provide extremely rich content. Once again, Slack takes a few seconds to launch on a modern machine; it's not that bad considering how much it does.
Yeah shit is slower, shit is also way more connected and complex than in the 90s.
Slack is definitely faster than my IRC client, since it's not running over ssh to a remote host like my IRC client has to. You can't actually run IRC locally, or you'll miss all the chat from when you're not signed in.
I don't know if you've noticed, but most programmers actually do care about performance. They're just bad at it. Software is slow precisely because programmers don't get it: optimising all the code will make slow software. They spend their budget optimising 99% of the code and end up with no money left to make that last 1% fast. For real (game engine programmer speaking). Plus, optimising all the code makes all of it unreadable, which drains even more money from the budget. Just as OP said.
Usually simpler code is better. I used to see a lot of mistaken "high performance" code where someone had unrolled all of the loops because they thought unreadable code goes faster, though maybe people have gotten better about this? (OpenSSL is an example here.)
Unfortunately not all kinds of slowness only happen in hotspots. This is true for CPU cycles, but if an occasional task uses all memory, it's going to mess up everything downstream as well.
I wonder where the trade-off is for loop unrolling.
Like, unrolling a `for` loop that only has 5 iterations makes sense. But if you have 100 iterations, then the larger memory footprint of all the code might actually make it slower than just keeping the `for` loop.
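For what it's worth, the trade-off looks roughly like this (a sketch; in practice the compiler will often unroll and vectorize the plain loop for you at higher optimization levels, so measure before hand-rolling anything):

```cpp
#include <cstddef>

// Plain loop: small code footprint, easy for the compiler to optimize.
float sum(const float* data, std::size_t n) {
    float total = 0.0f;
    for (std::size_t i = 0; i < n; ++i)
        total += data[i];
    return total;
}

// Hand-unrolled by 4: fewer loop-condition checks per element, but more
// code; past some size the extra instruction-cache pressure and the
// tail handling can eat the win.
float sumUnrolled(const float* data, std::size_t n) {
    float total = 0.0f;
    std::size_t i = 0;
    for (; i + 4 <= n; i += 4)
        total += data[i] + data[i + 1] + data[i + 2] + data[i + 3];
    for (; i < n; ++i)  // leftover elements
        total += data[i];
    return total;
}
```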
Programmers may care about performance, but project managers certainly don't and they always have the final say
LOL, no. Offer a dev the choice between a framework that is trending on Twitter and one that is 100x faster but "legacy" (meaning more than a year old), and she'll choose the first one every time, and the project manager won't know the difference, or care if he did.
If it's good to develop with, I don't care one whit whether a software framework is 2 months old or 20 years old. Besides, not everyone works on greenfield projects and can pick what framework they want.
I’m with both of you and I don’t think what you two are saying is mutually exclusive. I’m kind of a performance nut (often working with teams, I’m the guy that introduces everyone to a profiler) and, in my N=small experience, PMs are opposed to spending time on performance optimization precisely because most people are bad at it and it takes an undefined amount of time for unknown performance gains.
When I take an hour to profile and analyze some request and say “I think we can cut this thing’s response time by 80% by optimizing $X, which will result in an overall page load time reduction of 3s”, that’s way easier to sell than “I want to figure out why this page load is so slow” and then reporting back a week later with “shrug I optimized 5 things and it didn’t get faster. Dunno.”
Faster = more ad revenue. If your app/site is too slow, people will stop using it and won't see your ads. Also, search engines will detect that and downrank you, meaning even less ad impressions. It has been studied many times, and yet, developers don't do it.
Simply, performance is expensive and for most businesses, the additional revenue is not worth the additional expense.
> Almost all software I use these days is absurdly slower than it should be and probably a massive drain on collective productivity
Productivity, energy usage, e-waste in landfill as people are forced to upgrade in order to do simple tasks with the current software... The world is paying a high price for programmer laziness.
Is it programmer laziness, or insane demand for software due to many factors, mostly profit?
There's insane demand for games, and those programmers are able to optimise their code; there's insane demand for trading software, and those programmers are able to optimise their code. But webdevs just don't care. I look at pages now that do nothing more, functionally, than the same app would have done 20 years ago, yet they are slower, and I have literally 1000x more CPU power. It's insane.
Optimized games have a well known impact on sales. I'd assume the same with trading software. Putting out a blanket statement that web devs are lazy is just silly. Developers rarely have much say in how much time they can dedicate to features and the app as a whole, that usually boils down to PMs and management. If the org as a whole doesn't value performance or doesn't think it matters, then that will get reflected in the product regardless of what the developers actually want.
And yet developers working on front-end code to run in those browsers are waiting seconds for their linter or test suite or transpiler to process a few thousand lines of JS code, on a modern PC. Not a few million lines, a few thousand. That is orders of magnitude slower than it should be, and it's a cost that hits huge numbers of developers many times every day. Quite often, that awful performance is because the slow tools are written in a certain language also used in those "fast" browsers, and far too many developers are still giving this a pass because they incorrectly assume some magical runtime and JIT will make up for using a language that was never designed for high performance work and writing large, complicated programs.
The problem with the NPM ecosystem is really dire because know-nothing programmers who write JS plug in to that ecosystem and write bad code, and then the know-nothing programmers who don't write JS come along, too, and say look how bad JS is.
JS is plenty fast and can be used for "large, complicated programs"[1] just fine. The problem with most JS written today is in programmer practices—the way the community associated with NodeJS pushes one another to write code (which is, ironically, not even a good fit for the JavaScript language). It turns out that how you write code actually matters (which is the central point of the article here to begin with...)
These are related: the faster the browser and the typical internet connection is, the bigger sites will be (absent some force to prevent developers from spending the entirety of the new performance budget.)
I have seen progressively worse performance from even the best browsers, and page load times that should be instant often literally take minutes or never load unless I completely kill the browser process and return. The slowdown is nearly inexorable, with occasional improvements in some versions before resuming the dismal trend.
Simple word processing and spreadsheets are so laggy on keystrokes as to be almost unusable, and by unusable, unresponsive on a level hundreds of times worse than DECADES ago, on computers orders of magnitude less powerful. Simple cursor movements are so laggy that I must set aside my train of thought to attend to the tool.
And this is on a very solid CAD-level computer, FIOS connection, etc.
It is disgusting and unforgivable. I quit my software career 15 years ago for a new industry, in no small part because I saw this trend of ever more complex "tools" and "frameworks" creating a situation of building castles on shifting sands, with serious declines in the ability to reason about debugging, performance, or security. It is only worse now, probably exponentially. Worse yet, it seems to have yielded no perceptible gain in "programmer productivity".
It is one thing to architect and program to take advantage of upcoming advances in hardware. It is quite another thing to ignore it and assume that the hardware builders will save you from the bloatware that you foist on the world without a serious thought.
> Are you joking? I have seen progressively worse performance from even the best browsers
Where is the evidence that you're seeing poor performance from the browser itself and that the source of the problem does not lie in the difference between what the server is sending down the tubes today compared to what it was sending 10 years ago?
I agree that what the server is 'sending down the tubes' is a huge part of the problem.
But this is the root cause we're discussing - programmers selecting tools for their convenience (and worse yet, cool factor), instead of FIRST considering the responsiveness of the system as they design and code.
Optimization as an afterthought is about as good as security as an afterthought - anything from a complete waste of time to a disaster.
There are indeed pages that load like lightning, so it can be done (e.g., HN takes about 1.5sec to create a new window and load, so not exactly lightning, but usable), but many are horrible, and clearly due to bad programming.
For starters, when I see a page that loads code from 25 different sites that need NoScript privs to even display, that alone is pretty questionable - license and manage your own damn code (for the sake of minimizing dependencies alone!). Twitter is particularly egregious in the last year or so: a new page takes 10sec-?? to load, and the LAST thing that loads is the list of posts -- with the same load times it would feel so much more responsive if that were the first thing to load and the other navigation, news, etc. panels loaded later while I was reading. That is the result of many bad programming choices.
It's not. Someone explained that browser makers spend billions on top talent to make browsers fast, and you posted a flippant comment — "are you joking?" — about your observations that performance of browsers is getting worse.
Now you're talking about stuff that web developers do on the pages that you visit.
Browsers are an example of software that is fast because companies have put effort into making them that way, instead of not caring. That's the claim made by the person you responded to. Dispute it, if you want, but don't make claims and then change the subject or shut down inquiry into the things that you're saying.
I'm using a very underpowered laptop (Dell 3410) and the web is extremely fast for me. There's some serious problem in your OS or your network if websites take minutes to load. Dreadful Gmail or YouTube loads in 1-2 seconds for me and then works instantly. Simpler websites like HN load in a fraction of a second.
You're both right - but about different kinds of software.
I believe the GP post was thinking an application which has some heavy computational kernel to it, where most of the processor time (and other resources) are spent - with a lot of other code for UI, configuration, some parsing etc.
And you seem to be talking about everyday desktop applications: browser, mail client, word processor, instant messaging client, and maybe even your IDE as a programmer. Those kinds of apps don't have that kernel of hard computational work - their work is much more diffuse.
I spent years optimizing physics simulations from different domains (weather, fluid dynamics, all sorts of stuff). Even there you have to carefully pick the parts you want to optimize to get the best outcome for the resources you invest in optimization. It's completely infeasible to make everything crazy fast with a blanket approach.
You're speaking directly past your parent's point. I've spent years optimizing physics simulations, more abstract discrete math stuff, etc. myself. There, especially there, it's usually pretty easy to identify what should be the hot loop, by eye. Easier with a profiler if the project gets big. In scientific computing, it's very common to find that 99% of the time is spent on a scant few lines of code. Unless somebody bungs up the I/O and you end up spending 99% of the time there instead, but I digress.
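To make the "scant few lines" concrete, a hypothetical example of the shape those kernels tend to have - in a simulation like this, nearly all of the wall-clock time lands in one inner loop, and a profiler (or just eyeballing it) points straight at it:

```cpp
#include <cstddef>
#include <vector>

// Toy 1D heat-diffusion step. Run millions of times per simulation,
// this loop is where ~99% of the time goes; everything else (setup,
// I/O, configuration) is noise by comparison.
void diffuseStep(std::vector<double>& next,
                 const std::vector<double>& cur, double alpha) {
    for (std::size_t i = 1; i + 1 < cur.size(); ++i)
        next[i] = cur[i] + alpha * (cur[i - 1] - 2.0 * cur[i] + cur[i + 1]);
}
```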
But in a whole OS, with a browser, network stack, dozens of applications, etc., it's pretty common to find hundreds or thousands of things which are all more or less equally sucking performance. So on one hand it's a much harder problem. On the other hand, front-end people have this attitude that performance doesn't matter because hardware is fast enough and it's better to write in the most abstract language possible, because changing diapers makes your hands smell gross.
I disagree that everything is slow. Yeah, there are websites with so much JS bloat that they can spin up the fans on my laptop, but otherwise your computer does a shitton of things, most of which is essential complexity.
To-the-point software is actually really fast: gcc/clang do tons of optimizations compared to what compilers did a few decades ago, and I doubt you could realistically make them a significant percentage faster in the general case. Same for databases. Browsers themselves are also pushing the boundaries of hardware; they can display static HTML/CSS ridiculously fast, even though it is not an ideal model for layout. Pretty much only the top of the abstraction stack "sucks" sometimes, but there are really fast JS apps as well.