Often the initial consumers would be enterprises rather than casual users. There are numerous enterprise use cases where higher bandwidth and lower latency would be worth the cost; financial services and ML inference come to mind. I can imagine the high-memory compute instances offered by cloud service providers being an obvious home for these.
Also, going from 2 seconds to 1 second is pretty huge if you're doing some operation hundreds or thousands of times a day.
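To put a rough number on it (the run count below is just an assumed example):

    # Back-of-envelope: an operation that drops from 2 s to 1 s,
    # run 1,000 times a day (the run count is an assumption, for illustration).
    runs_per_day = 1_000
    seconds_saved_per_run = 2.0 - 1.0

    minutes_saved = runs_per_day * seconds_saved_per_run / 60
    print(f"~{minutes_saved:.0f} minutes saved per day")  # -> ~17 minutes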
However, all those people who complained that the Atom editor took too much memory are about to experience a new world when they buy their next computer.
I used Emacs back when people joked it stood for “Eight Megs and Constantly Swapping”.
To be fair, Electron bloat doesn't grow with application size; only incompetence does. By using Electron you are basically forcing your application to need around 200-300 MB of RAM no matter how trivial it is, but that's all there is to it. Poor application performance has more to do with bad application development. Nothing prevents you from, e.g., building Atom in a way that lets you view files bigger than 2 MB with good performance, or building a Slack client that doesn't leak memory. I can run lots of tabs in Firefox with good performance, but if each tab used its own browser instance I would run out of memory very quickly.
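If you want to sanity-check that 200-300 MB baseline on your own machine, here's a quick sketch (it needs the third-party psutil package, and "slack" is only an example process name; substitute whichever app you want to measure):

    # Sum the resident set size across every process belonging to an
    # Electron app (they spawn several: main, renderers, GPU helper, ...).
    import psutil

    target = "slack"  # example name; substitute the app you want to check
    total_rss = 0
    for proc in psutil.process_iter(["name", "memory_info"]):
        info = proc.info
        if info["name"] and target in info["name"].lower() and info["memory_info"]:
            total_rss += info["memory_info"].rss

    print(f"{target}: {total_rss / (1024 * 1024):.0f} MiB resident")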
A casual user like you ends up using quite complex compute in the cloud when you watch a video on YouTube, scroll a newsfeed, or make an airline booking. Some of the more advanced parts of those tasks, when programmed well by good engineers, become feasible with this.
All the machine-learning-related buzzwords exist mainly because those techniques have only now become computationally feasible. You never know what will come next.
On the whole, I find these 20% memory performance upgrades (which translate into only a fraction of that in real-world performance) more obnoxious than anything, but I'd love ECC!
I have collected about a dozen single-bit (corrected) errors in my home cloud, and seen 2 uncorrectable errors.
I have also come across about a dozen 'bad' sticks of RAM that show errors in memtest86; about half of them have only one bad row, and two only show errors after multiple passes, which I assume is thermal.
I do, however, build all of my computers from parts pulled out of recycling, so my statistics may be a bit off from the norm...
When you think about the logic of what you said, it is pretty silly.
Without ECC RAM, how would you know that you had a single bit flip? How would you know that you needed ECC RAM?
When you talk to people who run server systems, you'll find there are plenty of bit flips. That expertise is getting harder to find, though, as more people run systems in the "cloud", where there's no visibility into the physical error statistics.
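For what it's worth, on a Linux box with ECC RAM and the EDAC driver loaded you can get that visibility yourself from sysfs; a minimal sketch (the paths assume the standard EDAC layout, and without ECC these files simply won't exist):

    # Print corrected (ce_count) and uncorrected (ue_count) memory error
    # counters per memory controller, as exposed by the Linux EDAC driver.
    import glob
    import os

    for mc in sorted(glob.glob("/sys/devices/system/edac/mc/mc*")):
        counts = {}
        for name in ("ce_count", "ue_count"):
            path = os.path.join(mc, name)
            if os.path.exists(path):
                with open(path) as f:
                    counts[name] = f.read().strip()
        print(os.path.basename(mc), counts)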
> Without ECC RAM, how would you know that you had a single bit flip?
Errors, glitches, or files with unexplained corruption. If a bit flips in code, more likely than not that becomes an invalid instruction, a bad jump, or something like that, so the program crashes. If it flips in data, there's often some algorithm involved (compression, a linked list, js/css/html, whatever) that will either hit invalid data and crash, display an error, or at least draw a wrong pixel (though, true, I'd have to spot a single color channel being off, and there's roughly a 4-in-8 chance the flipped bit is too low-order to really see; and that only applies to the raw bitmap case). Data on disk should also end up corrupted (I'm thinking of image or video editing, or data processing like the gigabytes of port scans I recently went through) if it was inadvertently modified while in RAM. There are a ton of ways to notice this, though admittedly also a fair number of cases where one wouldn't.
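To make the color-channel point concrete, here's what a single-bit flip does to an 8-bit channel value (the starting value is arbitrary): flipping one of the four low bits moves it by at most 8 out of 255, which is essentially invisible, while the high bits move it by 16-128.

    value = 0b10110100  # arbitrary 8-bit channel value (180)

    for bit in range(8):
        flipped = value ^ (1 << bit)
        print(f"bit {bit}: {value} -> {flipped} (delta {abs(flipped - value)})")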
I see your point though: if the error is silently corrected by the software, or I just hit retry and never figure out what the error actually was (not least because it won't be reproducible), I'm unlikely to find out that I need ECC. Maybe I should introduce random bit flips on a Raspberry Pi (so I don't corrupt my actual production filesystem) and use it for an evening of browsing, programming, and other usual activities, to either prove or disprove the theory that I'd notice if this happened with any regularity.
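If anyone tries this, the rough shape of such an injector might look like the sketch below. It's Linux-only, needs permission to write /proc/<pid>/mem (typically root), can obviously crash or corrupt the target, and the region-selection heuristic is just an assumption for illustration, so point it only at something disposable.

    # Flip one random bit in a writable, anonymous mapping of a target
    # process via /proc/<pid>/mem. Crude fault injection for experiments
    # only -- this can crash the target or silently corrupt its data.
    import random
    import re
    import sys

    pid = int(sys.argv[1])

    # Collect writable anonymous regions (inode 0, no backing file).
    regions = []
    with open(f"/proc/{pid}/maps") as f:
        for line in f:
            m = re.match(r"([0-9a-f]+)-([0-9a-f]+) (\S+)", line)
            if m and "w" in m.group(3) and line.split()[-1] == "0":
                regions.append((int(m.group(1), 16), int(m.group(2), 16)))

    start, end = random.choice(regions)
    addr = random.randrange(start, end)

    with open(f"/proc/{pid}/mem", "r+b", buffering=0) as mem:
        mem.seek(addr)
        byte = mem.read(1)[0]
        mem.seek(addr)
        mem.write(bytes([byte ^ (1 << random.randrange(8))]))

    print(f"flipped one bit at {addr:#x} in pid {pid}")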