ECC memory can't eliminate the chance of these failures entirely. They can still happen. Making software resilient against bitflips in memory seems very difficult though, since it not only affects data, but also code. So in theory the behavior of software under random bit flips is, well... random. You probably would have to use multiple computers doing the same calculation and then take the answer from the quorum. I could imagine that doing so would still have been cheaper than using ECC RAM, at least around 2000.
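For what it's worth, the voting half of that idea is tiny. A minimal sketch in Python (the function name and the bit-flipped example value are mine, purely illustrative, not any real system's API):

    from collections import Counter

    def quorum_result(results, quorum=None):
        # Accept a value only if a strict majority of replicas agree on it.
        quorum = quorum if quorum is not None else len(results) // 2 + 1
        value, votes = Counter(results).most_common(1)[0]
        if votes < quorum:
            raise RuntimeError("no quorum: replicas disagree")
        return value

    # Three replicas run the same calculation; one suffers a single bit flip
    # (42 is 0b101010, flipping bit 2 gives 0b101110 = 46). The majority wins.
    print(quorum_result([42, 42, 46]))   # -> 42

The hard parts, of course, are everything around this function: distributing the work, synchronizing the replicas, and deciding what to do when there is no quorum.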
Generally this goes against software engineering principles. You don't try to eliminate the chances of failure and hope for the best. You need to create these failures constantly (within reasonable bounds) and make sure your software is able to handle them. Using ECC RAM is the opposite. You just make it so unlikely to happen that you will generally not encounter these errors at scale anymore, but nonetheless they can still happen, and now you will be completely unprepared to deal with them, since you chose to ignore this class of errors and sweep it under the rug.
Another interesting side effect of quorum is that it also makes certain attacks more difficult to pull off, since now you have to make sure that a quorum of machines gives the same "wrong" answer for an attack to work.
I don't think ECC is going to give anyone a false sense of security. The issue at Google's scale is they had to spend thousands of person-hours implementing in software what they would have gotten for "free" with ECC RAM. Lacking ECC (and generally using consumer-level hardware) compounded scale and reliability problems, or at least made them more expensive than they might otherwise have been.
Using consumer hardware and making up reliability with redundancy and software was not a bad idea for early Google, but it did end up with an unforeseen cost. Even a thousand machines in a cosmic-ray-proof bunker will end up with memory errors that ECC would correct for free. It's just reducing the surface area of "potential problems".
When I said consumer hardware I meant early Google literally using consumer/desktop components mounted on custom metal racks. While Intel does artificially separate "enterprise" and "consumer" parts, there's still a bit of a difference between SuperMicro boards with ECC, LOM features, and data center quality PSUs and the off-the-shelf consumer stuff Google was using for a while.
I don't know if AMD really intended to break Intel's pricing model. Their higher end Ryzen chips you'd use in servers and capital W Workstations don't seem to have a huge price difference from equivalent Xeons. Even if they're a bit cheaper you still need a motherboard that supports ECC so it seems at first glance to be a wash as far as price.
That being said if I was putting together a machine today it would be Ryzen-based with ECC.
Small nit: the PRO versions of Ryzen APUs do support ECC[0]; also, ASRock has been quoted as saying that all of their AM4 motherboards support ECC, even the low-end offerings with the A320 chipset.
The CPU chip can do it. Some motherboards bring out the pins to do it, but they're often called "workstation" boards and cost 2x the price of a standard desktop motherboard.
ECC memory itself is overpriced. $60 for 16 GB DDR4 without ECC, $130 for 16 GB DDR4 with ECC.
Because if you give consumers a choice between having ECC or LEDs on otherwise identical boards with identical price, most will go for the LEDs. In reality the price isn't even the same because ECC realistically adds to the BOM (board, modules) more than LEDs do. So the price goes up with seemingly no benefit for the user.
As such, features that are unattractive to the regular consumer go into workstation/enterprise offerings where the buyer understands what they're buying and why.
It really isn't. It was a hypothetical choice between 2 models, with ECC or LEDs, at the same price. Hypothetical because most boards don't offer the ECC support at all, and certainly not at the same price.
> LEDs are a great opportunity to increase profit margin, so I'm not sure about your price conclusions
You confused manufacturing costs, price of the product, and profit margins. LEDs cost far less to integrate than ECC but command a higher price premium (thus better profit margins) from the regular consumer. Again supporting my statement that even if presented with 2 absolutely identical parts save for ECC vs. LEDs the vast majority of consumers will go for LEDs because they don't care or know about ECC.
> It really isn't. It was a hypothetical choice between 2 models, with ECC or LEDs, at the same price. Hypothetical because most boards don't offer the ECC support at all, and certainly not at the same price.
You're making a claim about what people would choose. If you have no related data, and logic could support multiple outcomes, then a claim like that is basically useless.
> You confused manufacturing costs, price of the product, and profit margins.
I'm not sure why you think this.
> Again supporting my statement that even if presented with 2 absolutely identical parts save for ECC vs. LEDs the vast majority of consumers will go for LEDs because they don't care or know about ECC.
Sure, if you don't tell them that it's ECC they won't pick the ECC part.
If you actually do a fair test, and put them side by side while explaining that one protects them from memory errors and the other looks cooler, you can't assume they'll all pick the LED.
When people never even think of ECC, that is not evidence that they wouldn't care or know about it in a head-to-head competition.
My claims are common sense and supported by real life: regular people don't know what ECC is, and those who do find the problem's impact too minor to get palpable benefits from fixing it. Why are you being pedantic if you aren't actually going to bring arguments at the same level you expect from me?
> If you actually do a fair test, and put them side by side while explaining that one protects them from memory errors and the other looks cooler, you can't assume they'll all pick the LED.
Isn't this exactly the kind of claim you yourself characterize one paragraph above as "useless" because "you have no related data, and logic could support multiple outcomes"? Sure, if people were more tech-educated then my assumption might be wrong. But people aren't more educated so...
The benefits of LEDs are hard to miss (light) all the time. The benefits of ECC are hard to observe even in the fraction of a percent of the time when they matter. Human cellular "bitflips" happen every hour, but they don't visibly affect you, so you likewise decide they're not an issue demanding more attention, like constant sun protection. People aren't keen on paying to solve problems they never suffered from, or even noticed, especially when you tell them the errors happen all the time yet with no obvious impact. Unless they have no choice, like OEMs selling ECC-RAM-only devices.
Sell me ECC memory when my (actual real life) 10 year old desktop or 5 year old phone never glitched. Sell me ECC RAM when my Matlab calculations come back different every time. See the difference?
> When people never even think of ECC, that is not evidence that they wouldn't care or know about it in a head-to-head competition.
Well then, I guess none of us has any evidence, except that today people buy LEDs, not ECC RAM. Educate people, or wait until manufacturing processes and designs are so susceptible to bitflips that people notice, and it will be a different conversation.
> My claims are common sense and supported by real life: regular people don't know what ECC is, and those who do find the problem's impact too minor to get palpable benefits from fixing it. Why are you being pedantic if you aren't actually going to bring arguments at the same level you expect from me?
Regular people aren't given the choice! The things you're quoting about the real world to support your argument are incompatible with a scenario where someone is actually looking at ECC and LED next to each other. And I'm not being "pedantic" to say that, it's a really core point.
> Isn't this exactly the kind of claim you yourself characterize one paragraph above as "useless" because "you have no related data, and logic could support multiple outcomes"?
A claim of a specific outcome is useless. "you can't assume" is another way of phrasing the lack of knowledge of specific outcomes.
> Sure, if people were more tech-educated then my assumption might be wrong. But people aren't more educated so...
It's the kind of thing that can go on a product page. But first someone has to actually make a consumer-focused sales page for ECC memory, and the ECC has to be plug-and-play without strong compatibility worries.
And just like when LEDs spread over everything, it's something that you can teach people about and create demand for with a bit of advertising.
> Sell me ECC memory when my (actual real life) 10 year old desktop or 5 year old phone never glitched. Sell me ECC RAM when my Matlab calculations come back different every time. See the difference?
That's a clear picture of one person. But "never glitched" is a very dubious claim, and you can't blindly extrapolate that to how everyone would act.
I think they may have been referring to the actual mainstream retail availability of ECC RAM. I can buy non-ECC RAM at almost any retailer that sells computers. If I need non-ECC RAM right now I can have it in my hands in 30 minutes. ECC on the other hand I pretty much have to buy online. Microcenter stocks a single 4GB stick of PC4-21300, and I can't think of a single use case where I'd want ECC but not more than 4GB.
You're right, rereading the parent post with that angle makes it clearer that they were complaining about the unavailability of memory and other hardware.
It would definitely be great to have more reliable hardware generally available and at less of a price premium.
1. Single-bitflip correction, along with Google's metrics, could help them identify their own algorithms or customers' VMs that are causing bitflips via rowhammer, and machines which have errors regardless of workload
2. Double-bitflip detection lets Google decide if they, say, want to panic at that point and take the machine out of service, and they can report on what software was running or why. Their SREs are world-class and may be able to deduce whether this was a fluke (orders of magnitude less likely than a single bit flip), whether a workload caused it, or whether hardware caused it.
The advantage the 3 major cloud providers have is scale. If a Fortune 500 were running their own datacenters, how likely would it be that they have the same level of visibility into their workloads, the quality of SREs to diagnose, and the sheer statistical power of scale?
I sincerely hope Google is not simply silencing bitflip corrections and detections. That would be a profound waste.
ECC seems like a trivial thing to log and keep track of. Surely any Fortune 500 could do it and would have enough scale to get meaningful data out of it?
It's not just tracking ECC errors, which as you point out is not hard, but correlating it with the other metrics needed to determine the cause and having the scale to reliably root cause bitflips to software (workloads that inadvertently rowhammer) or hardware or even malicious users (GCP customers that may intentionally run a rowhammer.)
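To make the "tracking is not hard" half concrete: on Linux, the EDAC subsystem exposes per-memory-controller error counters in sysfs, and a minimal collector is a few lines. The paths below are the standard EDAC layout, but treat them as an assumption and check them on your kernel; the correlation with workloads described above is where the real work is.

    import glob

    def read_edac_counts():
        # Corrected (ce_count) and uncorrected (ue_count) ECC error totals,
        # per memory controller, as exposed by the Linux EDAC subsystem.
        counts = {}
        for mc in glob.glob("/sys/devices/system/edac/mc/mc*"):
            with open(mc + "/ce_count") as f:
                ce = int(f.read())
            with open(mc + "/ue_count") as f:
                ue = int(f.read())
            counts[mc.rsplit("/", 1)[-1]] = {"corrected": ce, "uncorrected": ue}
        return counts

    # A fleet-wide collector would ship these counters, tagged with host and
    # workload labels, to a metrics system; the correlation described above
    # then happens on top of that data.
    print(read_edac_counts())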
There was an interesting challenge at DEF CON CTF a while back that tested this, actually. It turns out that it is possible to write x86 code that is 1-bit-flip tolerant–that is, a bit flip anywhere in its code can be detected and recovered from with the same output. Of course, finding the sequence took (or so I hear) something like 3600 cores running for a day to discover it ;)
Nit: not for a day, more like 8 hours, and that's because we were lazy and somebody said he "just happened" to have a cluster with unbalanced resources (mainly used for deep learning, but all GPUs occupied with quite a lot of CPUs / RAM left), so we decided to brute force the last 16 bits :)
Also, the challenge host left useful state (which bit was flipped) in registers before running teams' code, without this I'm not sure if it is even possible.
Sure, all's fair in a CTF. That story came to me through the mouths of at least a handful of people, who might have a bit of an incentive to exaggerate given that they hadn't quite been able to get to zero and might be just a little sour :P
The state was quite helpful, yes–for x86 it seems like a "clean slate" shellcode would be quite difficult, if not impossible, to achieve as we saw. However, I am left wondering how other ISAs would fare…perhaps worse, since x86 is notoriously dense. But maybe not? The fixed-width ones would probably be easy to try out, at least.
Maybe being notoriously dense is not a bad thing? While those ModRM bytes popping up everywhere are annoying as f* (too easy to flip an instruction into a form with an almost-guaranteed-to-be-invalid memory access), at least due to the density there won't be reserved bits. For example, in AArch64 if bits 28 and 27 are both zero the instruction will almost certainly be an invalid one (hitting an unallocated area), and with a single bit flip all branch instructions will have [28:27] = b'00...
Right, I was saying that the other ISAs would do worse because they aren't as dense and will hit something undefined much more readily. But the RISCs in general are much less likely to touch memory (only if you do a load/store from a register that isn't clean, maybe). From a glance, MIPS looks like it might work, since the opcode field seems to use all the bits and the remaining bits just encode reg/func/imm in various ways. The one caveat I see is that I think the top bit of the opcode seems to encode memory accesses, so you may be forced to deal with at least one.
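Tangentially, the enumeration half of that brute-force search is trivial to sketch; the expensive part the teams parallelized is actually executing every variant and checking its output, which isn't shown here. This is just an illustration, not the challenge's harness:

    def single_bit_flips(code: bytes):
        # Yield every variant of `code` with exactly one bit flipped.
        for i in range(len(code) * 8):
            mutated = bytearray(code)
            mutated[i // 8] ^= 1 << (i % 8)
            yield i, bytes(mutated)

    # A 100-byte sequence has only 800 single-flip variants to check; the hard
    # part is finding a sequence for which every variant still runs and gives
    # the same output, which is why the search needed a cluster.
    print(sum(1 for _ in single_bit_flips(b"\x90" * 100)))   # -> 800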
> Making software resilient against bitflips in memory seems very difficult though, since it not only affects data, but also code.
There is an OS that pretty much fits the bill here. There was a show where Andrew Tanenbaum had a laptop running Minix 3 hooked up to a button that injected random changes into module code while it was running, to demonstrate its resilience to random bugs. Quite fitting that this discussion was initiated by Linus!
Although it was intended to protect against bad software, I don't see why it wouldn't also go a long way in protecting the OS against bitflips. Minix 3 uses a microkernel with a "reincarnation server", which means it can automatically reload any misbehaving code not part of the core kernel on the fly (which for Minix is almost everything). This even includes disk drivers. In the case of misbehaving code there is some kind of triple redundancy mechanism much like the "quorum" you suggest, but that is where my crude understanding ends. AFAIR, userland software could in theory also benefit, provided it was written in such a way as to be able to continue gracefully after a reload.
At some point, whatever's watching the watchers is going to be vulnerable to bitflip and similar problems.
Even with a triple-redundant quorum mechanism, slightly further up that stack you're going to have some bit of code running that processes the three returned results - if the memory that's sitting on gets corrupted, you're back where you started.
> At some point, whatever's watching the watchers is going to be vulnerable to bitflip
One advantage of microkernels is that the "watcher" is so small that it could be run directly from ROM, instead of loaded into RAM. QNX has advocated that route for robotics and such in the past.
Minix may not be the best example of the type. While it is a microkernel, its real-world reliability has been poor in the past. More mature microkernel operating systems like QNX and OpenVMS are better examples.
> While it is a microkernel, its real-world reliability has been poor in the past.
Nitpick/clarification: it currently supervises the security posture, attestation state and overall health of several billion(?) Intel CPUs as the kernel used by the latest version of the Management Engine.
If ME is shut down completely apparently the CPU switches off within 20 minutes. Presumably this applies across the full uptime of the processor, and not just immediately after boot, and iff this is the case... percentage of Intel CPUs that randomly switch off === instability/unreliability of Minix in a tightly controlled industrial setting.
Anyone have any idea why there haven't been any open-source QNX clones, at least not any widely known ones? Even before their Photon MicroGUI patents expired, the clones could have used X11.
I used to occasionally boot into QNX on my desktop in college. It was a very responsive and stable system.
Hypervisors are, to a first approximation, microkernels with a hardware-like interface. All of this kernel bypass work being done by RDBMSes, ScyllaDB, HFTs, etc. is, to a first approximation, making a monolithic kernel act a bit like a microkernel.
There are well known open source microkernels, like Minix 3 and L4. Probably not that attractive.
Why something hasn't been done is always a hard question to answer, since to succeed a lot of things have to go right, and by default none of them do. But one thing is that microkernels were more trendy in the 90s - r&d people are mostly doing things like "the cluster is the computer", unikernel, exokernel, rump kernel, embedded (eg tock), remote attestation since then (I'm not up to date on the latest).
Thinking about it a bit more, QNX clones might suffer from something akin to second system syndrome. There's a simple working design, and it likely strongly invites people to jump right to their own twist on the design before they get very far into a clone.
> Minix may not be the best example of the type. While it is a microkernel, its real-world reliability has been poor in the past. More mature microkernel operating systems like QNX and OpenVMS are better examples.
You might be referring to the previous versions. Minix 3 is basically a different OS; it's more than an educational tool - in fact it's probably running inside your computer right now if you have an Intel CPU (it runs on Intel's ME - for better or worse).
Yes, but this is the entire principle around which microkernels are designed: making the last critical piece of code as small and reliable as possible. Minix 3's kernel is <4000 lines of C.
As far as bitflips are concerned, having the critical kernel code occupy fewer bits reduces the probability of a bitflip causing an irrecoverable error.
Yes, I understand this -- basic risk mitigation by reducing the size of your vulnerability.
(I'll archaic brag a bit by mentioning I used to be a heavy user of Minix - my floppy images came in over an X25 network - and saw Andy Tanenbaum give his Minix 3 keynote at FOSDEM about a decade ago. I'm a big fan.)
Anyway, while reducing risk this way is laudable, and will improve your fleet's health, as per TFA it's a poor substitute, with bad economics and worse politics behind it, for simply stumping up for ECC.
I'll also note that, for example, Google's sitting on ~3 million servers so that ~4k LoC just blew out to 12,000,000,000 LoC -- and that's for the hypervisors only.
Multiply that out by ~50 to include VM's microkernels, and the amount of memory you've now got that is highly susceptible to undetected bit-flips is well into the mind-blowing range.
Oh, I'm not saying it's the single best solution; I guess I got carried away in the argument. It's simply a scenario where the concept shines, yet it's an entirely artificial scenario, and I agree ECC is the correct way.
I'm surprised that the other replies don't grasp this. This is the proper level to do the quorum.
Doing quorum at the computer level would require synchronizing parallel computers, and unless that synchronization were to happen for each low-level instruction, it would have to be written into the software to take a vote at critical points. This is going to hurt throughput and add a lot of software complexity.
I guess you could implement the quorum at the CPU level... e.g. have redundant cores each with their own memory. But unless there was a need to protect against CPU cores themselves being unreliable, I don't see this making sense either.
At the end of the day, at some level, it will always come down to probabilities. "Software engineering principles" will never eliminate that.
My first employer out of Uni had an option for their primary product to use a NonStop for storage -- I think HP funded development, and I'm not sure we ever sold any licenses for it.
You need two alpha particles hitting the same rank of memory for a failure to happen. Although super rare, even then it is still correctable. You need three before it is silent data corruption. Silent corruption is what you get with non-ECC with even a single flip.
Where are you getting this from? My understanding is that these errors are predominantly caused by secondary particles from cosmic rays hitting individual memory cells, and I've never heard something so precise as "you need two alpha particles". Aren't the capacitances in modern DRAM chips extremely small?
The structure of the ECC is at the rank level. This allows correcting single bit flips in a rank and detecting double bit flips in a rank. So when you grab a cache line, each 64-bit word is corrected and verified.
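For intuition, here is a toy sketch of that SECDED (single-error-correct, double-error-detect) idea using an extended Hamming code over a small list of bits. Real DIMMs do the equivalent in hardware across each 64-bit word with 8 check bits; this is an illustration of the coding scheme, not the DRAM circuit:

    def secded_encode(data_bits):
        # Extended Hamming code: parity bits at power-of-two positions
        # (1-indexed), data bits everywhere else, plus an overall parity
        # bit at index 0 (the "D" in SECDED).
        r = 0
        while (1 << r) < len(data_bits) + r + 1:
            r += 1
        code = [0] * (len(data_bits) + r + 1)
        it = iter(data_bits)
        for pos in range(1, len(code)):
            if pos & (pos - 1):               # not a power of two -> data bit
                code[pos] = next(it)
        for k in range(r):                    # set each Hamming parity bit
            p = 1 << k
            for pos in range(1, len(code)):
                if pos != p and (pos & p):
                    code[p] ^= code[pos]
        code[0] = sum(code) % 2               # overall parity
        return code

    def secded_decode(code):
        syndrome = 0
        for pos in range(1, len(code)):
            if code[pos]:
                syndrome ^= pos
        parity_ok = sum(code) % 2 == 0
        if syndrome == 0 and parity_ok:
            return "ok"
        if not parity_ok:                     # odd number of flips: assume one, fix it
            code[syndrome] ^= 1               # syndrome 0 means the parity bit itself flipped
            return "corrected"
        return "double error detected"        # even number of flips, nonzero syndrome

    word = secded_encode([1, 0, 1, 1, 0, 0, 1, 0])
    word[5] ^= 1                              # one flip: corrected in place
    print(secded_decode(word))                # -> corrected
    word[5] ^= 1; word[9] ^= 1                # two flips: detected, not corrected
    print(secded_decode(word))                # -> double error detected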
Bit flips can happen, but regardless of whether they get repaired by the ECC code or not, the OS is notified, IIRC. It will signal a corruption to the process that is mapped to the faulty address. I suppose that if the memory contains code, the process is killed (if ECC correction failed).
> I suppose that if the memory contains code, the process is killed (if ECC correction failed).
Generally, it would make the most sense to kill the process if the corrupted page is data, but if it's code, then maybe re-load that page from the executable file on non-volatile storage. (You might also be able to rescue some data pages from swap space this way.)
If you go that route, you should be able to avoid the code/data distinction entirely, as data pages can also be completely backed by files. I believe the kernel already keeps track of which pages are a clean copy of data from the filesystem, so I would think it would be a simple matter of essentially paging out the corrupted data.
What would be interesting is if userspace could mark a region of memory as recomputable. If the kernel is notified of memory corruption there, it triggers a handler in the userspace process to rebuild the data. Granted, given the current state of hardware, I can't imagine that is anywhere near worth the effort to implement.
> What would be interesting is if userspace could mark a region of memory as recomputable.
I believe there's already some support for things like this, but intended as a mechanism to gracefully handle memory pressure rather than corruption. Apple has a Purgeable Memory mechanism, but handled through higher-level interfaces rather than something like madvise().
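On the Linux side, the closest analogue I know of is madvise(MADV_FREE): it covers the memory-pressure case (the kernel may silently drop the pages, and you must be ready to recompute them) but not the corruption case. A minimal sketch, assuming Linux and Python 3.8+ where the mmap module exposes the flag:

    import mmap

    PAGE = 4096
    # Private anonymous mapping; MADV_FREE applies to private anonymous pages.
    buf = mmap.mmap(-1, 16 * PAGE, flags=mmap.MAP_PRIVATE)
    buf[:11] = b"recomputed!"                 # derived, re-creatable data
    buf.madvise(mmap.MADV_FREE)               # kernel may reclaim these pages lazily
    # If the pages are reclaimed under memory pressure, later reads see
    # zero-filled pages, so the application would keep a sentinel or version
    # tag and recompute the contents instead of trusting the cache.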
> You probably would have to use multiple computers doing the same calculation and then take the answer from the quorum.
The Apollo missions (or was it the Space Shuttle?) did this. They had redundant computers that would work with each other to determine the “true” answer.
The Space Shuttle had redundant computers. The Apollo Guidance Computer was not redundant (though there were two AGCs onboard-- one in the CM and one in the LEM). The aerospace industry has a history of using redundant dissimilar computers (different CPU architectures, multiple implementations of the control software developed by separate teams in different languages, etc) in voting-based architectures to hedge against various failure modes.
In aerospace, where this is common, you often had multiple implementations, since you wanted to guard against software bugs made by humans. The problem was that different teams often made the same error in the same place, so it wasn't as effective as it might have seemed.
Forgive my ignorance, but wouldn't the computer actually reacting to the calculation (and sending a command or displaying the data) still be very vulnerable to bit-flips? Or were they displaying the results from multiple machines to humans?
If you use multiple computers doing the same calculation and then take the answer from the quorum, how do you ensure the computer that does the comparison is not affected by memory failures? Remember that all queries have to go through it, so it has to be comparable in scale and power.
Raft, Paxos, and other consensus algorithms add even more overhead. Imagine running every Google query through Raft and think how long it will take and how much extra hardware would be needed.
ECC memory is just as fast as non-ECC memory, and only costs a little more.
Your comment sounded like "your recursive definition is impossible".
I am totally for ECC and was flabbergasted when it went away. But the article makes sense since I remember Intel pushing hard to keep it out of the consumer space. The freaking QX6800 didn't support ECC and it retailed for over a grand.
That would be a death sentence for the current administration. It's always the same game. Whoever is in power does EVERYTHING imaginable to make themselves look good and unload all the fallout into the next term(s), which is often held by the opposing party, and if not, the game of deferring will simply continue... Until at some point it simply can't and we get a big bang.
Why are people still thinking that taxing high earners is an option? The world is ruled by rich people, why would they tax themselves? Doesn't make any sense. Democracy is nice, until all the choices you have, exhibit the same underlying core values that you disagree with.
Normally all successful parties will aim for the middle, where most voters are. The ones that don't are marginalized, and perhaps have some chance at succeeding during times when voters go astray and just vote for some extremists out of sheer frustration (as happened in Germany a couple of times). But that never lasts. This seriously limits democracy, as you can be sure that some values will never change, because there is no party with enough votes to make those changes.
Instead, what we need is Democracy 2.0, where the people vote on individual packages, not parties. Switzerland may be the only country where that works out quite well.
> Why are people still thinking that taxing high earners is an option? The world is ruled by rich people, why would they tax themselves? Doesn't make any sense. Democracy is nice, until all the choices you have, exhibit the same underlying core values that you disagree with.
for the most part, "high earners" are not rich people; they are the upper-middle class. the richest people make the vast majority of their income from investment returns, and are often able to avoid having it count as "income". the very rich are quite happy to have taxes increased on high salaries in exchange for maintaining the status quo re investments.
> Instead, what we need is Democracy 2.0, where the people vote on individual packages, not parties. Switzerland may be the only country where that works out quite well.
this much I can agree on. why do I have to choose between guns and abortion?
My counterpoint to Democracy 2.0 is dead simple: If we could buy a car that was $2,000 cheaper (but had no seatbelts), many Americans would.
Humans are bad at some things. Understanding complicated situations where they aren't experts is one of them - Statistics, safety, complicated systems, etc.
Also, reddit is in many ways democracy 2.0 and they upvote literal fake news all the time.
Information overload leads to people seeking succinct information and answers. The upvote/downvote systems ubiquitous today are easily manipulated to spread misinformation, propaganda, and fake news.
We need to do away with upvotes and downvotes. It’s manipulated everywhere it exists.
It won’t happen. And the reason is: It outsources community curation. A couple of moderators can manage millions of posts, because they only need to pay attention to the outliers. It also allows the moderators to easily fall back on “it’s what the community wants” when the core values change.
And even when not explicitly manipulated, the wrongness is baked into the system. At least once a week, I catch myself rewriting a comment not to improve it but to increase my chances of getting upvotes. How many claims do I make, how many positions do I believe, simply because I know somewhere in my subconscious that they'll get upvotes?
On that note, Congress has had a rider problem forever. It's unconscionable how legislation gets clumped together to a) force a vote on some generally positive thing while b) avoiding accountability and c) sneaking all kinds of evil opposites into the same package.
America fairly recently actually taxed high income earners and companies, so politically it's very possible. Increasing the AMT by, say, 5% while raising the minimum threshold to, say, 500k would have vast popular support and a huge backlash from rich people. Determining who wins that fight is just another political battle.
but politicians play a game of semantics to accomplish nothing and appease their base
in tax contexts, "income" means about 3 different things, but to angry people it means one thing: "everything you earn that year"
the reality is that "income" is a subset of earning types, and the rich people anyone actually cares about do not earn much "income" and are therefore continually exempted from the populist fury
taxing high income earners just hurts the upper middle class and business owners
it is entertaining but disheartening to see so much energy put toward the same result over and over
Democracy is still majority rules, which means a huge part of the consumers of government are still forced to accept conditions they don't like.
There is also that mafia bit about having to give away part of your earnings or you go to jail.
We need to decentralise the system so that fewer majority votes are needed (ideally, zero) and politicians hold less power (ideally, zero).
Most state functions can be privatised trivially.
Some are more complicated but it doesn't seem like we're moving in this direction at all.
Every year the government spends more of the money stolen from taxpayers and keeps getting bigger and more powerful (and probably more corrupt).
Read up on SELTIC (the Service, Efficiency, and Lower Taxes for Indianapolis Commission) and how their privatization initiatives dramatically improved Indianapolis.
The cost of public health care is staggering and the quality is abysmal.
If you can afford it, if you have something serious and you don't want to wait for ages or get poor equipment or lack of testing, you'll go private even in Europe.
I'd say private health care in the states is very successful as well (high quality and research) but the prices are ridiculously inflated because of government / insurances.
Perhaps my perspective is skewed by living in the Midwest. By virtue of some private interests, I interact with quite a few very conservative people and perhaps the ‘average man on the street’ is already more culturally conservative than on the coasts.
I maintain my position though. I really have never come across a person in conversation who would take an absolutist position that government action is always going to be better than private enterprise, but the opposite absolutist position is somewhat common.
It say it’s an article of faith because the people I’ve spoken to are not up
for discussion about it. They have established their belief and nothing will cast a shadow of doubt upon it.
> I interact with quite a few very conservative people and perhaps the ‘average man on the street’ is already more culturally conservative than on the coasts.
You’re claiming that all these people you’ve met are anarcho-capitalists? Are you under the impression that cultural conservatism means anarcho-capitalism?
> but the opposite absolutist position is somewhat common
You’ve provided no evidence that it is more common. In my experience, the opposite is true.
I also never meet anyone who has such an extreme view on things.
Here in Europe, we believe that certain things should be public: education, roads, health care, social security, basic research, while other things should be private, like smartphones, cars, ...
And I think for most of the things we like public, there is good evidence that they should be. E.g., Euro-style health care systems spend less money per patient while achieving a higher average lifespan when compared to whatever system the US has.
In general, the argument for private systems is that since they have to make a profit, they will be more efficient. In reality, they have to make a profit, which public systems don't have to. Public systems, however, have the same price pressure as private systems, since people apply this pressure via democracy.
The biggest issue I see is that public systems are less capable of big innovation vs. small process improvements. This can be tackled via the basic research done at public universities + the private sector's work to bring these to market.
Also, if you follow Peter Thiel's line of thought on how every business should strive for monopoly, so they can take monopoly profits and "focus on the product instead of the competition" - a public service that has been granted this monopoly by law would be perfect: they have no competition, so they can focus on making the best product, and they don't have to make a profit, which means they can operate at cost.
Now, I don't think it's as easy as that, but in the end, if you stop being an extremist, you can mix and match all kinds of system to use what is best, instead of striving for something that is ideologically pure.
tl;dr: I want my smartphone from private companies competing for the best product, I want my water from a public utility.
Saying what all Europeans think is a bit of a stretch.
I'm European and I think zero things should be public. I absolutely don't like the European Union and where it's going.
The problem is that if you don't have any incentive to make ends meet because you're spending someone else's money, slowly but surely you'll have inefficiencies in the system.
Too many employees, someone making sure a public contract goes to a friend for more money than it should, politicians expensing flights right and left, a service being provided which nobody uses (think about some bus lines).
The same happens in large organizations where whoever is footing the bill doesn't have visibility. The government is the ultimate giant corporation - with the added bonus of not having to make a profit!
We can't say for sure how much money we're wasting, but government spending always grows bigger and I don't see public services improving or providing more value over time.
The last pandemic highlighted how much worse public health got in many countries in Europe. How did that happen and where did the money go?
An inefficient government starts eroding public services, spending the same and offering less, until there's a big crash and then there is nothing.
We saw a bit of that in Greece, we'll probably hear more from Italy soon.
I think we're years away from a collapse similar to the one experienced by eastern European countries post Soviet Union.
I grew up in a communist country; everything was state owned, everything was crappy (I mean quality-wise) and generally out of stock, but hey, it had an amazing price (dictated by the state of course) - theoretically anyone could afford it. That resulted in huge lines waiting for small stock drops of chicken, electronics, bananas (not unlike the new PC hardware situation with scalpers these days, which is funny). Of course, it was even better if you knew someone working at those stores; then you'd get a heads up or they'd "reserve" some for you.
So in general I'd say I've seen government fail to run plenty of businesses. A more modern example would be PG&E which is in this weird situation where it's so regulated that it's almost government run but it's technically a private entity. There are also plenty of pure private entities that haven't run well (but at least, such entities end up being bankrupt, a healthy alternative for non-profitable private businesses).
So in my experience, having seen both private and publicly run companies fail, I've started to think that this aspect (being privately or publicly controlled) is not what makes a company run well; it's one of many factors that can contribute to it. I think a more important factor is the interests, skills, will and agenda of those who control it, regardless of whom they answer to (voters or a board).
And let's not discard the particularities of each business. Some enjoy a natural monopoly, others face tough competition. These are more important aspects contributing to efficient use of resources than, again, who the CEO answers to.
> Instead, what we need is Democracy 2.0, where the people vote on individual packages, not parties. Switzerland may be the only country where that works out quite well.
This is hard and inefficient in a large country with an underlying population that lacks critical thinking.
We could instead use a multi-party system where multiple voices are raised in the house and senate votes.
The voices actually are raised, in the hearings. You can watch some of them on C-SPAN. The floor of a chamber with 100 or 435 members isn't really suited for debate, but it does happen. The vote is the very tail end of a long deliberative process.
Most people only look at that tail end because the deliberation is dull, but if you want to know what it looks like, much of it is available to the public.
The base asks via their votes. If they re-elect the legislator, the legislator will continue to vote the same way.
Most legislators have high approval ratings among their constituents, and practically unanimous among their party in their district. It's always other people's legislators who are the problem.
> The base asks via their votes. If they re-elect the legislator, the legislator will continue to vote the same way.
No. People are left voting for their representatives from a set of 2. With this, most people vote for one because they hate the other and not because they like their policies.
In terms of representative democracy, the American two-party system is just infantile.
> what we need is Democracy 2.0, where the people vote on individual packages, not parties. Switzerland may be the only country where that works out quite well.
This works terribly in CA, FYI.
I don't think there are any easy solutions to the political equilibrium we're in.
Some of the ballot initiatives have had deleterious effects on California.
Proposition 13 putting a yearly property tax increase cap means that people who have owned property for a long time are basically not paying their fair share of local taxes. That tax advantage can be passed from generation to generation. It also has the perverse incentive to encourage cities to increase commercial and industrial zoning over residential zoning, with the effects on property prices one can see today.
Proposition 47 meant that many crimes under a $950 threshold were considered misdemeanors instead of felonies. This is one likely factor behind the large increase in car break-ins and shoplifting in some areas, such as San Francisco.
Growing income inequality will lead to authoritarianism and then eventual revolution. We need a win:win situation and taxing the super rich is usually the best option.
Well, for one, natural sweeteners interfere with the body's insulin response. I.e., it will overproduce in anticipation, but then your blood sugar drops into oblivion because, well, you didn't actually eat sugar. Is this bad? Probably not, if you don't overuse them.
Besides that? I can tell you my body is very sensitive, and artificial sweeteners (I literally tried all different kinds) make hell break loose in it, much worse than sugar ever could. So perhaps they aren't such a great replacement after all? This usually applies to everything humans try to lazily swap out in foods. Oh, so I am vegan? Keep these meat replicas coming, only they are like 100 times worse for your health than the real thing...
> Artificial sweeteners are related to non-alcoholic fatty liver disease: Microbiota dysbiosis as a novel potential mechanism
> Non-alcoholic fatty liver disease (NAFLD) is a systemic and wide-spread disease characterized by accumulation of excess fat in the liver of people who drink little or no alcohol. Artificial sweeteners (ASs) or sugar substitutes are food additives that provide a sweet taste, and are also known as low-calorie or non-calorie sweeteners. Recently people consume increasingly more ASs to reduce their calorie intake. Gut microbiome is a complex ecosystem where 10^14 microorganisms play several roles in host nutrition, bone mineralization, immune system regulation, xenobiotics metabolism, proliferation of intestinal cells, and protection against pathogens. A disruption in composition of the normal microbiota is known as ‘gut dysbiosis’ which may adversely affect body metabolism. It has recently been suggested that dysbiosis may contribute to the occurrence of NAFLD. The aim of the present study was to investigate the effects of ASs on the risk of NAFLD. The focus of this review is on microbiota changes and dysbiosis. Increasing evidence shows that ASs have a potential role in microbiota alteration and dysbiosis. We speculate that increased consumption of ASs can further raise the prevalence of NAFLD. However, further human studies are needed to determine this relationship definitively.
Last part from the article: "But maybe I’m imaging things. Maybe the reason progress stopped in 1996 is that we invented everything. Maybe there are no more radical breakthroughs possible, and all that’s left is to tinker around the edges. This is as good as it gets: a 50 year old OS, 30 year old text editors, and 25 year old languages. Bullshit. No technology has ever been permanent. We’ve just lost the will to improve."
Yeah, that's obviously bullshit. But we also didn't lose the will to improve. The leap from where we were 20 years ago to where we are now isn't mind-boggling. I still remember my Turbo Pascal days, my Delphi days. Sure, there were things to improve... But overall the experience was sufficient. I wouldn't really have trouble implementing any of the projects I worked on in the past 10 years with Delphi, or even Turbo Pascal. It would sure suck, and take longer, but it wouldn't be a dealbreaker...
This is the raw definition of non-revolutionary progress.
The problem is that the next step of software engineering is incredibly difficult to achieve. It's like saying "Uh, math/physics hasn't changed a whole lot since 1900". Well, it did change, but very incrementally. There is nothing revolutionary about it. Einstein would still find himself right at home today. That doesn't mean progress is stalling per se.
It means that to get "to the next level" we need a massive breakthrough, a herculean effort. The problem also is that nobody really knows what that will be... For me, the next step of software engineering is to only specify high-level designs and have systems like PlusCal, etc. in place that automatically verify the design, and another AI system that codes the low-level implementation. We would potentially have a "cloud compiler" that continuously recompiles high-level specs and improves over time with us doing absolutely nothing. I.e., "software development as a service": you specify what you want to build, and AI builds it for you, profiting from continuous updates to this AI engine worldwide.