>"pushing hidden agendas and framing actually matters"
Thank you very much, this is exactly the point. Americans cheer the "free health care" and do not see that, for example, a Swiss family has to pay 15-20% of its income every month for compulsory health insurance. This is not a choice; one HAS to pay it. Additionally, dental work, for example, is not covered unless it results from an illness.
It's absolutely NOT free, and it's a big burden for most citizens who live in countries with universal healthcare.
Important context here is that Americans pay higher healthcare costs on average than any country with universal healthcare. We have to pay more and get worse service for that payment. Not only are we paying more, there are more parties involved and more hoops to jump through, and it's extremely frustrating.
It is mostly an enormous wealth transfer from the young to the old, which now constitute the majority of voters in the western democracies.
Meanwhile, both midwives and pediatricians are severely underfunded; even common treatments require months of waiting because the reimbursements do not even cover the costs.
Considering that most countries are in a demographic crisis and will go insolvent with 100% certainty due to the mismatch between net payers and their future financial obligations, this is insane.
Simpler algorithms are usually faster for small N, and N is usually small. Big O analysis assumes a fantasy world where N is always large and the constant factors that slow down an algorithm can be ignored.
For example, it is common in sorting to use an n² algorithm like bubble sort when the list is small (e.g. <50), and an n log n algorithm like merge sort when it's larger.
The issue is that merge sort recurses, which comes with extra cost, so an n² algorithm beats an n log n algorithm when n is small.
You can (and probably should) add a threshold to recursive algorithms like mergesort so that they don't end up doing recursive calls on very small arrays.
For arrays smaller than the threshold, insertion sort is faster than bubble sort. And if you really don't want recursion for large arrays to begin with, there's heapsort or Shell sort.
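The threshold idea can be sketched like this (a hypothetical illustration, not anyone's production code; the cutoff value of 32 is an assumption and should be tuned by benchmarking on the target machine):

```python
CUTOFF = 32  # assumed threshold; the real break-even point is machine-dependent

def insertion_sort(a, lo, hi):
    """Sort a[lo:hi] in place; tiny constant factors make it fast for small runs."""
    for i in range(lo + 1, hi):
        key = a[i]
        j = i - 1
        while j >= lo and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key

def hybrid_merge_sort(a, lo=0, hi=None):
    """Merge sort that stops recursing below CUTOFF and hands off to insertion sort."""
    if hi is None:
        hi = len(a)
    if hi - lo <= CUTOFF:
        insertion_sort(a, lo, hi)  # small run: O(n^2) but cheaper in practice
        return
    mid = (lo + hi) // 2
    hybrid_merge_sort(a, lo, mid)
    hybrid_merge_sort(a, mid, hi)
    # merge the two sorted halves back into a[lo:hi]
    merged = []
    i, j = lo, mid
    while i < mid and j < hi:
        if a[i] <= a[j]:
            merged.append(a[i]); i += 1
        else:
            merged.append(a[j]); j += 1
    merged.extend(a[i:mid])
    merged.extend(a[j:hi])
    a[lo:hi] = merged
```

This is essentially what production sorts do: CPython's Timsort, for instance, uses insertion sort on short runs for the same reason.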
I don't think there is any practical case where using bubble sort makes sense, except "efficiency doesn't matter for my use case and this is the code I already have".
As noted in one of the volumes of Knuth's The Art of Computer Programming, heapsort is in-place and achieves the Gleason bound, so it can be regarded as not beaten by quicksort.
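For reference, "in-place" here means O(1) extra space: the heap is built inside the array itself. A minimal sketch (my illustration, not Knuth's code):

```python
def heapsort(a):
    """In-place heapsort: O(n log n) comparisons worst case, O(1) extra space."""
    n = len(a)

    def sift_down(root, end):
        # Restore the max-heap property for the subtree rooted at `root`,
        # considering only a[0:end].
        while 2 * root + 1 < end:
            child = 2 * root + 1
            if child + 1 < end and a[child] < a[child + 1]:
                child += 1  # pick the larger child
            if a[root] < a[child]:
                a[root], a[child] = a[child], a[root]
                root = child
            else:
                return

    # Build a max-heap bottom-up.
    for start in range(n // 2 - 1, -1, -1):
        sift_down(start, n)
    # Repeatedly move the maximum to the end and shrink the heap.
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]
        sift_down(0, end)
```

Unlike quicksort, there is no O(n²) worst case and no recursion stack, though the poor cache locality of the heap traversal is why quicksort usually wins on average in practice.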
The article doesn’t explain it, but the reason why you want a large space to measure loudspeakers is that your measuring frequency range is determined by the time it takes until the first reflection arrives, which you can exclude by using a windowing function in your measuring software. You only want the direct sound emanating from the DUT, not the reflections (unless you’re interested in room correction).
The larger the space, the lower the frequency you can still measure.
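A rough back-of-the-envelope sketch of that relationship (the geometry, distances, and the one-full-period rule of thumb here are my assumptions, not from the article):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def lowest_measurable_freq(direct_m, reflected_m):
    """Estimate the lowest frequency resolvable in a gated measurement.

    direct_m:    mic-to-speaker distance in metres
    reflected_m: path length of the earliest reflection in metres

    The window must close before the first reflection arrives, and as a
    rule of thumb you need at least one full period inside the window.
    """
    gate_s = (reflected_m - direct_m) / SPEED_OF_SOUND
    return 1.0 / gate_s

# Example (assumed geometry): mic 1 m from the speaker, earliest bounce
# travelling 3 m gives ~5.8 ms of reflection-free time, i.e. a lower
# limit of roughly 170 Hz -- hence the desire for a much larger space.
```

Doubling the extra path length of the first reflection halves the lower frequency limit, which is why anechoic chambers or outdoor ground-plane setups are used for full-range measurements.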
> "Europeans can say almost anything they want, both in theory and in practice."
David Bendels has been threatened with prison time and sentenced to seven months of probation for a Twitter meme [0]. It is the harshest sentence ever handed down to a journalist for a speech crime in the Federal Republic of Germany.
This is the tweet, poking fun at the German minister of the interior Nancy Faeser (the sign says "I hate free speech"):
If grandma can't tell that the picture is edited, then it's no longer a meme, it's slander.
The comedic value would be even higher if it were an obvious tongue-in-cheek edit. Given that it's professionally and seamlessly edited, it's too ambiguous to be a meme and thus should not be protected as free speech.
If you think the standard for free speech should be delineated by what the most clueless members of society can grasp, then you're effectively anti free speech.
He claims it was poking fun. The court found differently.
> Bendels claimed the meme, posted by his newspaper's X account, was satirical.
> But the judge in the case said during the verdict that Bendels published a 'deliberately untrue and contemptuous statement about Interior Minister Ms. Faeser (...) that would not be recognizable to the unbiased reader and is likely to significantly impair her public work'.
If a picture of Nancy Faeser holding a "I hate free speech" sign can be ruled to be a "deliberately untrue and contemptuous statement", satire has become effectively illegal.
That was a well-written essay with a non-sequitur AI Safety thing tacked onto the end. His real-world examples were concrete, and the reason to stop escalating was easy to understand ("don't flood the neighbourhood by building a real dam").
The AI angle, by contrast, is purely hypothetical: there is no attempt to describe or reason about a concrete "x leading to y", just "see, the same principle probably extrapolates".
There is no argument there that is sounder than "the high velocities of steam locomotives might kill you" that people made 200 years ago.
> the high velocities of steam locomotives might kill you
This obviously seems silly in hindsight. Warnings about radium watches or asbestos sound less silly, or even wise. But neither had any solid scientific studies showing clear hazard and risk. Just people being good Bayesian agents, trying to ride the middle of the exploration vs. exploitation curve.
Maybe it makes sense to spend some percentage of AI development resources on trying to understand how they work, and how they can fail.
> Warnings about radium watches or asbestos sound less silly, or even wise. But neither had any solid scientific studies showing clear hazard and risk.
In the case of asbestos, this is incorrect. Many people knew it was deadly, but the corporations selling it hid it for decades, killing thousands of people. There are quite a few other examples besides asbestos, like leaded fuel or cigarettes.
People thought that the speed itself was dangerous, that the wind and vibration and landscape screaming by at 25mph would cause physical and mental harm.
The progress-care trade-off is a difficult one to navigate, and is clearly more important with AI. I've seen people draw analogies to companies, which have often caused harm in pursuit of greater profits, both purposefully and simply as byproducts: oil-spills, overmedication, pollution, ecological damage, bad labor conditions, hazardous materials, mass lead poisoning. Of course, the profit seeking company as an invention has been one of the best humans have ever made, but that doesn't mean we shouldn't take "corp safety" seriously. We pass various laws on how corps can operate and what they can and can not do to limit harms and _align_ them with the goals of society.
So it is with AI. Except corps are made of people who work at human speed, have vague morals, and are tied to society in ways AI might not be. AI might also be able to operate faster and with less error. So extra care is required.
It is socialised healthcare.