I think regulations can work when they enforce very well-supported, long-established best practices, such that if you don't do them it amounts to negligence.
They might also work better if they say “you can't do it that way, which is known to be unsafe,” as opposed to “you must do it this way, which is the only safe thing.”
Note also that regulatory standards are not the only mechanism in the law to create safety. Liability law can be very effective at creating safety, by giving the right incentives to the right parties, but liability law doesn't tell anyone what to do—only what will happen to you if you cause harm.
> The living standard of kings hasn't quite changed since 2000 BC.
Not true. Ancient/medieval kings still got smallpox, peed in chamber pots, and shivered in the winter cold. They couldn't hear the music of great performers who had died, or travel to another continent for a weekend trip. Etc.
Arthur C. Clarke's First Law: “When a distinguished but elderly scientist states that something is possible, they are almost certainly right. When they state that something is impossible, they are very probably wrong.” https://en.wikipedia.org/wiki/Clarke%27s_three_laws
Thanks. I think Clarke overstates things a bit (unless you take a very extreme reading of what "impossible" means), so I prefer my version. It also seems possible that engineers are more likely to hedge their bets by calling things "unworkable" or "impractical," because what actually is and isn't possible in theory is more the realm of the researcher than the practitioner.
This is a truly weird claim to have made at least 85 years after the idea entered the public sphere, and at least 19 years after a phone intended to be carried by a person, not a car, was demoed by Motorola.
Did he also pooh-pooh the idea of a public internet and wait until the '80s to do so?
It's not always assumed to be exponential. It was assumed exponential by Paul Romer because long-term economic growth is pretty exponential. Much subsequent work has followed that model, but not all of it.
Yes, but if your data is actually exponential, the linear segments are not going to be better approximations than an exponential curve. That's what's going on here.
I'm not sure that's true in general, or even frequently. In fact, I'd say it's provably false in general.
The big issue is that you get MANY more curve-fitting parameters to play with if you use a piecewise linear model vs. an exponential model. (You get to choose HOW MANY breaks to make, what the slope is for each section, and WHERE to make the breaks.)
So... let's say you created some synthetic data using an underlying exponential plus normally distributed noise. Obviously, the BEST predictive model is an exponential one. However, for any number of observations, I guarantee there's trivially at least one piecewise linear model that will have less error on the data than the exponential one. Consider the one that is simply a straight line between EVERY point. Obviously that has zero error on the observations, while the exponential fit does not. Yet it has very little predictive power compared to the exponential model.
Now, that's not what was done here... but there are actually quite a few parameters in the form of where to make the breaks and how many to make. It doesn't seem like a fair comparison.
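For what it's worth, the connect-every-point argument is easy to demonstrate. This is a minimal sketch with made-up numbers, not anything from the paper: multiplicative noise on a true exponential (so a least-squares line fit on log(y) is the natural exponential estimator), with `np.interp` playing the role of the connect-the-dots model.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 5.0, 80)
y_true = np.exp(0.5 * x)                            # underlying exponential
y = y_true * np.exp(rng.normal(0.0, 0.1, x.size))   # noisy observations

# Exponential model (2 parameters): least squares on log(y).
b, a = np.polyfit(x, np.log(y), 1)
y_exp = np.exp(a + b * x)

# Piecewise-linear "model": a straight line between EVERY point.
y_pw = np.interp(x, x, y)
assert np.allclose(y_pw, y)   # zero error on the observed data

# Against the true underlying curve (i.e., predictive accuracy),
# the 2-parameter fit beats the ~80-parameter interpolant:
err_exp = np.mean((y_exp - y_true) ** 2)
err_pw = np.mean((y_pw - y_true) ** 2)
print(err_exp, err_pw)
```

The interpolant is perfect on the data precisely because it has memorized the noise; the exponential fit averages the noise away and lands much closer to the curve that generated the data.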
The paper does cross-validate the models, and I am told that cross-validation properly penalizes overfitting with too many parameters… but I don't understand the statistics well enough here.
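The mechanics are simple enough to sketch: cross-validation scores each model only on points it was not fit to, so a model that memorizes noise pays for it on the held-out folds. This is illustrative toy code, not the paper's setup; the synthetic data, interleaved fold scheme, and both model choices are my assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 5.0, 200)
y = np.exp(0.3 * x) * np.exp(rng.normal(0.0, 0.1, x.size))

def cv_error(fit_predict, k=5):
    """Mean squared error on held-out folds (interleaved k-fold split)."""
    folds = np.arange(x.size) % k
    errs = []
    for i in range(k):
        tr, te = folds != i, folds == i
        pred = fit_predict(x[tr], y[tr], x[te])
        errs.append(np.mean((pred - y[te]) ** 2))
    return np.mean(errs)

def exp_model(xt, yt, xq):
    b, a = np.polyfit(xt, np.log(yt), 1)   # 2-parameter exponential
    return np.exp(a + b * xq)

def pw_model(xt, yt, xq):
    return np.interp(xq, xt, yt)           # connect-the-dots interpolant

# In-sample the interpolant is perfect; out-of-sample it loses,
# which is exactly the penalty for its extra flexibility.
print(cv_error(exp_model), cv_error(pw_model))
```

The key property is that the held-out noise is independent of whatever the model learned from the training folds, so memorized noise can only hurt.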
For any sampled data you'll get a guaranteed 100% fit by making the model piecewise constant with N pieces, where N is the number of data points. That says nothing about the function you sampled; it's just a way to cheat by overfitting.
Yeah, like the article mentions, they are basically making an analogy to the idea of “punctuated equilibrium” from evolutionary biology. Here’s a good exploration of how punctuated equilibrium works vs. the alternative, which is called gradualism.
> Henri Poincaré famously described them as "monsters" and called Weierstrass' work "an outrage against common sense", while Charles Hermite wrote that they were a "lamentable scourge".
I wonder if there is a really long compound German word for "an achievement whose greatness is best measured by the degree to which it disgusts experts in the field."
Not a bad description of the history of analysis. Turns out function spaces are absolutely full of gross things that don't quite fit nicely into your theory.