Analyzing the historical rate of catastrophes (bounded-regret.ghost.io)
68 points by Hooke on Dec 5, 2023 | 23 comments



In the list of catastrophes, the Mongol Wars should probably be included: https://en.wikipedia.org/wiki/Destruction_under_the_Mongol_E...

Some estimates are that 11% of the world's population was killed during this time.


Agree that it was a catastrophe by the article's definition, but the author specifically says 'since 1500', which excludes the Mongol Wars (from what I understood from your linked page)

My first thought when I started reading TFA was that the list of catastrophes to consider would be biased because more recent events have better records. Maybe that's why the author decided on the 1500 cutoff?


The table includes the Plague of Justinian and the An Lushan Rebellion, both of which occurred before 1500.


Author here. The Mongol invasions are in the .csv in the appendix (and represented in the scatter plot), but weren't included in the table because it restricts to events that lasted less than a decade.

If you restrict to the single "worst" decade then the Mongol invasions would have been high enough to make the list, but I didn't want to start making too many manual adjustments to the data, so I left it as-is.
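
In pandas terms the cut is roughly the following (a minimal sketch; the file name and column names are illustrative, not the exact ones in the appendix .csv):

    import pandas as pd

    # Illustrative column names ("event", "start_year", "end_year", "deaths_pct");
    # the actual appendix .csv may label them differently.
    df = pd.read_csv("catastrophes.csv")
    duration = df["end_year"] - df["start_year"]
    table = df[duration < 10]                               # keep events lasting under a decade
    print(table.sort_values("deaths_pct", ascending=False))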


Makes sense. Thanks for explaining. I really appreciated your in-depth quantitative analysis in the article.


Interesting approach. Doing this for events recent enough for the numbers to be estimable is a well-chosen tactic - it gives a basis for estimating earlier millennia.

There's a lot of nasty that happened to humanity before 500 BCE of course: countless wars, but also the Late Bronze Age collapse (around 1200 BCE), which looks like it might have been worldwide. Dating tech has advanced greatly recently, so physical causes might eventually be known (quakes, volcanoes, climate change, floods...). But estimating deaths is hard.

P.S. Those who've avoided the book 'Earth in Upheaval' for some reason (FO Velikovsky?) might consider checking out the first few chapters, full of voluminous evidence of past, non-human catastrophes. Regardless of 'explanations', it's an impressive laundry list. (Now available in AI-read audiobook form.)


For a narrower but deeper treatment of violence (and war) alone, Pinker's https://en.wikipedia.org/wiki/The_Better_Angels_of_Our_Natur... is well worth a read.


While it's appealing to believe Pinker, his treatment of statistics and probability of war has been debunked. For a mathematical analysis of the history of war see Cirillo and Taleb's paper "On the statistical properties and tail risk of violent conflict" (https://www.fooledbyrandomness.com/violence.pdf).

From the paper: "Accordingly, the public intellectual arena has witnessed active debates, such as the one between Steven Pinker on one side, and John Gray on the other concerning the hypothesis that the long peace was a statistically established phenomenon or a mere statistical sampling error that is characteristic of heavy-tailed processes, [16] and [27] – the latter of which is corroborated by this paper."
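
To see why the "sampling error" reading is plausible: when conflict sizes are heavy-tailed, long windows with no enormous war are expected even if the underlying risk never changes. A toy simulation, not Cirillo and Taleb's actual model, with arbitrary parameters:

    import numpy as np

    rng = np.random.default_rng(0)
    alpha, years, runs = 1.5, 70, 10_000      # arbitrary heavy-tail shape, 70-year windows
    quiet = 0
    for _ in range(runs):
        sizes = rng.pareto(alpha, years) + 1  # one potential conflict per year, arbitrary units
        if sizes.max() < 100:                 # arbitrary "no catastrophic war" threshold
            quiet += 1
    print(quiet / runs)                       # most windows look like a "long peace" anyway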


Unsure, but maybe related to newer considerations/nuance:

Scaling theory of armed-conflict avalanches https://www.santafe.edu/news-center/news/avalanche-violence-...

Podcast: https://complexity.simplecast.com/episodes/39-9ugXDtkC


Skeptically.

Pinker has been called out for cherry-picking by numerous other authors, and particularly by Graeber/Wengrow, who are a duo of academic anthropologist and archaeologist respectively. Another is Christopher Ryan. In both cases well-reasoned counterarguments are posed against Pinker's reasoning.


The only thing I know about the book is from this article https://acoup.blog/2021/10/15/fireside-friday-october-15-202... which recommends Azar Gat, War in Human Civilization instead.


Self-replication is only dangerous because it creates more physical "stuff" to cause harm, exponentially fast. But the exponential is the dangerous part, not the replication. A software program replicating itself might seem dangerous until you realize that it's limited by the hardware of the single machine it runs on, so you can only really call it one total "organism." Otherwise a fork bomb would have already killed the human race.
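
A minimal sketch of that point, with arbitrary numbers: doubling is explosive only until it hits a fixed resource ceiling.

    # Arbitrary numbers: exponential doubling capped by one machine's resources
    # (e.g. its process table), which is the fork-bomb situation.
    capacity = 4096
    copies = 1
    for _ in range(20):
        copies = min(copies * 2, capacity)
    print(copies)   # pinned at 4096, not 2**20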


TIL that the Black Death killed more than 1/4 of the human race. That’s just unbelievable.


I think technology has had significantly more effect on catastrophes than the author gives it credit for. You could argue that weapons tech significantly increased casualties in world wars. Global travel allows pandemics to spread globally. I'm sure there are other examples.

However, it also has ameliorating effects that are barely touched on here. Vaccines are the most obvious example. And as the author mentioned, the near elimination of famines is largely the result of technology.

While I can absolutely see the potential for AI to precipitate a catastrophe, to me it has more in common with technologies that have prevented or ameliorated them.


Has there been a single instance of a self-replicating AI? The article seems to think so, but try as I might, none of the image generators, chess engines, LLMs, or linear regression models I've used or seen has even once copied itself to another location, let alone run itself.

The idea of AI as a novel self-replicator is cool and appears in movies and books, but doesn't seem to exist outside of fiction. The other article referenced seems to dream of a future 2030 AI with all the capabilities one can imagine, which isn't supported by any reasonable projections for AI technology. It might as well be a warning about all the dangerously weaponizable portable fusion reactors that could exist if ITER development is super successful. In this respect, AI seems like an unlikely driver of catastrophe as defined in the near term.

Assigning even a 5-10% increase in rates of calamity to this technology, which has no evidence to support it, while discounting all other technologies (including nuclear weapons) on the grounds that there's too little data, is not reasonable. The reality is, we don't know what risk value to assign. We won't know for some time.

Just leave out the AI bit from the otherwise reasonable looking statistical analysis, and you’ll be left with a more intellectually rigorous and useful work.


A computer virus is self-replicating and can contain any arbitrary code to execute. There may be size constraints that make this impractical currently, but today's gigabyte is tomorrow's terabyte.


Think about today’s LLMs. What would it take to replicate a GPT-4? It would take more GPUs than are available on the market. The rate of AI replication is limited by the rate of GPU production, which absolutely will not grow exponentially. We are safe.
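
A back-of-envelope version of that argument, taking the premise at face value and using entirely made-up numbers:

    # Entirely hypothetical figures, purely illustrative.
    gpus_per_frontier_model = 25_000        # assumed GPUs tied up per running copy
    gpus_produced_per_year = 1_000_000      # assumed annual datacenter GPU supply
    max_new_copies_per_year = gpus_produced_per_year // gpus_per_frontier_model
    print(max_new_copies_per_year)          # 40: growth is linear in GPU supply, not exponential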


You cannot "estimate" the probability of a catastrophic event, i.e. a Black Swan. All you can say is that it's possible, and over a long enough time period, things like that will happen. As well as things you never imagined.


A catastrophe is not synonymous with a black swan event. Your friendly insurance company estimates the probability of catastrophic losses on a regular basis.
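
For concreteness, the textbook way to do this for rare events is to treat catastrophes as a Poisson process and fit the rate to historical counts; a minimal sketch with made-up numbers:

    from math import exp

    # Made-up figures: 3 qualifying catastrophes observed over 500 years.
    events, years = 3, 500
    rate = events / years                    # maximum-likelihood Poisson rate per year
    p_next_century = 1 - exp(-rate * 100)    # P(at least one in the next 100 years)
    print(round(p_next_century, 2))          # ~0.45 under these assumptions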


But they can 'write uncovered checks' by insuring risks they can't cover:

No / smaller catastrophes: do the math, cash the premiums, do the payouts.

Big catastrophe: insurer would go under, but as long as it doesn't happen they can still cash the insurance premiums & everything is a-okay.


Exactly. They can declare bankruptcy. Or they can refuse to cover it, so that the legislatures have to get involved. Implicitly, the government is covering the Black Swan events.


With an "act of God" exclusion. They insure you for specific events. The probability of those can be calculated.


Isn’t this a tautology? If you can estimate a specific event, it is a possibility that your worldview has to accommodate. By definition, black swans are things you didn’t think of.



