Agree that it was a catastrophe by the article's definition, but the author specifically says 'since 1500', which excludes the Mongol Wars (from what I understood from your linked page)
My first thought when I started reading TFA was that the list of catastrophes to consider would be biased because more recent events have better records. Maybe that's why the author decided on the 1500 cutoff?
Author here. The Mongol invasions are in the .csv in the appendix (and represented in the scatter plot), but weren't included in the table because it restricts to events that lasted less than a decade.
If you restrict to the single "worst" decade then the Mongol invasions would have been high enough to make the list, but I didn't want to start making too many manual adjustments to the data, so I left it as-is.
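If anyone wants to reproduce the filtering, it's a couple of lines on the appendix .csv. The column names below are illustrative placeholders, not necessarily the file's actual schema:

    import pandas as pd

    # Hypothetical column names -- the real .csv schema may differ.
    df = pd.read_csv("catastrophes.csv")
    duration = df["end_year"] - df["start_year"]
    table = df[duration < 10]  # the table keeps only sub-decade events
    # Multi-decade events like the Mongol invasions stay in df (and in
    # the scatter plot) but drop out of the table.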
Interesting approach. Doing this for events recent enough that the numbers are estimable is a well-chosen tactic - it gives a basis for estimating earlier millennia.
There's a lot of nasty that happened to humanity before 500BCE, of course: countless wars, but also the Late Bronze Age collapse (around 1200BCE), which looks like it might have been worldwide. Dating tech has advanced greatly recently, so physical causes might eventually be known (quakes, volcanoes, climate change, floods...). But estimating deaths is hard.
P.S. Those who've avoided the book 'Earth in Upheaval' for some reason (FO Velikovsky?) might consider checking out the first few chapters, full of voluminous evidence of past, non-human catastrophes. Regardless of 'explanations', it's an impressive laundry list. (Now available in AI-read audiobook form.)
While it's appealing to believe Pinker, his treatment of statistics and probability of war has been debunked. For a mathematical analysis of the history of war see Cirillo and Taleb's paper "On the statistical properties and tail risk of violent conflict" (https://www.fooledbyrandomness.com/violence.pdf).
From the paper: "Accordingly, the public intellectual arena has witnessed active debates, such as the one between Steven Pinker on one side, and John Gray on the other concerning the hypothesis that the long peace was a statistically established phenomenon or a mere statistical sampling error that is characteristic of heavy-tailed processes, [16] and [27] – the latter of which is corroborated by this paper."
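A quick simulation makes the sampling-error point concrete. This is only an illustration, not Cirillo and Taleb's actual method, and the tail exponent and thresholds below are made up:

    import numpy as np

    rng = np.random.default_rng(0)

    # Per-war death tolls drawn from a heavy-tailed (Pareto)
    # distribution; the tail exponent 1.1 and the scale are
    # illustrative assumptions.
    tolls = (rng.pareto(a=1.1, size=5000) + 1) * 1e4
    big = np.flatnonzero(tolls > 1e6)   # wars above a "huge" threshold
    gaps = np.diff(big)
    print("mean gap:", gaps.mean(), "max gap:", gaps.max())
    # The longest lull between huge wars runs several times the mean
    # gap even though the generating process never changes -- a "long
    # peace" produced by pure sampling behavior.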
Pinker has been called out for cherry-picking by numerous other authors, particularly Graeber and Wengrow, an academic anthropologist and archaeologist respectively. Another is Christopher Ryan. In both cases well-reasoned counterarguments are posed against Pinker's reasoning.
Self-replication is only dangerous because it creates more physical "stuff" to cause harm, exponentially fast. But the exponential is the dangerous part, not the replication. A software program replicating itself might seem dangerous until you realize that it's limited by the hardware of the single machine it runs on, so you can only really call it one total "organism." Otherwise a fork bomb would have already killed the human race.
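To make that concrete, the entire fork bomb is a sketch like this (don't actually run it):

    import os

    # Classic fork bomb -- do NOT run. Every process forks forever, so
    # the process count doubles until the OS hits its process/memory
    # limits. Exponential growth, but hard-capped by one machine's
    # hardware, which is the point above.
    while True:
        os.fork()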
I think technology has had significantly more effect on catastrophes than the author gives it credit for. You could argue that weapons tech significantly increased casualties in world wars. Global travel allows pandemics to spread globally. I'm sure there are other examples.
However, it also has ameliorating effects that are barely touched on here. Vaccines are the most obvious example. And as the author mentioned, the near elimination of famines is largely the result of technology.
While I can absolutely see the potential for AI to precipitate a catastrophe, to me it has more in common with technologies that have prevented or ameliorated them.
Has there been a single instance of a self-replicating AI? The article seems to think so, but try as I might, none of the image generators, chess engines, LLMs, or linear regression models I’ve used or seen has even once copied itself to another location, let alone run itself.
The idea of AI as a novel self-replicator is cool and appears in movies and books, but doesn’t seem to exist outside of fiction. The other article referenced dreams of a future 2030 AI with all the capabilities one can imagine, which isn’t supported by any reasonable projections for AI technology. It might as well be a warning about all the dangerously weaponizable portable fusion reactors that could exist if ITER development is super successful. In this respect, AI seems like an unlikely driver of catastrophe (as defined) in the near term.
Assigning even a 5-10% increase in rates of calamity to this technology, which has no evidence to support it, while discounting all other technologies (including nuclear weapons) on the claim that there’s too little data, is not reasonable. The reality is, we don’t know what risk value to assign. We won’t know for some time.
Just leave the AI bit out of the otherwise reasonable-looking statistical analysis, and you’ll be left with a more intellectually rigorous and useful work.
A computer virus is self-replicating and can contain any arbitrary code to execute. There may be size constraints that make this impractical currently, but today's gigabyte is tomorrow's terabyte.
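The replication step itself is trivial. A benign sketch (the payload and propagation vector that make real viruses dangerous are deliberately absent):

    import shutil
    import sys
    from pathlib import Path

    # Benign self-replication: copy the running script's own source
    # file to another location. No payload, no propagation vector.
    def replicate(dest_dir: str) -> Path:
        src = Path(sys.argv[0])           # path of the running script
        dest = Path(dest_dir) / src.name
        shutil.copy(src, dest)            # byte-for-byte copy of itself
        return dest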
Think about today’s LLMs. What would it take to replicate a GPT-4? It would take more GPUs than are available on the market. The rate of AI replication is limited by the rate of GPU production, which absolutely will not grow exponentially. We are safe.
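Back-of-envelope version of that bound, with loudly made-up numbers (neither GPT-4's real footprint nor actual production figures):

    # Hypothetical figures, for illustration only.
    gpus_per_frontier_model = 25_000     # assumed GPUs tied up per model
    gpus_produced_per_year = 1_000_000   # assumed annual supply

    copies_per_year = gpus_produced_per_year / gpus_per_frontier_model
    print(copies_per_year)  # ~40: replication bounded linearly by fab
                            # output, not exponentially by software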
You cannot "estimate" the probability of a catastrophic event, i.e. a Black Swan. All you can say is that it's possible, and over a long enough time period, things like that will happen. As well as things you never imagined.
A catastrophe is not synonymous with a black swan event. Your friendly insurance company estimates the probability of catastrophic losses on a regular basis.
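Concretely, the standard actuarial move is peaks-over-threshold: fit a generalized Pareto distribution to losses above a threshold and read exceedance probabilities off the fitted tail. A sketch with made-up loss figures:

    from scipy import stats

    # Made-up historical losses, in $M.
    losses = [1.2, 3.4, 0.8, 12.0, 2.1, 45.0, 5.5, 1.9, 7.7, 30.2]
    threshold = 1.0

    # Model the excesses over the threshold with a generalized Pareto.
    excess = [x - threshold for x in losses if x > threshold]
    shape, loc, scale = stats.genpareto.fit(excess, floc=0.0)

    # Conditional probability of a catastrophic (> $100M) loss, given
    # that a loss exceeds the threshold at all.
    p_cat = stats.genpareto.sf(100.0 - threshold, shape, loc=loc, scale=scale)
    print(f"P(loss > $100M | loss > $1M) ~= {p_cat:.4f}")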
Exactly. They can declare bankruptcy. Or they can refuse to cover it, so that the legislatures have to get involved. Implicitly, the government is covering the Black Swan events.
Isn’t this a tautology? If you can estimate a specific event, it is a possibility that your worldview has to accommodate. By definition, black swans are things you didn’t think of.
Some estimates are that 11% of the world's population was killed during this time.