That same Wikipedia article casts some doubt about whether the question "How many angels can dance on the head of a pin?" was really a question of serious discussion.

From the same entry:

> However, evidence that the question was widely debated in medieval scholarship is lacking.[5] One theory is that it is an early modern fabrication,[a] used to discredit scholastic philosophy at a time when it still played a significant role in university education.




It's really just shorthand for rationalist debate in general, which is what the scholastics were engaged in. Once you decide you _know_ certain things, then you can end up with all kinds of frankly nutty beliefs based on those priors, following a rock solid chain of rational logic all along, as long as your priors are _wrong_. Scholastics ended up having debates about the nature of the universal intellect or angels or whatever, and rationalists today argue about superhuman AI. That's really the problem with rationalist discourse in general. A lot of them start with what they want to argue, and then use whatever assumptions they need to build a chain of rationalist logic to support that outcome.

Clearly a lot of "effective altruists", for example, want to argue that the most altruistic thing they could possibly be doing is to earn as much money as they possibly can and hoard as much wealth as they possibly can, so they build a tower of logic on far-fetched ideas like making humans an interplanetary species, hyperintelligent AIs, or life extension, which lets them arrive at absurd arguments like: if we don't become an interplanetary species, billions upon billions of people will never be born, so that's obviously the most important thing anybody could ever be working on, so who cares about that kid starving in Africa right now? He's not going to be a rocket scientist, so what good is he?

One thing most philosophers learned at some point is that you need to temper rationalism with a lot of humility because every chain of logic has all kinds of places where it could be wrong, and any of those being wrong is catastrophic to the outcome of your logic.


> Once you decide you _know_ certain things, then you can end up with all kinds of frankly nutty beliefs based on those priors, following a rock solid chain of rational logic all along, as long as your priors are _wrong_.

This mechanism is exactly why the more intelligent a person is, the more likely they are to believe very weird things. They can more easily assemble a chain of rational logic to lead them to whatever it is that they want to believe.


If we’re rationalizing animals (rather than rational ones), then it follows that the more power/ability someone has to reason, the more power they have to retcon whatever they want to do as okay, i.e. to rationalize it.

Very much a double-edged sword.


The Effective Altruists using that logic (who seem to be the most prominent of them) are no better than the Eugenicists and Fascists of the 1930s. They start with a flawed axiom, mix in a staggering lack of self-awareness of their own bias, and strive for power.

Fuck em.


The point is that religions at the time had a logical framework that scholars liked to interrogate and play with, even if that served no real-world purpose. Likewise, fighting about doom vs. accel when current-day Gen AI is nowhere close to that kind of capability (and hasn't shown it ever can be) is kind of pointless.


Sure, but the point of the phrase is that the question itself is a waste of time.


The answer is: One if it's the gavotte



