As it relates to the title: basically, the author doesn't understand why altruism was difficult for evolutionary biologists and game theorists to explain, and instead of learning about those fields and the questions they raised and then answered, they call for an "explanatory inversion" where their own beliefs are correct by default.
The titular question is maybe only interesting if you've been steeping in post-modernism for too many semesters and are trying to blow some epistemic bubbles to see which way is up.
It's worth noticing that a rational mind can believe an unlimited amount of true things without needing to un-believe anything. The same is not true for non-truths. Non-truths conflict with truths and other non-truths.
> a rational mind can believe an unlimited amount of true things without needing to un-believe anything
Sure, if minds were logic engines pre-loaded with true axioms. But none of us believes only true things from the get-go, so the capability of a mythical “rational mind” doesn’t offer much guidance for humans who need to decide whether or not to discard an existing belief.
Well. Except for those true things that are contradictory with other true things. Like quantum mechanics and Newtonian physics.
Or any of the open questions in science of the same sort: something is true in chemistry but not in biology, yet each claim seems true within its own frame of reference. "This creature wanted to do this" versus "this creature had a chemical reaction." So are free will and determinism not true because they conflict? What if I observe one or the other? Does that make it more or less true because I'm the observer? Or both, or neither?
And not all non-truths conflict with all other non-truths, just some of them. Not to mention that the bit about a rational mind believing true things is nonsensical unless you make that your definition of truth. Given an unlimited number of truths, there will be some incredibly large (smaller, but still infinite) subset that conflict.
I just gave two interrelated examples off the top of my head.
No, it was a fair point, just over-optimistic about absolute truth. We can't be certain that anybody believes, knows, or has even heard of, any true things. However, we're pretty good at assembling a widely-believed set of ideas where none of those ideas conflict with any of the others so far, at least not in most people's conscious awareness, apart from a few stray dissidents who may turn out to be brilliant or to be wackjobs. Once in a while a popular idea is shown to conflict with the rest, and we have to revise our sketchy collection of tentative theories to keep them all in apparent agreement and hence looking true. That's rational.
Sometimes, as in physics, bashing them all into line is beyond anyone's ability, for now.
> basically the author doesn't understand why altruism was difficult for evolutionary biologists and game theorists to explain
The author doesn't say that altruism was easy to explain; what he says is that the very idea of setting out to explain altruism is not obvious to many.
> The titular question is maybe only interesting if you've been steeping in post-modernism for too many semesters and are trying to blow some epistemic bubbles to see which way is up.
Oh no, not only that. I'm very interested in the topic, both in its explanatorily inverted form and in the non-inverted one: why people believe lies and why people believe truths. Sometimes I replace "why" with "how" in these questions, because sometimes I'm genuinely baffled by people's ability to ignore evidence, and sometimes I'm looking for rational tools to deal with particular problems.
Moreover, I totally agree with the premise that believing bullshit is people's default state. Frans de Waal wrote a lot about primates, and from his work follows the idea that "political thinking" (motivated reasoning, self-aggrandizement, status-seeking, tribalism, and social conformity) is what we got from evolution. Everything else, like rational reasoning, the scientific method, and calculus, is an artificial cultural invention. You can reason rationally if you have learned how to, but even then your rational thoughts will run on a substrate that is evolutionarily tuned for self-aggrandizement, status-seeking, tribalism, and social conformity.
But I also like the article because the idea of explanatory inversion is new to me and it makes sense. It is possible that only the term is new to me and the phenomenon it names is already familiar, but I'm not sure. I need to think more about it.
The author's approach makes absolute sense if you want to persuade other people to believe "true" things. You need to stop thinking about why your beliefs are true and start thinking about why people resist persuasion. BTW, I can recommend a book about this: How Minds Change: The Surprising Science of Belief, Opinion, and Persuasion.[1]
> It's worth noticing that a rational mind can
There are no rational minds in the Universe. Or at least we don't know any.
Most of the article isn’t even about altruism. It is only an example. The author doesn’t take any particular position. Quoting from near the end:
> The truth is not self-evident. Given this, those who claim to know the truth, including misinformation researchers, should have more intellectual humility than they often have.
No, it isn't. It's about misinformation. (It starts with "poverty is the default", then discusses default states generally, working round to "misinformation is the default".)
Near the end it makes the proposal that, when faced with a lying rogue, although we should robustly defend the value of rationality, we should also allow that the rogue may be a) more correct than we know, b) somewhat honest although wrong, and c) only human. There's a hint of "... but I will defend to the death your right to say it."*
This is the final paragraph:
> Given this, the real epistemic challenge for the twenty-first century is not to combat misinformation, except insofar as doing this helps us achieve a deeper, more fundamental goal: maintaining and improving our best epistemic norms and institutions, and winning trust in, and conformity to, them.
*I wondered who said this: Wikiquote says it was one Evelyn Beatrice Hall, pen name Stephen G. Tallentyre, very loosely paraphrasing Voltaire.