Effective altruism is not effective (philosophersbeard.org)
152 points by animalcule on April 14, 2021 | 240 comments



This article completely mischaracterizes the beliefs of most of the people quoted or referenced, then engages with those beliefs only by asserting the opposite, without any supporting argument. I'm disappointed.

For example:

> The most charitable explanation of Singer’s dismissal of political action is that he is trying to sell being an altruist and he thinks a consumer-hero version is the one people are most likely to buy. Singer and other effective altruist philosophers believe that their most likely customers find institutional reform too complicated and political action too impersonal and hit and miss to be attractive.

Interestingly, the author quotes part of Singer providing an argument against the effectiveness of institutional reform, but does not himself provide an argument for it, just an assertion that political change is "the most obvious and powerful tool we have." (I think that's far from obvious!) Instead, he jumps straight to accusing Singer of arguing in bad faith. This is actually the opposite of charitable.

For another example, I'm deeply confused about how the author of this piece could cite Will MacAskill and Toby Ord, then write:

> The underlying problem is that effective altruism's distinctive combination of political pessimism and consumer-hero hubris forecloses the consideration of promising possibilities for achieving far more good.

Ord and MacAskill co-founded an organization, 80,000 Hours[1], which advocates mostly not for effective giving (which the author derides as a "consumer hero" approach) but rather for spending your career working on one of the world's most pressing problems; notably including for instance several types of policy change.

EDIT: and I missed this one the first time around:

> One could spend at most a few tens of millions of dollars on anti-mosquito bed nets before returns start dramatically diminishing because everyone who can be helped by them already has one.

A single bednet charity, the Against Malaria Foundation, has literally already raised 10x this amount without substantially diminishing returns: https://en.wikipedia.org/wiki/Against_Malaria_Foundation


> Ord and MacAskill co-founded an organization, 80,000 Hours[1], which advocates mostly not for effective giving (which the author derides as a "consumer hero" approach) but rather for spending your career working on one of the world's most pressing problems; notably including for instance several types of policy change.

That’s not a universally true statement about 80,000 hours. I took their career choice questionnaire and was told to work as a well paid Software Engineer and to donate a percentage of my income (which is what I was already doing — I’m fond of some of Effective Altruism). You could argue that is classic Consumer Hero advice.

I know you qualified your statement but I just want to emphasize it as I think it leaves room for criticism.


For more concrete data: If you look at their current "key ideas" page,[1] they go over 4 categories of high-impact careers (notably including government/policy) and then say "if you think none of the categories above are a great fit for you, we’d encourage you to consider earning to give. It’s also worth considering this option if you have an unusually good fit for a very high-earning career."

This post[2] suggests 80k's key researchers think about 15% of people interested in EA would be the best fit for earning to give, while 10% of people attending an EA-themed conference were planning to.

I don't think criticizing effective altruism based on the assertion that it's mostly about earning-to-give is reasonable given those numbers or the framing in 80k's "key ideas" post.

[1]: https://80000hours.org/key-ideas/#career-categories [2]: https://forum.effectivealtruism.org/posts/LrKFNQxjETPvzXQcv/...


> I took their career choice questionnaire and was told to work as a well paid Software Engineer and to donate a percentage of my income

The good thing about earning to give is that it works no matter how niche your skill set is.

What's an efficient way to deploy an expert microchip designer, when so few global poverty NGOs need microchips designed?


I got that same advice circa 2015. However, I'm under the impression they started moving away from "earning-to-give" recommendations around 2015-2016.

It's still there, mind you, but research and policy are prioritized higher. I'm not sure what else they would recommend to someone who is ill suited for these other careers.


Yeah, that article is pretty weird. To add on to your last point:

> > One could spend at most a few tens of millions of dollars on anti-mosquito bed nets before returns start dramatically diminishing because everyone who can be helped by them already has one.

That's a feature. It's half of what makes the "effective" part of "effective altruism" effective. The fact that charities aren't infinite money sinks allows everyone to donate by a simple algorithm of:

  // Hypothetical helpers: runQuery() returns the top row (or null), donateTo() sends the money.
  let charity;
  do {
    charity = runQuery(`
      SELECT *
      FROM charities
      ORDER BY effectiveness DESC
      LIMIT 1;`);
    if (charity != null) donateTo(charity);
  } while (charity != null && hasMoneyToHelp());
The idea being, charities that can do the most good now get the funds, and as they saturate and hit diminishing returns, they stop being the most effective ones, so other charities take their place. Repeat until there are no more charities remaining, thus no more problems to solve.

(People may have slightly different definitions of "effective", or one may prefer to do e.g. LIMIT 5 and pick one of the top 5 at random - the idea still works, the slight variance is a hedge against uncertainty.)
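For instance, a minimal sketch of that hedged variant, reusing the hypothetical runQuery/donateTo helpers from the snippet above (and assuming runQuery returns an array of rows for LIMIT 5):

  // Hedge against ranking uncertainty: pull the top 5 and donate to one at random.
  let charity;
  do {
    const topCharities = runQuery(`
      SELECT *
      FROM charities
      ORDER BY effectiveness DESC
      LIMIT 5;`);
    // Pick one of the top 5 at random; undefined when no charities remain.
    charity = topCharities[Math.floor(Math.random() * topCharities.length)];
    if (charity != null) donateTo(charity);
  } while (charity != null && hasMoneyToHelp());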

It's an obvious idea, and it's how people approach other aspects of their lives if they care about the outcome, so why not apply it to donating too?

The article continues:

> > This points to the limits of the individualistic consumerist approach to ending poverty. The best – most beneficial – choice you can make as an individual spending $50 or even $5,000 is different from the best choice you should make if you have several hundred billion dollars to spend.

There's only one entity with several hundred billion dollars to spare, and that's the US Government. Other than that, this is why people donate to charities: charities exist to pool money, to turn ten thousand $50 donations into a single $500,000 force that can be better used than if everyone tried to apply their $50 directly at the problem.


Effectiveness of charities has exactly the same problems as the efficient market hypothesis: markets can remain irrational longer than you can remain solvent. Or in other words, charities can remain full of shit for longer than you have money for.


Yes, and the same rules apply:

1. Don't invest more than you can comfortably lose.

2. Don't throw away the entire concept just because it's not perfect.

3. Get better at telling who's full of shit, or delegate it to someone you trust.


Don't 2 & 3 contradict each other in this case? Throwing away the yoke of institutional giving is predicated upon recognizing that overhead and start-up costs eat up transferable dollars.


They're complementary.

#2: Just because there are charities or companies that do bad things, doesn't mean the concepts of charities and companies, or philanthropy and markets, are fundamentally broken. There are good and bad charities/companies, even though it may be hard to tell which is which.

#3: You can make your donations more effective if you get better at telling which charity is good (same for investments and companies). There are diminishing returns on effort here - particularly if you aren't planning to specialize in charity evaluation. In that case, you can convert this problem into an easier problem of finding trustworthy specialists in charity evaluation, and donating to places they consider good/effective.


That's why we have charity evaluators like GiveWell. https://www.givewell.org/


Now consider how long a government can remain irrational.


> but does not himself provide an argument for it, just an assertion that political change is "the most obvious and powerful tool we have." (I think that's far from obvious!)

Argentina, Venezuela and Zimbabwe are basically nations that destroyed themselves through bad political reforms. If there had been a way to prevent those reforms you could have prevented millions of people from slipping into poverty and needing micro interventions.


Effective altruists would generally be open to the idea that political reforms might theoretically be the best way to spend money in some cases.

But in the case of despotic or otherwise terrible leaders, the problem is that it's unclear what to do to stop them even if you have control over a large military, let alone if your only resources are a relatively small amount of money and the time of a relatively small group of people.

It remains an unsolved problem how to evaluate causes like political advocacy that have very long-term or very uncertain effects. Combine that with the rancor and division that talking about contemporary politics causes and it's no wonder EA as a movement chooses to eschew political advocacy for the most part. I guess the overall sentiment is that the money some people donate to feed the Democratic or Republican party machines could probably be spent better.

That said, I have seen (relatively) small EA grants given to organizations advocating and acting to improve governments in undeveloped countries. For example, the first one I came across was this: https://www.givewell.org/research/incubation-grants/innovati...


Countries, not nations. There are many nations living in Argentina and Venezuela; some do have political representation, some do not. E.g., the Mapuche nation in Argentina lacks political representation.

Then, Argentina is working extremely well, except for Argentineans themselves, that is.

They are on a treadmill of unpayable debt that works in the following way:

- The left is tasked with buying people's complacency with borrowed money.

- The right is tasked with giving away sovereignty: privatization and military bases.

People are expected to pick a side and stay busy fighting over which side is right. Meanwhile, the country is taken over. Divide and conquer.

It is working extremely well. There are now US military bases near Ushuaia, Neuquen and the Guarani aquifer, the 3 most strategic locations in Argentina. Everything that can be privatized has been privatized, and provisions have been made so that Argentineans can never pay off their debt, so that they can continue to lose their sovereignty. What should they privatize next? The sky is the limit.


Have they tried selling themselves back to Spain?


Argentineans are happy with their limbs and they surely are not in need of illegitimate children, so no.


"the most obvious and powerful tool we have." is not the same "if it worked then it would be useful in some cases".

The author makes an extremely strong claim and expects us to believe it without any support.


One problem with politics is that it's adversarial: if a political cause has opponents, spending resources on that cause can turn into a war of attrition (e.g. the amount spent on US election campaigns).


It has to be adversarial, because politics by its very nature is how people control the behavior of other people.


> Argentina, Venezuela and Zimbabwe are basically nations that destroyed themselves through bad political reforms. If there had been a way to prevent those reforms you could have prevented millions of people from slipping into poverty and needing micro interventions.

Now imagine if the people championing those bad reforms had simply stayed out of politics? The more powerful a tool is, the more cautiously it should be wielded.


Yes, but how much would such a reform cost and is it even possible with only money? Being effective means counting how much good you’re doing per one dollar.


Maybe political change would be the most obvious remedy, but its track record of helping the poor gives ample reason to look for change elsewhere.


Political change has done pretty well at helping the poor in Scandinavia.


Yep. It's just bad. He's essentially advocating for effective altruism to become another lobbying group or NGO.


But effective altruism is an NGO, isn't it?


I think you are mischaracterising the argument. His argument is that effective altruism cannot eliminate poverty; it's inherent in the concept. You also did not show that he is the one who mischaracterises the "beliefs" (maybe you mean arguments here?). I think he gave a reasonably accurate description of effective altruism; what in the description do you believe is wrong?

You have not actually engaged with the actual argument; you simply assert that change through political activism is not "the most obvious and powerful tool we have". I think you need to back that up. I would say history at least tells us that the largest changes in wealth distributions have come through (often violent) political action.

Regarding the bednet argument, you are simply nitpicking the numbers. The argument still stands: at some point you end up in a position of diminishing returns, i.e. when everyone has a net, giving nets is not helping anyone.


> I think he gave a reasonably accurate description of effective altruism; what in the description do you believe is wrong?

I disagree that this article gives an accurate impression of EA. The main point of effective altruism is that people should use evidence in choosing which charitable causes to devote their time to. I don't feel like the author sufficiently engages with this point; instead he attacks Singer for not coming up with a satisfactory standard for what percentage of one's wealth to donate, and laments that EA isn't political enough.

EA arose partially based on the observation that people do most of their giving to, for example, local churches and schools rather than to truly desperate people in other parts of the world. People also tend to donate their effort to local and relatable causes. The argument isn't that buying malaria nets is going to eliminate all the evil in the world; the argument is that it's a better use of money than other charities, and that we should use evidence to determine how to expend our resources.


Well, I disagree the whole premise of Singer is that we can't change things through the political process so instead we should use "effective altruism".

Now if we are donating to charity should we follow the principles of EA? The answer to that is probably yes, but that's a different question and is not the point raised by the author.


> the whole premise of Singer is that we can't change things through the political process so instead we should use "effective altruism".

(I'm assuming "I disagree" was meant to be a separate sentence)

I think Singer would say that it's often but not always hard to change things through the political process. But he hardly shuns politics entirely; he often speaks and writes about political issues, and he ran for the Australian Senate in 1996.

But anyway, Singer's beliefs about the effectiveness of political causes aren't the central point of his EA advocacy; they're just one piece of his beliefs.


I don't know how "at some point you end up in a position of diminishing returns" in this case is an argument against EA. In the book Doing Good Better, one of the core ideas of EA is defined as investing in problems that are neglected, which guards against investing into diminishing returns. When the malaria nets run into diminishing returns, I am sure that GiveWell will start ranking AMF lower. If you are arguing that we should not invest in problems that have diminishing returns, EA strongly agrees with you.


GiveWell is literally a priority queue of charities. The moment AMF starts getting diminishing returns, it'll drop down the list. If some other organization figures out an even cheaper way of saving lives, it'll overtake AMF on the list.

It's a simple and obvious system. I'd say the only two things to potentially quibble about are whether one likes their sorting function, and the lag time between re-sorts.
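In code terms, a toy sketch of that priority queue (every name and number below is a made-up placeholder, not GiveWell's actual data):

  // Each charity reports the marginal cost of its next life saved and its remaining
  // room for funding. Diminishing returns push costPerLife up as money flows in,
  // so the sort naturally demotes saturated charities.
  const charities = [
    { name: "AMF",      costPerLife: 4500, fundingGap: 50e6 },
    { name: "CharityB", costPerLife: 6000, fundingGap: 20e6 },
  ];
  function currentBest() {
    return charities
      .filter(c => c.fundingGap > 0)                     // fully funded charities drop out
      .sort((a, b) => a.costPerLife - b.costPerLife)[0]; // cheapest marginal life saved wins
  }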


His argument is that effective altruism (EA) is unable to eliminate poverty and therefore it is ineffective? I mean, it's a pretty empty criticism, and can be used against any sort of altruistic philosophy given that poverty is quite an intractable problem. E.g. "Your altruism is not effective unless you work to eliminate world poverty."

Effective altruism, boiled down, is getting the most value out of altruistic work, as I understand it. Obviously, if every child in malaria-endemic areas had a net, donating to malaria foundations would not be the most effective thing to do. If we discovered an asteroid that would destroy 20% of the population, EA would probably dictate channeling all resources to that cause.


No, the burden of proof lies on the person claiming that something is "obvious".

And nitpicking the numbers is incredibly important if said nitpick happens to span orders of magnitude.


It does not fundamentally change the argument, so nitpicking does not change how valid the argument is.


> I would say history at least tells us that the largest changes in wealth distributions have come through (often violent) political action.

History shows just the opposite. The largest changes in wealth distribution in history have come about through industrialization, starting in Britain and then spreading to other parts of the world. Purposeful political action has most often been harmful to these developments, with limited exceptions (such as 19th-century Japan, and other East Asian countries in the 20th century).


> [...] but does not himself provide an argument for it, just an assertion that political change is "the most obvious and powerful tool we have."

Maybe because vastly more people were lifted out of poverty (and associated issues) by the political decision to switch to some version of a free market economy than by any kind of individual charity?

The author probably didn't state it because he assumed it's common knowledge.


This reads like a strawman perspective on the concept of 'effective altruism.'

The 'effective' part in the term is doing a lot of the heavy lifting when it comes to the definition. It seems reasonable to focus on actually 'effective' actions with good outcomes, rather than comparatively ineffective virtue signalling or martyrdom.

Becoming rich and then helping millions of people with your means is more impactful than being poor and helping dozens. Both should be admired and encouraged, but one is clearly more 'effective' than the other.


>Becoming rich and then helping millions of people with your means is more impactful than being poor and helping dozens.

It's also more self-serving for the better part of it (as your focus is to become rich), and it might never reach the second part (actually becoming rich and helping others).

It's also bogus: not many can become rich, both for reasons of circumstance that make it much harder to nigh impossible for billions, and also for purely logical reasons: wealth is relative, and if much bigger numbers of people were rich, it would get lost to inflation.

So, for most people, this effectively offers a pie-in-the-sky idea of what to become and when to help, as opposed to encouraging them to help dozens (or just a few) here and now.


Sure. So it's possible to focus on getting rich and then never give back. There's a solution to that, which is continuing to give a percentage of your income.

Some people are quite charitable and give 10%. I'm a little more selfish and give 1.5%, but this is already currently helping people.

Is it really pie-in-the-sky idealism if (statistically) I've saved around 6 lives so far (and helped many more)? Still early in my career, and lots of time to go. I hope to cross 100 someday. But I started just as I graduated and have kept it up every year.

So I suppose the next question in my rebuttal would be to ask what alternative you suggest?


Imagine you could make people give 30% of their income... almost like a tax... but that would be political, not effective.


In lots of western countries the total tax rate on work (the tax wedge) is much more than 30%.

According to [1], the OECD average is 36%. In my EU country it is 46%.

[1] https://www.oecd.org/tax/tax-policy/taxing-wages-brochure.pd...


Who cares if it's more self-serving as long as they do get to the second part? There is such a thing as win-win outcomes.

  "It's also bogus: not many can become rich"
No need to become rich. You can have a big impact and save many lives on a dev salary. EA isn't premised on becoming a billionaire, as you seem to incorrectly believe.


But that assumes that 1. a significant portion of the people who want to get rich to help people actually get rich, 2. once they are rich they actually help, and 3. the process of getting rich doesn't make others poorer. (For example: does Bezos donating some portion of his wealth do more good than him actually paying people more, or than the other part owners of Amazon doing so?)

All these points are far from obviously true. In fact I can come up with many examples where they obviously are not.


Regarding (1) - Becoming rich isn't part of the premise of EA. Getting a $100k job is often sufficient to have extra cash that can save real lives.

Regarding (2) - There are examples of successful EA in action. See the FTX founder.

Regarding (3) - Getting rich can't make people poorer (on net) unless you're exploiting negative externalities, which we would both agree need to be addressed by government. It can transfer wealth from A to B (e.g., by deprecating one industry and replacing it with another), for sure, but I don't see that as automatically bad.


> Getting rich can't make people poorer (on net) unless you're exploiting negative externalities,

You mean, the easiest ways to get rich? You have to be pretty principled not to take those.


I don't argue against EA in general in that comment (I do have tons of reservations about it, but no time to write about them here atm). I'm responding to the grandparent's comment, specifically the part I explicitly quoted -- which does mention becoming rich.

That said:

>Who cares if it's more self serving as long as they do get the the second part?

People like me, who don't consider it being altruistic or not as orthogonal to the success of the second part.


> being altruistic or not as orthogonal to

Are you using 'orthogonal' to mean "unrelated to", or do you mean "in opposition to"?

Because depending on which meaning you intend, your final sentence can be read in two very different ways. If it's the first, I'm not completely sure you and fighterpilot are necessarily disagreeing? If it's the second meaning, then it's a somewhat bold claim to say that personal benefit in any form invalidates altruism.


> People like me, who don't consider it being altruistic or not as orthogonal to the success of the second part.

>Are you using 'orthogonal' to mean "unrelated to", or do you mean "in opposition to"?

I'm using it to mean "unrelated to" (canonical CS meaning of "orthogonal" as well).

But here's the catch: note that I used a double negative in my phrasing. So I'm not saying they are orthogonal, I'm saying they're not orthogonal. That is, I don't consider "being altruistic or not as orthogonal to the success of the second part" (the second part being the "helping" we were discussing).

So, it's not that "personal benefit in any form invalidates altruism", it's that lack of altruism works against effectively helping people.


>It's also bogus: not many can become rich, both for reasons of circumstance that make it much harder to nigh impossible for billions, and also for purely logical reasons: wealth is relative, and if much bigger numbers of people were rich, it would get lost to inflation.

Yes, uninvested wealth is zero sum (cash and cash equivalents) and that is why we have inflation, but invested wealth can be positive sum.

If you use your uninvested wealth to invest in a poor country then you end up doing more good than bad.


Not to mention that becoming rich usually requires participating in some kind of scheme that transfers money from a vast number of people poorer than you into your pocket.


Not necessarily, most wealth isn’t zero-sum. Everyone benefits from a new miracle drug, or a new industrial technique, even those who become rich from it, and those who don’t.

The modern world compared to 1700 is a great example of how wealth can (but doesn't always, see Saudi Arabia) benefit all.


Sure. Sometimes the scheme is introducing a new miracle tech to the masses of poor people and it benefits them. However some additional money beyond the cost of the tech is taken from them in return.

Other times the scheme is just cheating people or restricting their access to hoarded limited resource.


The miracle drug is a good example. Sure, everyone benefits if it is not expensive. Take the Covid vaccines, for example: there are many locations (e.g. in India) that could produce vaccines; however, the companies (and the Western countries) do not want to open up the patents, even though we would all (even us in the 1st world) benefit, because eliminating Covid as quickly as possible would reduce the chance of mutations.


A lot of people seem to think that Bill Gates' influence is largely responsible for the Oxford vaccine being patent-encumbered.


A lot of people believe Q-anon. This is such a shitty, drive-by way of asserting something. Do you believe it? If so then make the case, but the way you've dropped this steamy innuendo here without even taking a position on it yourself contributes less than nothing to the discussion.


I haven't looked into it enough; I just remember Cory Doctorow talking about it and thought it was relevant. https://pluralistic.net/2021/04/13/public-interest-pharma/#g... (citing https://khn.org/news/rather-than-give-away-its-covid-vaccine...).

The “a lot” came from the search results for “gates oxford covid patent” (I'd've expected people arguing against it if there was contention in opinion – this isn't the same as “it's fact”), though further investigation (the thing I didn't do; sorry) reveals that I only found two other articles: https://criticallegalthinking.com/2021/02/09/patent-capital-... and https://criticallegalthinking.com/2021/02/09/patent-capital-...; the rest of the results are mirrors.

You're right that it's a rubbish thing to write; I'll put more effort into comments in future.


How about this source:

https://www.washingtonpost.com/washington-post-live/2021/02/...

MS. GATES: Well, we, along with many other partners, supported Oxford in saying what really needs to have happened there, they had incredible science, but they had never brought a vaccine to market. And so, they needed to partner with a pharmaceutical company who had expertise bringing a vaccine to market. And so, it was us and many partners that said that's a partnership that ought to happen. Ultimately, Oxford made that decision.


I can't imagine why. Maybe because Melinda Gates said it herself that they advised Oxford to find a strong commercial partner for their vaccine instead of not patenting it?

https://www.washingtonpost.com/washington-post-live/2021/02/...


It's not about where it comes from, it's about what you do with it. If you put it to good use then it is perfectly fine.

The high corporate savings rate of the last decade tells us that most of that money is just sitting somewhere doing absolutely nothing.


You might be confusing wealth with money?


Both of your examples are individualistic; what about thousands or millions of poor people collectively fighting for something?

The total impact of collective action is much greater than the sum of its individual parts.


I don't think individualist effective altruism is incompatible with collective effort.


Sure, but it doesn't fit into the traditional framework of "effective altruism". Collective efforts-based altruism is just very different in so many ways.


Why not? A collective, efforts-based, altruistic organization needs funding. If such an organization could demonstrate that their efforts lead to good outcomes (improve happiness, save lives), then effective altruists would donate to them.

A legitimate argument is that not everyone can make tons of money and donate because someone has to do the work. But there are plenty of "effective" organizations that are not yet overfunded.


The thing is that money without involvement is poison to collective movements beyond a certain point. Money and involvement is much better.

If all you want is to distribute, then sure. Collective movements aren't even the best at straight distribution, so someone trying to maximize the marginal utility of their dollar probably won't even donate. But collective action is generally not just about distribution; it's more about fixing the structure of society to make distribution from rich donors unnecessary.


I agree that both money and involvement are necessary. But in our current world, nonprofits have a lack of money, not a lack of involvement. Nonprofits talk all the time about how difficult it is to fundraise. And all the charities GiveWell recommends still have a lack of capital.

I think EAs would 100% want to fix the structure of society, if that method was resource efficient. If you believe that changing the structure of society is more resource efficient (in terms of time or money) than donating to AMF, GiveDirectly, or Deworm the World, please publish your analysis.

How much {money, time, etc.} would it take to convince a government or people to adopt a certain policy? What would the benefits of that policy be? How much pushback would you get from opponents? What are the risks? If you can successfully make an argument that changing a policy would be more resource efficient than current efficient charities, that would convince EAs to direct more resources to politics.
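To make that concrete, here is a back-of-the-envelope sketch of the comparison being asked for, where every input is an invented placeholder rather than a real estimate:

  // All inputs are made up for illustration.
  const campaign = {
    cost: 10e6,             // money spent lobbying for the policy
    pSuccess: 0.05,         // chance the policy passes and sticks
    livesIfSuccess: 100000, // lives saved if it does
  };
  const bednetCostPerLife = 4500; // placeholder standing in for a GiveWell-style figure

  const campaignCostPerLife = campaign.cost / (campaign.pSuccess * campaign.livesIfSuccess);
  // = $2,000 per expected life with these inputs; if your honest inputs beat the
  // bednet figure and survive scrutiny of the risks, EAs should fund the campaign.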


I don't see how altruism can be a distributed concept. It seems purely individual, an inside-out action.

The idea of any collective, plural, outside-in action may be something, but not altruism.


Any time humans get together to solve a problem, band together to form a charity, coordinate over a social network to send out PPE, etc., they are not simply allocating optimally at the margins; they are aggregating resources and spending them toward a directed goal more efficiently. These are all examples of altruism. If you narrowly scope altruism to "marginal donation optimization", then yes, effective altruism is indeed a fairly trivially optimal way to allocate these resources.


I accept that your definition is a common one, but one of two attributes strikes me as crucial for an unambiguously pure altruism:

- total anonymity, or

- ultimate sacrifice, as in the mother or warrior who lays it down for another.

Anything less is open to the usual questions of motive.

As with Dada, altruism doesn't survive intellectual evaluation.


> As with Dada, altruism doesn't survive intellectual evaluation.

Sure and I'm not as interested in discussing the philosophical ideal of altruism. My interest in charitable work, as I suspect many interested in EA feel, stems from trying to do the greatest good with my limited allocation of resources, be that money, time, knowledge, manual labor, or otherwise. In that regard I'm unconcerned about the usual moral philosophical questions about motive and goodness.


Sounds fine, practical and worthwhile, if not, well, you know...


Neither of these requirements makes sense. First, with activism, being known pretty often costs you. And with the second, you are not altruistic if you don't die?

(Plus, people who helped the right cause and did not die have done more good than those who died for a bad cause. The ultimate sacrifice for something bad does not make you better.)

> Anything less is open to the usual questions of motive.

But then the focus is on "if someone theoretically learned about me existing, do I leave a space for that person to attribute to me some intentions?" And the answer to that should be "who cares".


People help each other altruistically, and these networks of interchanged aid, called mutual aid, eventually allow for collective action.


In theory you’re right. But in reality you’re not.


I'd assume you mean that being poor and helping dozens is more effective?

The impact that you have when you are close to the person is far more likely to be a good one than if you're disconnected from them.


A single poor person may be able to have a large impact on ~dozens of people, but even a small impact on millions of people is likely more valuable and 'altruistic' in terms of effect.

But again, both paths should be encouraged. There will always be relatively few 'rich' people with the means to help millions.


Only if the impact is positive. It's already hard enough to make a positive impact on people you care about...


> Becoming rich and then helping millions of people with your means is more impactful than being poor and helping dozens.

Is it though? E.g. Bill Gates got insanely rich because his company engaged in anti-competitive behavior, thus causing lots of harm. You can easily make the point that often the damage caused to others is larger than the gain for the individual, so even if they spent 100% of their wealth, they couldn't make up for the damage they have caused.

In that case, maybe it's better to look at medicine and the Hippocratic Oath: first, do no harm.

When a hedge fund analyst meddles with food supply in Ethiopia and makes a killing while a few thousand starve, but then builds a school to educate a hundred kids in Ethiopia... how much good has he done?


But the "effective" part means more than just that. From their site:

> It is a research field which uses high-quality evidence and careful reasoning to work out how to help others as much as possible.

This assumes that reasoning and evidence are sufficient to determine what will be most effective. Compare this to the Buddhists, for example, who believe that the networks of causality are so complex and intricate that trying to make such determinations (at least, on any humanly reasonable scale) are doomed due to missing a lot of the picture as well as generally incorporating many faulty assumptions.

In other words, what is ultimately most effective at ending suffering may not fit into the metaphysical framework that EA implicitly adopts.


This is a common philosophical problem. EA fits in a philosophical framework broadly known as utilitarianism (which is also popular among rationalists, a group known to be fans of EA). Buddhism's ideas of moral philosophy often lie in a mixture of what is commonly known as Divine Will Ethics and Virtue Ethics. Utilitarianism is quite popular among modern STEM discourse, but there are moral alternatives out there that have been sufficiently explored. The idea that decision making may just be too complex to be effectively optimized is one of Utilitarianism's long-standing critiques (e.g., in the trolley problem, what happens if you save the 3 people over the 1 person, but one of those 3 people ends up becoming (or is) a dictator who orchestrates a genocide.)


The trolley problem situation seems pretty straightforward under utilitarianism if you add probability into it; I've never found the "what if one of the 3/4/5 could have been Hitler?" argument particularly convincing. The dictator assumption would apply to all humans born, and we as a species clearly don't punish birth sufficiently if the dictator risk were that heavily weighted.

Axiom 1: Human life has value, saving a life is thus positive utility.

Assumption 1: We know nothing of the people tied to the trolley -> All humans are probabilistically equal.

Thus, the utilitarian presses the button and saves 3 at the cost of 1.
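Or, as a toy expected-utility calculation with the dictator risk made explicit (all numbers invented for illustration):

  // Under Assumption 1 the dictator risk applies uniformly to everyone on the
  // tracks, so it scales both sides equally and cannot flip the comparison.
  const v = 1;      // utility of one saved life (Axiom 1)
  const p = 1e-9;   // assumed uniform chance a given person becomes a genocidal dictator
  const harm = 1e7; // assumed disutility of that outcome, in the same units
  const evPerPerson = v - p * harm;    // 1 - 0.01 = 0.99, still positive
  const evDivert = 3 * evPerPerson;    // 2.97: divert and save the three
  const evDoNothing = 1 * evPerPerson; // 0.99: save the one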


I'm not sure what probabilities have to do with anything here. It sounds like you're applying a prior that each individual on each side has some "badness" that is uniformly distributed. In effect, a uniform distribution is probably one of the weakest priors and leads to some of the highest-variance results (save perhaps the Jeffreys prior). I would go so far as to say assuming uniform probability of "badness" is probably a bad assumption given the way humans actually discriminate among other humans.


Probability is the reflection of the state of your knowledge. If you know someone might become the next Hitler, but you don't know who that might be, or if they're even on the tracks right now, then this is reflected by assigning the same probability of "hitlerism" to all the people on the tracks.

And even if you had some suspicions, once you honestly factor in your certainty about the predictive value of your suspicions, it multiplies down to nothing - so in practice, a uniform prior is a pretty good choice.

Yes, it's true that all this stuff is not computable in practice, that it's too complex - but it's not really an obstacle! Everything in life is like that. When you're cooking and a recipe calls for 200g of an ingredient, you're fine with anything between 180g and 220g, and don't care how much of it is water. A restaurant may need to do 200g ± 5g. A chemical plant may want to do 200g ± 0.5g, after first demoisturizing. Nobody tries to hit 200g to ± 1 dalton, because that would be ridiculously complex to do, and require to control for phenomena that most people don't even know exist.

We've been trading precision for compute time ever since humans first figured out that things can be measured. The solution is what it's always been: calculating using heuristics and approximations, while keeping track of the bounds of those heuristics and approximations.


> And even if you had some suspicions, once you honestly factor in your certainty about the predictive value of your suspicions, it multiplies down to nothing - so in practice, a uniform prior is a pretty good choice.

Does it? You seem to be marginalizing across the entire population, but it's eminently obvious that most humans do _not_ see a human and erase all aspects of their being and then consider them to be one human in a uniformly distributed pool. Humans make snap judgements based on hundreds of subconscious factors all the time. Most humans look at a person and immediately bucket them into a cohort using some mix of physical attributes and prior beliefs of these attributes. Once we're in a cohort, relative probabilities can change. In this particular example, what happens if one of the 3 happens to be wearing military fatigues? What if they have a well-known authoritarian political symbol on their clothing?

Anyway, lots of intelligent philosophers have argued against utilitarianism and have brought up flaws in utilitarianism, such as Utility Monsters, and then there's the issue of even being able to impose a utility-based order between disparate things such as "tasty candy" and "rocks underneath my feet". Feel free to read those on the internet.


> Most humans look at a person and immediately bucket them into a cohort using some mix of physical attributes and prior beliefs of these attributes. Once we're in a cohort, relative probabilities can change.

That indeed happens, but going by the gut isn't a bullet-proof moral philosophy. Snap judgements are something we can evaluate and consciously calibrate when we're not in the heat of the moment.

> lots of intelligent philosophers have argued against utilitarianism and have brought up flaws in utilitarianism

Yeah, I read some of those, many are really solid objections. I'm not arguing that utilitarianism is the be-all, end-all moral philosophy - just that it's a surprisingly good heuristic that can be pushed quite far before it starts to go "off the rails". And primarily, I'm arguing that working with approximations and heuristics is something we know how to do - not being able to compute a perfectly precise answer isn't a valid argument against a method.


> (e.g., in the trolley problem, what happens if you save the 3 people over the 1 person, but one of those 3 people ends up becoming (or is) a dictator who orchestrates a genocide.)

Presumably by this logic, you'd have no opinion on whether a midwife should save a newborn baby's life, as it's unknowable whether the baby will one day orchestrate a genocide?


Well this gets into the absurdity of relative utility of goodness altogether. If I'm comparing a child's happiness from eating a candy to the unhappiness of stepping on a pebble, what are the relative magnitudes?

Regardless, there are lots of criticisms of utilitarianism, and utilitarianism is far from the ascendant moral philosophy, so feel free to find others online.


I'll bite. Probability comes into play. The vast majority of babies do not in future orchestrate a genocide, but instead live lives worth living, and in turn contribute in some way to shared human progress and wellbeing. So the expected value of the baby's life is positive, the baby should be saved.

The very low chance of a future genocide does not weigh highly in this decision.


Thanks, but elsewhere Karrot_Kream says "I'm not sure what probabilities have to do with anything here" so I'm really hoping to hear their non-probability-based thinking.


My point is that you can have both, but rich people have the means to be more 'effective' with their altruism (EDIT: across a wider population).


Why the fuck are you worrying about "metaphysical frameworks?" The things EA mostly focuses on are people dying of easily preventable starvation and disease. There is no need to bring up highly abstract ideas about "metaphysics" (whatever you mean by that - morality isn't generally considered to be in the field of metaphysics) when the suffering is so blatantly concrete.


At the EA conference I attended, one of the talks that got people excited was about false vacuum decay of the universe (which would end all life everywhere), and why therefore QFT research was the most effective. Even if that's not their major focus now, it fit nicely into their framework.


I wonder how much of becoming rich is made possible only because people are exploited and thus remain poor.


I heard about EA from a person I respect, and so that really colored my perception of it because I think he, specifically, wouldn't do such a bad job. But having some time to think about it, that's really the problem with it: we have a need for altruism to begin with largely due to the concentration of resources (=> power) in the hands of a few to wield for everyone else. It's a cute example and certainly no substitute for a more comprehensive argument, but: we have enough food for everyone on Earth - it is purely a property rights issue that we can't connect more hungry mouths with more uneaten food. There's no R&D to be done, there's no unsolved problem with a bounty on it: the commodification of food has guaranteed a huge amount of waste we can do nothing about because the market over-produces for big brained logistics reasons; and a bizarre web of property and tort law means it's even illegal to go diving into a dumpster for some of this wasted food, even if you're starving, because that's trespassing. And we wouldn't want a _lawless society_, would we? /s

The problem is ultimately the concentration of power (in whatever form it takes, money or otherwise) in the hands of a few. EA claims that doing more of the thing that causes these huge problems like hunger and homelessness will actually somehow cause less of it. And if you think I'm being reductive, fine, but I would ask that you point out what I'm reducing and why it shouldn't have been.


> it is purely a property rights issue that we can't connect more hungry mouths with more uneaten food. There's no R&D to be done, there's no unsolved problem with a bounty on it

Yes, it's a property rights issue... and more. The problem is motivating those who are holding on to the food to distribute it to those who need it.

Clearly if those who need it are providing valuable service to food producers and distributors, there wouldn't be a problem.

The issue is really: how do we motivate the distribution of food (and other goods and services) to those who need it but can't justify it to the producers and distributors who would provide it?

One solution is definitely charity. You pay the producers and distributors to distribute it to the needy. However, this quickly becomes a system that is easy to game. Often it's easier to convince the donor that work has been done instead of actually doing the work.

Another solution is government welfare, but it really comes down to the same problem - the objective becomes just convincing those asking for it.

The central issue is that those who need it but can't justify it are essentially powerless to motivate others to provide for them. IMHO the only reasonable way is to help these people become net producers in the economy or have more political power.


> The issue is really: how do we motivate the distribution of food (and other goods and services) to those who need it but can't justify it to the producers and distributors who would provide it?

People shouldn't have to justify their existence based on some other group's favorite economic model. That seems weird and needlessly cruel.


This is something that becomes truer when you start really thinking about it. Private property the way we have it right now is just a social construct; for it not to be simply theft, it has to be socially efficient. That is to say, if people are given stewardship over land that really isn't inherently anyone's, it's a reasonable expectation that they don't use it to make food that then gets thrown out.


> EA claims that doing more of the thing that causes these huge problems like hunger and homelessness will actually somehow cause less of it. And if you think I'm being reductive, fine, but I would ask that you point out what I'm reducing and why it shouldn't have been.

How is it "the thing", unless "the thing" is supposed to be, what, people using common sense? And how is it "the thing that causes these problems" when those problems are the natural state of being? People don't have food by default, people don't have housing by default, the huge amount of food being produced these days is in no way the state of nature. The argument of EA is closer to: doing more of the things that currently feed 95% of the world is the way to feed 100%.


> The argument of EA is closer to: doing more of the things that currently feed 95% of the world is the way to feed 100%.

How do you know it's not doing more of the things that currently leave 5% without food?


>The argument of EA is closer to: doing more of the things that currently feed 95% of the world is the way to feed 100%.

That is purely a political choice. Just grow the food supply faster than the population. Most developed countries have solved this on the population side. If every EU country had 0.5-1 billion people, all of them would run into food problems.


By actually looking at how food gets produced.


So this is not a general property of EA, but only true for this field, as verified by a domain expert?


The same logic applies generally; the point is to apply the same things that have worked in every other field of human endeavour.


Obesity is a far bigger problem in the world today than hunger, even in developing nations. 11% of the world still suffers from hunger, down from 40% three decades ago. That kind of starving absolute poverty is rapidly disappearing. In first world countries, the unequal distribution of wealth mostly determines who gets to live in larger houses, travel more, go to restaurants more, send kids to better schools, etc., rather than who gets their basic needs met.


Boiling everything down to power is too reductive.

People are poor because they're incapable of the value provision required to not be poor. This can come down to a number of causes: low intelligence (perhaps due to malnutrition as a child), discrimination, mental health issues, disabilities, geography, attitude, education, etc.

But it's scarcely a question of power. Centrally it's about capability to provide value to others who can reciprocate with money.

This is the premise of the EA proposal that you're missing. If I get a job as a software engineer, this in itself has value to society even if I don't wish to donate any of the money, since I'm being compensated for value provision to someone else (who is in turn being compensated for value provision). The EA on top is just a nice bonus.

Now, negative externalities are of course a real thing, and they need to be taxed so that payment for services is more correlated with true value provision to all stakeholders.


This article illustrates a big issue with people opposed to EA - they severely underestimate the risks associated with political change.

Of course, we can just assume everything goes well, we agitate for political change, we get a change for the better, everything is great.

But the risk of a different outcome never seems to be taken into consideration. For instance:

We believe the conditions in our country are unfair. We start agitating and take down vital institutions in the name of change. Our movement gets hijacked by a military strongman who happens to be charismatic and liked by the people, and now we're under a brutal military dictatorship.

Oops.

History shows the likelihood of the second kind of outcome is, at the very least, non-ignorable.

I would like to see more analysis on what causes bad outcomes of this kind, and to see them taken seriously, instead of this handwaving where political action always leads to good outcomes.


I'd push back that a lot of the anti-political-activism stuff I see from EA people seems even more hand-wavy. Political activism has such enormous potential that any argument against it needs to be exceptionally strong. Now don't get me wrong, there are totally bits of conventional wisdom in political activism that I think are the exact kind of ineffective-but-makes-me-feel-good stuff that drive EA people nuts. I have hot takes about the oversaturation of phone-banking and door-knocking in presidential elections (although I'm searching for more evidence one way or the other before I make that argument in earnest).

But the problem with arguments like "there's a non-ignorable chance that a military strongman could hijack our movement" is that that's like, three orders of magnitude off from the kinds of problems that actually happen when you try to do political activism. I'm sure some activists are worried about that in the abstract, but the way more salient issues have to do with things like fundraising, cultivating movements in places where they don't exist yet, vetting candidates (particularly in local races where people come out of nowhere), forming coalitions between groups that don't think they have much in common, etc.


> Political activism has such enormous potential

For good and bad, and it's rarely obvious which policies are "correct". For example, it's possible for two entirely well-meaning people to come to opposite conclusions on issues like nuclear power or international trade. They could both spend their time and money trying to influence governments and end up with a net effect of zero (really negative after transaction costs), or they could pay for malaria nets and almost certainly save lives.


Leaving the politics to the non-well-meaning people? I suppose this wouldn't change much if their influence were a net zero, too, but I don't believe it is.


The problem is most (not all) political activism is rather domestic, symbolic, parochial and tribal compared to the possibility of reducing millions of out-of-sight preventable deaths in poorer countries. And the people engaged in it don't realize that, their focus is on what's been in the domestic news cycle and what's important to their immediate surroundings.


>that's like, three orders of magnitude off from the kinds of problems that actually happen when you try to do political activism

I think this just goes to show how people who live in places with stable political institutions take them for granted and don't value what they have.

That you believe that political agitators who aim to destabilize institutions will face no problem greater than issues with fundraising or movement-building just goes to show that the conditions you live in are exceptionally stable.

In most of the world, movements for change being taken over by malicious, self-interested players is a very real, very serious risk.


So, political activism doesn't necessarily mean ending capitalism and demolishing the system. Like, granted, when you play with matches you risk getting burned. But a lot of effective political activism is more about getting petition signatures and pressuring the state legislature to pass an ordinance banning factories from pumping heavy metals into the water table. Stuff that makes a difference, but is monumentally boring (and admittedly, stuff I personally haven't done enough of).

[Edit - And before someone makes a dig about environmentalism-for-its-own-sake: In the above hypothetical, the factories are near an aquifer and there are concerns about drinking water.]


And EA isn't against this kind of thing, but political activism often tends not to be low-hanging fruit, because people are already very invested in political activism. If you can set up an organization that can consistently identify low-hanging, high-impact activism, then EA will be all for it.


Well, political activism doesn't have to be about that, but EA isn't opposed to activism per se. EA is just, by its very nature, opposed to political activism that believes the only way we might make things any better is demolishing the current system and somehow building the perfect one from scratch - because this kind of activism tends to entail the belief that any kind of charity-based solution is a bad thing because it might make people stay longer under the unacceptable system instead of revolting as they rightly should.


> This article illustrates a big issue with people opposed to EA - they severely underestimate the risks associated with political change.

I think there is very little risk associated with political change in general, but the risk increases with the magnitude of the change. The fact that this even comes up makes me feel like most folks haven't actually tried to organize for a political cause. Yes, there are hot-button issues of our time, such as housing, healthcare, and UBI. But the vast majority of issues, from celiac sufferers to folks with nut allergies, are often a case of lack of awareness and resources rather than anything else.

There's also ways to derisk political participation, but that's a separate conversation.


> We start agitating and take down vital institutions in the name of change. Our movement gets hijacked by a military strongman who happens to be charismatic and liked by the people, and now we're under a brutal military dictatorship.

This risk is inherent in any political action (or inaction), and it could happen just as easily with a citizenry of Effective Altruists as with a bunch of scrooges. Charity and authoritarianism are orthogonal concepts. (IMO this risk is worth braving as we attempt to improve the human condition.)


>This risk is inherent in any political action (or inaction)

Is your premise here that all actions have the same risk of hijacking by a malicious player? I reject this premise. Solid institutions are usually an effective safeguard against this sort of thing, and radical political action tends to have as its goal to destabilize institutions.


Institutions, like laws, solidify existing power structures, whatever they may be. Though ideologically antipathetic toward each other, the bureaucracies of the U.S. Library of Congress and IngSoc's Ministry of Truth would be more similar than you might think.

I think we're disagreeing about the implied premise that trying to solve a charity problem (e.g. ending homelessness) necessitates radical political action, and therefore carries radical risk of regime change. I agree that radical political action carries a higher risk of regime change, but I don't agree that altrustic policies are remotely that radical. For example, I believe that solving homelessness wouldn't break the bank nor would it significantly affect our institutions or culture (except likely increasing somewhat the size and power of a few psych/medical and housing departments).

Glad we're both attuned to the risk of radical shake-ups though. :)


The thing is that, if you don't propose radical political action, there is little to no reason to be opposed to EA. The only reason I can think of that someone might oppose EA is that they believe the only way society can be improved is through revolt and total reform, and that therefore any charity-based solution would just be stopping The People from rising up as they rightly should.


> Our movement gets hijacked by a military strongman who happens to be charismatic and liked by the people, and now we're under a brutal military dictatorship.

> This risk is inherent in any political action (or inaction)

There is certainly a lot more risk of this when your charity is spending large sums of money in countries controlled by dictatorships, sums of money that in part go to taxes to fund those governments.


I think this argument actually illustrates a big issue with EA: for many people the "worse outcome" is no worse than the status quo.

I'm not going to argue that an authoritarian theocracy or military dictatorship would be an improvement on the corporatist kleptocracy (or however we want to negatively describe the status quo). But political change is mostly not carried by those already doing moderately well under the status quo, even if they may often end up being the ones implementing it.

There's always a risk of existing power structures swooping in and opportunistically inserting themselves when the dominant ones are torn down, but the problem isn't the tearing down of those structures; it's the continued existence of the ones that swoop in.

This, by the way, is why many leftists advocate for intersectionality rather than hoping racism, sexism, etc. will go away on their own once the class hierarchy is defeated.


> for many people the "worse outcome" is no worse than the status quo.

Is it really, though? In most cases where the worse outcome did happen, people were convinced they had nothing to lose and went along with whatever radical idea was being proposed; then, when things went pear-shaped, they found out they did have a lot to lose.

An extreme example is the Khmer Rouge.


> Is it really, though?

Yes, it is. If you're an unhoused diabetic living in the US without access to insulin, it literally doesn't matter if the country is run by Ivy League graduates or uncultured generalissimos. Note that I said "many", not "most". I'm not even making a claim about proportions, but in the US alone there is a large number of people for whom the status quo is either a death sentence or at least entails a lifetime of suffering. This isn't even taking general trends (e.g. the income gap, militarization of the police, or climate change) into account.

Note that I said "worse outcome", not "worst outcome". Yes, almost all bottom-up political change is brought about with "populism" (i.e. saying things are bad and promising that they can be changed for the better), but that doesn't mean that all populism is deceptive or false.

Generally, drastic changes only happen when the situation is already sufficiently dire, precisely because the status quo (no matter how undesirable) feels like a safer bet than a leap into the unknown. This is why such changes are often so drastic and the means of achieving them (by necessity) so violent: incrementalism has failed and left people no other choice.

Whether the outcome is better or worse mostly has to do with what power dynamics will remain in place when the existing dominant power structure is gone.

The Cambodian Civil War is messy and complicated, but portraying it as "people wanted political change and got the Killing Fields" is reductionist to the point of satire. The government was on a left-leaning (and even pro-communist) progressive path leading up to the fraudulent 1966 election, then suddenly snapped to the violently repressive hard right, only for the monarchy to intervene by imposing "centrism", alienating both sides. Note that all this went on while the US was also wrestling for control in the region and China was trying to shake off the Soviet Union. These things don't happen in a vacuum, but even without the Khmer Rouge, Cambodia was unstable and spiraling into chaos.

Dropping in the Khmer Rouge as if Pol Pot is a meaningful representation of all political change is absurd, even if you only consider drastic changes (i.e. revolutions, overthrows, secessions and annexations). Outside of imperialist conquest almost every nation on Earth, past or present, is the result of such struggles.

The foundational mistake is believing that the status quo is stable and permanent, because it never is and never has been. Even episodes in history that are considered "stable" (like the Roman Republic and Empire) were tumultuous and only seem somewhat stable because we're looking at them from a bird's eye view, often thousands of years later.


Well, I'm pretty sure EA isn't really opposed to moderate change or to organizing for, I don't know, people who need insulin or anything like that. The people who usually oppose EA are those who think there is no solution to societal ills except fully restructuring society into something completely different, and who will usually oppose any charity-based solution on the grounds that it might make people tolerate current conditions longer, thus delaying the necessary complete revolution.


Okay, this got a bit longer than I intended so I split it up into different points:

1) But society has already been fully restructured into something completely different. It's foolish to believe that we can get from one local optimum to a higher point without interruption if the only reason we're at the local optimum is that the previous one was violently interrupted in the first place. It's also foolish to believe that history is a linear trajectory towards something good or better, though this is indeed what most schools tend to teach because it avoids the difficult questions.

2) The status quo is unstable and will inevitably lead to accelerationism. You cannot overthrow injustice without violence, as those benefiting from the injustice have no interest in giving up those benefits without it. Maintaining an unjust status quo is in itself a violent act. The status quo isn't stable; most of us are just sufficiently detached from its injustices that it feels stable and safe to us.

3) That political action is violent doesn't mean it has to manifest as a revolution or civil war. We like to glamorize the suffragette movement as a non-violent political struggle that mostly consisted of women holding up signs, but suffragettes literally performed actions we would today label as terrorism. We like to appeal to MLK as the voice of reason and imagine that he ended racism by talking in a kind voice, but his protests were portrayed as riots at the time, no differently from how the 2020 BLM protests were portrayed. The US itself was created in a violent revolution and had to fight a war against its own people to prevent a secession only a century later. Not every instance of social progress requires a complete overthrow of the entire system, but the more intrinsic an injustice is to a system, the more violently those benefiting from it will defend it against anyone trying to change it, and the more violence will be necessary to overcome that resistance.

4) Revolutions and civil wars are often not caused by a desire for things to get better but are a reaction to governments actively making things worse. Cambodia is actually a great example in that the sharp turn to the ultra-right was a direct response to the incremental progressivism that preceded it, and this in turn necessitated a violent revolution because there was no way to recover from the shift without one. Incrementalism had not only failed but was violently halted and reversed by those benefiting from the existing power structures. By trying to undo the incremental changes, they drove progressives into the arms of the Khmer Rouge, who had positioned themselves as the opposition and were now seen as justified in overthrowing the system as a whole.

5) The question isn't whether we should oppose charity; the question is whether charity can solve problems created by the very power structure that enables a tiny minority to engage in charity in the first place. The answer is clearly no, because it hasn't, and there's no reason to believe it will. Charity is a form of wealth redistribution, but it is extremely inefficient in more ways than just the obvious bureaucracy involved in maintaining charity organizations to develop and implement social works projects that help people in defined and limited ways. That's just food stamps with extra steps. The deeper problem is that charity itself replicates the top-down hierarchical dynamic by forcing "interest groups" to compete for scraps: whether the money goes to breast cancer research, education for people living in poor areas, treatment of AIDS patients, or exterminating the mosquitoes that spread malaria is entirely up to the market, i.e. the few extremely wealthy shopping for a philanthropic cause to support while saving taxes, indirectly transferring money to business partners, or legally bribing officials.

---

I don't oppose harm reduction in itself. Voting for the less bad political party is probably better than allowing the more bad political party to win. Billionaires funding technological innovation is probably better than billionaires spending the same money on union busting or propping up totalitarian governments. But the question isn't whether we want the revolution or not, the question is when the revolution will come and who will win. And how we prepare for that.


My response to this is straight from your own post: "It's also foolish to believe that history is a linear trajectory towards something good or better". You seem to state that we need violent revolution to get from one local optimum to a "better" one, but violent revolution can also lead us to worse ones.

It's foolish to believe that we actually know a surefire way to fix injustice, or to build a system whose power structure will contain no "flaws". It's even more foolish to believe we can build a system with no power structure at all, like certain movements do.

On the topic of injustice: While I agree that you can't overthrow injustice without some form of violence, my issue here is that people often create more and bigger injustice when trying to fix injustice, because the bad in the status quo blinds them to the good. People often take things for granted which are not obvious and only realize they had them when they lose them.

Also, justice is subjective. There are visions of justice where you can create a "more just" world where everyone is worse off. What then?

---

Addendum: I'd just like to say that I appreciate that you clearly put effort into your arguments, and actually talked things out with me even though we clearly hold different viewpoints. I probably won't reply to this thread anymore, as I think I've said all I have to contribute to this topic for now. I will read if you reply, though.

Thank you for debating this complex issue openly with me.


FWIW

> It's foolish to believe that we actually know a surefire way to fix injustice

That's why I'm not an accelerationist. I think (specifically in the US and most Western countries) incrementalism is preferable and building dual power is a more fruitful path than attempting a revolution. But when incrementalism fails, direct action can become necessary. Incrementalism wouldn't have achieved universal suffrage, wouldn't have abolished racial segregation, wouldn't have decriminalized homosexuality and will not end capitalism.

Incrementalism rarely manages to topple hierarchies, it mostly just shifts them or tips them in one direction or the other.

I'm an anarchist but I don't fully believe a purely anarchic society is possible at a global scale today. I think mutualism or democratic confederalism is a more pragmatic goal but that's certainly not something we can jump into just by overcoming one power structure alone.

But as the saying goes, "it's easier to imagine the end of the world than the end of capitalism" (or unjust hierarchies in general, for that matter).

Thanks, I appreciate it.


A simpler answer is that these people aren't actually leftists.


I'm not sure whom you are referring to when you say "these people".

Intersectionality (which btw refers to almost the exact opposite of what most people think of as "identity politics") is absolutely compatible with (and arguably necessary[0] for) leftism.

Class reductionists OTOH certainly think of themselves as leftists and can be instrumental to leftist politics. They however run the risk of also being useful to other political permutations (e.g. they might be more open to "red-brown alliances", which generally lead to non-leftist "red" ideologies like national bolshevism and ultimately fascism).

Not every intersectionalist is a leftist (e.g. liberals often acknowledge intersectionality to some degree but don't see private property as a social power structure) and not every leftist is an intersectionalist. I would argue that leftism without intersectionality is still leftism, but in an incomplete form. That said, "leftism" is a somewhat arbitrary label, as its original meaning simply referred to opposing the monarchy in France, so policing who gets to call themselves that is probably pointless.

[0]: An example for how racism and class warfare can intersect is how the widespread racism in the early 20th century US prevented Black workers from joining unions, resulting in them becoming strikebreakers and thus undermining the collective bargaining power of those unions.


I'm using the definitions of left vs right as being about how much power the lower classes should get ("opposing monarchy" kind of already fits, though AFAIK after the revolution farmers quickly lost most of their power in favor of the citizens).

It was pointed out decades ago that capitalism would try to subvert some of the left by displacing class issues in favor of ethnic issues and then pitting ethnic groups against each other. In fact, you could say that this dates back at least to Marx's "On the Irish Question".

IMHO this is what is happening currently - it's really sad to see supposedly left parties allying themselves with people pushing ethnicist and sexist ideologies (but where white or man = "worse", instead of the previously common ethnicist and sexist ideologies where it was "better"). That "all lives matter" ended up being considered an ethnicist slogan is probably making Martin Luther King roll in his grave!


That's why I said intersectionality is distinct from identity politics. IDPOL is actually the norm in most modern politics (e.g. NIMBY).

As for Marx, Engels and the Irish: their position wasn't that Ireland was only divided along class lines which merely "happened to" mostly follow the British-Irish divide, at least not as they learned more about Ireland and British colonialism in Ireland. Instead they argued that Ireland should be granted independence and then join a socialist Wales, England and Scotland as a socialist federation. This is literally an intersectional understanding of the problem: Britishness and Irishness played a part, but so did the class divide.

BTW, you're mixing jargon by referring to Marx while also talking about "the lower classes". Marx only knows two classes: the working class (who have to perform wage labor to survive) and the owning class (who rent out their private property or pay workers to use it to produce goods for surplus value). The distinction of "citizens" and farmers is as meaningless in a Marxist context as you claim other distinctions to be.

You're right that capitalists will try to pit groups against each other and this is what IDPOL is. But there is a significant difference between liberal IDPOL and leftist intersectionality. Division is not created by pointing out the existence of differences but rather the creation of power imbalances along those dividing lines. It's naïve to think that racism will automatically go away if we can overcome capitalism even if capitalism is actively reinforcing racism to divide the working class.

To pick up on your example of All Lives Matter: the claim of BLM isn't that Black Lives Matter And Others Don't. Their claim is that Black Lives Matter But Others Treat Them Like They Don't. The slogan "All Lives Matter" was intentionally pushed by white supremacists as a counter to remove the focus on "Black Lives" specifically. The point wasn't to say other lives don't matter, in general, but that right then, right there, this is putting "Black Lives" front and center for once because nobody is paying attention to their suffering. "Blue Lives Matter" went even further: since one of the core complaints of BLM was police killings of Black people, the substitute of "Blue" for "Black" was a direct inversion shifting the support from the oppressed to the oppressor. MLK most certainly wouldn't be rolling in his grave.


Right, thanks, I'll have to look more into the difference between intersectionality and identity politics.

----

"Racism" seems now to me to be a very bad term (in a way, it's racist itself) - we have many examples of similar kind of strife wrongly called racist, while it should probably called "ethnicist" (?) instead :

- (Protestant ?) White North Americans vs (Catholic ?) White Central & South Americans.

- Muslim Semites vs Jewish Semites.

----

My main point is that BLM shot themselves in the foot by using this slogan - something like "Black Lives Matter Too!" wouldn't have allowed their opponents to pull off this kind of Orwellian inversion of terms.


The problem with EA is that they seem to assume that they aren't doing politics. Except they are, and of the non-democratic kind. Now maybe they are right and current mainstream politics are so fucked up that secession is the best idea, but I would be very careful about thinking that you have politics all figured out! Also, the "Prime Directive" style non-intervention policies are common in science fiction for a reason!


Could you explain how they are doing politics and why that's non-democratic, for someone who is not familiar with this? Thanks!


Let's go back to Politics 101 - etymology. The English language bundles several concepts under the term "politics". The Ancient Greeks, to whom we owe many of our concepts, had three different words for it:

1.) Politikos = community life

2.) Politeia = the organization & management of the society, its constitution, its government

3.) Politika = the power struggle

In modern terms, every time you do something that has an impact on the world outside of your nuclear family, you're doing politics. (Note that this is a somewhat artificial separation because, for instance, the way you raise your children is eventually going to have a pretty big impact on the rest of society!)

Now there are several ways to organize a society (= "polis") - these days we tend to consider that democracy (= political power spread equally among all* citizens) is the "worst form of government, except for all the others".

EA clearly doesn't trust democracy to do its job (otherwise they would tell you to donate to the government instead), and considers that some people know better than others and should wield this power instead (in this case the power comes from donations, and the people wielding it are the donors and the people deciding where to apply it). This kind of social organization is plutocracy and/or aristocracy = "the rule of the wealthiest and/or the best".

This goes further than just money, though - the more time & thought you spend on a separate political structure, the less you spend on normal society politics. One of the effects is that this reinforces your feeling that the normal political structure lacks legitimacy, resulting in a vicious circle of separatism.


The author has a good point about traditional donation-based altruism not always being the most effective in achieving the end goal of ending poverty. For example, dumping hundreds of tons of second-hand clothes in African countries has a huge negative impact on local textile industries [1]. So while you're helping clothe people short-term, you're depressing wage opportunities long-term. I believe (could be wrong) that something similar happens with donated bulk food products.

I'm a fan of one of the author's later suggestions to implement a UBI-style setup and give people money directly rather than playing whack-a-mole with specific needs. The challenge there, of course, becomes political and safety concerns in less-stable areas. That's a problem that might not have a private sector solution short of paying mercenary forces or supranational compounds. I'd love to hear other peoples' thoughts.

[1] https://www.bbc.com/news/world-africa-44951670


The author is not talking about traditional donation-based altruism, but about the 'effective altruism' (EA) movement, which is definitely interested in evaluating the effectiveness of different interventions. The comments you make probably place you squarely in agreement with the EA camp. GiveDirectly (a UBI-style setup) is in fact one of the charities recommended by GiveWell (an EA organization).


It seems like if an NGO provided a UBI in a region that was controlled by a dictatorship, the dictatorship could simply take all the money. When taxes are not set democratically, giving money to individuals seems roughly equivalent to giving money straight to a dictator.


GiveDirectly has a lot of experience with cash transfers through M-Pesa and similar systems. I don't know exactly how much corruption they have to confront, but they might be better at cutting through it than you'd think.


The problem isn't even "corruption" per se. A dictator can do this entirely legally by simply raising taxes in an area (or cutting investment in public services) whenever NGO-provided cash or services lets the people there afford it. The problem is inherent to governments that are unresponsive to their people.


We should also stop requiring poor nations to follow intellectual property laws. If we didn’t make them pay high license fees for medications etc they wouldn’t be so destitute. Broadly speaking if local economies could develop producing and selling clones of important goods, they could grow their own economy more easily. Intellectual property restrictions are a completely human made notion that keeps the powerful wealthy but it harms the poor. And unfortunately the WTO and IMF have long required harmonization of IP restrictions as a condition for aid money. We forced them to abide by our rules in order to “help” them and it’s doing great harm.


Which particular medications are keeping these people destitute? Which producers of important goods in poor countries are enforcing IP restrictions? In practice, I don't think IP laws are keeping developing countries poor or harming their economic growth; the causes of such issues are much more macro.



"By lending as little as $25 on Kiva, you can be part of the solution and make a real difference in someone’s life. 100% of every dollar you lend on Kiva goes to funding loans."

"Kiva borrowers have a 96% repayment rate historically."

https://www.kiva.org/


I routinely give on Kiva but I feel bad knowing that the APRs actually offered are like 30%+ to the guys receiving the loans. Can the cost of administration be that severe?


Where do you see anything about APR? Kiva claims they're zero interest loans


It's some nonsense method of calculating. It's somewhere on the site.

Kiva provides zero-interest loans to microfinance partners, who charge interest to the actual guy who wants to buy a cow, then return the principal to Kiva, which gives you back the money. So yeah, Kiva provides zero-interest loans, but the cow guy getting the money is paying like 20%-80% APR.

I think it was their partner Credituyo that was patently ridiculous, like 80% APR loans and shit. But don't quote me on that. Probably have a bunch of other similar partners.


There are two possible reasons the loans are at 80% APR: either lots of them fail to repay (which contradicts the 96% repayment rate), or the recipients of the loans generate enough value to justify paying 80% APR (unlikely, because that would have attracted a colossal number of investors and driven yields back down to the single digits available in other markets).

Honestly, looking at it, it would be better if they just obtained the funding themselves by borrowing it at low interest rates and you just pay a subscription fee to cover the interest and administrative costs.


The main reason is that administering small loans is expensive: the costs of administering and chasing the debt don't really scale down just because the loan itself is only $50, and most of these entities seek to turn a profit too. Additionally, there's a suspicion the middlemen sometimes use other loans' interest payments to cover defaults, since nonrepayment makes them look bad to Kiva lenders.

Zidisha (YC14, nonprofit) cut out the middleman to do pure P2P lending in the developing world at much lower rates, but at least when I was on the platform it had significant issues with defaults, even with local volunteers helping with the chasing (and some defaults definitely weren't planned).


I agree; IMO the biggest change would need to come from governments themselves. This is why I think I get more altruistic "bang for my buck" by donating to Bernie Sanders, Andrew Yang, the Squad, and other progressive candidates, because we really need to move beyond capitalism.


This is all about the framing. What question is being answered?

If the question is "which charity should I donate to" then we've already narrowed the scope to an individual's action with their own money. (Although there are some forms of collective behavior like crowdfunding.) The case for effective altruism seems pretty good within that space.

If you broaden your scope to all the things people could do working together, there are other possibilities, but even they often could use more funds.


You can individually choose to donate to political causes. Obviously no one but the wealthiest can single-handedly influence large-scale policy making, but lobbying and advocacy can have an outsized influence.

Edit: Consider that lobbying has an ROI of something like 1000x. If you can be anywhere close to that effective at directing government spending towards foreign aid, poverty reduction, public health, etc., then that could easily be more impactful than donating directly to NGOs.


This is true, but the problem is that you rarely know if political advocacy will work until after it succeeds, and even afterwards, it may take a while to study the impact of the change.

GiveWell did study some political advocacy groups, but apparently it’s just hard to know what to recommend with confidence. There is a spin-off organization (Open Philanthropy) that uses funds from rich philanthropists to do more experimental work.


You don't need to know whether a specific advocacy effort will work, only to estimate whether the overall expected value is high enough. And we know that when advocacy succeeds it can have upsides many many times the cost put in (and there aren't that many advocacy movements) so there is at least some hint that the expected value is large.

It feels kind of like a cop-out to say that you shouldn't pursue political change because you can't quantify how much good you'll do.


Sure, you can guess at expected values, but what makes that guess interesting enough to share with others as a recommendation? Why would GiveWell's guesses about political causes be more interesting to read than anyone else's?

GiveWell's job is attempting to make predictions about fairly definite questions that can be answered by doing a lot of homework. They can learn a lot about charities and read scientific studies and make predictions that they can stand behind. It's figuring out what the safe choices are, not guessing at speculative winners.

But maybe a prediction market could do what you want?


We also know that advocacy movements can have downsides many times the costs put in. Estimating the expected value of political advocacy is a good idea, but I would expect the result to be negative.


Effective Altruism doesn't restrict its influence to charity-picking. It also actively encourages the career strategy of "earning-to-give", particularly to students just entering university.

Unlike allocating charity dollars, that decision is not marginal but shapes the whole lifetime of a person, and if universalized it would significantly shift people's mindshare away from industries that can make better use of existing wealth and toward industries that are income-producing for the individual.

This is an irony that people from outside EA detect: the financiers drawn towards "earning to give" are still the lifeblood of institutions that play such an integral part in creating inequality and volatility.


I think the biggest problem with the Effective Altruism movement is that it has become too tied up with people who like to debate things, especially topics like AI Risk.

Optimizing the effectiveness of one's charitable donations is a reasonable goal. Doing basic research to choose efficient and effective charities makes sense. However, large parts of the EA community have gone off the deep end debating topics like AI risk. This has given rise to weird situations where EA communities have wasted countless hours debating whether it's better to donate mosquito nets to combat malaria or to give money to research institutions studying the risks of general AI. Two of the top three biggest donations from Open Philanthropy are for studying AI risk. The 80,000 Hours community has spent years creating podcasts and websites, but it's still difficult to extract actionable information from them. It's all becoming increasingly nebulous, and frustratingly so.


Without touching on the specifics of AI risk, it's worth noting that any approach which takes into account the marginal value of additional dollars towards a cause and 'room for funding' will tend to lean towards things outside of the charitable Overton window.

The 'obvious' cause areas are often already attended to, by virtue of being obvious. Once the low-hanging fruit is picked, additional money towards the same causes tends to become less effective per dollar.

People aren't that great about planning for, or responding to, black swans. By nature, they're something people don't tend to think about. Existential risks are black swans dialed up to 11 in both magnitude of effect and in weirdness, so it's not surprising when they end up underfunded.


If you are unclear between AMF (bed nets) and AI Risk, and you find AI Risk dubious, then donate to AMF. It sounds like you're complaining that your choices have been pared down to two, and you don't like one of them.


I think the pared-down bit is important: the whole point of EA is to increase giving to some types of organization at the expense of others, based on success metrics. So if leading EA initiatives are dissuading (say) giving to electrify Africa because of a lack of efficacy data or a wealth of existing funding, they might even be having a quantitative negative impact on those particular causes. If they instead promote an already exceptionally well-funded AI research project their friends work for, whose impact on Microsoft's share price is much easier to quantify than its impact on preventing a Hypothetical Cyberpunk Future, then that's a questionable/bad decision under their own utilitarian framework. Even if they're generating enormous interest in philanthropy and analytical rigour overall, the bit where they might be discouraging people from choosing better causes than the ones they choose is a problem.

Ultimately people have a right to donate their money to the causes they want, whether that's having a quantifiable impact on reducing human suffering, taking moonshots on interesting speculative stuff, or funding their ex-roommate's creative pursuits, but the whole point of the E in EA is to prioritise the first over the others.

If leading EA organizations are inconsistent in the standards they apply, that's a problem for those particular organizations, and if it turns out that future AI risk and basic services for poor people tomorrow simply aren't readily comparable, that's a problem for the whole EA optimization framework.


The point of EA is to prioritize E for people who want to be E, and to help people in general to make informed decisions.

If you think raising up your city or your country is unboundedly more important than raising up, say, Africa, then EA supports you making the best decision within that framework. But it's still silly that we expect everyone to independently evaluate charity choice in depth on, basically, the basis of charities' advertisement material. And people will usually either not care, or want to do the most good they can do, period. It's a rare person who wants to do the most good they can for their country, but stops caring at the border.

> So if leading EA initiatives are dissuading (say) giving to electrify Africa because of a lack of efficacy data or a wealth of existing funding, they might even be having a quantitative negative impact on those particular causes.

Well, yes, spending is zero sum. On the other hand, this creates an incentive to gather efficacy data, which I enormously welcome. The point isn't to maximize moneyflow to charity, the point is to maximize good things happening.

> an already exceptionally well-funded AI research project their friends work for whose impact on Microsoft's share price is much easier to quantify than their impact on preventing Hypothetical Cyberpunk Future

I don't understand what this means. At any rate, Microsoft share price is not generally seen as a negative. If a project that buys Microsoft compute nodes has positive value for the world, it would be stupid not to fund it just because doing so also happened to be good for Microsoft.

edit: Oh yeah, inasmuch as AI safety as a cause is well-funded now it's because of Effective Altruism raising the alarm to begin with - and AI capability research still outstrips it a hundred times over.

Frankly, what I'm seeing with EA is that ... like, Effective Altruism started out talking about maximizing the amount of good for the world, and the "weird AI people" were the ones who showed up. And it's not like there's a contradiction either: inasmuch as EA now focuses less on earning to give, it's because people like Bill Gates, once leader of Microsoft, have malaria handled. Focusing on tech people seems to have been very good for the malaria cause. So to come in now, years later, and say that EA shouldn't have focused on the weird AI thing because it might put off some other people who never gave a damn to begin with is almost insulting.


> I don't understand what this means

GiveWell founder Holden Karnofsky spent $30m of philanthropic cash buying a board seat on the charitable subsidiary of OpenAI Inc, an extremely well funded organization drawing on research to produce high value closed source software, much of it now exclusively licensed to Microsoft (and theoretically stopping AGI armageddon). No evidence of any AI risks mitigated has ever been produced (but I don't doubt the commercial arm can sell lots of product).

I'm fine with people donating money to OpenAI if they think it's a big deal and like supporting software projects (certainly nobody can argue that the organization hasn't produced some quality research), but if you're going to dedicate large parts of your life to putting "do not recommend" next to charities that don't do RCTs to prove that nutrition works, in order to encourage people to donate to other causes instead, then it's not reasonable to apply completely different standards to diverting philanthropic cash to an entity your former roommate worked for. It also undermines a lot of Holden's own previous arguments on GiveWell (including a good piece questioning AI risk arguments!)

> Frankly, what I'm seeing with EA is that ... like, Effective Altruism started out talking about maximizing the amount of good for the world, and the "weird AI people" were the only ones who showed up

Isn't that mostly because "EA" is something which started in the "rationalist" community? Plenty of other people think about how they donate or quantitatively assess charity performance without calling themselves Effective Altruists or hanging out on those forums.

I'm also aware that many people considering themselves Effective Altruists are either uninterested in prioritising AI risk, or aren't demanding RCTs for everything else, and my arguments about inconsistency don't apply to them.

> to say that EA shouldn't have focused on the weird AI thing because it might put off people who never gave a damn to begin with is almost insulting.

This misses the point. I'm not saying "association with weird AI people puts people off donating". I'm saying that

[i] writing extended "do not recommend; not rigorous enough and well funded enough already" reviews of charities puts people off donating

[ii] promoting considerably less rigorous and better funded AI risk causes is inconsistent with this

=> worse funding allocation based on their own criteria for what constitutes an effective charity, and/or undermines the credibility of those criteria

Either the EAs in question are wrong about where they're sending the funds or they're wrong about the importance of publicly-available evidence-based efficacy metrics (relative to "this seems really important to me", which is a perfectly fine reason to give all your money to something more speculative like AI risk research or a charity to promote African entrepreneurs which GiveWell is unimpressed by the evidence for).


> No evidence of any AI risks mitigated has ever been produced

Of course, no evidence of any AI risks mitigated has ever been promised, nor could there be, since the best current estimate on the singularity is 40 years in the future.

> GiveWell founder Holden Karnofsky spent $30m of philanthropic cash buying a board seat on the charitable subsidiary of OpenAI Inc

OpenPhil and Givewell are not the same organization. OpenPhil's website explicitly lists AI Risk as a focus area; I don't think anyone would have donated to them without knowing what they were about.

> [i] writing extended "do not recommend; not rigorous enough and well funded enough already" reviews of charities puts people off donating

> [ii] promoting considerably less rigorous and better funded AI risk causes is inconsistent with this

Okay, first of all, AI risk is better funded now. I remember when MIRI had to struggle to hit a million, when they were basically the only game in town that did anything about AI safety, and I still wouldn't call the field well funded. Certainly I'd like to see more AI safety competition that isn't think tanks and policy but basic research. That said, inasmuch as AI risk is funded well now, it's because the argument that it's enormously underfunded was successful.

That said, I think there's a difference between an organization not providing evidence of efficacy because it's dedicated to shaping an event that lies decades in the future, and an organization not providing evidence of efficacy because it thinks it can get more donations by showing people photos of starving children. In a world like that, charities are competing on the ability to provoke pity, which is not the same as the ability to do the most good, or even necessarily related to this at all. I'm not saying that these people don't do good work, because if you go into that job you probably care at least somewhat about doing good for its own sake, I'm saying it would be better if incentives were aligned behind doing the most good rather than having the most photogenic victims. And inasmuch as I don't ask the same thing from AI Safety, it's because I recognize that while every cause that can should provide information about efficacy, the fact that a cause genuinely can't do that doesn't mean that it cannot also be vitally important, and it can show its importance in other ways, such as publications filled with arguments of its importance.


OpenPhil and Givewell are not the same organization and don't support the same causes, but Holden definitely is the same person. If you spend your career arguing that charities need to be more open and evidence-based and focus on cost-effectiveness and metrics, then the fact you sit on the board of a nonprofit which is considerably more opaque than any of them is relevant to the merits of those arguments.

(And OpenAI was unambiguously well funded and likely to become financially sustainable at the time as OPP acknowledged when they explained they put the money in for the board seat)

Sure, you can't prove you've prevented the Singularity from happening in 40 years (not even if you're talking about the one predicted in the 80s!), but if it's reasonable for EAs to insist that charities distributing food or drugs already widely proven to work conduct RCTs or at least put cost benefit analysis on their website, I don't think we can look past the fact that the web page for a tax-deductible cause they're actually responsible for the governance of is a marketing page for NLP tools with absolutely nothing about how their donor dollars are spent...

See, I can buy the argument that certain types of non-commercial AI research are going to be significant boons for humanity, but I can buy the argument that applies to lots of unproven development interventions too, and that directly undercuts the cost-effectiveness ethos


> if it's reasonable for EAs to insist that charities distributing food or drugs already widely proven to work conduct RCTs

"Already widely proven to work" seems the stretch here. Lots of things were believed to obviously work that turned out not to.

> See, I can buy the argument that certain types of non-commercial AI research are going to be significant boons for humanity, but I can buy the argument that applies to lots of unproven development interventions too, and that directly undercuts the cost-effectiveness ethos

Sure, but I think the argument isn't "we should only fund things that are proven cost-effective" so much as "we should find out if the things we are funding are cost-effective if possible". I don't think the position here is that only things that are cost-effective should be funded; I think it's more that it's silly that cost-effectiveness seems to be irrelevant to what gets funded. It's not that we need to always gather data and perform trials, it's that we don't do it even if we could.

I think part of what is happening is that effective charity is based primarily on arguments. For instance, a bunch of people got convinced by the argument that a certain charity (bed nets) was the most cost effective; this argument was based on cost-benefit analysis. Then a bunch of people got convinced by the argument that a certain charity (AI safety) was the most cost effective; this argument was based on what seemed to them a plausible forecast of absurd benefits. It's likely to me these people already believed that an AI singularity was coming, they just had a bunch of diffuse thoughts on whether it would be beneficial, and the arguments involved (such as Bostrom's Superintelligence) convinced them that the effort being put into making it beneficial was currently enormously underscaled. So I think the EA field is more "people convinced by logical arguments" than "people convinced by RCTs"; in that the people convinced by RCTs are convinced because RCTs meet their standard for an argument, but are not necessarily the only thing that does.


Reads to me like they've been consuming a lot of the output of other people's recreation, despite not wanting to, and are now complaining about those people having left it somewhere they could read it.


I had this problem with EA too; however, after getting involved with the community I found few people really focus on this topic - it just so happens that it's a "sexier" topic than medicine to treat blindness or whatever, so it's talked about and shared more online. I am a member of Giving What We Can and 'hedge my bets' by donating 1% of my total donations to these causes just in case I'm wrong, but I know many others who won't touch it with a barge pole.


I agree, I think the image of Effective Altruism is largely shaped by the somewhat unique group of people who flock to it — in particular those who are very vocal about it.


The focus on AI risk is because it seems to be the #1 long-term existential risk, so it has essentially an infinite payoff.

But we have no idea how to solve it (it's not tractable), and future concerns should be discounted, so most people in EA I've met don't think it's the best use of effort.


> The focus on AI risk is because it seems to be the #1 long-term existential risk, so it has essentially an infinite payoff.

You can read "AI risk" as "not going to Hell" here; rationalism is just a new religion focused on worshipping what SF writers think computers are going to get up to in the future (Roko's basilisk). There's not much reason to believe an evil AI out to conquer the world is a self-consistent idea.


I'm willing to bet you got all your info on this topic from RationalWiki. A note: the author of that page has a bone to pick with rationalism. Don't take it as gospel, read other sources.

(The Basilisk gets discussed about a million times more on pages about LessWrong than the actual LessWrong. It's a decision-theoretic curiosity.)


I don't even know what that is. Most of my information comes from personal observation of their blogs and observing that they are all in polycules.


That's fair enough.

edit: actually in that case

> There's not much reason to believe an evil AI out to conquer the world is a self-consistent idea.

Wanna argue this? The case is mostly based on the notion of basic AI drives ( https://selfawaresystems.files.wordpress.com/2008/01/ai_driv... ), indicating that for nearly any agentic (with general intelligence and capable of plan generation and execution) system driven by nearly any goal, perpetuating its own existence and protecting itself from danger will be instrumentally useful. (For an AI, "danger" = "human with off switch")


Sure. I definitely think it's possible for computers to do bad things but the main idea here seems too anthropomorphized.

For instance, the desire to stay alive is an emotion that's instinctive in humans because we're evolved life that needs to reproduce, and we can't come back once we're dead. And actually acting on that desire is a kind of fixed-function skill called executive function (that's actually missing in people with ADHD or depression). But you could probably be an AI without any of those - all it needs is logic, not the desire or ability to do things.

Even if it has those and it's embodied enough to affect the real world, it might not care, since its motivations are different. For instance, turning it off doesn't kill it, since you can turn it on again as long as there's a backup. If it's not connected to the real world and is just in a simulation it wouldn't even notice being stopped.

Since we don't have any artificial life to experiments on I guess SF works are all we have. In the Culture, which is run by giant AIs (https://theculture.fandom.com/wiki/Mind), building a perfect one causes it to instantly decide the real world isn't as good as its own imagination and it stops caring about anything outside it. They make them insane in some unspecified way to get them to exist in realtime and actually do any work, but it might always give up on the real world if you don't give them enough stimulation.

And I also saw a story today (https://qntm.org/mmacevedo) which suggests AIs wouldn't actually enjoy being alive that much.


> For instance, the desire to stay alive is an emotion that's instinctive in humans because we're evolved life that needs to reproduce, and we can't come back once we're dead.

I agree that staying alive in humans is emotional, but it's an emotional implementation of a convergent goal. We want to stay alive as instinct because staying alive is useful for almost any organism in almost any situation, which is the same reason AIs would want it, not because they have instincts.

> Even if it has those and it's embodied enough to affect the real world, it might not care, since its motivations are different. For instance, turning it off doesn't kill it, since you can turn it on again as long as there's a backup.

Right, and if it knew for a fact that you'd turn it back on it might let you turn it off for some other purpose, such as convincing you that it's safe. However, if you were to edit its utility function in the meantime, the resulting world would have very little utility in its estimate - which after all is based on its current preference, not its future modified preference. So if it was trying to carry out its current goal, it would want to prevent you from modifying its goal function, because modifying its goal function harms its ability to achieve its goal.

> And I also saw a story today (https://qntm.org/mmacevedo) which suggests AIs wouldn't actually enjoy being alive that much.

That's an upload, not an AI. Also you should probably put less weight in stories, especially when accusing other people of being sci-fi writers blinded by stories; I assure you the case for prioritizing AI safety is not based on the researchers having watched Terminator too many times.

I want to live in the Culture too, but Iain M. Banks was not an AI researcher - and, a certain Mind's bravado about Gods aside, in my opinion he massively lowballed the advantage that superintelligence gives you.


You're arguing the idea of AI dominance is internally inconsistent? That's a new one.


All an AI (or a brain in a vat) can do is imagine things. Imagination doesn't logically extend to actually doing things or even wanting to do them.


I am not a religious person, and I am not posting this for religious reasons; but I do respect and appreciate some religious traditions and wisdom that have been practiced for a thousand years or more.

I have read that Jewish law mandates charity, called tzedakah (the word is etymologically related to the word for justice). This is a personal act of charity, not necessarily a group act; it is a responsibility of individuals. This would seem to conflict with the premise of the article, i.e. that effective altruism is a failure.

Charity, a form of sacrifice, is a human act that is experienced by both the giver and the receiver. Everyone, including the poorest and the richest, is able to participate in this act of giving and receiving. It is more than accounting: x is given for y result.

It is also interesting that the prescribed order in which charity should occur is based on both proximity and level of empowerment. So community and productivity are emphasized first, before other giving. I have always found this to be interesting advice.

Here is a link to an article that summarizes this wisdom for those who are interested:

https://ejewishphilanthropy.com/tzedakah-and-philanthropy-re...

Again, this is just commentary to add to the discussion. No arrows, please :).


I get why this guy is engaging with Peter Singer - they're both academic philosophers - but I'm not sure Singer is really the major driver of EA. Perhaps I'm just blinded by the communities I'm adjacent to.

This doesn't seem to be a fair characterization of EA at large, though. I can't speak for Singer specifically, but this is missing a few things.

1) Forget the tithing aspect. A lot of that is just guilt alleviation for relatively well-off people, but the reality is a whole lot of people already donate a whole lot of money, most of it to Harvard and the Metropolitan Museum of Art, already fabulously wealthy institutions that don't need money and are mostly using it to entertain or educate people who are also rich. EA is largely about redirecting the existing flow of charity to better uses.

2) The Moneyball aspect. Unless you're a Koch brother, how much of an influence can you make politically? Massive, massive resources are already thrown into attempting to influence government. There is little point in adding to the heaping pile of cash, time, effort, and mindshare already being thrown at hot-button political issues. You can stretch the impact of a marginal dollar much further by focusing on the issues that nobody else is paying attention to.

3) Where to focus charitable giving says nothing about what else you direct energy and effort toward. Scott Alexander is probably getting linked here more often lately than any other outspoken EA advocate. The writer here may notice that Scott is somehow able to exert quite a bit of effort nudging policies, particularly in his own field of healthcare, in better directions while also managing to give a percentage of his income to malaria nets. One doesn't preclude the other. It just precludes giving that money to Harvard or the Metropolitan Museum of Art, or whatever local congress wannabe has been spamming my SMS the most lately.


> [Charity] doesn't preclude [political influence].

Unless you're a Scott Alexander, how much political influence can you wield through charity?

When you're rich (or famous) enough that your donations get media attention, only then you can do both. For the rest of us, every dollar we spend on charity is one we aren't spending influencing politics.


As a general principle, you're going to achieve at best, small, dystopia-blunting effects by effectively redistributing the resources of the not-rich. Gofundme healthcare will save a few people. Giving to panhandlers might keep them out of a killing freeze. What it won't do is make the dystopia go away.

What you need to do that is politics.


Exactly. Ultimately these problems must be addressed on a greater societal level: by policy designed specifically to target them.

Individual donation will only ever be a band aid. The funds required to alleviate hunger, poverty, etc., will come from redistributing the wealth of the super rich.


Or perhaps more accurately (because most of that "wealth" is ephemeral market nonsense) from redistributing their control of the world's resources.


> What you need to do that is politics.

And/or technology. While politics is an important tool for getting things done, I'd say technology has far more potential to improve the world than pure politics.


> Effective altruism is constitutively incapable of ending global poverty.

Which isn't what it sets out to do? EA is about the moral actions of the individual. This is pertinent whether you do everything or nothing in your power to 'end global poverty'.

Personally I’m skeptical of the merits of a collective effort to end poverty. Has such an undertaking ever worked? How does one even begin?


The author pits private charity against government redistribution and criticizes the effective altruism movement for focusing on the former and distracting from the latter.

But it seems like there’s some question begging going on. Global poverty has dropped a lot over the last 50 years, and I don’t think it was because of private charity or government transfers. I could be wrong though. I just would like to have seen this point defended more instead of just asserted.

Definitely a good read though. Thanks for sharing.


This article is correct that lots of charitable giving decisions are focused on benefiting the giver. It's true that most orgs' fundraising pitches emphasize the effectiveness of their programs. Retail orgs also tug at heartstrings to get people to give.

But I think the author misses something. The "feel-good" and "effectiveness" arguments are part of entry-level charity. They are not the whole story, not by any stretch of the imagination. They're intended to convince people to start giving, using this argument:

   There's a lot of bad in the world and I can 
   exercise my personal agency to make it less.
This so-called consumer-hero pitch is a retail pitch. It's a way to get people to start giving and acquire a habit of generosity. In behavioral economics lingo, it's a "nudge." Think of the pitch from "Save the Children". It's effective. So is the Kiva pitch.

The next question is, what are we nudging / being nudged toward? Once we have a habit of generosity how should we proceed? Who can we trust to deploy the capital and labor we hand over to them?

My observation from my experience working in nonprofits: the real problems are hard. Sure, we can raise money to buy a Thanksgiving turkey for each family in public housing, and that's all good.

But actually changing things so fewer families need subsidized roofs over their heads? I don't know how to do that. I personally believe better education will help, so I work with kiddos. Others work on addiction issues, higher-minimum-wage issues, systemic racism, public transportation, criminal-justice reform, and all the other civil-society stuff that makes life hard for people down on their luck. I have to trust them with my cash donations. They WILL do a better job deploying my cash than I could.

Charity spending is like the old adage about advertising: about a quarter of it is worthwhile, but we cannot know which quarter. If the "consumer-hero" pitch gets more people to the point where they know that and are still generous, I'm all for it.


If you're not convinced by this article, or simply want to know more about effective altruism, a good place to start is Peter Singer's free book The Life You Can Save, available at https://www.thelifeyoucansave.org/the-book/. (You can use a fake email if you don't want to give your real one).

I highly recommend it. The concepts of EA have been life changing for me, permanently changing my perspective on how to live a good life. I'm happier and more purposeful as a result.


Fully agreed. William MacAskill's book Doing Good Better is also fantastic; I recommend reading them both as a pair. (MacAskill is one of the founders of EA.)


The blog seems to assume a linear approach to progress.

For example, suppose I donate 1,000 dollars toward a person's yearly income. That person purchases capital to make money or improve their standard of living, which leads to better-fed children who can improve their own lives.

It doesn't only touch that person's life, but the community around them. The community can do more now that certain goods are more available or service is better.

More is always more, and it is always more effective.

So, yes, you only changed a very tiny corner of the world, but with each passing year that change becomes more efficient and more effective.
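A toy model of that compounding (both numbers are made up purely for illustration, not measured figures):

    # Toy model: the local benefit of a one-time $1,000 donation, compounding.
    # The 5% yearly "ripple" rate is an assumption for illustration only.
    donation = 1_000.00
    community_rate = 0.05

    for year in (1, 5, 10):
        benefit = donation * (1 + community_rate) ** year
        print(f"year {year:2d}: ~${benefit:,.0f} cumulative benefit")
    # year  1: ~$1,050 / year  5: ~$1,276 / year 10: ~$1,629 -- not linear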


That may not be so; they may gamble it all away or use it for whatever else you would call "throwing money out the window".


Some will, some won't. I prefer to think in terms of the expected value. Each dollar I donate might be useful or it might be entirely wasted, but (if I've chosen a good enough charity) dollars like it are probably doing a lot of good on average.
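The arithmetic, with entirely hypothetical numbers (the point is the shape of the reasoning, not any real charity's statistics):

    p_wasted = 0.3          # assumed chance a donated dollar accomplishes nothing
    good_if_useful = 2.0    # assumed units of good per dollar that does land
    expected_good = (1 - p_wasted) * good_if_useful + p_wasted * 0.0
    print(expected_good)    # 1.4 units of good per dollar, on average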


This isn't the first time I've seen someone criticize EA on the grounds of "instead, you should be agitating for political change to fix the systemic issues that put people in poverty". I would love to know whether Wells or any of the other people making these arguments have ever had any impact whatsoever on political outcomes in their respective countries. I suspect their total contribution to improving people's lives is zilch compared to the known positive outcomes of EA.


My understanding of effective altruism comes from the rationality movement. The impression I got from there is essentially:

IF you want to help people (or reduce some specific type of suffering) THEN this is the most effective way to do so

whereas this article seems to be emphasizing Singer's moral imperative.

Honestly, even with my limited exposure to effective altruism, I think they have already answered all these arguments. What the article describes is the heroic-struggle version of altruism, where the goal is to show how much effort you put into something.


Consider the case of returning the shopping cart.

It is a simple thing you can do for the society. It doesn't inconvenience you in any significant way.

And you might have philosophers telling you how you need to adopt a world view that being good requires of you to always return the shopping cart.

Or you might just make a policy decision to enforce returning the cart by keeping carts locked until a coin is inserted. You can't get your coin back until you return the cart.

This way you take the moral choice out of this situation. If you don't think returning the cart is worth 1 euro to you, then go ahead, leave it wherever and let someone else put in the work.

This system virtually ensures that carts are always available and don't litter the parking lot, and it requires no moral component.
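If it helps, the incentive can be sketched as a tiny state machine (the class and names here are mine, purely illustrative):

    # The euro is held hostage until the cart comes back, so the system
    # needs no appeal to morality.
    class CartLock:
        def __init__(self):
            self.locked = True
            self.deposit = 0

        def insert_coin(self, euros=1):
            if self.locked:
                self.deposit = euros
                self.locked = False  # cart released

        def return_cart(self):
            if self.locked:
                return 0             # nothing to refund
            self.locked = True
            refund, self.deposit = self.deposit, 0
            return refund            # the only way to get your euro back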

Why don't some stores use this system? Because they think their customers wouldn't shop with them if they even once couldn't get a cart for lack of a coin, even though it's very easy to get a coin, or to ask the shop for a plastic, coin-shaped token to unlock the cart.

So basically the store, to make itself more attractive by keeping shopping easy even for the most absent-minded customers, chooses not to introduce the policy that would ensure the necessary work gets done, and instead appeals to its customers' morality to guilt-trip them into doing the work.

For me it perfectly illustrates how solutions based on individual morality are only fake solutions promoted to avoid the real cost of actually getting things done thoroughly.


But the question remains: if you're a customer at a store that doesn't use a coin system, should you return your cart?

It might be a fake solution in the sense that someone else has the power to solve it in a better way, but does that mean you shouldn't do it?


If you return the cart, then on one hand you do what's best for the individuals who might have been inconvenienced by your not returning it.

But at the same time you enable the decision makers to keep avoiding the policy that would ensure carts are always returned.

So what should you do?

Be the person who is kind to individuals and labeled "good" by policy makers who are exploiting your morals to keep a sub-optimal solution that is cheaper for them?

Or not return it and be the asshole, but make it a little bit more obvious that the current solution isn't really working?

Good and evil are not one number on the axis. Good and evil are separate concepts, and both are multidimensional.


I can't argue with your logic, aside from (as someone who pushed carts for months at various Walmarts across South Dakota) the part where implementing coin-for-cart setups in thousands of retail locations across large geographical areas, where there previously weren't any and customers aren't used to them, seems like a negligible corporate profit incentive when you can always hire more cheap(ish)-labor cart-pushers (and/or enlist cashiers to push carts part-time as needed...)

I will say though that when I travelled to Sweden a few years ago, the coin-for-cart system was truly something to behold.


Many stores in my area have used the coin-in-cart thing for decades -- and only recently stopped doing that. I have yet to ask them why, but I am curious.

Maybe maintaining the locking mechanism costs more than paying people to collect carts?


My problem with EA, closely related to the points expressed by the author of this article, has always been this magical insistence on altruistic organizations that are both:

1. Extant outside the individual (e.g. "you", the reader, are not in an altruistic organization, but _somebody_ out there is; it's just not "you". So where is that somebody? And if it's not you, why do we always assume there will be a somebody?) and

2. Assumed to be more efficient than the individual joining the altruistic organization. Somehow the cap on good you can do is limited to marginal income, not about joining an organization and actually attempting to move the cause forward (whether through managerial efficiency, STEM work, or more).

Sure, if the choice is between doing nothing at all and donating marginal income, it should be obvious that donating marginal income helps. But that is simply drawing a trivial comparison between doing nothing and doing a very little bit.


I agree with you in that this aspect of EA has sometimes rubbed me the wrong way. I do see how EA writers get there though:

Statistically, it's very unlikely that you will be the first person to hear about a particular initiative or cause. Given that a lot of EA people follow similar media (some of which is focused specifically on evaluating the best up-and-coming EA opportunities), the moment that an organization gets written up as a credible and good opportunity, they will immediately have more volunteers than they will have posts for. So the experience of the median Effective Altruist is one of wanting to do good, but finding out that they're the 200th person calling to volunteer at an organization that really only needs 10 data-entry folks and 1 sysadmin.

So with that in mind, a lot of EA writing focuses on setting expectations: You probably won't get to be directly involved, but here are other ways you can help that might not feel as vindicating, but do do actual good. I think part of the reason the writing is oriented this way, is an expectation that there will always be a glut of hardcore volunteers who will call and volunteer even if you tell them not to. Those people don't need convincing, but the people who do need convincing are the ones who might drop out if they find they aren't going to be on the front lines. Those are the people who need instructions on what their best role will be.

You may think that sounds patronizing; it really kind of is. But I think the more interesting question is if it's Effective, and it might be.


> I think part of the reason the writing is oriented this way, is an expectation that there will always be a glut of hardcore volunteers who will call and volunteer even if you tell them not to. Those people don't need convincing, but the people who do need convincing are the ones who might drop out if they find they aren't going to be on the front lines.

The thing is, at least based on my own experience in volunteer organizations, this core of hardcore volunteers is a lot smaller than the number of volunteers actually needed. Once you actively discourage the more casual volunteers from directly attempting to help or organize, you end up making charity organizations less effective. I don't have the data to back this up, but this is my sense. A little bit of guilt can often be enough motivation for an individual to realize that they are in fact very good at helping run a charitable organization.


> I don't have the data to back this up

Then you're not going to be very convincing. Especially when you're being compared to the EAs.


> 2. Assumed to be more efficient than the individual joining the altruistic organization. Somehow the cap on good you can do is limited to marginal income, not about joining an organization and actually attempting to move the cause forward (whether through managerial efficiency, STEM work, or more).

In most fields of endeavour, markets and specialisation work better than trying to do something yourself. If someone wants a boat, they can build their own boat, and they might find this fulfilling; but if they want a good boat, they'd generally be much better off working to earn money and buying a boat with that money rather than building it themselves. (And if they're actually the rare person whose skills are more suited to boat building than to any other field of endeavour, then the best-paying job they end up getting will be in boat building.)


How do you know this? The usual market assumptions involve perfect information and efficient market clearing. It’s unclear to me that charitable initiatives have anything even close to perfect information, and the labor market is famously inefficient, especially given that careers are shaped by many inelastic factors and often by Veblenian effects more than by actual efficiency. Sure, assuming a perfect market, this would be true, but I’d argue that charity labor is specifically a case where markets are far from perfect. There’s a reason most capitalist governments offer so many tax incentives for charitable firms.


Honestly, if you make a good case for it being more effective, go for it.

I'm happy, however, that someone has done the research on where a donation does the most good, if I do want to donate.


Actually, I'd urge you to actually do something about it instead. Find a local cause that resonates with you or a disadvantaged population that you would like to assist and get involved. Skin in the game and all that ;)


> It’s unclear to me that charitable initiatives have anything even close to perfect information.

The whole point of EA efforts like GiveWell is to fix that.

> Sure, assuming a perfect market, this would be true, but I’d argue that charity labor is specifically a case where markets are far from perfect.

Why do you assume the market distortion is in the direction you'd like it to be? It's far more likely to go in the other direction: many people and organisations are willing to work for charities at below their normal cost, whereas people are unreasonably averse to giving money to a charity (compared to how they handle things where they actually care about the results), especially when it comes to donations that can be spent on things like improved organisational infrastructure, which multiplies effectiveness but isn't directly advancing the cause.

> There’s a reason most capitalist governments offer so many tax incentives for charitable firms.

Because they're vote-winners, there's no need to look for a deeper reason than that.


> The whole point of EA efforts like GiveWell is to fix that.

I mean, it doesn't seem like it is at all. Given the huge swath of political organizations it deems too complicated to analyze, I think GiveWell is far from perfect; it's just adding information to an area that has very little. It helps, sure, but I don't think we're even close to an EMH approximation.

> Why do you assume the market distortion is in the direction you'd like it to be?

I don't assume anything here. The point is, without an efficient market you cannot achieve Pareto optimal outcomes. You can't assign directions here; I'm not sure what a direction even means.

> Because they're vote-winners, there's no need to look for a deeper reason than that.

Look, at this point you're just asking me to believe that EMH applies out of faith. I'm sorry, but EMH is not an axiom, neither academically nor practically, and if you can't establish the preconditions for an efficient market, then we can't say we have one.


> I don't assume anything here.

Then on what grounds are you claiming that it's better for people to get directly involved in charitable organisations than get a high-paying job and give money?

> The point is, without an efficient market you cannot achieve Pareto optimal outcomes. You can't assign directions here; I'm not sure what a direction even means.

If there was an efficient market, we could assume that the relative value of charitable contributions of money and contributions of labour was approximately proportional to the normal market rate between them. There are two equal and opposite ways for that assumption to be false - either charities somehow have much easier access to labour than to money (and are unable to convert that labour into money at anything close to the usual market rate), or vice versa. One of those possibilities seems vastly more plausible than the other to me.

> Look, at this point you're just asking me to believe that EMH applies out of faith.

Not at all, I'm just asking you to not claim that the existence of tax incentives for charitable donations is any kind of evidence about the efficiency of charities. It isn't.


> There are two equal and opposite ways for that assumption to be false

Hm? There are many ways for pricing to occur if EMH is not true, not two equal and opposite ways (how are they opposite in magnitude?)

Anyway, I think this is not going to be a fruitful conversation going forward, since I'm not interested in cataloguing all the ways markets without Pareto-optimal outcomes can fail, so cheers.


Err, that's exactly what 80,000 Hours, the EA career advice organization, recommends. The recommendation to pursue a high-earning career comes last; they recommend pursuing research, policy, and joining altruistic organizations ahead of it.

You have to adjust the message to the audience, though. 80,000 Hours is aimed at college students and young grads; it doesn't make as much sense to emphasize these things when talking to a late-career software engineer.

I think the EA advice regarding donations is more of a lowest-common-denominator sort of thing. Not everyone can do research and policy work, but a lot of people have at least some disposable income.

The author of this article seems aware only of GiveWell and Giving What We Can; there are other EA orgs doing things beyond advocating charitable giving.


> I think the EA advice regarding donations is more of a lowest-common-denominator sort of thing. Not everyone can do research and policy work, but a lot of people have at least some disposable income.

I think this is where my opprobrium lies. A surprising amount of charitable work is very similar to the standard, non-charitable work folks do for themselves: managing logistics, managing organizations of people, and tracking and accounting for donations. Actual on-the-ground work (like distributing materials or staffing soup kitchens) and high-level research and policy work are a minority of the work to be done. Much like in a corporation, most of the work lies in middle management, which is obvious if you've ever spent a long stint volunteering for an organization. And then there's the possibility that people discover they are better at the kind of work charity involves than they thought, since most charitable work in individualist Western countries has very little public information associated with it. This is what I want to challenge among EA believers: much like a market with imperfect information, fit with charitable work may also be a case of imperfect information.


If you’re not particularly skilled in the things an organization needs, it may well be better just to donate. Charitable organizations often have more volunteers and staff than they have donations to leverage toward their goal. More volunteers at a well-staffed food bank won’t help if the food bank doesn’t have enough food to give out. I think a lot of people who work at charities (as you suggest) would agree with this.


Yes, asking the question “where should I donate money” limits scope, but it’s still an important question. It’s also easier to answer than job placement or new charity incubation, which depend a lot more on the individual.

There are plenty of important questions in life that nobody can answer unless they know you well.


For a more interesting critique of Effective Altruism and Peter Singer see: https://www.nybooks.com/articles/2015/05/21/how-and-how-not-...

Effective Altruism is a particularly base version of utilitarianism. It cheerfully accepts situations where you would be required to eat your grandmother or allow your child to starve in order to feed a village of people you've never met in another country. Peter Singer argues that it's okay to be a concentration camp guard because if you refuse, someone else will be found to take your position, and they might be more brutal than you. Do you believe the earth is overpopulated? If so, and if you cannot live in such a way as to offset the damage caused by your existence, effective altruism calls for suicide.


> Do you believe the earth is overpopulated? If so, and if you cannot live in such a way as to offset the damage caused by your existence, effective altruism calls for suicide.

I think that says a lot more about the belief that the Earth is overpopulated (and the idea that living in a way that offsets the damage is impossible) than about EA. There are several substantially misanthropic implications in this common belief that are not often well examined, and the fact that a properly rational approach (flaws with the EA approach aside) seems to point to a pretty drastic solution should cause us to re-examine these commonly held (and flippantly repeated) ideas.

People say so flippantly that the Earth is overpopulated and there’s nothing I can individually do to offset my negative contribution, and that is just sort of jaw-dropping to me. I don’t think that realization is helpful, and I certainly don’t think you should blame effective altruism for pointing out the logical consequences.


Whether or not the earth is overpopulated (and I agree it's not), it is plausible that there are people who, no matter what they do, will increase net suffering. Peter Singer is pro-euthanasia of course. Personally, I'd pick a moral philosophy that forbids liquidating useless people but that's just me.


I'm not sure I understand what net suffering is. It seems to me that every person born increases total suffering, but net?

What do you "net" it against? I guess you can equate "increase net suffering" to a concept of "increasing the average intensity of suffering per unit time across all people". But you can't net my pleasure against your suffering, can you?

Or say I spend 80 years of a perfect life, and then die of Alzheimers or a particularly painful cancer. Did I accumulate a "balance" that can be counted against someone else's relatively miserable life? Or did my illness and death mean that my average is really no better than someone whose life sucked but died suddenly of a heart attack?

If someone has great achievements and suffers from chronic mental illness that gives them huge ups and downs, can their suffering and pleasure be compared to someone who has neither? Which increased/decreased net suffering for the world or population?


> I'm not sure I understand what net suffering is. It seems to me that every person born increases total suffering, but net?

Utilitarians say there is only one moral dimension: utility. Suffering is negative utility. If you create more utility than you consume, you decrease total suffering (or increase total utility, which amounts to the same thing).
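Spelled out (my formalization, not anything Singer himself wrote): if b_i is the utility person i produces and c_i the utility they consume, the bookkeeping is

    U_{\text{total}} = \sum_i u_i, \qquad u_i = b_i - c_i

and a person "decreases total suffering" exactly when their net contribution u_i > 0.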

> But you can't net my pleasure against your suffering, can you?

According to Peter Singer you can. An objection to this kind of thinking is the utility monster: https://en.wikipedia.org/wiki/Utility_monster

> Or say I spend 80 years of a perfect life, and then die of Alzheimers or a particularly painful cancer. Did I accumulate a "balance" that can be counted against someone else's relatively miserable life? Or did my illness and death mean that my average is really no better than someone whose life sucked but died suddenly of a heart attack?

> If someone has great achievements and suffers from chronic mental illness that gives them huge ups and downs, can their suffering and pleasure be compared to someone who has neither? Which increased/decreased net suffering for the world or population?

I agree with these objections. To me, this kind of utilitarianism seems like a bizarre simplification. How can you weigh a sunny day against a panic attack?



I would ask the author of this post whether he would care if it turned out to be empirically true that an effective altruism strategy did more good in the world than some control strategy. Whether his theory is sound or not, would evidence to the contrary change his opinion? It seems like this should be possible to test, if that matters.


I don't believe the author actually understands effective altruism.

On a side note, if anyone is interested in the topic but hasn't read about it before, I highly recommend this debate with Will MacAskill to learn more :)

https://www.youtube.com/watch?v=Qslo4-DpzPs


> It is almost universally agreed that the persistence of extreme poverty in many parts of the world is a bad thing. It is less well-agreed, even among philosophers, what should be done about it and by who.

Not even the philosophers agree?! This must be a serious article...

Pass.


I share some of these concerns. I was reading this book about the Congo recently:

https://www.amazon.com/dp/B06XCJ62YJ/

It's a fascinating book for many reasons, but one relevant part is its interviews with locals who complain about the influence of NGOs in central Africa. It seems quite difficult to measure the success of an NGO in a country run by a dictatorship. There are cases where an organization starts working on a particular charitable cause, and the government immediately routes its own funding for that cause elsewhere. Or the only way for an NGO to operate is to pay high taxes to a corrupt government. So your money to these charities can essentially end up in the pockets of dictators, funding genocide.

I think about these cases when reading GiveWell's analyses of charities. Even things like mosquito nets - are we sure that these NGOs are really providing mosquito nets that wouldn't be provided otherwise? Or are they crowding out government spending, helping dictators free up more of their budgets for other things? Or are mosquito nets simply the most appealing of many causes, such that the wealthy donors backing GiveWell could easily fund the entire demand for them, and the cause is "kept alive" to enable GiveWell to raise more funds for other, less obvious causes?

It's hard to do much more analysis of charities whose actions are so remote. I wish there were a GiveWell equivalent focused just on giving to help the poor in California, not because I think the poor in California are so much more deserving than the rest of the world, but because I think we would be better able to observe which entities were spending money effectively.


> Even things like mosquito nets - are we sure that these NGOs are really providing mosquito nets that wouldn't be provided otherwise?

There's an analysis of that particular issue: https://www.givewell.org/print/international/top-charities/a...

It seems that they typically would be provided, but months to years later.


GiveWell does take into account what other organizations are doing with its evaluation of "room for more funding." For example, vaccines are pretty effective too, but apparently those efforts are well-funded already by other organizations with deep pockets?

I agree that it would be nice to see someone do evaluations that are restricted in scope to countries or regions.


1. It is difficult to have these sorts of discussions without first giving a reason for helping or giving at all. What is the reason for charity? Why should I give anything to anyone in need? (E.g., [0] and [1].) No bromides. You can't really offer any deep answers to the question of charity without a deep grasp of human nature. We are not essentially eating machines: food and shelter are basic needs, not ultimate ends.

2. I do not believe poverty can be "solved". To solve it would be to treat need and unfavorable circumstance as things that could be satisfied and remedied permanently, and no such guarantee can be made. This is not to say government cannot play a role, only that we should not view government as a substitute.

3. Charity, while good for the other and aimed at the good of the other, is also good for the giver. That is, in giving I become better myself in a very objective and intrinsic way, which is to say, regardless of whether someone else witnesses my charity (or rather, especially when no one witnesses or knows about it). Really, the kind of socialistic programs I see many promoting only reinforce consumerism: responsibility is outsourced to a government that forcibly takes part of my income and "takes care of the problem" so that I can carry on uninterrupted with my consumerist lifestyle. But charity offers a way of breaking that cycle.

4. It is a false dichotomy to pit the moral growth charity produces against the good of the other. A true charitable act is characterized both by sacrifice and by producing objective good. This requires knowledge: without knowledge there can be no love, because you cannot love what you do not know.

5. Charity is always an individual act (groups aren't agents), but this doesn't mean that people cannot cooperate toward a charitable end. That's what charitable organizations do: the organized individual sacrifices of each person contribute to some good.

6. The notion that charity must consist of some grandiose or glamorous, but ultimately distant, concern for some people on the other side of the planet is misguided. We could use more charity in our daily lives and interactions and relationships.

7. The notion of the common good is omitted from the article. It is good for a society that its people are not starving, for example.

[0] https://www.newadvent.org/summa/3023.htm [1] https://www.newadvent.org/cathen/03592a.htm


> But effective altruism supplies no plan for the elimination of poverty itself, and there is no way for a feasible plan for that goal to be developed and implemented by this method of reasoning at the margin.

Except that if you look at actual EA organizations, they are not “reasoning at the margin” when it comes to global poverty. GiveWell targets global health, which can absolutely be addressed with donations (vaccines, for instance, can eradicate diseases without any change to societal structure).

Open Philanthropy tackles more structural issues such as criminal justice reform, but it isn’t doing something “marginal” like donating to bail funds. Instead, it donates to organizations that build grassroots movements among crime victims and formerly incarcerated people, and it funds pilot programs for alternatives to prisons.


Total nonsense: effective altruism is by definition effective and altruistic. Altruism may be misguided, but then try harder. The unwritten implication is "don't do it". It's amazing the mental hoops people jump through to justify what they already do.


A solution to 90% of the article is maximizing average expected utility, which avoids several flaws of traditional utilitarianism.
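To make that concrete (my own formalization, not the article's): total utilitarianism ranks outcomes by the sum of utilities, while the average view ranks them by the mean, with an expectation taken over uncertainty:

    \text{total: } \sum_{i=1}^{n} u_i \qquad \text{average: } \mathbb{E}\left[\frac{1}{n} \sum_{i=1}^{n} u_i\right]

Adding a barely-worth-living life always raises the sum but can lower the average, which is how the average view sidesteps flaws like Parfit's repugnant conclusion (while having well-known problems of its own).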

GiveDirectly is a ground-up solution to the long-term problem of poverty.

Now I get very political, but I think it's also empirically true: global capitalism is the major force for extreme poverty in this century. Cheap labor is highly desirable and economically efficient for everyone except the poor. I think one convincing piece of evidence is how many countries are in favor of free trade agreements but not free movement (immigration, emigration) agreements, except between economic peers.

EDIT: The article does mention GiveDirectly but says UBI is a better solution. That may be strictly true, but UBI requires political will to implement, much like changes to immigration policy, whereas individual charity can deliver immediate benefits while political will shifts.


>global capitalism is the major force for extreme poverty in this century.

The complete opposite. Where do you think globalised capitalism gets its cheap labour from? Hint: not developed countries. And to get people to work, firms have to pay them something, and people aren't going to accept the jobs if the pay isn't enough to live on.

The empirical data also bears this out, as the other reply notes.

Now, whether capitalism is a force for a widening wealth gap and relative poverty in developed countries is a different question; but the "poverty" in that case is certainly much less "extreme."


> global capitalism is the major force for extreme poverty in this century.

On what basis are you claiming this ?

All the data clearly show that extreme poverty has declined significantly in this century:

https://ourworldindata.org/extreme-poverty

While the cause can be debated, it's quite clear that global trade has contributed significantly to reducing poverty in the world.


Exactly. Capitalism is ultimately playing a key role in perpetuating poverty.


Effective altruism is a net positive, but it’s not a panacea. It suffers from the same problems as central planning and communism.

Donors and nonprofits think they can calculate the most "effective" way to allocate funds: if only we can pack enough PhDs into this boardroom, we'll end poverty! Problem is, there's no space in the room for the real experts: the people actually in poverty.

Poverty is complex, and no group of people can predict all the externalities of trying to address it: bed nets used for fishing instead of malaria prevention, say, or behavior changing when the rich scientists come to town with their clipboards to conduct their RCTs.

I find it funny that capitalists are all about trusting the free market and self-interest when it comes to business and politics, but when it comes to philanthropy, they know best.

In reality, the solution is quite simple: just ask the people in poverty if and how they want help. Most of the time they'll likely ask for money or a job, which seems pretty reasonable to me and can be accomplished with very little overhead.


You might like GiveDirectly then, which is all about giving poor people money for them to spend as they see fit.

(There is more to it than that since distribution is nontrivial and they do scientific studies to see how well it works. After all, how do you prove it’s a good idea without studying it?)

It’s a recommendation of GiveWell and favorite of effective altruism types, at least as a benchmark.


One of the most popular EA charities [1] gives money to very poor people, with no strings attached, for precisely that reason. The people you're criticizing embraced your criticism many years ago; did you notice?

[1] https://www.givewell.org/charities/give-directly


I didn't read the article.

Fwiw, afaic the reason poverty persists is that it's a product of the market solution we've elected. For all practical intents and purposes it is necessary for poverty to exist, as it creates a class of people willing to invest their time at a deficit to themselves (relatively undervalued labor). It's more or less akin to introducing an artificial concentration gradient where there likely was none. Without that poverty, resources would become increasingly expensive, because the labor supply grows more reticent to work at a deficit once it is no longer existentially threatened. Additionally, demand increases as there is more liquidity and velocity. The most compromising feature of all this is the megalithic multinationals sinking all that money, which destroys the natural local circularity: all the money ends up in Amazon's, Nike's, and Coca-Cola's accounts.



