OpenPhil and GiveWell are not the same organization and don't support the same causes, but Holden is definitely the same person. If you spend your career arguing that charities need to be more open, evidence-based, and focused on cost-effectiveness and metrics, then the fact that you sit on the board of a nonprofit considerably more opaque than any of them is relevant to the merits of those arguments.

(And OpenAI was unambiguously well funded and likely to become financially sustainable at the time, as OPP acknowledged when they explained that they put the money in for the board seat.)

Sure, you can't prove you've prevented the Singularity from happening in 40 years (not even if you're talking about the one predicted in the '80s!). But if it's reasonable for EAs to insist that charities distributing food or drugs already widely proven to work conduct RCTs, or at least put a cost-benefit analysis on their website, I don't think we can look past the fact that the web page for a tax-deductible cause whose governance they're actually responsible for is a marketing page for NLP tools, with absolutely nothing about how donor dollars are spent...

See, I can buy the argument that certain types of non-commercial AI research will be significant boons for humanity, but the same argument applies to lots of unproven development interventions too, and that directly undercuts the cost-effectiveness ethos.




> if it's reasonable for EAs to insist that charities distributing food or drugs already widely proven to work conduct RCTs

"Already widely proven to work" seems the stretch here. Lots of things were believed to obviously work that turned out not to.

> See, I can buy the argument that certain types of non-commercial AI research will be significant boons for humanity, but the same argument applies to lots of unproven development interventions too, and that directly undercuts the cost-effectiveness ethos

Sure, but I think the argument isn't "we should only fund things that are proven cost-effective" so much as "we should find out whether the things we are funding are cost-effective, where possible". The position isn't that only demonstrably cost-effective things deserve funding; it's that it's silly for cost-effectiveness to be irrelevant to what gets funded. It's not that we must always gather data and run trials; it's that we don't do it even when we could.

I think part of what is happening is that effective charity is based primarily on arguments. For instance, a bunch of people were convinced by the argument that a certain charity (bed nets) was the most cost-effective; that argument was based on cost-benefit analysis. Then a bunch of people were convinced by the argument that a certain charity (AI safety) was the most cost-effective; that argument was based on what seemed to them a plausible forecast of absurd benefits. It seems likely to me that these people already believed an AI singularity was coming; they just had a bunch of diffuse thoughts about whether it would be beneficial, and the arguments involved (such as Bostrom's Superintelligence) convinced them that the effort being put into making it beneficial was enormously underscaled.

So I think the EA field is more "people convinced by logical arguments" than "people convinced by RCTs": the people convinced by RCTs are convinced because RCTs meet their standard for an argument, not because RCTs are the only thing that does.



