(This discussion is quite nuanced, so I apologize in advance for any uncharitable interpretations I may make.)
> I'm not really trying to rebut Michael's argument -- I think it's true, to an extent, some of the time. But I think it's more true more of the time in the reverse direction.
I understand you to be saying:
Michael (M): Pro-AI-capabilities people are ignoring AIXR ideas because they are very excited about benefiting from (the funding of) future AI systems.
Reverse Direction (RD): AI notkilleveryoneism people are embracing AIXR ideas because they are very excited about benefiting from the funding of AI safety organizations.
And that (RD) is more frequently true than (M).
IMO both (RD) and (M) are true in many cases. IME it seems like (M) is true more often. But I haven't tried to gather any data and I wouldn't be surprised if it turned out to actually be the other way.
> So I don't think it's a good argument.
I might be misunderstanding you here because I don't see Michael making an argument at all. I just see him making the assertion (M).
> And more importantly, I think it fails to properly grapple with the ideas, instead using an ad hominem approach to discarding them somewhat thoughtless.
I am ambivalent toward this point. On one hand, Michael is just making a straightforward (possibly false) empirical claim about the minds of certain people (specifically, a claim of the form: these people are doing X because of Y). It might really be the case that people are failing to grapple with AIXR ideas because they are so excited about benefiting from future AI tech, and if that were the case, it would seem like exactly the sort of thing worth pointing out.
But OTOH, he doesn't produce an argument against the claim "AIXR is just marketing hype," which is unfair to anyone who has genuinely come to that conclusion via careful deliberation.
> On your last point, I do think it's important to note, and reflect carefully on, the extremely high overlap between those funding ai notkilleveryoneism and those funding capabilities development.
Thanks for pointing this out. Indeed, why are people who profess that AI has a not-insignificant chance of killing everyone also starting companies that do AI capabilities development? Maybe they don't believe what they say and are just trying to get exclusive control of future AI technology. IMO there is a significant chance that some parties are doing just that. But even if that is true, it might still be the case that ASI is an XR.