Well then, that sounds like a case against regulation. Regulation would guarantee that only the biggest, meanest companies control the direction of AI, and all the benefits of increased resource extraction would flow upward exclusively to them. Whereas if we forgo regulation (at least at this stage), then decentralized and community-federated versions of AI have as much of a chance to thrive as the corporate variants do, at least insofar as they can afford some base level of hardware for training (and some benevolent corporations may even open-source model weights as a competitive advantage against their malevolent competitors).
It seems there are two sources of risk for AI: (1) increased power in the hands of the people controlling it, and (2) increased power in the AI itself. If you believe that (1) is the most existential risk, then you should be against regulation, because the best way to mitigate it is to allow the technology to spread and prosper amongst a more diffuse group of economic actors. If you believe that (2) is the most existential risk, then you basically have no choice but to advocate for an authoritarian world government that can stamp out any research before it begins.
Microsoft, or perhaps the Vanguard Group, might have a different view of the future than yours.