What nightmare scenario are you envisioning? Now stop, go back, and examine who the bad actor is in that scenario.
Leaving such powerful tools in the hands of secretive organizations or uber-wealthy individuals is a guarantee for bad outcomes. Distributing the tools, the know-how required to build and service them, and the benefits of their use -- that's the antidote.
A much better model to compare to is a medical one.
Yes, the knowledge about how to perform medical practices should be open and shared for the benefit of the world.
However, laws and practices should be put in place to prevent harm by incompetent or malicious people.
It’s not rocket science.
You don’t need a PhD to look at a deepfake and go “well, gee, I guess this could be problematic”. Did you see the recent coverage about teenagers generating porn of their classmates?
Come on.
It’s not about the end of the world, it’s about preventing all kinds of harmful stuff (e.g. the above).
You can’t have your cake and eat it too.
Either (a) you make everything available to everyone and wear the consequences; or (b) you restrict things so not everyone can have them; or (c) you make it illegal to use them without an appropriate license.
The idea that (a) (i.e. just go wild!) is the best outcome is naive, utopian and deeply misguided.
Yes, it’s complicated; but the “Illuminati control the world” stuff is just crazy, stupid conspiracy theorizing.
Do what you want privately, but you need to be certified to have a business that can legally run AI models with capabilities > X?
It’s not outrageous.
Worried that all AI would run entirely behind the APIs of a few businesses? A licensing regime would be effectively the same thing, except you could choose to do it yourself if you were willing to meet the legal requirements.
Like banks. Or insurance.
Outrageous, I know, but … that’s the world we live in, and it’s not nearly the “nightmare” some people seem to think.
Some rando figures out a really, really clever agentic-behavior prompt and open-sources it. Some other rando figures out a way to let GPT2 generate its own training data and iteratively loop on it, and open-sources it. Some third rando figures out a way to make much more efficient use of limited examples and open-sources it. Some fourth rando puts them all together, sets them running on Azure, and goes to sleep. What happens after that point depends on whether you think enzyme-bootstrapped nanotech is physically possible, but at any rate it’s no longer our game.
The bad actor in this scenario is, of course, Yann LeCun, and no I am not joking.
If you believe this is a possibility, what makes you believe laws restricting the technology to certain corporate or government entities will prevent that scenario? The exact same scenario is still very possible, only now many additional (and more realistic) bad scenarios become possible as well, such as technocrat-dictatorship scenarios.
It won't prevent it, but it will reduce the likelihood enormously.
I don't think the technocrat-dictatorship risk comes anywhere close to outweighing the rando-experimenter risk. Also, it's not like we don't face the dictatorship risk if randos have AI too; "Russia has nukes" is not solved by "actually, some civilians also have nukes", except in the sense that they may kill us first, and thus relieve us of having to worry about Russia.
What prevents a random guy (or team) at some corporation from performing all those steps deliberately, in the earnest belief it would "improve the world"[1], or at least increase the employer's profits and bag them bonuses and/or promotions?
1. Say, if it shows promising early signs of developing a cure for cancer.