OpenAI is heavily influenced by big-R Rationalists, who fear the consequences of a misaligned AI being given the power to do harmful things.
When they first raised this, lots of people dismissed it with "let's just keep the AI in a box", and even last year the refrain was "what's so hard about an off switch?".
The problem with any model you can simply download and run is that some complete idiot will do exactly that and give the AI agency it shouldn't have. Fortunately, for now the models are more of a threat to their users than to anyone else — lawyers who use them to do lawyering without checking the results and lose their law licences, and so on.
But that doesn't mean open models aren't a threat to people besides their users. Ask the artists losing work to Stable Diffusion, the law-enforcement people concerned about illegal porn, the election-interference specialists worried about propaganda, anyone trying to use a search engine, or that research lab which found a huge number of novel nerve-agent candidates whose precursors aren't all listed as dual-use — each will tell you so for a different reason.
> Fortunately, for now the models are more of a threat to their users than anyone else
Models have access to users, and users have access to dangerous stuff. Seems like we're already vulnerable.
The AI splits a task into two parts and gets two different people to execute one part each, neither knowing the combined effect. This was a scenario in one of Asimov's robot novels, though there the roles were reversed.
AI models exposed to the public at large are a huge security hole. We'll have to live with the consequences; there's no turning back now.
My impression is that OpenAI was founded by true believers with the best intentions, whose hopes were ultimately sidelined by the inexorable crush of business and finance.