Well, for starters, take Musk. As an investor (or rather, the lead capital provider) in a non-profit, he was entirely blindsided by the decision.
And before anyone criticizes me for defending his viewpoint, consider this: if Musk's $100 million investment were replaced by 100k HN users each investing $1k into a non-profit focused on safe and "open AI", do you think they would be uniformly happy with this decision?
You can perfectly well have a powerful AI if you have the investment backing (like what Musk provided). You can develop it out in the open, like most software non-profits do (case in point: the Linux Foundation), instead of being Microsoft's slave.
An open AI is inherently safe, since it can be easily ripped apart and thoroughly studied for exploitable points, unlike some closed black-box system. Making OpenAI a closed system does nothing to reduce the number of bad actors - they will all choose to exploit OpenAI's system once given the opportunity.
>An open AI is inherently safe, since it can be easily ripped apart and thoroughly studied for exploitable points, unlike some closed black-box system. Making OpenAI a closed system does nothing to reduce the number of bad actors - they will all choose to exploit OpenAI's system once given the opportunity.
Replace the word "AI" with "ultra-deadly and contagious bioweapon" and it becomes clearer why being "open" is itself a danger, for those who can't zero-shot understand it.
That's precisely the point of it being open - it makes it easier to understand its points of failure, rather than treating the opacity of the black box as if it were a safety feature. An open-source bioweapon (if such a thing existed) would be less dangerous than a secret one, simply because once it was out in the open, its points of failure would already have been studied.