Plausible deniability to avoid liability. Consider that:
1. We're in new legal territory with these models.
2. Many people lack the intellectual maturity to accept speech they dislike. This includes regulators, activists, advertisers, etc.
3. The alignment people loudly repeat their alarmist sci-fi stories to laypeople.
4. People hate Big Tech.
These circumstances don't exactly exude stability, so companies react with caution. I don't blame them.
The situation will settle down. Case law will establish that LMs don't produce derivative work any more than humans do. Discourse about free speech & personal discretion with these tools will soften outrage. The immense utility of LMs will build appreciation. As this technological revolution rolls out slowly over many years without major catastrophe, public panic will die down regarding sudden mass unemployment, bad actors creating misinformation wars or doomsday bugs, etc.
Today remains the best time to exist in human history. That clear trend will continue. Our imaginations will again vastly outpace the pedestrian, steady, wonderful march of progress. As this moment settles down, so too will companies regarding aggressively thought-policing AI. But it'll be a hot minute. ;)
This is not about people; this is about journalists.
Almost everybody who can use an LLM understands its results shouldn't be taken literally. Still, it takes only a couple of clickbaity "parents concerned about ChatGPT turning their children into nazis" stories to move public opinion against LLMs.
There are a lot of serious open legal questions surrounding these models, so preventing these stories is in the best interests of Microsoft.
If ChatGPT had existed in 2016, the media would have blamed it for Trump getting elected.
People in my state were using GPT-3 to write letters to our representatives, citing completely fake gun laws and precedents to express their disapproval of a candidate, along with "proof" they're infringing on the 2nd Amendment. None of them listened when I told them that the data and the sources were completely falsified.
People already use it as a harmful bias-reinforcement tool.