
There were dozens of 20th-century ideological movements that developed their own forms of "Newspeak" in their own native languages. By and large, natural dialog between native speakers, and especially among those opposed to the prevailing regime, recoils violently at stilted, official, or just "uncool" usages in the daily vernacular. So I wouldn't be too surprised to see a sharp downtick in the popular use of any word that gets caught in an LLM's positive-feedback loop.

Far from saying the pool of language is now polluted, I think we now have a great data set for beginning to discern authentic from inauthentic human language. Although, sure, people on the fringes could get caught in a false positive and flagged as bots, like you or me.

The biggest LLM of them all is the daily driver of all new linguistic innovation: human society, in all its daily interactions. The quintillions of phrases exchanged and forever mutating around the globe, each one interacting with its interlocutor and drawing not from the last 500,000 tokens but from the entire multi-modal, if you will, experience of each human's life to date, vastly eclipse anything any hardware could emulate under current energy constraints.

Software LLMs are just state machines stuck in a moment in time. At best they will always lag, the way Stalinist language lagged years behind the patois of average Russians, who invented daily linguistic dodges to subvert and mock the regime. The same process takes place anywhere there is a dominant official or uncool accent or phrasing: the ghetto invents new words and new rhythms, and then they become cool in the middle class. The authorities never catch up, precisely because subversive language is humanity's immune system against authority.

If there is one distinctly human trait, it's sniffing out anyone who sounds suspiciously inauthentic. (Sadly, it's also the trait that leads to every kind of conspiracy theorizing imaginable, but even that probably confers an evolutionary advantage in some cases.) Sniffing out the sound of LLMs is already happening, and it will accelerate geometrically, much faster than new models can be trained.




Really insightful.

I'm a little more cautious, though. I think GPT will be far more integrated, simply because it's useful. Stalinist language was artificial in the sense that it was imposed from outside for no good reason. When you wanted to get real things done (talking to close friends, being productive with colleagues, and so on), you wouldn't use socialist newspeak, because it got in the way. GPT will also be imposed by the outside world, but being able to converse with a language model is genuinely useful; you'll do it every day at work, when buying things, and when using your phone or PC.

Also, unlike in Soviet times, much of our communication today is online and visible. It would not surprise me if we develop a model that trains continuously on the firehose. Text is small. A rough estimate of the data rate if every person on Earth spoke simultaneously:

- 150 words per minute spoken

- 150 words × (5 characters/word + 1 space) = 150 × 6 = 900 characters per minute

- 1 byte per char = 900 bytes/min = 15 bytes/sec

- 15 bytes/sec × 8,000,000,000 people speaking continuously = 120 gigabytes/second

That's a lot, but it's less than the memory bandwidth of a single consumer GPU.
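For anyone who wants to poke at the numbers, here's a quick sanity check of the arithmetic; the inputs (150 wpm, 5 characters per word, 1 byte per character, 8 billion speakers) are the same back-of-envelope assumptions as above, not measured values:

    # Back-of-envelope check of the estimate above; all inputs
    # are the comment's assumptions, not measurements.
    WORDS_PER_MIN = 150
    CHARS_PER_WORD = 5 + 1        # 5 characters plus one space
    BYTES_PER_CHAR = 1            # plain ASCII text
    POPULATION = 8_000_000_000

    per_person = WORDS_PER_MIN * CHARS_PER_WORD * BYTES_PER_CHAR / 60
    total_gb = per_person * POPULATION / 1e9

    print(f"{per_person:.0f} bytes/sec per person")        # -> 15
    print(f"{total_gb:.0f} GB/sec for everyone at once")   # -> 120

For scale, a current high-end consumer GPU's memory bandwidth is on the order of 1 TB/s, so the whole firehose fits with room to spare.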


Humans also lag humans: the future may already be spoken, but the slang is not evenly memed out yet.



