…I mean, it was advancing slowly on linguistic tasks until late 2022, that’s fair. That’s why we’re in such a crazy, unexpected rollercoaster of an era - we accidentally cracked intuitive computing while trying to build the best text autocomplete.
AI in general dates from 1950, or more broadly from whenever the abacus was invented. This very website runs on AI, and always has. I would implore us to speak more precisely if we’re criticizing stuff: “LLMs” came around (in force) in late 2022, both for coherent language use (ChatGPT, running GPT-3.5) and image generation (DALL·E 2). The predecessors were an order of magnitude less capable, and going back 5 years puts us back in the era of “chatbots”, aka dumb toys that could barely string together a Reddit comment on /r/subredditsimulator.
AI so far has given us the ability to mass-produce shit content of no use to anybody, plus the next iteration of customer-support phone menu trees that sound more convincing yet remain just as useless. That, and another round of IP theft and mass surveillance in the name of progress.
This is a consequence of a type of cognitive bias (essentially the availability heuristic): bad examples of AI are more easily detectable than good examples of AI. Consequently, when we recall examples of AI content, the bad examples are the most accessible, which leads to the faulty conclusion that:
> AI so far has given us the ability to mass-produce shit content of no use to anybody
Good AI goes largely undetected, for the simple reason that it closely matches the distribution of non-AI content.
Controversial aside: this is the same bias that results in non-passing trans people being seen as representative of the whole. Passing trans folk simply blend in.
This basic concept can be applied in many places. Do you ever wonder why social movements never seem to work out and demands are never met? That’s because when they do work out, and the demands are met, those people quickly become the “oppressor”, the powerful class from whom others fight to win more rights or money.
All criminals seem so incredibly stupid that you can’t understand why anyone would ever try, since they all get caught? The smart ones don’t get caught, and no one ever hears about them.
You're making an unverifiable claim. How are we supposed to know that this undetected good AI exists at all? Everything I've seen that was explicitly produced by any of these models is still in uncanny valley territory, even the "good" stuff.
Verificationism[1] is a failed epistemology because it breaks under the Münchhausen trilemma. It's pseudo-scientific, like astrology, the four humors, and palm reading. Use better epistemologies.