Given that we can train a transformer model by shoveling large amounts of inert text at it, and then use it to compose original works and solve original problems with nothing more than generic computing power, we can conclude that there's nothing special about what the human brain does.
All that remains is to come up with a way to integrate short-term experience into long-term memory, and we can call the job of emulating our brains done, at least in principle. Everything after that will amount to detail work.
If the brain only uses language like a sportscaster explaining post-hoc what the self and others are doing (experimental evidence 2003, empirical proof 2016), then what's special about brains is entirely separate from what language is or appears to be. It's not even like a ticker tape that records trades; it's like a disengaged, arbitrary set of sequences that have nothing to do with what we're doing (and thinking!).
Language is like a disembodied science-fiction narration.
> we can conclude that there's nothing special about what the human brain does
...lol. Yikes.
I do not accept your premise. At all.
> use it to compose original works and solve original problems
Which original works and original problems have LLMs solved, exactly? You might find a random article or stealth-marketing paper that claims to have solved some novel problem, but if what you're saying were actually true, we'd be flooded with original works and newly solved problems. So where are all these original works?
> All that remains is to come up with a way to integrate short-term experience into long-term memory, and we can call the job of emulating our brains done, at least in principle
What experience do you have that caused you to believe these things?
No, the burden of proof is on you to deliver. You are the claimant; you provide the proof. You made a drive-by assertion with no evidence or even arguments.
I also do not accept your assertion, at all. Humans largely function on the basis of desire-fulfilment, be that eating, fucking, seeking safety, gaining power, or any of the other myriad human activities. Our brains, and the brains of all the animals before us, have evolved for that purpose. For evidence, start with Skinner or the millions of behavior-analysis studies done in that field.
Our thoughts lend themselves to those activities. They arise from desire. Transformers have nothing to do with human cognition because they do not contain the basic chemical building blocks that precede and give rise to human cognition. They are, in fact, stochastic parrots that can fool others, like yourself, into believing they are somehow thinking.
[1] Libet, B., Gleason, C. A., Wright, E. W., & Pearl, D. K. (1983). Time of conscious intention to act in relation to onset of cerebral activity (readiness-potential). Brain, 106(3), 623-642.
[2] Soon, C. S., Brass, M., Heinze, H. J., & Haynes, J. D. (2008). Unconscious determinants of free decisions in the human brain. Nature Neuroscience, 11(5), 543-545.
[3] Berridge, K. C., & Robinson, T. E. (2003). Parsing reward. Trends in Neurosciences, 26(9), 507-513. (This paper reviews the "wanting" vs. "liking" distinction, where unconscious "wanting" or desire is driven by dopamine).
[4] Kavanagh, D. J., Andrade, J., & May, J. (2005). Elaborated Intrusion theory of desire: a multi-component cognitive model of craving. British Journal of Health Psychology, 10(4), 515-532. (This model proposes that desires begin as unconscious "intrusions" that precede conscious thought and elaboration).
If anything, your citation [1], along with subsequent fMRI studies, backs up my point. We literally don't know what we're going to do next. Is that a hallmark of cognition in your book? The rest are simply irrelevant.
> They are, in fact, stochastic parrots that can fool others, like yourself, into believing they are somehow thinking.
What makes you think you're not arguing with one now?
You are not making an argument; you are just making assertions without evidence and then insisting the burden of proof is on us to show why not.
If you went walking down the street yelling, without evidence, that the world is run by a secret cabal of reptile-people, you would rightfully be declared insane.
Our feelings and desires largely determine the content of our thoughts and actions. LLMs do not work that way.
Whether I am arguing with a parrot or not has nothing to do with cognition. A parrot being able to usefully fool a human has nothing to do with cognition.
I was just saying that it's fine if you don't accept my premise, but that doesn't change the reality of the premise.
Solving International Math Olympiad problems qualifies as solving original problems, for example. If you disagree, that's a case you have to make. Transformer models are unquestionably better at math than I am. They are also better at composition, and will soon be better at programming if they aren't already.
Every time a magazine editor is fooled by AI slop, every time an entire subreddit loses the Turing test to somebody's ethically questionable 'experiment', every time an AI-rendered image wins a contest meant for human artists -- those are original works.
Heck, looking at my Spotify playlist, I'd be amazed if I haven't already been fooled by AI-composed music. If it hasn't happened yet, it will probably happen next week, or maybe next year. Certainly within the next five years.
> Solving International Math Olympiad problems qualifies as solving original problems, for example. If you disagree, that's a case you have to make. Transformer models are unquestionably better at math than I am. They are also better at composition, and will soon be better at programming if they aren't already.
No, it does not. You're just telling me you've never seen what these problems are like.
> Every time a magazine editor is fooled by AI slop, every time an entire subreddit loses the Turing test to somebody's ethically-questionable 'experiment', every time an AI-rendered image wins a contest meant for human artists -- those are original works.
That's such an absurd logical leap. If you plagiarize a paper and it fools your English teacher, you did not produce an original work. You fooled someone.
> Heck, looking at my Spotify playlist, I'd be amazed if I haven't already been fooled by AI-composed music.
Who knows, but you've already demonstrated that you're easy to fool, since you've bought all the AI hype and seem to be unwilling to accept that an AI CEO or a politician would lie to you.
> If it hasn't happened yet, it will probably happen next week, or maybe next year. Certainly within the next five years.
I can pull numbers out of my ass too, watch! 5, 18, 33, 1, 556. Impressed? But jokes aside, guesses about the future are not evidence, especially when they're based on nothing but your own misguided gut feeling.
No, they don't. Humans also know when they are pretending to know what they are talking about -- put said people against the wall and they will freely admit they have no idea what the buzzwords they are saying mean.
WTAF? Maybe you're new here, but the term "hallucinate" came from a very human experience, and was only co-opted recently by "AI" bros who wanted to anthropomorphize a tin can.
> Humans also know when they are pretending to know what they are talking about -- put said people against the wall and they will freely admit they have no idea what the buzzwords they are saying mean.
> Machines possess no such characteristic.
"AI" will say whatever you want to hear to make you go away. That's the extent of their "characteristic". If it doesn't satisfy the user, they try again, and spit out whatever garbage it calculates should make the user go away. The machine has far less of an "idea" what it's saying.