Hacker News

Waters are often muddied here by our own psychology. We (as a species) tend to ascribe intelligence to things that can speak. Even more so when someone (or some thing, in this case) cannot just speak, but articulate well.

We know these are algorithms, but how many people fall in love or make friends over nothing but a letter or text message?

Capabilities for reasoning aside, we should all be very careful of our perceptions of intelligence based solely on a machine's or algorithm's apparent ability to communicate.




>we should all be very careful of our perceptions of intelligence based solely on a machine's or algorithm's apparent ability to communicate.

I don't think that's merely an irrational compulsion. Communication can immediately demonstrate intelligence, and I think it quite clearly has, in numerous ways. The benchmarks out there cover a reasonable range of measurements that aren't subjective, and there are clear yes-or-no answers to whether the communication is showing real ways to solve problems (e.g. changing a tire, writing code, solving word problems, critiquing essays), where the output proves it in the first instance.

Where there's an open question is in whether you're commingling the notion of intelligence with consciousness, or identifying intelligence with AGI, or with "human like" uniqueness, or some other special ingredient. I think your warning is important and valid in many contexts (people tend to get carried away when discussing plant "intelligence", and earlier versions of "AI" like Eliza were not the real deal, and Sophia the robot "granted citizenship" was a joke).

But this is not a case, I think, where it's a matter of intuitions leading us astray.


> Where there's an open question is in whether you're commingling the notion of intelligence with consciousness

I’m absolutely commingling these two things and that is an excellent point.

Markov chains and other algorithms that can generate text can give the appearance of intelligence without any kind of understanding or consciousness.
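To make the point concrete, here is a minimal sketch (plain Python, names my own) of the kind of Markov-chain text generator being described: it maps each word to the words that followed it in a corpus, then walks those links at random. It produces plausible-looking word sequences with no understanding at all.

```python
import random

def build_chain(text):
    """Map each word to the list of words that follow it in the corpus."""
    words = text.split()
    chain = {}
    for a, b in zip(words, words[1:]):
        chain.setdefault(a, []).append(b)
    return chain

def generate(chain, start, length=10, seed=0):
    """Walk the chain from `start`, picking a random successor each step."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = chain.get(out[-1])
        if not followers:  # dead end: no word ever followed this one
            break
        out.append(rng.choice(followers))
    return " ".join(out)
```

The chain only records word-to-word statistics; any appearance of meaning in its output is supplied entirely by the reader.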

I’m not personally certain that consciousness is even requisite for intelligence, given that as far as we know consciousness is an emergent property stemming from some level of problem solving ability.


This seems like the classic shifting of goalposts to determine when AI has actually become intelligent. Is the ability to communicate not a form of intelligence? We don't have to pretend like these models are super intelligent, but to deny them any intelligence seems too far for me.


My intent was not to claim communication isn’t a sign of intelligence, but that the appearance of communication and our tendency to anthropomorphize behaviors that are similar to ours can result in misunderstandings as to the current capabilities of LLMs.

glenstein made a good point that I was commingling concepts of intelligence and consciousness. I think his commentary is really insightful here: https://news.ycombinator.com/item?id=42912765


AI certainly won't be intelligent while it has episodic responses to queries with no ability to learn from or even remember the conversation without it being fed back through as context. This is currently the case for LLMs. Token prediction != intelligence, no matter how intelligent it may seem. I would say adaptability is a fundamental requirement of intelligence.
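The statelessness being described can be sketched in a few lines of plain Python (the model function is a hypothetical stand-in, not any real API): the "model" sees only the prompt it is handed, so each turn must re-send the entire transcript for the conversation to appear continuous.

```python
def stateless_model(prompt):
    """Hypothetical stand-in for an LLM call. It retains nothing between
    calls and sees ONLY the prompt string passed in this invocation."""
    return f"({len(prompt.split())} tokens seen)"

def chat_turn(history, user_message):
    """One chat turn: append the user message, re-send the WHOLE
    transcript as context, and append the model's reply."""
    history = history + [f"User: {user_message}"]
    prompt = "\n".join(history)  # full conversation fed back every turn
    reply = stateless_model(prompt)
    return history + [f"Assistant: {reply}"]
```

Any "memory" lives in the transcript the caller keeps, not in the model; drop the history and the model has no recollection of the previous turn.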


>AI certainly won't be intelligent while it has episodic responses to queries with no ability to learn from or even remember the conversation without it being fed back through as context.

Thank God no one at the AI labs is working to remove that limitation!


And yet, it is still a current limitation and relevant to all current claims of LLM intelligence.


The guy in Memento is clearly still an intelligent human despite having no memory. These arguments always strike me as coming from a "humans are just special okay!" place. Why are you so determined to find some way in which LLMs aren't intelligent? Why gatekeep so much?


I mean, humans have short-term and long-term memory; short-term memory is just our context window.





