
Isn't that exactly how humans learn to respond to stimuli? Don't we just try to predict the best next response to everything? Yes, it's statistics, but the fun part is that nobody is writing this statistical function by hand.



LLMs don't have a concept of "best", only of "most likely" given what they've been trained on.

I think LLMs ultimately just take imitation to a creative and sophisticated extreme. And imitation simply doesn't comprise the whole of human intelligence at all, no matter how much it is scaled up.

The sophistication of the imitation has some people confused and questioning whether everything can be reduced to imitation. It can't.

The ability to imitate seeking a goal isn't identical to the ability to seek a goal.

The ability to imitate solving a problem isn't identical to the ability to solve a problem.

Imitation is very useful, and the reduction of everything to imitation is an intriguing possibility to consider, but it's ultimately just wrong.


You need to think deeper.

There are levels of sophistication in "imitation". It follows a gradient. At the low end of this gradient is a bad imitation.

At the high end of this gradient is a perfect imitation. Completely indistinguishable from what it's imitating.

If an imitation is perfect, then is it really an imitation?

If I progressively make my imitation more and more accurate, am I progressively building an imitation, or am I progressively building the real thing?

See what's going on here? You fell for a play on words. It's a common trope: sometimes language and vocabulary actually trick the brain into thinking in a certain direction. The word "imitation" is clouding your thoughts.

Think about it. A half built house can easily be called an imitation of a real house.


Ok, so now we need an example that separates humans from LLMs?

I struggle to think of one, maybe someone on HN has a good example.

E.g. if I'm in middle school and learning quadratic equations, am I imitating solving the problem by plugging in the coefficients? Or am I understanding it?

Most of what I see coming out of ChatGPT and Copilot could be said to be either. If you're generous, it's understanding. If not, it's imitation.
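
To make that quadratic example concrete, here's a minimal sketch (plain Python; the function name and numbers are just illustrative) of the purely mechanical "plug in the coefficients" procedure. Whether following steps like these counts as imitation or understanding is exactly the question:

    # The mechanical procedure: given a*x^2 + b*x + c = 0,
    # apply the quadratic formula without any "understanding".
    import cmath

    def solve_quadratic(a, b, c):
        # cmath keeps this working even when the discriminant is negative
        d = cmath.sqrt(b * b - 4 * a * c)
        return (-b + d) / (2 * a), (-b - d) / (2 * a)

    print(solve_quadratic(1, -3, 2))  # x^2 - 3x + 2 = 0 -> roots 2 and 1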


It is very easy to separate humans from LLMs. Humans created math without being given all the answers beforehand. LLMs can't do that yet.

When an LLM can create math to solve a problem, we will be much closer to AGI.


Some humans created maths. And it took thousands of years of thinking and interaction with the real world.

Seems like goalpost moving to me.

I think the real things that separate LLMs from humans at the moment are:

* Humans can do online learning. They have long-term memory. I guess you could equate evolution to the training phase of AI, but LLMs still don't seem to have quite the same online learning capabilities as us. This is probably what prevents them from doing things like inventing maths.

* They seem to be incapable of saying "I don't know". OK, to be fair, lots of humans struggle with this too! I'm sure this will be solved fairly soon though.

* They don't have a survival instinct that drives proactive action. Sure you can tell them what to do but that doesn't seem quite the same.


Interestingly, some humans will admit to not knowing, but are allergic to admitting being wrong (and can get fairly vindictive if forced to admit it).

LLMs actually admit to being wrong easily, but they aren't great at introspection and confabulate too often. Also, their metacognition is still poor.


I guess LLMs don't have the social pressure to avoid admitting errors. And those sorts of interactions aren't common in text, so they don't learn them strongly.

Also ChatGPT is trained specifically to be helpful and subservient.


About this goalpost moving thing. It's become very popular to say this, but I have no idea what it's supposed to mean. It's like a metaphor with no underlying reality.

Did a wise arbiter of truth set up goalposts that I moved? I guess I didn't get the memo.

If the implied claim is "GPT would invent math too given enough time", go ahead and make that claim.


> Did a wise arbiter of truth set up goalposts that I moved?

Collectively, yes. The criticism of AI has always been "well, it isn't AI because it can't do [thing just beyond its abilities]".

Maybe individually your goalpost hasn't moved, and as soon as it invents some maths you'll say "yep, it's intelligent" (though I strongly doubt it). But collectively the naysayers in general will find another reason why it's not really intelligent. Not like us.

It's very tedious.


Other than complaining about perceived inconsistencies in others' positions, what do you actually believe? Do you think GPT is AGI?


No. I don't think anyone seriously believes that. AGI requires human-level reasoning, and it hasn't achieved that, despite what benchmarks show (they tend to focus on "how many did it get right" more than "how many did it fail in stupid ways").

The issue with most criticism of LLMs w.r.t. AGI is that it comes up with totally bogus reasons why this isn't and can't ever be real intelligence.

It's just predicting the next word. It's a stochastic parrot. It's only repeating stuff it has been trained on. It doesn't have quantum microtubules. It can't really reason. It has some failure modes that humans don't. It can't do <some difficult task that most humans can't do>.

Seems to be mostly people feeling threatened. Very tedious.


You can ask ChatGPT to solve maths problems which are not in its training data, and it will answer an astonishing number of them correctly.

The fact that we have trained it on examples of human-produced maths texts (rather than through interacting with the world over several millennia) seems more like an implementation detail than a piece of evidence about whether it has “understood” or not.


They also get problems wrong, in the dumbest ways possible. I've tested this many times: the LLM got most of the more 'difficult' part of the problem right, but then forgot to do something simple in the final answer, and not in a way that resembles a simple human mistake. It's incredibly boneheaded, like forgetting to apply the coefficient it solved for and just returning the initial value from the problem. Sometimes, for coding snippets, it says one thing and then produces code that doesn't even incorporate the thing it was talking about. It is clear that there is no actual conceptual understanding going on. I predict the next big breakthroughs in physics will not be made by LLMs, even with the advantage of being able to read every single paper ever published, because they cannot think.


> LLMs don't have a concept of "best", only of "most likely" given what they've been trained on.

At temperature 0 they effectively produce the token that maximizes a weighted sum of base-LM probability and model reward.
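
As a rough illustration of what temperature 0 does mechanically (toy numbers and made-up vocabulary, not any real model): the softmax distribution collapses onto the highest-scoring token as temperature goes to zero, so decoding reduces to picking the single most likely token under whatever the training objective produced.

    # Toy sketch: greedy decoding as the low-temperature limit of sampling.
    import math

    def softmax(logits, temperature):
        scaled = [x / temperature for x in logits]
        m = max(scaled)  # subtract the max for numerical stability
        exps = [math.exp(x - m) for x in scaled]
        total = sum(exps)
        return [e / total for e in exps]

    vocab = ["cat", "dog", "the", "runs"]   # made-up vocabulary
    logits = [2.0, 1.5, 0.5, -1.0]          # made-up next-token scores

    # As temperature -> 0 the distribution collapses onto the argmax,
    # i.e. the single most likely token under the trained model.
    probs = softmax(logits, temperature=0.01)
    print(vocab[probs.index(max(probs))])   # -> "cat"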


I don't think humans in general have this concept of "best" either.

But humans are able to build certain routines within their own system to help them rationalize.


> Isn't that exactly how humans learn to respond to stimuli?

Maybe it is, maybe it isn't. Maybe we are "just" an incredibly powerful prediction engine. Or maybe we work from a completely different modus operandi, and our ability to predict things is an emergent capability of it.

The thing is, no one actually knows what makes us intelligent, or even how to define intelligence for that matter.


Yes, if you are in the no-free-will school of thought, then that would be what humans do.



