
I feel this is mostly going to come down to how we define the word. I suspect we agree that there's no point in differentiating "reasoning" from "mimicked reasoning" if the performed actions are identical in every situation.

So let's ask differently: what concrete problem do you think LLMs cannot solve?




> what concrete problem do you think LLMs cannot solve?

Off the top of my head:

Drawing novel solutions from existing scientific data, for one. Extracting information from incomplete data when it is only apparent through reasoning (such as my code-bug example given elsewhere in this thread), i.e. inferring hidden factors. Complex math is still beyond them, and predictive analysis requiring inference is an issue.

They also still face the problem of, as it has been so aptly anthropomorphized, "fantasizing", especially during longer conversations, which is cute when they pretend that footballs fit in coffee cups, but not so cute when things like this happen:

https://eu.usatoday.com/story/opinion/columnist/2023/04/03/c...

--

These certainly don't matter for the things I am using them for, of course, and so far they have turned out to be tremendously useful tools.

The trouble, however, is not with the problems I know they cannot solve, or cannot solve reliably. The problem is with as-yet-unknown problems that humans, myself included, might assume they can solve, until it suddenly turns out they can't. What these problems are, time will tell. So far we have barely scratched the surface of introducing LLMs into our tech products. So I think it's valuable to keep in mind that there is, in fact, a difference between actually reasoning and mimicking it, even if the mimicry is held to a high standard. If for nothing else, then to remind us to be careful in how, and for what, we use them.


I mean, do you think an LLM fundamentally cannot draw a novel solution from existing data because its reasoning is "of the wrong kind"? That seems potentially disprovable. Or do you just think current products can't do it? I'd agree with that.

What's the simplest novel scientific solution that an AI couldn't find if it weren't in its training set?


> because its reasoning is "of the wrong kind"?

No, because it doesn't reason, period. Stochastic analysis of sequence probabilities != reasoning. I explained my thoughts on the matter elsewhere in this thread at some length.
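
To illustrate what I mean by that phrase, here is a minimal sketch (a toy illustration with a made-up vocabulary and scores, not any real model's code) of the next-token sampling step an autoregressive LLM performs: score each candidate token given the preceding text, convert the scores into probabilities, and draw one at random.

    # Toy sketch of "stochastic analysis of sequence probabilities":
    # sample the next token from a temperature-scaled softmax over scores.
    import math, random

    def sample_next_token(logits, temperature=0.8):
        scaled = [l / temperature for l in logits]
        m = max(scaled)                      # subtract the max for numerical stability
        weights = [math.exp(s - m) for s in scaled]
        total = sum(weights)
        probs = [w / total for w in weights]
        return random.choices(range(len(probs)), weights=probs, k=1)[0]

    # Made-up vocabulary and scores; a real LLM produces scores over tens of
    # thousands of tokens, conditioned on the entire preceding sequence.
    vocab = ["the", "cat", "sat", "reasoned"]
    logits = [2.1, 0.3, 1.7, -0.5]
    print(vocab[sample_next_token(logits)])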

> That seems potentially disprovable.

You're welcome to try and disprove it. As for prior research on the matter:

https://www.cnet.com/science/meta-trained-an-ai-on-48-millio...

And afaik, Galactica wasn't even intended to do novel research; it was only intended for the time-consuming but comparatively easier tasks of helping to summarize existing scientific data, answer questions about it in natural language, and write "scientific code".


Alright, I'll keep an eye open for instances of networks doing scientific reasoning.

(My own belief is that reasoning is 95% habit and 5% randomness, and that networks don't do it because it isn't reflected in their training sets; they can't acquire the skill because they can't acquire any skill that isn't in the training set.)



