I mean, do you think an LLM fundamentally cannot draw a novel solution from existing data because its reasoning is "of the wrong kind"? That seems potentially disprovable. Or do you just think current products can't do it? I'd agree with that.

What's the simplest novel scientific solution that an AI couldn't find if it wasn't in its training set?

> because its reasoning is "of the wrong kind"?

No, because it doesn't reason, period. Stochastic analysis of sequence probabilities != reasoning. I explained my thoughts on the matter in this thread at quite some length.
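
To make that concrete, here's a toy sketch in Python of what "stochastic analysis of sequence probabilities" amounts to: softmax the model's scores into a distribution over the vocabulary, then sample the next token. The vocabulary and logit values are invented for illustration; in a real LLM the logits come from the network, conditioned on the preceding tokens.

    import math, random

    # Toy vocabulary and logits; a real model recomputes the logits
    # from the preceding tokens at every step.
    vocab = ["the", "cat", "sat", "on", "mat"]
    logits = [2.0, 0.5, 1.0, 0.1, 0.3]

    def sample_next(logits, temperature=1.0):
        # Softmax with temperature: turn raw scores into a
        # probability distribution, then draw one token from it.
        scaled = [l / temperature for l in logits]
        m = max(scaled)  # subtract the max for numerical stability
        exps = [math.exp(l - m) for l in scaled]
        total = sum(exps)
        probs = [e / total for e in exps]
        return random.choices(vocab, weights=probs, k=1)[0]

    print(sample_next(logits))  # usually "the", sometimes another token

That's the whole generative loop: pick a likely continuation, append it, repeat. Whether you want to call that reasoning is exactly the point in dispute.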

> That seems potentially disprovable.

You're welcome to try and disprove it. As for prior research on the matter:

https://www.cnet.com/science/meta-trained-an-ai-on-48-millio...

And afaik, Galactica wasn't even intended to do novel research; it was only intended for the time-consuming but comparatively easier tasks of summarizing existing scientific data, answering questions about it in natural language, and writing "scientific code".


Alright, I'll keep an eye open for instances of networks doing scientific reasoning.

(My own belief is that reasoning is 95% habit and 5% randomness, and that networks don't do it because it isn't reflected in their training sets; they can't acquire the skill because they can't acquire any skill that isn't in the training set.)
