No, because it doesn't reason, period. Stochastic analysis of sequence probabilities != reasoning. I explained my thoughts on the matter in this thread at quite some length.
> That seems potentially disprovable.
You're welcome to try and disprove it. As for prior research on the matter:
https://www.cnet.com/science/meta-trained-an-ai-on-48-millio...
And afaik, Galactica wasn't even intended to do novel research; it was only intended for the time-consuming but comparatively easier tasks of summarizing existing scientific data, answering questions about it in natural language, and writing "scientific code".
Alright, I'll keep an eye open for instances of networks doing scientific reasoning.
(My own belief is that reasoning is 95% habit and 5% randomness, and that networks don't do it because it hasn't been reflected in their training sets, and they can't acquire the skills because they can't acquire any skills not in the training set.)