I think it's similar, though it would be more similar still if the LLM did the steps in lower layers (not in English), and if, instead of the end being fed back to the start, there were a big mess of cycles throughout the neural net.

That could be more efficient, since the cycles are much smaller, but it would be harder to train.
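
Roughly, as a toy sketch of the two feedback topologies (everything here is made up for illustration, not taken from any real model): current chain-of-thought decodes a visible token each step and feeds it back to the input, while the "cycles in lower layers" version would loop a hidden state internally and only decode at the end.

    import numpy as np

    rng = np.random.default_rng(0)
    d_model, vocab = 16, 50                              # toy sizes
    W_block = rng.normal(size=(d_model, d_model)) * 0.1  # stand-in for a transformer block
    W_embed = rng.normal(size=(vocab, d_model)) * 0.1    # token embeddings
    W_out   = rng.normal(size=(d_model, vocab)) * 0.1    # output head

    def block(h):
        return np.tanh(h @ W_block)

    def chain_of_thought(token, steps=3):
        # "End fed back to the start": decode a visible token each step,
        # re-embed it, and run the whole stack again.
        for _ in range(steps):
            h = block(W_embed[token])
            token = int(np.argmax(h @ W_out))            # verbalized intermediate step
        return token

    def latent_cycles(token, steps=3):
        # Cycles inside the net: the hidden state loops through the lower
        # layers and is only decoded into a token at the very end.
        h = W_embed[token]
        for _ in range(steps):
            h = block(h)                                 # small internal cycle, no text
        return int(np.argmax(h @ W_out))

    print(chain_of_thought(7), latent_cycles(7))

The second loop skips the decode/re-embed round trip on every step, which is where the efficiency would come from, but there's no visible intermediate text to supervise, which is the training problem.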


It doesn't do the 'thinking' in English (inference is just math), but it does now verbalize intermediate thoughts in English (or whatever the input language is, presumably), just like humans tend to do.

Agreed. It's never "just autocomplete" unless your definition of "autocomplete" includes "look at the whole body of text".
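
As a toy illustration of that definition (plain numpy, made-up sizes, not any model's actual code): one causal-attention step puts non-zero weight on every earlier token, so the "autocomplete" is conditioned on the whole preceding text, not just the last few words.

    import numpy as np

    rng = np.random.default_rng(1)
    seq_len, d = 6, 8                          # toy sizes
    Wq, Wk, Wv = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))
    X = rng.normal(size=(seq_len, d))          # one row per token already in the text

    q = X[-1] @ Wq                             # query for the position being predicted
    K, V = X @ Wk, X @ Wv                      # keys/values for every token so far
    w = np.exp(q @ K.T / np.sqrt(d))
    w /= w.sum()                               # attention weights over the whole prefix
    context = w @ V                            # the next-token prediction mixes in all of it
    print(np.round(w, 3))                      # every position gets some weight

An n-gram-style autocomplete only looks at a fixed short window; here nothing limits the lookback except the context length.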