
The classical answer to why more hardware resources are needed for the same tasks is that the new system allows for far more flexibility. A system can be thoroughly optimized for a single problem domain, but then it can only be used for that purpose.

This is quite true of LLMs. They can do basic arithmetic, but they can also read problem statements from many diverse mathematical areas, describe what they're about, and make (right or wrong) suggestions on how they might be solved.

Classic AI systems suffered from the frame problem: common-sense reasoning depended on facts that were never stated in the system's logic, such as which things remain unchanged after an action.
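A minimal sketch of the frame problem, using hypothetical function and fact names (none from any real system): an action's axioms state only its effects, so a naive reasoner loses every fact the action doesn't mention unless "frame axioms" explicitly declare what stays unchanged.

```python
def apply_action(state, effects):
    """Naive update: only the action's stated effects are known afterward.

    Every fact the action doesn't mention is dropped, even facts the
    action obviously didn't touch -- this is the frame problem.
    """
    return set(effects)

def apply_action_with_frame(state, effects, unaffected):
    """Update with frame axioms: carry over facts explicitly declared
    unaffected by the action."""
    return set(effects) | {fact for fact in state if fact in unaffected}

state = {"door_open", "light_on"}

# Action: close the door. Its only stated effect is "door_closed".
naive = apply_action(state, {"door_closed"})
# "light_on" is gone -- the naive system can no longer infer it.

# With a frame axiom saying the light is unaffected, the fact survives.
framed = apply_action_with_frame(state, {"door_closed"}, {"light_on"})
```

Writing such frame axioms for every action/fact pair is exactly the bookkeeping burden that made common-sense reasoning intractable in classical systems.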

LLMs have now largely solved the frame problem. It turns out the solution was to compress large swathes of human knowledge into a form that can be accessed quickly, so that the relevant parts of all that knowledge are activated when needed. Naturally, this approach to flexibility demands lots of resources.





