
I do not consider "understanding", which cannot be quantified, to be a defining feature of AGI.

In order for something to qualify as AGI, answering in a seemingly intelligent way is not enough. An AGI must be able to do what a competent human would do: given the task of accomplishing something that nobody has done before, conceive a detailed, step-by-step plan for achieving it. Then, after completing the first steps and discovering that they were much more difficult or much easier than expected, adjust the plan based on the accumulated experience, in order to increase the probability of reaching the target.

Or else, one may realize that the goal can be reformulated, replacing it with a related goal that is nearly as useful but can be reached by a modified plan with much better chances of success. Or else, recognize that the initial goal is unreachable for now, but that there is another, simpler goal that is still desirable, even if it does not provide the full benefits of the original. Then, establish a new plan of action to reach the modified goal.
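The plan/execute/replan loop described above can be caricatured in a few lines. This is only a toy sketch; every name in it (Step, attempt, pursue, the budget parameter) is hypothetical and stands in for capabilities no current system actually has.

```python
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    expected_cost: float

def attempt(step: Step) -> float:
    # Stand-in for executing a step in the real world; here it just
    # pretends the step went exactly as expected.
    return step.expected_cost

def pursue(goal: str, plan: list[Step], budget: float) -> str:
    # Execute steps, tracking accumulated cost, and bail out to a
    # replanning phase when experience shows the budget won't hold.
    spent = 0.0
    for step in plan:
        spent += attempt(step)
        if spent > budget:
            return f"replan: {goal} unreachable within budget"
    return f"reached: {goal}"
```

The hard part, of course, is everything this sketch hides: generating the plan, estimating costs, and deciding which substitute goal is "close enough" — which is precisely the capability the comment argues is missing.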

For now, this kind of activity is completely outside the abilities of any AI. Despite the impressive progress demonstrated by LLMs, nothing they have done has brought a computer any closer to having intelligence in the sense described above.

It is true, however, that many human managers would be just as clueless as an LLM about how to perform such activities.





