> There's nothing narrow about these models.

There is: they can't create new ideas the way humanity can. An AGI should be able to replace humanity in terms of thinking, otherwise it isn't general; you would just have a model specialized at reproducing thoughts and patterns humans have already produced. It still can't recreate science from scratch the way humanity did, meaning it can't do science properly.

Comparing an AI to a single individual is not how you measure AGI. If a group of humans performs better, then you can't use the AI to replace that group, and thus the AI isn't an AGI, since it couldn't replace the group of humans.

So, for example, if a group of programmers writes more reliable programs than the AI, then you can't replace that group of programmers with the AI, even if you duplicate the AI many times, since the AI isn't capable of reproducing that level of reliability when run in parallel. An AI run in parallel is still just an AI, and an ensemble model is still just an AI, so the model the AI has to beat is the human ensemble called humanity.
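
As a toy illustration of that last point, here is a minimal sketch (in Python; the model(prompt) callable is a hypothetical stand-in, not any real API) of why majority-voting over many copies of the same model is still just that one model: every copy shares the same blind spots, so the vote only amplifies them, unlike a group of different humans.

    from collections import Counter

    def ensemble_answer(model, prompt, n=100):
        # Run the same model n times and take a majority vote. A blind
        # spot in the model is shared by every copy, so voting cannot
        # correct it the way a diverse group of humans can.
        votes = [model(prompt) for _ in range(n)]
        return Counter(votes).most_common(1)[0][0]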

Even if we lower the bar a bit, it at least has to beat 100,000 humans working together to make a job obsolete. All the tutorials and similar resources are made by other humans too; if you remove the job, those would also disappear, and the AI would have to do the work of all of those people. If it can't, humans will still be needed.

It's possible you will be able to substitute parts of those human ensembles with AI much sooner, but then we just call it a tool. (We also call narrow humans tools, so it's fair.)




I see these models create new ideas, at least by the standard humans are held to, so this just falls flat for me.


You don't just need to create an idea; you need to be able to create ideas that, on average, progress in a positive direction. Humans evidently can do that; AI can't. When AI works for too long without human input, you always end up with nonsense.

In order to write general programs you need that skill: every new code snippet needs to be evaluated by the system, deciding whether it makes the codebase better or not. The lack of that ability is why you can't just loop an LLM today to replace programmers. It might be possible to automate it for specific programming tasks, but not for general-purpose programming.
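
To make the "just loop an LLM" point concrete, here is a minimal sketch of what that loop would have to look like. Every name in it (generate, apply_patch, improves_codebase) is a hypothetical stand-in, not a real API; the argument is exactly that the marked function can't be written for the general case today.

    def generate(prompt):
        # Stand-in for an LLM call that proposes a change.
        return "..."  # imagine a model-suggested patch here

    def apply_patch(codebase, patch):
        # Stand-in for applying the proposed change.
        return codebase + "\n" + patch

    def improves_codebase(codebase, patch):
        # The missing skill: a general judge of whether a change moves
        # the codebase in a positive direction. Tests only cover what
        # you already thought to test.
        raise NotImplementedError("no general evaluator exists today")

    def naive_agent_loop(codebase, task, steps=10):
        for _ in range(steps):
            patch = generate("Task: " + task + "\n\n" + codebase)
            if improves_codebase(codebase, patch):  # <- the hard part
                codebase = apply_patch(codebase, patch)
        # Without a reliable improves_codebase(), errors compound and
        # the loop drifts into nonsense over many iterations.
        return codebase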

Overcoming that hurdle is not something I think LLMs can ever do; you need a totally different kind of architecture, one trained to reason rather than to mimic. I don't know how to train something that can reason about noisy, unstructured data. We will probably figure that out at some point, but it probably won't be LLMs as they are today.



