
It is an architecture problem, too. LLMs simply aren't capable of AGI.


Why not?

A lot of people say that, but no one, not a single person, has ever pointed out a fundamental limitation that would prevent an LLM from going all the way.

If LLMs have limits, we are yet to find them.


We have already found limitations of the current LLM paradigm, even if we don't have a theorem saying transformers can never be AGI. Scaling laws show that performance keeps improving with more parameters, data, and compute, but only along a smooth power law with sharply diminishing returns. Each extra order of magnitude of compute buys a smaller gain than the last, and recent work suggests we're running into economic and physical constraints on continuing this trend indefinitely.
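
For concreteness, the kind of fit those scaling-law papers report looks roughly like the sketch below (the constants are made-up placeholders for illustration, not values from any particular paper):

    # Illustrative power-law scaling: loss(C) = L_inf + A * C**(-alpha).
    # L_inf, A and alpha are placeholders, not fitted values from any study.
    def loss(compute, L_inf=1.7, A=10.0, alpha=0.05):
        return L_inf + A * compute ** (-alpha)

    # Each extra order of magnitude of compute buys a smaller improvement:
    prev = None
    for c in [1e20, 1e21, 1e22, 1e23, 1e24]:
        cur = loss(c)
        gain = "" if prev is None else f"  (gain {prev - cur:.3f})"
        print(f"compute={c:.0e}  loss={cur:.3f}{gain}")
        prev = cur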

OOD generalization is still an unsolved problem: they struggle under domain shift, on long-tail cases, and when you try systematically new combinations of concepts (especially on reasoning-heavy tasks). This is now a well-documented limitation of LLMs and multimodal LLMs.

Work on CoT faithfulness shows that the step-by-step reasoning they print doesn't match their actual internal computation; they frequently generate plausible but misleading explanations of their own answers (see Anthropic's paper on this). That means they lack self-knowledge about how and why they got a result. I doubt you can get AGI without that.

None of this proves that no LLM-based architecture could ever reach AGI. But it directly contradicts the idea that we haven't found any limits. We've already found multiple major limitations of current LLMs, and there's no evidence that blindly scaling this recipe is enough to cross from very capable assistant to AGI.


A lot of those failings (e.g. CoT faithfulness) are straight-up human failure modes.

LLMs failing in the same way humans do, on the same tasks, is a weak sign of "this tech is AGI-capable", in my eyes. It hints that LLMs are angling to do the same things the human mind does, in ways similar enough to share its failure modes. And the human mind is the one architecture we know supports general intelligence.

Anthropic has a more recent paper on introspection in LLMs, by the way, with numerous findings. The main takeaway: existing LLMs have introspection capabilities - weak, limited, and unreliable, but present nonetheless. That's a bit weird, given that we never trained them for it.

https://transformer-circuits.pub/2025/introspection/index.ht...

You could train them to be better at it, if you really wanted to. A few other papers have tried, although in different contexts.


This is all nonsense and you are just falling for marketing that you want to be true.

The whole space is largely marketing at this point, intentionally conflating all these philosophical terms because we don't want to face the ugly reality that LLMs are a dead end on the road to "AGI".

Not to mention, it is not on those who don't believe in Santa Claus to prove that Santa Claus doesn't exist. It is on those who believe in Santa Claus to show how AGI can possibly emerge from next-token prediction.

I would question whether you even use the models much, really. I thought this in 2023, but I just can't imagine how anyone who uses the models all the time can possibly think, in 2025, that we are on the path to AGI with LLMs.

It is almost as if the idea of a thinking being emerging from text was dumb to start with.


You are falling for the AI effect.

Which is: flesh apes want to feel unique and special! And "intelligence" must be what makes them so unique and special! So they deny "intelligence" in anything that's not a fellow flesh ape!

If an AI can't talk like a human, then it must be the talking that makes the human intelligence special! But if the AI can talk, then talking was never important for intelligence in the first place! Repeat for everything.

I use LLMs a lot, and the improvements in the last few years are vast. OpenAI's entire personality tuning team should be loaded into a rocket and fired off into the sun, but that's a separate issue from raw AI capabilities, which keep improving steadily and with no end in sight.


Breaking down in -30°C temperatures is also a human failure mode, but it doesn't make cars human. They both exhibit the exact same behavior (not moving), but they are fundamentally different.


The similarities go quite a bit deeper than that.

Both rely on a certain metabolic process to be able to move. Both function in a narrow temperature range and fail outside it. Both have a homeostatic process that attempts to keep them in that range. Both rely on chemical energy, oxidizing stored hydrocarbons to extract power, and both take in O2-rich air and emit air enriched in CO2 and water vapor.

So, yes, cars aren't humans. But they sure implement quite a few of the same things humans do, despite being made out of very different parts.

LLMs of today? They implement abstract thinking the same way cars implement aerobic metabolism. A nonhuman implementation, but one that does a great many of the same things.


Real-time learning that doesn't pollute limited context windows.


You can already mimic this. It's unreliable and computationally inefficient, but those are not fundamental limitations. One common workaround is sketched below.
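
A minimal sketch of that workaround, assuming you swap in a real embedding model (everything named here is hypothetical, and the embedding below is a random placeholder, so the retrieval isn't actually semantic):

    # "Learn" by appending facts to an external store and retrieving only the
    # top-k relevant ones per query, so the context window stays small.
    import numpy as np

    memory = []  # list of (embedding, text) pairs, grown at runtime

    def embed(text: str) -> np.ndarray:
        # Placeholder: a random per-text vector just to keep the sketch runnable.
        # A real setup would call a sentence-embedding model here.
        rng = np.random.default_rng(abs(hash(text)) % (2**32))
        return rng.standard_normal(64)

    def learn(fact: str) -> None:
        memory.append((embed(fact), fact))

    def recall(query: str, k: int = 3) -> list[str]:
        q = embed(query)
        def score(item):
            v, _ = item
            return float(np.dot(v, q) / (np.linalg.norm(v) * np.linalg.norm(q) + 1e-9))
        return [text for _, text in sorted(memory, key=score, reverse=True)[:k]]

    learn("The deploy script lives in tools/deploy.sh")
    learn("Staging DB was migrated on Tuesday")
    print(recall("where is the deploy script?"))  # only these k lines enter the prompt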


LLMs are bounded by the same bounds computers are. They run on computers, so a prime example of a limitation is Rice's theorem: any 'AI' that writes code is unable, in general (just like humans), to determine whether its output is or is not error-free.

This means that code from a multi-agent workflow with no human in the loop may or may not be error-free, and there is no general procedure for deciding which.
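
The standard diagonal argument behind that, as a sketch (the function names are hypothetical, and the checker is assumed only for the sake of contradiction):

    # Suppose, for contradiction, that some checker could decide for ANY program
    # text whether running it ever raises an error (a non-trivial semantic
    # property, so Rice's theorem applies).
    def is_error_free(program_source: str) -> bool:
        raise NotImplementedError("assumed to exist only for the contradiction")

    # Now consider a program that consults the checker about itself:
    PARADOX = '''
    if is_error_free(PARADOX):          # ask the oracle about this very program
        raise RuntimeError("but you said I was error-free")
    '''
    # If is_error_free(PARADOX) returns True, PARADOX raises an error - wrong.
    # If it returns False, PARADOX runs cleanly - wrong again.
    # So no such general checker exists, whether it's a compiler, a human, or an LLM.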

LLMs are also bounded by runtime complexity. Could an LLM find the shortest Hamiltonian path between two cities in polynomial time?
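
For a sense of scale, the straightforward exact approach is brute force over orderings, which blows up factorially (the cities and distances below are made up for illustration):

    # Brute-force shortest Hamiltonian path between fixed endpoints. All known
    # exact approaches are exponential, which bounds an LLM the same way it
    # bounds any other program running on a computer.
    from itertools import permutations

    cities = ["A", "B", "C", "D", "E"]
    dist = {frozenset(p): d for p, d in [
        (("A", "B"), 2), (("A", "C"), 9), (("A", "D"), 10), (("A", "E"), 7),
        (("B", "C"), 6), (("B", "D"), 4), (("B", "E"), 3),
        (("C", "D"), 8), (("C", "E"), 5), (("D", "E"), 1),
    ]}

    def path_len(path):
        return sum(dist[frozenset((a, b))] for a, b in zip(path, path[1:]))

    start, end = "A", "E"
    middle = [c for c in cities if c not in (start, end)]
    # (n-2)! orderings to check: fine for 5 cities, hopeless for 50.
    best = min(((start, *p, end) for p in permutations(middle)), key=path_len)
    print(best, path_len(best))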

LLMs are also bounded by in-model context: could an LLM create and use a new language with no context for it in its model?



