Yes, you have made the point that I argue against above. I claim that "looking right" and "being right" are absolutely and fundamentally different at the core. At the same time, I acknowledge that from a tool-use, utilitarian, automation, or sales point of view, results that "look right" can be applied for real value in the real world.
Many corollaries exist. My claim is that "looking right" is not at all Artificial General Intelligence.
"Being right" seems to be an arbitrary and impossibly high bar. Human at their very best are only "looks right" creatures. I don't think that the goal of AGI is god-like intelligence.
Humans "at their very best" are at least trying to be right. Language models don't - they are not concerned with any notion of objective truth, or even with "looking right" in order to gain social status like some human bullshitter - they are simply babbling.
That this strategy is apparently enough to convince a large number of (supposedly) intelligent people otherwise is very troubling!
Not saying that General AI is impossible, or that LLMs couldn't be a useful component in its architecture. But what we have right now is just a speech center; what's missing is the rest of the brain.
Also, simply replicating / approximating something produced by natural evolution seems to me like the wrong approach, for both practical and ethical reasons: if we get something with >= human-like intelligence, it would be a black box whose inner workings we could never understand, and it might be a sentient being capable of suffering.