That's hard to prove one way or the other. Watch Ilya's latest talk, where he argues that next-token prediction is a much more fundamental problem than people give it credit for. Empirically, you can easily see that GPT-4 has a world model and that it does abstract logical reasoning.
Maybe it's a vastly different intelligence from our own, but the fact remains that it performs well on an extremely broad range of multidisciplinary tasks. You can argue the epistemology of this all day, but as a pragmatic programmer, I'd say GPT-4 certainly seems to fall into the "AGI" category.
> GPT-4 certainly seems to fall into the "AGI" category.
Why do OpenAI themselves and basically every expert in the field say otherwise, then?
The only people who claim it is are not in the field, and most of them aren't even in tech. Being fooled by the machine on a subset of tasks isn't proof of AGI; AGI is a much, much broader bar than that.
Feel free to enlighten me! I'd be happy to hear how an LLM understands things it was never designed to understand.
I've been reading everything I can find online about LLMs, and no one besides Reddit tech bros argues that they have "understanding" or know anything about "meaning"; quite the contrary, actually.
Anyone who uses these tools knows this for a fact: it's very easy to make them fail in ways that show, beyond any shadow of a doubt, that they don't have these capabilities.
Besides the fact that it has literally no idea what the words it writes mean?