
> GPT-4 closely resembles AGI already

That's a very bold statement and goes against everything I've read on it so far; care to back up such a claim with some facts? Of course each of us has their own bar for such things, but for most it's pretty darn high.



OpenAI researchers put out this paper https://arxiv.org/abs/2303.12712, called "Sparks of Artificial General Intelligence: Early experiments with GPT-4", and as the title makes clear, they think it has hints of AGI. I guess that's a good place to start answering your question. I don't think this is AGI, but the paper is full of examples where GPT-4 works well and does impressive stuff.


You can download the TeX source for that PDF, which at one point (idk if it still does) included the comment "WORK IN PROGRESS - DO NOT SHARE" and the commented-out title "First Contact with an AGI System", which they ended up toning down for publication lol


that paper is unreal... section 6 on theory of mind is downright scary


It is.

Some things I wonder about, it says things like this:

> GPT-4 successfully passes the classic Sally-Anne false-belief test from psychology [BCLF85] (which was modernized to avoid the possibility the answer was memorized from the training data)

But it's a language model; generalizing text and performing substitutions on it is what it excels at. "The car is yellow" is "the <noun> is <descriptor>", and it can substitute other things in, so I'm not sure how their modernization really ensures it isn't just pattern matching on memorized text.
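
To make that worry concrete, here's a rough sketch (my own, not from the paper; the template and the name/object lists are made up) of how a "modernized" Sally-Anne prompt can still be the exact same pattern with different fillers:

    import random

    # Hypothetical sketch: generating "modernized" Sally-Anne variants by
    # pure surface substitution. The story template stays identical; only
    # the fillers change, so a model that has memorized the shape
    # "<agent2> moves <object> while <agent1> is away -> <agent1> looks in
    # the old place" could still answer without any theory of mind.

    AGENTS  = ["Sally", "Anne", "Alice", "Bob", "Priya", "Tom"]
    OBJECTS = ["marble", "phone", "key", "cookie"]
    PLACES  = ["basket", "box", "drawer", "backpack"]

    TEMPLATE = (
        "{a1} puts the {obj} in the {p1} and leaves the room. "
        "While {a1} is away, {a2} moves the {obj} to the {p2}. "
        "{a1} comes back. Where will {a1} look for the {obj}?"
    )

    def make_variant(rng=random):
        a1, a2 = rng.sample(AGENTS, 2)
        p1, p2 = rng.sample(PLACES, 2)
        obj = rng.choice(OBJECTS)
        prompt = TEMPLATE.format(a1=a1, a2=a2, obj=obj, p1=p1, p2=p2)
        expected = p1  # false-belief answer: the original location
        return prompt, expected

    prompt, expected = make_variant()
    print(prompt)
    print("Expected (false-belief) answer:", expected)

If the "modernization" only swaps fillers like this, getting the right answer doesn't rule out pattern matching on memorized text; you'd have to change the structure of the scenario itself.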


The meanings we assign to words like “understanding”, “sentience”, “consciousness” really imply a very high degree of shared context and cultural baggage. If we expand those terms to include systems radically unlike us, then they could be literally anything and everything, from weather systems to laws of physics—but we don’t think of those words in this way; if we can’t reason about sentience or understanding from our vantage point, then for all intents and purposes there is no sentience or understanding. This catch-22 brings together the fallacies of believing in aliens and believing in AGI: we yearn for sentient non-humans, but we only recognize human-like sentience.

One human mind is a lot like another human mind, markedly less like that of an octopus or a dog (still enough similarity that the concept of, e.g., “hurt” kind of makes sense), but really unlike an LLM (and I’m not going to get into an argument as to why an LLM is fundamentally different from a human mind and why we are not even close to achieving that, and possibly never can be; the only way of producing new systems like us remains childbirth; if you don’t agree on that part then it’d be useless to discuss the matter further). We have an uncanny situation where a system radically unlike us can produce output that is mostly similar to what another human mind might produce, but unless we accept that everything around us can be conscious (and believe in gods and spirits again) it’s not even a question as to whether the system can understand any of the symbols it produces or have consciousness in the commonly accepted meaning of those terms: it’s only a tool.

Note that I’m not against widening our concept of sentience, just saying it needs to happen if we want to grant an LLM sentience; and if this widening does happen, a sentient language model would be small beans compared to a philosophical revolution we’d have on our hands then.


Agreed. I did feel they didn't modify that particular test as much as I thought they would / should.



