
Our most basic intuitive notion of consciousness is that inanimate objects aren't conscious, awake people and animals are, and sleeping people and animals aren't (except maybe when dreaming). Pursuing this line, there's a school of scientific inquiry working from the notion that conscious experiences are ones we can form memories of and talk about later, while if we can't do that we aren't really conscious of an experience. This then leads into the realm of subliminal stimuli, which can influence a person's behavior a bit, but whose influence fades out within about a second, disappearing as if it were never there as the brain activations die away.

You also have research involving patients with unusual conditions like blindsight, where damage to the brain prevents them from being consciously aware of what their eyes see even though the brain still processes the images it receives. They can pick up objects in front of them when prompted, but unlike people with normal vision they can't describe what they see, nor can they look, close their eyes, and then grab the object the way most of us can.

By this metric it seems like systems like GPT aren't conscious. GPT-4 has a context buffer of 64k tokens, which can span an arbitrary amount of time, but that buffer holds roughly 640 kilobytes, far less than the incoming sensory activations your subconscious brain is juggling at any given moment.

So by that schema large language models are still not conscious, but given that they can already abstract text down to summaries, it doesn't feel like we're far from being able to give them something like working or long-term memory.
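
For what it's worth, here's a rough sketch of the kind of loop I'm imagining (Python, where summarize() and respond() are hypothetical stand-ins for whatever model calls you'd actually make): older conversation turns get folded into a running summary that stays in the prompt, a bit like a crude working memory on top of the fixed context window.

  # Rough sketch of summary-based "working memory" for a chat model.
  # summarize() and respond() are placeholders, not a real API.

  def summarize(text):
      # Placeholder: a real version would ask the model to compress the text.
      return text[:200]

  def respond(prompt):
      # Placeholder: a real version would send the prompt to the model.
      return "..."

  class MemoryChat:
      def __init__(self, max_recent_turns=6):
          self.summary = ""     # rolling summary: long-term-ish memory
          self.recent = []      # verbatim recent turns: working-memory-ish
          self.max_recent_turns = max_recent_turns

      def ask(self, user_message):
          prompt = (
              "Summary of earlier conversation:\n" + self.summary +
              "\n\nRecent turns:\n" + "\n".join(self.recent) +
              "\n\nUser: " + user_message
          )
          answer = respond(prompt)
          self.recent.append("User: " + user_message)
          self.recent.append("Assistant: " + answer)
          # When the verbatim window gets too long, fold the oldest
          # turns into the summary instead of dropping them outright.
          while len(self.recent) > self.max_recent_turns:
              oldest = self.recent.pop(0)
              self.summary = summarize(self.summary + "\n" + oldest)
          return answer

Obviously nothing like how a brain does it, but it gives the model a persistent state that outlives the raw context buffer.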



