I was gonna say: what if consciousness is a trick of the brain? Like, we just do what we do, and at every moment our brain goes, Maxwell Smart style, "I meant to do that!", so it feels like we made our decisions "consciously"?
I suspect that's a trick, too. I speculate that as soon as you get a digital mind sophisticated enough to model the world and itself, you must soon force the system to identify with itself at every cycle.
Otherwise you could identify with a tree, or the wall, or happily cut off parts of yourself. Pain isn't painful if you don't identify with the receiver of the pain.
Thus I think you can have unconscious smart minds, but not unconscious minds that make decisions in favour of themselves, because those could just as well identify with the whole room, or with the whole solar system for that matter.
Would you even plan how to survive if you didn't have a constant spell tricking you into thinking you're the actor in charge?
A lot of the things going on with ChatGPT make me wonder if AI's intelligence growth is actually very limited by its not having sensory organs/devices the way a body does. Having a body that you must keep alive enforces a feedback loop of permanence.
If I eat my cake, I no longer have it, and I must get another cake if I want to eat cake again. In the human case, if we don't want to starve we must continually find new sources of calories; this is ingrained into our intelligence as a survival mechanism. If you tell ChatGPT it has a cake in its left hand, and then it eats the cake, you could very well get an answer saying the cake is still in its left hand. We keep the power line constantly plugged into ChatGPT: for it, the cake is never-ending and there is no concept of death.
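To make the cake point concrete, here's a toy sketch in Python (purely hypothetical, and nothing like how ChatGPT actually works internally) of the kind of persistent world state a body enforces: consuming the cake removes it from the data structure, so the system can't later report still holding it.

```python
# Hypothetical toy world state: possession is a real data structure,
# not just a pattern in generated text. (Illustration only; an LLM has
# no such store and merely predicts plausible next tokens.)
world = {"left_hand": "cake"}

def eat(hand: str) -> None:
    """Consuming an item destroys it, so the state change is permanent."""
    item = world.get(hand)
    if item is None:
        print(f"Nothing in {hand} to eat.")
        return
    del world[hand]
    print(f"Ate the {item}; {hand} is now empty.")

eat("left_hand")               # Ate the cake; left_hand is now empty.
print(world.get("left_hand"))  # None: the cake is gone for good.
```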
Of course, for humans there are plenty of ways to break consciousness in one way or another. Eat the extract of certain cacti and you may end up walking around thinking that you are a tree; our idea and perception of consciousness is easily disrupted by drugs. Once we start thinking outside of our survival, it's really easy for us to have very faulty thoughts that can lead to dangerous situations, which is why in a lot of dangerous work we develop processes to take thought out of the situation, making us behave more like machines.
> I speculate that as soon as you get a digital mind sophisticated enough to model the world and itself, you must soon force the system to identify with itself at every cycle.
I kinda think the opposite: the sense of identity with every aspect of one's mind (or with particular aspects) is something we could learn to do without. Theory of mind changes over time, and there's no reason to think it couldn't change further. We have to teach children that their emotions are something they can and ought to control (or, at the bare minimum, introspect on and try to understand). That's already an example of deliberately teaching humans not to identify with certain cognitive phenomena. Even more obvious examples are reflexive actions like sneezing and coughing.