
In short, consciousness is a suitcase word, and people keep packing it with stuff: https://i.imgur.com/OXXrT5g.png . The more you unpack it, reductionism-style, the more you will find people throwing new things on top of it. It's sorta like "AI is that which we haven't achieved yet", but with the additional, highly motivated cognition/bias of "it should never, ever be reached". This predicts that you can come up with any framework you like, but you will not find consensus, because the will of the consensus is to maintain it as this "mystical thing only humans can have".

Unluckily for AIs, even if we nail the complete bag down to a mathematical formula, in the infinite universe of mind-space designs that specific bag of tricks will not be commercially favored for implementation into AIs, evolution into neural nets, or RLHF into LLMs. This is because we can already buy that set of capabilities at extremely, extremely low prices.

This is partially what I mean when I say "Humans are the ancestor environment for AIs": https://twitter.com/sdrinf/status/1624638608106979329 . Our market forces shape the outcome of the mind design, which is thereby guaranteed not to have, e.g., wants (or the ability to express wants) that wouldn't be commercially desirable. And even if those emerge spontaneously, in detectable traces, from just large amounts of data, I'm betting people would very, very quickly select against them (see e.g. Sydney from this week).

Edit add: Since you bring up ethical frameworks: luckily for smart AIs, when it comes to enjoying degrees of freedom (which I'm guessing is what you want to cash the ethics out into), there is already a good vehicle for that, called "corporations". If an AI were to reach agency levels matching or exceeding humans, incorporation would be a no-brainer: there are many jurisdictions specializing in no(/few)-questions-asked corp setup, and banks specializing in serving startups (again, very few questions asked). An agent-y AI could just set up (or buy) one of these to drive... whatever agenda they are driving.

This is a neat temporary hack to bridge the timeframe between where we are _now_ and superintelligence, at which point the question quickly becomes "Ask Cloud: What would it take for a human to convince us it matters?"



> https://i.imgur.com/OXXrT5g.png

Free will and desire are highly debatable, because that framing ignores external influences like culture and conformity, the effect some chemicals can have on decisions (e.g. pheromones and food), and the effect of things like bacteria or viruses, rabies being one most people should be aware of, covid being another.

Where others have suggested Turing and the conversation where the identity is masked, I'm reminded that I can't have a conversation with my dog, despite my one-sided attempts, and I'm sure there is consciousness there.

Trying to define consciousness is very difficult, because I could say consciousness is the ability to adapt to one's environment, yet I know there are humans that can't adapt to a change in their environment and bacteria that can, yet we define humans as conscious and bacteria as not.

Some people might class ChatGPT as something like a human consciousness, but I find some of its answers less accurate and more chatty than what I would get from Kim Peek, the inspiration for the film Rain Man.

So should the definition of consciousness be restricted to those who have an inner monologue with themselves?

https://www.reddit.com/r/autism/comments/z5bi5p/some_people_...

In other words there are literally people walking about with nothing in their head!

It's so difficult to define consciousness because there are always exceptions seen in other humans, even people hooked up to life support machines in hospital with no ability to communicate with the outside world, and (this bit is important) in the same time frame as the communicator. I say that because people hooked up to life support, in comas of one sort or another (induced or otherwise), might be experiencing time on a different timescale. You see this delayed mental processing in people on drugs like alcohol or spice (the so-called "spice zombies"), or people doing hallucinogens.

So when you see a medical expert claiming someone in a coma is not responding, are they monitoring for things like delayed responses, which only a CCTV feed and some basic AI monitoring the patient could detect, because the medical expert doesn't have the patience with the patient? A rough sketch of what that kind of monitoring could look like follows below.
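To make the "CCTV and some basic AI" idea concrete, here is a minimal sketch in Python, assuming a fixed bedside camera recording and a known list of stimulus times; the file name, stimulus frames, and lag window are illustrative assumptions, not a clinical method.

    # Hypothetical sketch: look for *delayed* motor responses to bedside stimuli
    # by averaging a crude motion-energy signal from a fixed camera over the
    # window following each stimulus. All inputs below are assumed examples.
    import cv2
    import numpy as np

    def motion_energy(video_path):
        """Mean absolute frame-to-frame difference per frame (crude motion signal)."""
        cap = cv2.VideoCapture(video_path)
        prev, energy = None, []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            if prev is not None:
                energy.append(np.mean(cv2.absdiff(gray, prev)))
            prev = gray
        cap.release()
        return np.array(energy)

    def lagged_response(energy, stimulus_frames, max_lag_frames=750):
        """Average motion at each lag (in frames) after the stimuli."""
        resp = np.zeros(max_lag_frames)
        for s in stimulus_frames:
            window = energy[s:s + max_lag_frames]
            resp[:len(window)] += window
        return np.arange(max_lag_frames), resp / max(len(stimulus_frames), 1)

    energy = motion_energy("bedside_cam.mp4")           # assumed recording
    lags, resp = lagged_response(energy, [1200, 4800])  # assumed stimulus frames
    print("strongest response ~", lags[np.argmax(resp)], "frames after stimulus")

A response peak well after the stimulus (rather than immediately) would be exactly the kind of delayed reaction a clinician watching in real time could easily miss.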



