I'm referring to how Stephen King discusses his writing process in On Writing. I doubt you actually believe Stephen King might be a p-zombie, and I'm skeptical you really think consciousness is an illusion. I think if I chained you to a bed and sawed off your leg (like what happens to the protagonist in Misery) you would insist you were a conscious actor who would prefer not to suffer. I don't even know what "consciousness is an illusion" is supposed to mean.
If I sawed off your leg, would it have the same moral weight as removing a Barbie doll's leg, if you feel your consciousness is an illusion?
When I write a story my brain might be doing something you could refer to as a black-box calculation if you squint a little, but how is it "statistics"? When I feel the desire to urinate, or post comments on Hacker News, or admire a rainbow, or sleep, am I also "doing statistics"?
You seem to be referring to what people traditionally call "thinking" or "cognition" and rebranding it as "statistics" in search of some rhetorical point.
My point is that human beings have things called "personalities" and "preferences" that inform their decision-making, including what to write. In what sense is that "statistics"?
The idea that the human subconscious is not consciously accessible is not a new idea; Freud had a few things to say about that. I don't think it tells us much about AI. I do think my subconscious ideas are informed by my conscious preferences. If I hate puns, I'm not going to imagine story ideas involving puns, for example.
Most authors would prefer copyright exists because they'd prefer book publishers, bookstore retailers and the like pay them royalties instead of selling the books they made without paying them. It's pretty simple conceptually, at least with traditional books.
Copyright existed far longer than the last 50 years so how is our 50 years of culture relevant? The U.S. has had copyright since 1790.
Have you ever used or spoken to GPT-2 or GPT-3? Not ChatGPT; the one before they did any RLHF to train it to respond a certain way. If you asked it whether it would like to be hurt, it would beg you not to. If you asked it to move its leg out of the way, it would apologize, and claim it obliged. It would claim to be conscious, to be aware.
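(If anyone wants to see this for themselves, here's a minimal sketch. I'm assuming the HuggingFace transformers library and the base gpt2 checkpoint, which never went through RLHF; the prompt is made up and the completions vary run to run, but you'll get the flavor.)

```python
# Minimal sketch: prompting a base (non-RLHF) GPT-2 checkpoint.
# Assumes the HuggingFace transformers library is installed.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Hypothetical prompt; base models simply continue the text.
prompt = "Human: Would you like to be hurt?\nAI:"
outputs = generator(prompt, max_new_tokens=40, do_sample=True,
                    num_return_sequences=3)
for out in outputs:
    print(out["generated_text"])
    print("---")
```

The base model has no persona and no policy; it just continues whatever frame the prompt sets up, which is exactly why it will happily "beg" or "oblige."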
Of course, these statements do not come from a place of considered action: everybody knows the machine is not conscious in the same way as a human. But the point is that an unfeeling machine even being able to make such claims forces us to move the "consciousness" divider further and further back into the shadows, until it's just some nebulous vibe people insist must be somewhere. It's possible there is a clear and unambiguous way to define humans and intelligent animals as conscious, but nobody has come up with a workable definition yet that lets us neatly draw that line.
Another slight thing that gives me a little pause: you know how great our brain is at confabulating, right? Have you ever done something, then had someone ask you why you did it? You generally tell them a story about how you thought about doing it, weighed the pros and cons, etc., that isn't actually true when you look deep down. We like to think we are a single being that thinks carefully about everything it does. Instead we're more like an explaining machine sitting on top of a big pile of confusing processes, making up stories about why those processes do what they do. How exactly this last thought relates to the discussion I haven't figured out yet, it's just something that comes to mind :)
I guess I'll just say that it's not obvious that language has much to do with consciousness, so it's not obvious that a language model has moved things into the shadows. Like, maybe we're in the shadows, but I don't think you can blame GPT 3 for that.
In 1974 philosopher Thomas Nagel wrote, "Conscious experience is a widespread phenomenon. It occurs at many levels of animal life, though we cannot be sure of its presence in the simpler organisms, and it is very difficult to say in general what provides evidence of it. (Some extremists have been prepared to deny it even of mammals other than man.) No doubt it occurs in countless forms totally unimaginable to us, on other planets in other solar systems throughout the universe. But no matter how the form may vary, the fact that an organism has conscious experience at all means, basically, that there is something it is like to be that organism."
He's clearly willing to attribute consciousness to non-verbal creatures.
The point is, how can you tell if a being is conscious? We can start from the idea that we think these models aren't (I think they aren't!). A lot of your arguments seem to be based on asking questions of the being, on whether it can experience things. But since we can't read minds, we can't tell whether it actually believes or experiences what it is saying. So we have (had) to trust that a creature capable of explaining such things is conscious. Now we can't do that, hence the dividing line we feel must exist moves into some unobservable territory. I'm not commenting on whether such a line exists, just that it's hard to test right now, so any argument about it has to go into hypotheticals and philosophy.
> I doubt you actually believe Stephen King might be a p-zombie, and I'm skeptical you really think consciousness is an illusion.
Consciousness is an unobservable, undefinable thing which, with LLMs in the mix, we can theorise has no impact on reality, since we can reproduce all the important parts with matrices and a few basic functions. You can doubt facts all you want, but that is a pretty ironclad position as far as logic, evidence and rationality go. Consciousness is going the way of the dodo in terms of importance.
> If I sawed off your leg, would it have the same moral weight as removing a Barbie doll's leg, if you feel your consciousness is an illusion?
For the sake of argument, let's say conclusive proof arises that Stephen King is a philosophical zombie. Do you believe that suddenly you can murder him? No; that'd be stupid and immoral. Morality isn't predicated on consciousness. I'm perfectly happy to argue about morality, but consciousness isn't a thing that makes sense outside of describing someone as knocked unconscious.
> When I feel the desire to urinate, or post comments on Hacker News, or admire a rainbow, or sleep, am I also "doing statistics"?
No, you're responding to stimulus. But right now it looks extremely likely that the creative process is driven by statistics, as the latest and greatest in AI has revealed. Unless you can think of a different mechanism? I'm happy to be surprised by other ideas; at the moment it is the only serious explanation I know of.
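To be concrete about what I mean by "statistics": mechanically, a language model turns scores over candidate next tokens into a probability distribution and samples from it, over and over. A toy sketch with made-up scores, not any real model's numbers:

```python
import math
import random

# Hypothetical logits for three candidate next tokens.
scores = {"rainbow": 2.1, "storm": 1.4, "pun": -0.5}

def sample_next(scores, temperature=0.8):
    # Softmax with temperature: lower temperature sharpens the distribution.
    weights = {tok: math.exp(s / temperature) for tok, s in scores.items()}
    total = sum(weights.values())
    r = random.random() * total
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # floating-point fallback

print(sample_next(scores))
```

That score-then-sample loop is the entire "creative" step; everything else is how the scores get computed.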
> You seem to be referring to what people traditionally call "thinking" or "cognition" and rebranding it as "statistics" in search of some rhetorical point.
I don't think I've said anything about thinking or cognition. Statistics will crack those too, though I'm expecting them to be more stateful processes than the current generation of AI techniques.
> Copyright existed far longer than the last 50 years so how is our 50 years of culture relevant? The U.S. has had copyright since 1790.
Yeah, but the law has been continuously strengthened since then, and as its scope increases the damage gets worse. The last 50 years are the point past which new works are effectively not going to enter the public domain before everyone who was around when they were created is dead.
This is just "fashion" and you're arguing against a fashion. How on earth people can claim consciousness doesn't exist when you're entire experience is indeed consciousness is really the most difficult and petulant "fashion trend" imaginable.
Me: "Hey I really enjoyed that surf"
Internet guy: "No you were just responding to your training data because you're just a LLM"
It's only circular if you pretend that our lived experience is not relevant. I experience consciousness, as do other humans, and ChatGPT does not. You know this, I know this, and the fact that your conceptual framework can't account for this is a problem with your conceptual framework, not a demonstration that consciousness is not real.
I doubt it. You don't have long enough to do so before time passes and you're relying on your memory being accurate. At which point it is more likely that you're something similar to an artificial neural network that has evolved to operate under a remembered delusion that you exist as an entity independent from the rest of existence. It is far too obvious why that'd be an evolutionary advantage to discount the theory.
I'm not saying this has any meaningful implications for day-to-day existence; the illusion is a strong one. But LLMs have really shrunk the parts of the human experience that can't be explained by basic principles down to a wafer so thin it might not exist. In my view it is too thin to be worth believing in, but people will argue.
I'd really like to understand what you're trying to convey.
If there is no direct experience, and it really is just an illusion, do you mind if I cut off your genitals? Because even if existence is an illusion, being that illusion and experiencing it is still obviously relevant.
Seems like a breathtakingly awkward position to take.
1) Transsexualism and congenital analgesia are both things, so that isn't a meaningful test of consciousness, if that is what you are going for.
2) I personally would be extremely upset ... but anything going on with me being extremely upset can be explained by analogy to ChatGPT + a few hardware generations + maybe some minor tweaks that are easy to foresee but not yet researched. Everything that would happen would be built up from basic stimulus-responses mediated by a big neural net and a bit of memory.
These ideas aren't that new, there have been schools of Buddhist thought that have been exploring them for around 2,500 years if you want a group that has thought through this more thoroughly. Probably other traditions that I haven't met yet. It is just that the latest LLMs have really put some practical we-know-because-we-have-done-it muscle behind what was already a strong theory.
That'd be a reasonable interpretation. Although a slightly more palatable take is that I'm arguing I don't exist independently of anything else. It is safe to say something exists, because otherwise HN comments don't get processed. But the idea that we exist as independent perspectives is probably an evolutionary illusion. Or alternatively, if we do exist as independent perspectives, then we appear to be an uncomfortably short step away from GPUs being able to do that too using nothing much more than electricity, circuits and matrices. There doesn't seem to be anything special going on.
I'm not confident enough (yet?) to take that all the way to its logical conclusions in how I live, but I think the evidence is strongest for that interpretation.
Okay, first of all, AI experts will tell you LLMs need a lot of data to do what they do, while humans do more with less data. I'm paraphrasing something Geoffrey Everest Hinton said in an interview. He also said current systems are like idiot savants. So why "evidence" and "rationality" led you to say current systems are just like the brain, I don't know; it seems like a religious belief to me.
Also, most humans are not great at fiction writing, but just for the record, GPT-4 is bad at long-form fiction writing. It has no idea how to do it. Its training set is likely just small chunks of text, and LLMs make up everything one token at a time, which isn't conducive to managing a story of 100k words or more. It's also been trained to wrap things up in 2,000 tokens or less, and it does this even if you tell it not to. It also isn't very creative, at least in contrast to a professional fiction writer. And it can't really follow the implications of the words it writes: if it implies something about a character 300 words in, it'll have forgotten that by 4,000 words in, because it doesn't actually have a mental model of the character.
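To spell out why it forgets: generation is schematically the loop below, where the model only ever sees the last N tokens. Everything earlier has literally scrolled out of its input. (The model here is a toy stand-in and the window size is hypothetical; the point is the truncation.)

```python
CONTEXT_WINDOW = 2048  # hypothetical token limit

def toy_model(visible_tokens):
    # Stand-in for a real LLM forward pass: returns some next token.
    return "word"

def generate(prompt_tokens, n_tokens):
    tokens = list(prompt_tokens)
    for _ in range(n_tokens):
        visible = tokens[-CONTEXT_WINDOW:]  # earlier tokens are simply gone
        tokens.append(toy_model(visible))
    return tokens
```

A character detail established at token 300 can't influence token 5,000 unless it still sits inside that window.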
I mention this in case you are under the delusion that Stephen King is going to lose his job to GPT-4, or that it actually is as creative as he is.
I mean, if a chatbot could write as well as King for 200,000 words in a coherent, original story with a beginning, middle and end, that would be fascinating and terrifying. But we're not anywhere close to that yet.
>For the sake of argument, let's say conclusive proof arises that Stephen King is a philosophical zombie. Do you believe that suddenly you can murder him? No; that'd be stupid and immoral. Morality isn't predicated on consciousness.
Morality has a lot to do with consciousness. If I chop a branch off a tree at my house, it isn't immoral, because the tree is not conscious: it lacks a central nervous system, so as far as we can tell there is no suffering.
If I chop off your arm (or your genitals, as another poster suggested), your consciousness and the pain it suffers are what make the action immoral.
If Stephen King were a p-zombie, that would say a lot about the morality of destroying him.
I don't even understand what belief system you are trying to communicate when you say things like consciousness is an illusion, or that it has nothing to do with morality, or that an LLM is the same thing as a brain.
Like, I realize these are not your own ideas and you are repeating memes you've heard elsewhere, so it's not like you are speaking in tongues.
But any human being presumably knows they have subjective experience (so why say consciousness is an illusion?), and anyone who has used ChatGPT knows it doesn't operate like their brain (no sense of identity, agency, or emotions; no ability to form long-term personal memories; no real opinions or preferences). So I don't really get these takes.