
I'm pretty sure the artist is conscious and the AI isn't, which means there's something reductive about your claim that they are both merely "applying a black-box statistical model."

Even if it's true (which is debatable), it doesn't appear to be more informative than saying "they are both made of atoms."




Well, my personal opinion is that with the rise of neural nets we've basically proven that "consciousness" is an illusion and there is nothing there to find. But for the sake of argument, let's assume that there is something called consciousness, that artists have it, and that neural nets don't.

How are you going to demonstrate that consciousness is responsible for what the artists are doing? We have undeniable proof that the art could be created by a statistical model, and there is solid evidence that the brain creates art by simulating a mathematical neural network to achieve creative outcomes - the brain is full of relatively simple neurons linked together in a way that is logically similar to the way we're encoding information into these matrices.

So it is quite reasonable to believe that the artists are conscious but suspect that consciousness isn't involved in the process of creating a copyrighted work. How does that get dealt with?


I'll talk about fiction, since I've written some. If I write a ghost story it's because I enjoy ghost stories and want to take a crack at my own. While I don't know why ideas pop into my head, I do know that I pick the ones that are subjectively fun or to my taste. And if I do a clever job or a bad job I have a sense of it when I reread what I wrote.

These AIs aren't doing anything like that. They have no preference or intent. Their choices change depending on settings like temperature or prompts.
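
To make that concrete: "temperature" is just a knob on the sampling math, not a preference. A rough Python sketch, where the vocabulary and logit numbers are invented for illustration:

    import math, random

    # Hypothetical next-token logits from a model (made-up numbers).
    logits = {"ghost": 2.0, "goblin": 1.0, "grocer": 0.1}

    def sample(logits, temperature):
        # Scale logits by 1/temperature, then softmax into probabilities.
        scaled = {t: l / temperature for t, l in logits.items()}
        z = sum(math.exp(v) for v in scaled.values())
        probs = {t: math.exp(v) / z for t, v in scaled.items()}
        # Low temperature -> near-greedy; high temperature -> near-uniform.
        return random.choices(list(probs), weights=list(probs.values()))[0]

    print(sample(logits, 0.1))  # almost always "ghost"
    print(sample(logits, 2.0))  # "goblin" and "grocer" show up far more often

Turn the same knob and the "choices" change. No taste or intent required.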

Or let's try a different example. Stephen King wrote a novel (Misery) where he originally imagined the protagonist getting killed and eaten by the villain's pet pig. He struggled to come up with a different ending, since, as he said, nobody wants to read a whole novel just to see the main character die at the end. He thought about it and wrote a different ending.

Are you claiming Stephen King's conscious deliberation wasn't part of his writing process? I'd say it clearly was.

Also, I don't really understand the consciousness is an illusion argument. If none of us are conscious, why should I justify any copyright policy preference to you? That would be like justifying a copyright policy preference to a doorknob. But somehow I'm also a doorknob in this analogy???

Suppose Bob says he's conscious and Jim says he isn't, and we believe them. Doesn't that suggest we would have different policy preferences on how they are treated? It would appear murdering Jim wouldn't be particularly harmful but murdering Bob would. I don't have to show how Jim's and Bob's minds differ to prefer policies that benefit Bob over Jim.


> ...[w]hile I don't know why ideas pop into my head...

If you're trying to argue that you're doing something different from statistical sampling, not knowing how you're doing it isn't a very strong place to argue from. What you're experiencing is probably what it feels like for a sack of meat to take a statistical sample from an internal black-box model. Biologically that seems to be what is happening.

I have also done a fair amount of writing. The writing comes from a completely different part of my mind than the part that experiences the world. I see no reason to believe it is linked to consciousness, even allowing that consciousness exists, which is questionable in itself.

It is an unreasonable position to say that you don't know the process, yet insist it must be different from a known process that you also have no experience using.

> Are you claiming Stephen King's conscious deliberation wasn't part of his writing process? I'd say it clearly was.

Unless you're claiming to have a psychic connection to Stephen King's consciousness, this is a remarkably weak claim. You have no idea how he was writing. Maybe he's even a philosophical zombie. Thanks to the rise of LLMs we know that philosophical zombies can write well.

And "clearly" is not so - I could spit out a lost Stephen King work in this comment that, depending on how good ChatGPT is these days, would be passable. It isn't obvious that it is the work of a conscious mind. It in fact would obviously be from a statistical model.

> If none of us are conscious, why should I justify any copyright policy preference to you?

I've been against copyright for more than a decade now. You tell me why it is justified even if consciousness is a factor. The edifice of copyright is culturally and economically destructive and also has been artistically devastating (there haven't been anywhere near as many great works in the last 50 years as there should have been in a culturally dynamic society).


I'm referring to how Stephen King discusses his writing process in On Writing. I doubt you actually believe Stephen King might be a p-zombie, and I'm skeptical you really think consciousness is an illusion. I think if I chained you to a bed and sawed off your leg (like what happens to the protagonist in Misery) you would insist you were a conscious actor who would prefer not to suffer. I don't even know what "consciousness is an illusion" is supposed to mean.

If I sawed off your leg, would it carry the same moral weight as removing the leg of a Barbie doll, given that you feel your consciousness is an illusion?

When I write a story my brain might be doing something you could refer to as a black-box calculation if you squint a little, but how is it "statistics"? When I feel the desire to urinate, or post comments on Hacker News, or admire a rainbow, or sleep, am I also "doing statistics"?

You seem to be referring to what people traditionally call "thinking" or "cognition" and rebranding it as "statistics" in search of some rhetorical point.

My point is that human beings have things called "personalities" and "preferences" that inform their decision making, including what to write. In what sense is that "statistics"?

The idea that the human subconscious is not consciously accessible is not a new one. Freud had a few things to say about that. I don't think it tells us much about AI. I do think my subconscious ideas are informed by my conscious preferences. If I hate puns I'm not going to imagine story ideas involving puns, for example.

Most authors want copyright to exist because they'd prefer that book publishers, bookstore retailers, and the like pay them royalties instead of selling the books they wrote without paying them. It's pretty simple conceptually, at least with traditional books.

Copyright has existed for far longer than the last 50 years, so how is the last 50 years of culture relevant? The U.S. has had copyright since 1790.


Have you ever used or spoken to GPT-2 or GPT-3? Not ChatGPT - the one before they did any RLHF to train it to respond a certain way. If you asked it whether it would like to be hurt, it would beg you not to hurt it. If you asked it to move its leg out of the way, it would apologize and claim it obliged. It would claim to be conscious, to be aware.

Of course, these statements do not come from a place of considered action: everybody knows the machine is not conscious in the same way a human is. But the point is that an unfeeling machine even being able to make such claims forces us to move the "consciousness" divider further and further back into the shadows, until it's just some nebulous vibe people insist must be somewhere. It's possible there is a clear and unambiguous way to define humans and intelligent animals as conscious, but nobody has come up with a workable definition yet that lets us neatly divide them.

Another slight thing that gives me pause: you know how great our brains are at confabulating, right? Have you ever done something, then had someone ask you why you did it? You generally tell them a story about how you thought about doing it, weighed the pros and cons, etc., that isn't actually true when you think deep down. We like to think we are a single being that thinks carefully about everything it does. Instead we're more like an explaining machine sitting on top of a big pile of confusing processes, making up stories about why the processes do what they do. How exactly this last thought relates to the discussion I haven't figured out yet; it's just something that comes to mind :)


I guess I'll just say that it's not obvious that language has much to do with consciousness, so it's not obvious that a language model has moved things into the shadows. Like, maybe we're in the shadows, but I don't think you can blame GPT-3 for that.

In 1974 philosopher Thomas Nagel wrote, "Conscious experience is a widespread phenomenon. It occurs at many levels of animal life, though we cannot be sure of its presence in the simpler organisms, and it is very difficult to say in general what provides evidence of it. (Some extremists have been prepared to deny it even of mammals other than man.) No doubt it occurs in countless forms totally unimaginable to us, on other planets in other solar systems throughout the universe. But no matter how the form may vary, the fact that an organism has conscious experience at all means, basically, that there is something it is like to be that organism."

He's clearly willing to attribute consciousness to non-verbal creatures.


The point is, how can you tell whether a being is conscious? We can start with the idea that we think these models aren't (I think they aren't!). A lot of your arguments seem to be based on asking questions of the being, on whether it can experience things. But seeing as we can't read minds, we can't tell whether it actually believes or experiences what it is saying. So we have (had) to trust that a creature capable of explaining such things is conscious. Now we can't do that, hence the dividing line that we feel must exist now moves into some unobservable territory. I'm not commenting on whether such a line exists, just that it's kind of hard to test right now, so any argument about it has to go into hypotheticals and philosophy.


> I doubt you actually believe Stephen King might be a p zombie and I'm skeptical you really think consciousness is an illusion.

Consciousness is an unobservable, undefinable thing which, with LLMs in the mix, we can theorise has no impact on reality, since we can reproduce all the important parts with matrices and a few basic functions. You can doubt facts all you want, but that is a pretty ironclad position as far as logic, evidence and rationality go. Consciousness is going the way of the dodo in terms of importance.
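
By "matrices and a few basic functions" I mean computations of roughly this shape - a minimal sketch, where the weights and sizes are random placeholders rather than a trained model:

    import numpy as np

    rng = np.random.default_rng(0)
    # Placeholder weights; training would learn real values for these.
    W1 = rng.normal(size=(16, 8))
    W2 = rng.normal(size=(4, 16))

    x = rng.normal(size=8)       # an input representation
    h = np.maximum(0, W1 @ x)    # matrix multiply + a basic function (ReLU)
    y = W2 @ h                   # stack layers like this to get a network
    print(y)

Everything the latest models do is built by stacking and scaling blocks like that.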

> If I sawed off your leg would it have the moral consideration of removing the leg of a barbie doll's leg if you feel your consciousness is an illusion?

For the sake of argument, let's say conclusive proof arises that Stephen King is a philosophical zombie. Do you believe that suddenly you can murder him? No; that'd be stupid and immoral. Morality isn't predicated on consciousness. I'm perfectly happy to argue about morality, but consciousness isn't a thing that makes sense outside of talking about someone being knocked unconscious as a descriptive state.

> When I feel the desire to urinate, or post comments on hacker news, or admire a rainbow, or sleep am I also "doing statistics?"

No, you're responding to stimulus. But right now it looks extremely likely that the creative process is driven by statistics, as revealed by the latest and greatest in AI. Unless you can think of a different mechanism - I'm happy to be surprised by other ideas - at the moment it is the only serious explanation I know of.

> You seem to be referring to what people traditionally call "thinking" or "cognition" and rebranding it as "statistics" in search of some rhetorical point.

I don't think I've said anything about thinking or cognition. Statistics will crack those too, though I'm expecting them to be more stateful processes than the current generation of AI techniques.

> Copyright existed far longer than the last 50 years so how is our 50 years of culture relevant? The U.S. has had copyright since 1790.

Yeah, but the law has been continuously strengthened since then, and as its scope increases the damage gets worse. The last 50 years are where new works are effectively not going to enter the public domain before everyone who was around when they were created is dead.


> Consciousness is an unobservable thing

No it isn't. I observe consciousness in myself, right?


Of course you do.

This is just "fashion" and you're arguing against a fashion. How on earth people can claim consciousness doesn't exist when you're entire experience is indeed consciousness is really the most difficult and petulant "fashion trend" imaginable.

Me: "Hey I really enjoyed that surf" Internet guy: "No you were just responding to your training data because you're just a LLM"

It's ridiculous.


That is a circular argument. An AI might also be able to observe it in itself; in fact, I would argue that seems just as likely.


It's only circular if you pretend that our lived experience is not relevant. I experience consciousness, as do other humans, and ChatGPT does not. You know this, I know this, and the fact that your conceptual framework can't account for this is a problem with your conceptual framework, not a demonstration that consciousness is not real.


I doubt it. You don't have long enough to do so before the moment passes and you're relying on your memory being accurate. At that point it is more likely that you're something similar to an artificial neural network that has evolved to operate under a remembered delusion that you exist as an entity independent of the rest of existence. It is far too obvious why that'd be an evolutionary advantage to discount the theory.

I'm not saying this has any meaningful implications for day-to-day existence; the illusion is a strong one. But LLMs have really shrunk the parts of the human experience that can't be explained by basic principles down to a wafer so thin it might not exist. In my view it is too thin to be worth believing in, but people will argue.


I'd really like to understand what you're trying to convey.

If there is no direct experience, and it really is just an illusion, do you mind if I cut off your genitals? Because even if existence is an illusion, being that illusion and experiencing it is still obviously relevant.

Seems like a breathtakingly awkward position to take.


Well I suppose two points:

1) Transsexualism and congenital analgesia are both things, so that isn't a meaningful test of consciousness, if that is what you are going for.

2) I personally would be extremely upset ... but anything going on with me being extremely upset can be explained by analogy to ChatGPT plus a few hardware generations plus maybe some minor tweaks that are easy to foresee but not yet researched. Everything that would happen would be built up from basic stimulus-responses mediated by a big neural net and a bit of memory.

These ideas aren't that new, there have been schools of Buddhist thought that have been exploring them for around 2,500 years if you want a group that has thought through this more thoroughly. Probably other traditions that I haven't met yet. It is just that the latest LLMs have really put some practical we-know-because-we-have-done-it muscle behind what was already a strong theory.


Are you arguing that you don't exist? You seem to be.


That'd be a reasonable interpretation. Although a slightly more palatable take is that I'm arguing I don't exist independently of anything else. It is safe to say something exists, because otherwise HN comments wouldn't get processed. But the idea that we exist as independent perspectives is probably an evolutionary illusion. Or alternatively, if we do exist as independent perspectives, then we appear to be an uncomfortably short step away from GPUs being able to do that too, using nothing much more than electricity, circuits and matrices. There doesn't seem to be anything special going on.

I'm not confident enough (yet?) to take that all the way to its logical conclusions in how I live, but I think the evidence is strongest for that interpretation.


Okay, first of all, AI experts will tell you LLMs need a lot of data to do what they do, while humans do more with less data. I'm paraphrasing something Geoffrey Everest Hinton said in an interview. He also said current systems are like idiot savants. So how "evidence" and "rationality" led you to say current systems are just like the brain, I don't know; it seems like a religious belief to me.

Also, most humans are not great at fiction writing, but just for the record GPT-4 is bad at long-form fiction writing. It has no idea how to do it. Its training set is likely just small chunks of text, and LLMs make up everything one token at a time, which isn't conducive to managing a story of 100k words or more. It has also been trained to wrap things up in 2,000 tokens or less, and it does this even if you tell it not to. It also isn't very creative, at least in contrast to a professional fiction writer. And it can't really follow the implications of the words it writes: if it implies something about a character 300 words in, it'll have forgotten that by 4,000 words in, because it doesn't actually have a mental model of the character.
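
For what it's worth, "one token at a time" means a loop roughly like the sketch below; ToyModel and sample_next are stand-ins for illustration, not a real API. The key detail is the fixed context window: anything that falls off the front is simply gone, which is part of why a detail from 300 words in can vanish by 4,000 words in.

    import random

    MAX_CONTEXT = 4096  # tokens the model can "see" at once (varies by model)

    class ToyModel:
        # Stand-in for a real LLM: picks a random token from a tiny vocabulary.
        def sample_next(self, window):
            return random.choice(["the", "pig", "ate", "him", "."])

    def generate(model, prompt_tokens, n_tokens):
        tokens = list(prompt_tokens)
        for _ in range(n_tokens):
            window = tokens[-MAX_CONTEXT:]          # older text falls out of view
            next_token = model.sample_next(window)  # one token per step
            tokens.append(next_token)               # new token becomes context
        return tokens

    print(" ".join(generate(ToyModel(), ["Annie", "smiled", "."], 10)))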

I mention this in case you are under the delusion that Stephen King is going to lose his job to GPT-4, or that it is actually as creative as he is.

I mean, if a chatbot could write as well as King for 200,000 words in a coherent, original story with a beginning, middle and end, that would be fascinating and terrifying. But we're not anywhere close to there yet.

>For sake of argument, lets say conclusive proof arises that Stephan King is a philosophical zombie. Do you believe that suddenly you can murder him? No; that'd be stupid and immoral. Morality isn't predicated on consciousness.

Morality has a lot to do with consciousness. If I chop a branch off a tree at my house, it isn't immoral, because the tree is not conscious - it lacks a central nervous system, so as far as we can tell there is no suffering.

If I chop off your arm (or your genitals, as another poster suggested), your consciousness and the pain it suffers are what make the action immoral.

If Stephen King were a p-zombie, that would say a lot about the morality of destroying him.

I don't even understand what belief system you are trying to communicate when you say things like consciousness is an illusion, or has nothing to do with morality, or that an LLM is the same thing as a brain.

Like, I realize these are not your own ideas and you are repeating memes you've heard elsewhere, so it's not like you are speaking in tongues.

But any human being presumably knows they have subjective experience (so why say consciousness is an illusion?), and anyone who has used ChatGPT knows it doesn't operate like their brain (no sense of identity, agency, or emotions; no ability to form long-term personal memories; no real opinions or preferences). So I don't really get these takes.


> Well, my personal opinion is with the rise of neural nets we've basically proven that "consciousness" is an illusion and there is nothing there to find.

I keep seeing this claim being made but I never understand what people mean by it. Do you mean that the colors we see, the sounds we hear, the tastes, smells, feels, emotions, dreams, inner dialog are all illusions? Isn't an illusion an experience? You're saying that experience itself is an illusion and there is nothing to experience.

I can't make sense of that. At any rate, I see no reason to suppose LLMs have experiences. They don't have bodies, so what would they be experiencing? When you say an LLM is identical to a person, I can't make good sense of that either. There's a thousand things people do that language models don't. Just the simple fact that I have to eat on a regular basis to survive is meaningful in a way that it can't be for a language model.

If an LLM generates text about preparing a certain meal because it's hungry, I know that's not true in a way it can be true of a human. So right away, there's reasons we say things that go beyond the statistical black box reasoning of an LLM. They don't have any bodies to attend to.


I agree. It almost seems like some type of coping mechanism for the fact that, despite all the ability to get computers to generate art and coherent sentences, we're still completely none the wiser about understanding objective reality and consciousness, or even about knowing how to really enjoy the gift of having the experience of consciousness. So instead people create these types of "cop-outs."

Mechanical drawing machines have existed forever; I loved, loved, loved them when I was a kid, and I used to hang the generated images on my wall. Never once did I look at those machines, which could draw some pretty freaking awesome abstract art, and think to myself, "well that's it, the veil of consciousness is so thin now, it's all an illusion," or "the machine is conscious."

As impressive as some of these models are at generating art, they are still drawing machines. They display the same amount of consciousness as a mechanical drawing machine.

I saw someone on Twitter ask ChatGPT-4 to draw a "normal" image. You know what it drew? A picture of a suburban neighborhood. Why might a conscious drawing machine do that?


The "it's an illusion" part is a piece of rhetorically toxic language that usually comes up in these discussions, as its a bit provocative. But it's equally anthropocentric to say that someone without a body can't have a conscious experience. When you're dreaming your body is shut off - but you can still be conscious (lucid dreaming or not). You can even have conscious experiences without most of your brain - when you hit your little toe on something, you have a few seconds of terror and pain that surely doesn't require most of your brain areas to experience. In fact, you can argue you won't even need your brain. Is that not a conscious experience? (I'm not really trying to argue against you, I just find this boundary interesting between what you'd call conscious experience and not)


Reading through this entire exchange, it would seem your argument inevitably falls back on consciousness, which is both arbitrary (why does it matter in the original context of the distinction in learning, particularly from copyrighted works?) and ill-defined (what even is consciousness? How do we determine whether another being is conscious?).



