> ignoring the fact that it is the program that is responsible for the behavior, not the human
This is called the systems reply to the Chinese Room Argument. [0]
Searle's response [0] goes something like this:
You're missing the point. Suppose the man in the room memorised the program, so that he could answer the questions himself with no need of external aids. Now, the man and the system are the same, and yet by your account, the man doesn't speak Chinese, but the system does.
Personally I find the whole argument to bear no clear connection to the questions of consciousness. The question of whether a system consciously understands a problem is muddled, as it isn't clear a priori that problem-solving competence has any connection to consciousness. To use Dennett's term, a system can be competent without comprehension. A pocket calculator is quite unable to explain what it's doing, but is able to perform superhuman number-crunching. The argument may succeed in tying the reader in a knot, but I figure that's just because most people haven't thought much about what conscious comprehension really means.
Another problem with the Chinese Room Argument is that, if it really holds up, it ought to hold up just as well against the human species itself. If you're going to attack the idea that consciousness can arise from non-conscious components, where does that leave us?
This is not directed at you personally but I find the whole conundrum tiresome and frustrating. Systems reply, Searle's reply, virtual mind reply, and then Searle replies something like "but a virtual mind can not be really conscious". At the end we are just left with the question "do you agree or disagree with functionalism and computationalism" and all the arguments on both sides turn out to be just empty statements of agreement or disagreement.
We agree. The Chinese Room Argument offers little insight into consciousness. All it really does is to take someone with confused thoughts on how 'understanding' works (specifically the way intellectual competence interacts with consciousness), and to tie them in a knot.
It fails to demonstrate anything interesting about consciousness. It certainly doesn't demonstrate that computer systems can never be conscious in the way we can. Nothing in the argument applies any more or less to neurons than to transistors.
If the man acts as a virtual machine that faithfully runs the program, then it's still the program that understands Chinese; it doesn't matter what machine it runs on. In this case there are two minds: the man's mind and the program's mind.
'Mind' strikes me as a loaded term: it's being used to mean a locus of computation or a computation stream, but it carries the implication of consciousness. If you wish to make the case that computation and consciousness are in some sense intertwined, this needs to be done explicitly. The Chinese Room Argument does not do so. (Incidentally I'm of the opinion that they are indeed intertwined.)
If a person is manually executing an algorithm (the memorised Chinese Room algorithm, or naive matrix multiplication, or whatever), then we could distinguish between the algorithm and the person executing it. In that sense there are two computation streams at work. That doesn't show that a second consciousness has been brought into being, though. I wouldn't conclude that there are two 'minds'.
Your comparison to virtual machines is a good one; it's one that Dennett uses.
> If the executed algorithm is consciousness, then a second consciousness has been brought into being, same as for first consciousness.
A lot hinges on 'if the executed algorithm is consciousness'.
I can see the sense in that argument though, in that the 'first consciousness' is acting as the computational substrate for the 'second consciousness', the way the physical action of neurons acts as our computational substrate. I'm not convinced this maps to using a card system to have a conversation in a foreign language. It doesn't seem self-evident that doing so should be considered enough to demonstrate consciousness; it's only enough to demonstrate that the algorithm is effective in having a conversation.
It's not self-evidently the equivalent of an accurate real-time computer simulation of a specific person, for instance. I think you can make a strong case that such a system would amount to a conscious person and should be treated as such. (It would follow from this that shutting down a holodeck simulation of someone is morally fraught.) The only alternative would be to morally privilege neuron-based architectures over transistor-based architectures, which seems like an uphill philosophical battle.
I don't put much stock in the question of whether the card-based algorithm 'understands' the conversation it is having. Is it competent? Yes. Is it conscious? I'm not convinced it is. Asking whether it understands is to conflate these two questions.
Competence here is indistinguishability from a human in conversation. The most straightforward way to implement it is to make the algorithm have the same structure and work in the same way as the human mind. We can say that it understands and is conscious because it's not different from the human mind in structure and operation. A good thing about artificial algorithms is that they are transparent: we can show that they aren't only effective in conversation, but have everything there is to be had behind that conversation. The understanding is due to the latter. Simply put, the algorithm isn't GPT, but AGI.
>It's not self-evidently the equivalent of an accurate real-time computer simulation of a specific person, for instance. I think you can make a strong case that such a system would amount to a conscious person and should be treated as such.
This was touched on recently in art. If you're interested, it's "Sword Art Online: Alicization". The shutdown problem is applicable to AI too. What's new is that the life of virtual people is shown at length, and questions are discussed as to what identity those people should have: what worldview, religion, philosophy, pride, dignity, and justice.
> Competence here is indistinguishability from a human in conversation. The most straightforward way to implement it is to make the algorithm have the same structure and work in the same way as the human mind
I'm not sure that's the case. Simple chat programs can do a pretty good job simulating a human interlocutor. For a 'full' simulation, which we need by definition, we'd need something much more sophisticated (able to reason about all sorts of abstract and concrete things), but conceivably the solution might be very different from brain-simulation.
The thought experiment can easily be adjusted to close the door on my objection here: rather than a man in a room with an enormous card index, we have a pretty accurate real-time computer simulation of some specific person. That way the computational problem is defined to be equivalent to something we consider conscious. Laboriously computing that simulation by hand (presumably not in real time but instead over millennia) would change the substrate, but not the computational problem. If we're ok with there being an outer host consciousness and an inner hosted consciousness, the thought experiment poses no problem.
Of course, this isn't the position I started at, but it makes some sense that the real meaning of the thought experiment changes as we adjust the simulated process. If our man were looking up the best moves to play tic-tac-toe, it would be plainly obvious that we're looking at competence without comprehension. If he's instead simulating the full workings of a human brain, the situation is different. The foreign language problem is somewhere between these extremes.
> A good thing about artificial algorithms is that they are transparent: we can show that they aren't only effective in conversation, but have everything there is to be had behind that conversation. The understanding is due to the latter. Simply put, the algorithm isn't GPT, but AGI.
I'm not sure I quite follow you here. I agree that the depth and detail of the simulation is an important factor.
> What's new is that the life of virtual people is shown at length, and questions are discussed as to what identity those people should have: what worldview, religion, philosophy, pride, dignity, and justice.
In contrast to Ex Machina, which had the computer as a sociopathic villain with only a surface-level feigning of normal human emotion and motivation.
While we're vaguely on the topic, homomorphic crypto also puts a spin on things. We know it's possible for a host computer to be entirely 'unaware' of what's going on in the VM that it's running, in a cryptographic sense. Related to this, I've long thought that there's a sticky 'interpretation problem' with consciousness (perhaps philosophers have another term for it) that people rarely talk about.
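To make the 'host computes blindly' point concrete, here's a minimal toy sketch (entirely my own, not from the linked page) of an additively homomorphic, Paillier-style scheme in Python. It only supports addition, not the fully homomorphic encryption a whole brain simulator would need, and the primes are laughably small, but it shows the host combining ciphertexts it has no way to interpret:

    # A toy, insecure sketch of additively homomorphic (Paillier-style) encryption.
    # The primes are tiny and only addition is supported; it's just enough to show
    # the host operating on ciphertexts it cannot read.
    from math import gcd
    from random import randrange

    # Key generation (a real deployment would use primes of thousands of bits).
    p, q = 61, 53
    n = p * q
    n2 = n * n
    lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
    g = n + 1
    mu = pow(lam, -1, n)                           # modular inverse of lam mod n

    def encrypt(m):
        r = randrange(1, n)
        while gcd(r, n) != 1:
            r = randrange(1, n)
        return (pow(g, m, n2) * pow(r, n, n2)) % n2

    def decrypt(c):
        ell = (pow(c, lam, n2) - 1) // n
        return (ell * mu) % n

    # The "host" multiplies two ciphertexts, which adds the hidden plaintexts,
    # without ever being able to read either of them.
    a, b = 20, 22
    c = (encrypt(a) * encrypt(b)) % n2
    assert decrypt(c) == a + b   # only the key holder can verify this

The host's arithmetic is perfectly meaningful, but only relative to a key it never sees.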
If you run a brain simulator inside a homomorphically encrypted system, such that no one else will ever know what you're running in there, does that impact whether we treat it as conscious? Part of it is that the simulated brain isn't hooked up to any real-world sensors or actuators, but that's just like any old brain in a jar. Philosophically pedestrian. This goes far beyond that. Someone could inspect the physical computer, and they'd have no idea what was really running on it. They'd just see a pseudorandom stream of states. If there's consciousness inside the VM, it's only there with respect to the homomorphic crypto key!
If we allow that to count as consciousness, we've opened the door to all sorts of computations counting as consciousness, if only we knew the key. We can take this further: we can always invent a correspondence such that any sequence of states maps to a computation stream that we would identify as yielding consciousness. This looks like some kind of absurd endgame of panpsychism, but here we are.
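To make that 'invent a correspondence' move concrete, here's a toy sketch (again my own construction, with arbitrary names): take any state sequence at all and build the mapping after the fact.

    import random

    # A computation trace we would care about: successive states of a simple program.
    target_trace = [("step", i, i * i) for i in range(10)]

    # An arbitrary system's state history: pseudorandom numbers standing in for,
    # say, thermal noise in a wall (sampled without repeats to keep the toy simple).
    wall_states = random.sample(range(2**32), k=10)

    # The invented correspondence: a lookup table pairing each wall state with the
    # computation state we want it to "mean".
    correspondence = dict(zip(wall_states, target_trace))

    # Under this mapping, the wall's history "implements" the computation...
    assert [correspondence[s] for s in wall_states] == target_trace

    # ...but every bit of structure lives in the lookup table we just invented,
    # none of it in the wall states themselves.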
Is there an alternative? I'm increasingly of the opinion that it's a non-starter to try to deny that transistor-based computers could ever be the substrate of consciousness. Short of that, where else is there to go?
>If there's consciousness inside the VM, it's only there with respect to the homomorphic crypto key!
But the entropy of the encryption key is the degree to which the consciousness is "hidden". This entropy is still massively lower than the entropy of the matter that makes up a brain. If we have some way to demonstrate the encrypted system is computing the mind program, say, by interacting with its input/output, then we can in theory demonstrate the system is conscious. The fact that the encrypted system's operation maps to a mind program with entropy equal to the key, rather than equal to the entropy of the bits in the mind program, entails that the encrypted system intrinsically encodes the mind program. If the mapping were equivalent to just mapping the states of an arbitrary system to the mind program, the entropy would be equal to the much greater number of bits in the program. Comparing the entropies is the key differentiator.
> the entropy of the encryption key is the degree to which the consciousness is "hidden"
Seems fair.
> But the entropy of the encryption key is the degree to which the consciousness is "hidden". This entropy is still massively lower than the entropy of the matter that makes up a brain.
Sure, there are far more possible states for a brain than possible 4096-bit keys (for example).
> If we have some way to demonstrate the encrypted system is computing the mind program, say, by interacting with its input/output, then we can in theory demonstrate the system is conscious.
Right, although if we further adjust the thought experiment we run into a sort of 'systems argument' problem.
Suppose the homomorphically encrypted system uses an encrypted channel to communicate with actuators, such that a decrypter module is needed to connect it up. (We need not encrypt the channels from the sensors.) In that case, the homomorphically encrypted brain simulator plus the decrypter module adds up to what we call a conscious system. On its own, the homomorphically encrypted brain simulator does nothing of interest, or at least, appears to do nothing of interest.
> The fact that the encrypted system's operation maps to a mind program with entropy equal to the key, rather than equal to the entropy of the bits in the mind program, entails that the encrypted system intrinsically encodes the mind program.
That's what we'd typically expect of a cryptographic system, but now we have a sliding scale. What if the key were so large that it outweighed the state-space of the machine itself? Do we conclude that the length of the key determines how conscious the system is?
We could say that the contribution of homomorphic crypto is just that it permits us to use vastly smaller (pardon the oxymoron) keys to scramble states and their progressions.
What if we define another cryptographic scheme such that the correspondence formerly represented by a very long key, is instead represented by just a few bits? (Finally a way to tie philosophy of mind to Kolmogorov complexity!) Or perhaps I'm misunderstanding the point about entropy here?
The point of introducing entropy was to give us a principled way to identify which systems intrinsically capture some process. This is to avoid pancomputationalism, the claim that every system computes and that we merely project particular meanings onto computational systems. If some operation is in a state space of 1x10^1000 bits and using some external system we can perform the operation in 100 steps (e.g. we did 100 guess-and-check steps), we know that system intrinsically captured approximately 1x10^1000 bits of the operation. If pancomputationalism were true, all systems would be equally computational in nature and so no system would be better than any other at supporting the performance of any operation. But this is obviously false.
But the entropy considerations are just a practical way for us to identify computational systems with certainty. They aren't an identity criterion. Consider your homomorphically encrypted program where the key is longer than the state space for that program. Presumably we cannot tell this encrypted program apart from some random set of operations that computes nothing (I doubt this is true in practice, but let's go with it). How can we say this program is in fact computing something? The assumption that the program is homomorphically encrypted also says there is a magic string of bits that unlocks its activity. Further, this magic string is independent of (i.e. has zero mutual information with) the program in question. Essentially the key is random and so it cannot provide any information about the program itself. So when the key is combined with the encryption scheme to produce the decrypted program, we know that the program was embedded in the encrypted system the whole time, not added by the application of the key.
The key point is that information doesn't just pop into existence: information requires upfront payment in (computational) entropy, a computation-like process that does something like guess-and-check over the state space of the information. If ever you have a string of information, you either got it from somewhere else or you did guess-and-check over the state space. In the case of the homomorphic encryption, we know the key is independent of the program and so the key does not secretly contain the program. Thus the program must already exist in the behavior of the encrypted system.
We don't know the key but we know it exists by assumption. "Exists" here just means the upfront computational cost has already been paid for the relation between the hidden program, the encrypted system and the decryption key. Indeed, we can in theory recover the encrypted program with comparatively zero computational work by using the key. This is in contrast to recovering the mind program from, say, the dynamics of my wall. No upfront computational cost has been paid and so I have to search the entire state space to find a mapping between the wall and the mind program. Thus the wall provides no information about the mind program, i.e. it is not computing the mind program.
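If it helps, here's a toy numeric sketch of the asymmetry I have in mind. It's my own construction and uses repeating-key XOR rather than real homomorphic crypto, purely to make the counting concrete; the key is deliberately much shorter than the hidden content, since the point is that the mapping's entropy is that of the key, not of the program.

    from itertools import cycle

    secret = b"a stand-in for the mind program"   # 31 bytes of hidden content
    key = bytes([7, 42, 99, 3])                   # a 4-byte key, fixed up front

    def xor_with(data, key):
        # Repeating-key XOR, standing in for "encryption under a short key".
        return bytes(d ^ k for d, k in zip(data, cycle(key)))

    ciphertext = xor_with(secret, key)

    # With the key in hand, recovery costs essentially nothing.
    assert xor_with(ciphertext, key) == secret

    # Without it, the worst case is a search over all 256**4 keys: large, but tiny
    # next to the 256**31 possible 31-byte contents. The "hiddenness" is only
    # key-sized; most of the secret's structure is pinned down by the ciphertext.
    # A wall, by contrast, had no key fixed up front, so a mapping from it to the
    # mind program must be built from scratch, at a cost comparable to specifying
    # the program itself.
    print(f"key space: {256 ** 4:.3e}   content space: {256 ** 31:.3e}")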
[0] https://en.wikipedia.org/wiki/Chinese_room#Systems_and_virtu...