This is a common attempt to rationalize away the hard problem of consciousness that I find to be an almost textbook example of "begging the question". What is a feeling? Well, it's a type of experience. Does a glass of water have experiences? Probably not, because experiences are phenomena that are relative to conscious entities. Saying that X has experiences or that X is conscious are, in my view, equivalent claims.
Saying that "consciousness is just a feeling" is equivalent to saying that "consciousness is just consciousness". It may sound like and explanation, but it is just a tautology.
I was fascinated by a paper that argues the compression conjecture: the idea that consciousness is indistinguishable from compression. I just finished putting down my thoughts on the matter this morning [1]. This paper addresses "the hard problem of consciousness" directly:
> The answer to this question lies in the realization that the hypothesis of Amy’s subjective experience is a hypothesis which Amy herself holds, an understanding which is manifested through the compression she carries out. Understanding the hypothesis that one is feeling something and the actual experience of feeling are the same thing. Amy’s feeling therefore exists relative to the assumption of her own existence, an assumption which the system itself is capable of making.
> Chairs do not carry out compression. They do not source sensory information from multiple locations and process it in parallel. They do not store memories to enhance future compression. And they do not develop a theory of self by compressing their own actions. Therefore they are not conscious.
> Imagine holding a flame to the leg of a chair. The flame leaves a black mark, therefore the chair has certainly been affected by the flame. But intuitively, it does not seem reasonable to claim that the chair has experienced the flame. This difference between effect and experience is directly related to compression: specifically, the chair fails to experience the flame because the information it provides is not compressed in any way. If a chair’s leg is burned it has no effect on any of the other legs. No information is communicated, and consequently there is no inter-leg data compression to bind the experiences of the chair together. Furthermore, the chair stores no memory (other than a black mark). The burning event has no effect on how subsequent events are processed, meaning that the experiences of the chair are not bound together across time. Finally, because the chair does not compress its own response to the flame, it has no awareness of any subjective experience
Similarly, gzip does not develop a theory of self. It just compresses data based on some algorithm. It doesn't learn, have a sense of self, or internalize its actions.
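To make the gzip point concrete, here is a minimal Python sketch showing that gzip is memoryless: each call compresses its input independently, and an earlier call leaves no trace that could inform a later one. (Pinning `mtime=0` is my addition for reproducibility; by default gzip stamps the current time into the header.)

```python
import gzip

data = b"the flame leaves a black mark " * 100

# gzip keeps no state between calls: compressing the same bytes twice
# yields byte-identical output (mtime=0 pins the header timestamp,
# which would otherwise vary with the clock).
first = gzip.compress(data, mtime=0)
second = gzip.compress(data, mtime=0)
assert first == second

# It exploits repetition *within* a single input...
assert len(first) < len(data)

# ...but no memory of a prior input survives to enhance future
# compression -- which is exactly the capacity the paper says
# chairs (and gzip) lack.
```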
> This difference between effect and experience is directly related to compression: specifically, the chair fails to experience the flame because the information it provides is not compressed in any way.
This is just begging the question: this (and everything else quoted in the above post) takes as axiomatic the proposition that consciousness is compression.
I would say the hard problem of consciousness is the problem of relativity. How do you see what happens in a different reference frame? Well, you can speculate, model, imagine it, but you can't see it. To see it you must be in that reference frame and look right from there. When you're in the place of a mind, you see what happens in it; otherwise you don't, and can only speculate, model, and imagine.
The "hard problem" is only a problem at all because it is ill defined. I don't get why many people think the hard problem is a problem. We think therefore we are.
What would a solution to the so called (I argue non-existent) hard problem look like?
Why is the hard problem a problem?
What new, useful information would "solving" it deliver?
Ah, since we now know why humans share a common quale for red, we can do X?
I don't get it. Attempting to rationalise away an irrational premise, i.e. that a hard problem exists, is always going to fail to convince believers. What is the point in attempting to define something that is ill-defined to start with?
Here is my position. Consciousness is a word. What it means varies over time and space and according to who uses it. The mental model of whoever uses it may or may not correspond to objective reality in a broad or narrow sense. Magic is a word too. Just because we can explain N magic tricks, does that mean that there is some "essence" of magic that is left unexplainable, i.e. the hard problem of magic? Just because we can come up with words for things does not mean that we actually know what we are talking about. I think "consciousness" as a word is not concretely defined, therefore I am sceptical of a single unified natural phenomenon underlying the word. Giving a mental model a name does not necessarily give it reality.
I have a first person experience of the world. You probably do too. Everything that I know, including all of the science I have learned, all of the books I have read, all the music I have listened to, all of the people I have loved, and even this post that you wrote and my reply to it, exists for me purely within my first person experience of reality.
My subjective experience of reality is the only thing I have direct knowledge of. Everything else I must doubt. If you think about it, I am sure you will come to the same conclusion.
My paycheck says that I am a scientific researcher, and I have a deep appreciation for science. It provides me with models that go all the way from the subatomic world to societies, that expose regularities in reality, that allow me to understand how many things fit together, and to make predictions. But these are the things that are "just words" or, better yet, symbols. My conscious experience is the thing that I DO KNOW. Maybe it's all a dream. Maybe I'm a brain in a jar. Maybe I was created one nanosecond ago, and I am just a random fluctuation in some process and all of my memories are part of that.
And yet, and yet, the most fundamental phenomena of them all, the thing that I have experience of, is not predicted by any of the scientific models. There is no reason why a bunch of atoms should "become conscious" once their interactions are complex enough. There is no way to build a scientific instrument that measures consciousness. Sure, you can build a scientific instrument that detects brain states that we assume to be correlated with consciousness, but this is just crude analogy. Firstly we can never be sure, and secondly we don't know where the boundaries of the phenomenon are. Is a star conscious? Is the universe? Are the individual cells in your body? How do you know? How could you tell?
That is the mystery. I find that the people who deny the mystery have something equivalent to a religious belief, and that they are not necessarily aware of it. Believing that matter -> ??? -> consciousness is not so different from believing in creationism. A comforting story that makes reality seem intelligible, but that has no basis other than one's wishful thinking and groupthink. It certainly has nothing to do with science or rationality, at least so far.
What is that composed of? I would say thoughts, feelings (somatic, sensory, psychological, psycho-motor state) and memories. Ok so: the supposedly fundamental, atomic "consciousness" is actually composed of reducible parts. At least three: thoughts, feelings, and memories. Which of these is scientifically unexplainable in principle? Thoughts are models of something, like an equation, image, or a program. Feelings are the product of sensory-thought integration. Memories are simulations of past thoughts and feelings. I think subjectivity is an illusion. Probably in a minority here though.
With a sufficiently powerful scanning tool, i.e. a super-fMRI, you could measure the neural correlates of thoughts and feelings.
Consciousness is a word. Subjectivity is an illusion existing in living people. Obviously past historical figures are not subjectively conscious because they are dead. So where did this supposedly existing thing "consciousness" go? Wouldn't a simpler, more scientific explanation be that it never existed to begin with? You say denying consciousness as a real extant thing is akin to religious belief, but then you agree there is no possible mechanism for how it could arise materialistically. Isn't it simpler just to say it doesn't exist as something separate from the ensemble of functions of a living brain after all?
Memories and feelings are merely constructs of the brain. Even those can be doubted. The "first person experience" goes deeper, it's the observer of these parts.
And beyond being an observer, the brain is aware of the observer, so it is not merely an observer (observer effect).
(It's also interesting (with controversial implications) that there are people who know precisely what's meant by consciousness, and others who only seem to understand it conceptually.)
>Memories and feelings are merely constructs of the brain...
So what does the first person experience when it has no memories or feelings or sensory-motor input? Obviously no living people can answer this, barring perhaps some Buddhist monks or the like if they have truly managed to dissociate so profoundly. Remember, if "nirvana" feels good then you got a feeling. I would argue that consciousness is also a construct of the living brain. It is possible that it also exists in other contexts (e.g. ghosts, demons, djinn, characters in a novel, etc.) but this has never been proven or even suggested by evidence and does not seem intuitively plausible to me. If I ever experience a ghost or demon in a convincing way maybe I will change my view. So as far as we can tell, what we clumsily call consciousness is inseparable from a living brain running the program "human mind", or perhaps dog consciousness is inseparable from a living dog brain running the program "dog mind." So consciousness is also a mental construct. It comes from the mind.
I am interested in this statement:
>(It's also interesting (with controversial implications) that there are people who know precisely what's meant by consciousness, and others who only seem to understand it conceptually.)
Can you explain what you actually mean by this? It is not clear at all to me.
Are you suggesting some of us may be NPCs?
Assuming you are in the first group, those who "know precisely what's meant by consciousness," what is it that is meant by consciousness? "Meant" means signified: what does consciousness signify that is not covered as a construct of the mind? Would you argue that brain-dead people are conscious? What about dead people, or incorporeal entities? Does consciousness not imply the ability to process information, and if so, where does the energy to do that come from if it is not attached to a metabolising body? If consciousness is a recursively self-aware observer, nested Russian-doll style, as you seem to suggest, how deep does the nesting go?
I think there is an observer in your mind, and it is part of your mind, but it tricks you into thinking that it is not. That is the nature of consciousness. It is kind of like empathy turned inward.
Yes. Although it's of no consequence, since nobody can prove they have a consciousness. It may be a non-binary state as well. (Ever lived for a few days "on automatic"?)
On the extreme ends, consciousness may logically appear anytime between existing in the womb (or even earlier?) and the first years of our lives. (There are anecdotes online of people who remember suddenly being self aware around 3 years old.) It might disappear anytime during the last years of our lives and on or before our death (I doubt any later, but who knows...). (Elderly people who suddenly regress into a purely reactive state, no longer taking conscious decisions, come to mind.)
(Appearing and disappearing could either mean coming into and out of existence, or attaching to our body from some greater or external mass of consciousness.)
>I think there is an observer in your mind, and it is part of your mind, but it tricks you into thinking that it is not.
That is possible too.
Or the mind tricks the consciousness into thinking it's part of the body.
>So what does the first person experience when it has no memories or feelings or sensory-motor input?
I cannot imagine. Memories are a physical state of the brain, so perhaps any physical shape can qualify as memories. Senses are electrical signals, so perhaps any electrical signals can qualify. Perhaps something else entirely. It definitely would not be anything like the human experience, though.
Lightning striking down a tree is an electrical signal changing physical state.
Or... Perhaps there's a million consciousnesses in your brain, and all of them, including you, are convinced that they're the one true consciousness in your brain.
If subjectivity is an illusion, then who or what is being deceived? It seems like even a false subjective experience would have to be experienced by someone or something, subjectively.
See, this is the fundamental problem with verbal reasoning about certain topics. I guess "illusion" was my imprecise attempt to say "something analogous to an illusion." What do I mean by that? Basically I mean that consciousness seems like one thing but perhaps it is not. I do not agree that "seems" requires a "subjective" observation. Basically I am arguing that subjectivity is an incorrect but useful model of our selves that the brain creates on its own. It does not exist independent of a living brain.
If an "AI" image recognition algorithm classifies a zebra print couch with four legs as a zebra, the animal does that mean that the algorithm is experiencing a subjective deception or that it is just wrong?
Ideas don't have a noumenal part and can't appear different from what they are. The error in your example is in the association between the idea of a zebra and an external object; the idea of a zebra itself isn't wrong, isn't an illusion, and isn't different from some kind of true form of itself.
I am having difficulty understanding your point. Please feel free to clarify. To me "noumenal" means the mental or "thought"-derived part.
Yeah the couch looks like a zebra. The idea it looks like a zebra is not wrong. However the conclusion that it therefore is a zebra is wrong. It is a visual misidentification. What distinguishes this from an illusion? We are not looking at true forms versus reality. We are looking at perceived form versus reality. When perceived form diverges from reality that seems like an illusion to me. Maybe you could call it something else, a mistake. What's the point here that you are trying to make?
I suspect the algorithm isn't experiencing anything at all. I know I am experiencing something even when I'm wrong or being deceived. I strongly suspect other people experience things.
What does "I know I am experiencing" actually mean? Can you define it in concrete terms? How would it differ from "I believe I am experiencing," or "I see myself as experiencing?" I think subjectivity is fundamentally unexplainable and does not require an explanation. I also think a belief in subjectivity is indistinguishable from the actuality of subjectivity. You believe you have subjective experience specific to you. Despite the fact that subjective, experience, and you is undefined apart from the functioning of your mind, i.e. the program running on your brain. Therefore if consciousness == mind why not just call it that? Mind can be broken down into various functional components. Why do you believe you have conscious experience? Because that is the way your mind is wired and that is what that word means to you at this point in space time. Is this indistinguishable, objectively from an illusion? I say no. If you say yes by what criteria would it be distinguished? If we define "illusion" as perceiving something as different from what it actually is, does the existence of an illusion require a "conscious" observer? I would say no. Now, as an aside, an astute reader will see in another comment I said the existence of this particular universe as a collapse of a superposition of universes may depend on consciousness, by that I simply meant a certain degree of information structure resulting in structure perceiving itself, not necessarily subjectivity.
The way I get out of that conundrum is to ask what would happen if the Earth were destroyed in some cataclysmic event and there were no humans left. I mean do you agree that the universe would still exist?
Do you think some other life in the universe might have consciousness, even if it's organized in a way that would be unrecognizable to us? How would you go about examining that question?
Carl Sagan spent a lot of time looking for ways to pose that question in ways that make it possible to "do science" about it and not just trail off into ideas that are beyond discussion, beyond meaning.
I mean if you are looking at a planet to see if the surface has been worked in a way that proves the existence of a technology-capable species (even if it was long extinct), you're talking about something "real" at that point.
Also, the book "Solaris" by Stanislaw Lem is really wonderful in the way it evokes the mystery of how a being can be sentient and utterly non-comprehensible to us. Quite brilliant really.
>I mean do you agree that the universe would still exist?
I have no basis for answering this question either way. Maybe the universe requires a conscious observer for wave function collapse. If a tree falls in the forest does it make a sound? Who knows.
>Do you think some other life in the universe might have consciousness, even if it's organized in a way that would be unrecognizable to us?
Maybe. Maybe not. It depends on how rare life is in the universe. So far we have one example.
>How would you go about examining that question?
I wouldn't since earth would be blown up and I wouldn't exist. If you mean when earth is still here then I wouldn't because unless intelligent life is very common space is simply too vast to find it.
Sagan was wrong about a lot of things.
Anyway, not to be a downer, but if other intelligent life exists in the universe the chances of us meeting it are very low. So low that the human race is more likely to go extinct before meeting creatures from another star system. If a superposition of universes exists in the same region of meta-space-time then maybe it is higher. For example, UFOs may be from "here", just another plane that sometimes leaks into ours. Imagine multiple earths in multiple universes, some of whose inhabitants have learned to cross between.
You asked the question, "Why is the hard problem a problem?"
There's a documentary where someone asked Richard Feynman why magnets work, and he spends almost 10 minutes talking about how a scientist assigns meaning to a question:
"When you explain a 'Why', you have to be in some framework where you allow things to be true."
So what I would say is that people generally allow Cartesian Dualism to be "true" and that's the thing that needs to be challenged.
Sometimes this is called Nondualism or Nonduality. If you look at it that way, the idea that the consciousness problem is an appropriate area for inquiry seems like a straightforward idea.
As far as Carl Sagan being wrong, he was big on the idea that our subjective minds are just a thing that hydrogen atoms do, if you give them billions of years to do it.
But he never came out as a Strict Materialist. He also liked to poke holes in religion but never came out as an Atheist. Yes, he was being clever and cute about it. But I don't think that makes him wrong, just that he was onto something more subtle about the nature of what we are. Human beings are a way for the universe to know itself.
>As far as Carl Sagan being wrong, he was big on the idea that our subjective minds are just a thing that hydrogen atoms do, if you give them billions of years to do it.
That is a beautiful thought. But it is not clear life is inevitable at all. We only have one example of a planet containing life. The normal flow of the universe is toward increasing levels of entropy. I suppose gravity wells containing stars reverse this on some scales. The early post big bang universe was in an extremely low entropy state.
Personally I am not a materialist. I also do not believe that we know what we are talking about when we attempt to talk about consciousness. I think the noun is imprecisely defined.
> Saying that "consciousness is just a feeling" is equivalent to saying that "consciousness is just consciousness". It may sound like and explanation, but it is just a tautology.
I think you hit the nail on the head with that. Consciousness might very well just be a tautology, so the only definitions we are going to be able to produce are going to be circular or recursive.
However, this is a problem with written language in general. The definition of any word is only provided in the form of other words, which themselves are defined by words as well, so without an interpreter/reader that makes a connection from language to something else, language by itself can’t really have any meaning.
You may be surprised to learn that there is more to the theory than is conveyed in the headline to the Nautilus Q&A with the scientist who came up with it. A more accurate statement of Solms's view is that consciousness is precision adjustment within a hierarchical network of Markov blankets that self-organise to minimise free energy. You can judge for yourself whether he avoids tautology or rationalises away the hard problem, but you should probably first read his book, in which he explains the idea. Remarkably, given that this is a book about predictive coding in the brain, it is not all that taxing to read. What's more, as far as I can see, it does in fact solve the problem.
I feel like the greater problem is that we use the word 'consciousness' at all; the very term is undefined, and acting like it's a term we can build a science on is just as flimsy as saying it's a feeling.
How do you know there is a difference between a conscious being and a glass of water? I'm not saying the glass of water is conscious. Let's say it isn't. What tells you that an animal is any different?
I don't know if a glass of water is conscious or not, nor can I come up with a scientific experiment to test such a hypothesis, so I remain agnostic on that question. In fact, I cannot even come up with a scientific experiment to test if other people are conscious. I just bet they are, because they are similar to me and I know I am.
My point is simply that "feeling" already assumes consciousness. Maybe everything is conscious. Still, saying "it's just a feeling" explains nothing. Most people assume that glasses of water are not conscious, so I just used that example to illustrate my point. But you make a valid remark.
I hadn't scrolled down this far when I brought up the cogito myself - but I guess great minds think alike.
But, I think Descartes is straight up wrong here. He doesn't doubt, he believes that he doubts. Descartes never got away from his evil demon - it simply blinded him. He didn't take radical skepticism far enough and if he did he would realize that the Cogito is not good enough for a proof of ontology!
The universe of the evil skepticism demon is as follows: "There is exactly one truth, that there are no other truths". If you hold that axiom, then how can one unironically conclude that the cogito is a neat proof of ontology? Seems like the philosophical equivalent of "wet pavement causes rain".
What do you offer as the difference between the belief that he doubts and “actual” doubt? For him the doubt seems actual and acute. He doesn’t believe in the doubt; he is experiencing it. (Also, Descartes never said “doubt”; that is a common addition.)
This crosses my mind from time to time: what physical properties sustain consciousness?
Our brains, which are where we suppose our consciousness originates, are made (most likely) from the same sub-atomic particles inanimate objects are made of.
So is there something more to these inanimate objects? Does a rock have more potential than we are aware of?
Mostly whole atoms (or ions) plus electrons and protons (although the latter could be thought of as the ion of a particular isotope of hydrogen). Being pedantic, we can include photons, too.
Another good analogy here is the definition of life. What is life, and what is the difference between a living being and a glass of water? The difference is that they are different systems that work in different ways. For a system to be conscious, it should work in a certain way, and a glass of water doesn't work in that way; that's why it's not conscious. Consciousness should perceive reality, remember, model and analyze it, maybe even have intelligence, will, abstract thought, attention and reflection; then it can be seen as consciousness.
Are you sure that neurons are a precondition for consciousness? I'm not arguing a glass of water is conscious, but that argues against computers ever being conscious because they also have no neurons.
I think computers could be conscious (whatever that means) if they modelled living minds to a sufficiently high degree of precision. I do not see why the physical substrate would matter.
That's true. I guess they're not required for consciousness, but that's how it evolved here on Earth. I suppose all you would need is to be able to have a mental map of the world, however it happens.
We should not assume that the appearance of experience is proof of, well... anything, and especially not of the existence of experience. The classic example of where I think people f*** this up is with unironically accepting "Cogito Ergo Sum", or "I think, therefore I am". I'm surprised that Blade Runner didn't give folks intuition for why this might be wrong.
Funny enough though, many people have tried to boil down consciousness to be some "simple" concept like the original author with "consciousness is just a feeling". I'm reminded of when Sartre writes all about it in Being and Nothingness: "All of consciousness is consciousness of something"
It's clearly not a tautology. It's a basic scientific approach:
> You reduce it down to something much more biological, like basic feelings, and then you start building up the complexities.
Author is attempting to "build up the complexities" from discrete falsifiable parts. There's great and obvious benefit to that. One obvious drawback is that it may not succeed in adequately addressing the complexity suggested by the history of philosophy of consciousness. But that in no way makes it a tautology.
> You reduce it down to something much more biological, like basic feelings, and then you start building up the complexities.
But at this point we're not even sure if the thing you're referring to is the same thing we originally intended to study.
This is one of the pitfalls of scientific discussions of consciousness. Usually the first reductionist step appears to substitute something entirely different for the subject under discussion, and then proceeds to explain this other thing.
That "first person" is just a group of neurons experiencing a signal. Call it what you will: "On". Or "I am". I think about it as "Still connected", given that people who lose significant parts of their brain still "feel alive". When those neurons stop receiving it, the sense of self gradually stops.
I also think that one of the reasons we need sleeping is to recharge the signal emitters; that's why we lose consciousness in the process.
I must stress that I am not an expert in neurology at all; I have just thought about this for a while. Obviously when I say "a signal" as if it were a digital 1 or 0, evolution will probably have implemented a bunch of chemicals and electrons being interchanged at different times and places.
It galls me every time I see “neuroscientists”/“neuropsychologists” and the like considering themselves to have essentially discovered the problem of mind-brain duality, fundamental questions about what consciousness is, and so forth, as if philosophers haven’t been carefully studying these topics for hundreds and in some cases thousands of years—and as if many were not continuing to study it in philosophy departments to this day.
This guy, for instance, seems not to have read much or any philosophy. The interviewer appropriately poses questions regarding Searle and Chalmers, but from his answers he doesn’t seem familiar with these quite important thinkers.
I’m genuinely confused—what is it that makes someone who calls themselves a scientist want to avoid having their lit reviews include writings categorized as philosophy?
I have a Ph.D in Cognitive Neuroscience, and my dissertation was on the neurophysiological correlates of consciousness [0].
I can assure you that Philosophy of Mind is part of the curriculum. About a third of my dissertation deals with Descartes, Helmholtz, Ryle, Dennett, Searle, and others.
The general consensus among my peers is that philosophy is an essential part of scientific inquiry. Studying philosophy of mind is regarded as the only way to ensure the empirical questions we're asking even make sense, or that our discoveries are non-trivial.
It bears repeating: the cognitive neuroscientists I frequented have a deep respect and genuine interest in philosophy.
[0] Specifically: I presented evidence that attention was a causal mechanism in conscious access. We were able to take stimuli that were initially not consciously perceived, and induce a conscious percept after the stimulus had disappeared by manipulating exogenous attention.
> We were able to take stimuli that were initially not consciously perceived, and induce a conscious percept after the stimulus had disappeared by manipulating exogenous attention.
That's cool! Can you say more about how the experiment worked?
My wife does this to me. She'll be talking about something and I'm not listening but I make semi-plausible automatic responses. Sometimes she catches on that I wasn't paying attention and calls me out on it. At that point the last 6-7 seconds of speech pops into my mind like magic and I can pay attention to it and respond. Sadly sometimes that's not enough to deduce what she was talking about.
Yes! We informally referred to this as the "double-take phenomenon", and our hunch is that this is exactly what is going on.
During my Ph.D, I would often spot this on my way to the lab. I would be seated in the subway early in the morning, and suddenly realize that the name of a station had just been announced, but that I hadn't heard it. Then, it was suddenly like I had heard it -- as if the trace sensory signal in auditory cortex had been isolated and amplified by my attentional system, before it fully dissipated.
Probably quite a separate phenomenon, but if I wake when it's still night and I want to know what time it is, I'll often blink my eyes open for a tiny fraction of a second to see the glowing alarm clock. I often don't know what time it is until I "read" the time from my memory. (It's either my memory or the after-image on my retina.)
When my partner does this to me I call it his "audio buffering" mode -- he'll be absorbed in something and make no indication that he's heard me, but if I wait long enough (in the 15-60 second range), it'll suddenly make it all the way to his conscious attention and he'll respond as though there was no gap at all.
Sounds like inattentive ADHD or auditory processing disorder, where symptoms like this are typical. Unless you are deliberately giving plausible responses instead of paying attention, in which case nvm!
Please don't throw around diagnoses like that. You're not wrong, but the hallmark of psychiatric disorders is usually degree, not nature. Everybody checks out from time to time; ADHD people just do it more. You are not equipped to make that call, here.
Here's a pure psychophysics study we performed. It gets at the essence of the mechanism without forcing you to wade through pages of fMRI-related esoterica.
That's a cool experiment, and a well-written article, thanks!
I've had a tiny bit of experience dealing with the visual system in a related but distant field (the feedback loops of visual accommodation and vergence and conflicts that can emerge with stereoscopic 3d - and I wish I had understood and done actually insightful things..). This left me wondering about the important place of vision-based experiments in the theories of perception as well as consciousness. What are your thoughts about that subject? For example, would similar research be doable with other senses just as well, or is that something that we have a harder time grasping or even formulating (due to a certain bias towards vision) ?
edit: I've read the other comment about the double-take, so in this case there seem to be some ways to test that out on other senses, but my question still holds when talking about the overall field and the prevalence of vision in this kind of research
> For example, would similar research be doable with other senses just as well [...]?
Yes, very much so! I believe there is less bias towards vision than you think. Once you have a paradigm that works in vision, replicating it in other sensory modalities is generally low-hanging fruit for subsequent publication. I didn't get the sense that consciousness research was terribly biased towards vision. The fields of audition and haptics are alive and thriving.
Here's an experiment I did with MEG in the auditory domain:
In all fairness, there probably is some bias towards vision. More exactly, there is bias towards things that are easy to study. Vision and audition are easy to study because screens and speakers are widely available. Touch and proprioception are harder, but still common. Smell and taste are rare.
So your point is far from crazy. However, there is sufficient multi-modal evidence that I don't think our current theories of access consciousness are mere visual artifacts.
What do you think about people like Chalmers who obstinately refuse even the idea that neural correlates of consciousness are a thing, preferring to just assert that the hard problem is hard, full stop?
More seriously, my reading of Chalmers is not that neural correlates of consciousness don't exist per se, it's rather that
1. So-called "access consciousness" (i.e. consciousness in the sense of "I am conscious of X") is distinct from phenomenological consciousness (contrary to what Dennett thinks).
2. Further, access consciousness has very little to do with phenomenological consciousness, to the point that calling it "consciousness" is a misnomer.
3. Therefore, neural correlates of access consciousness will never give us insight into phenomenology.
I think my position is that of most people in the field. I'm agnostic. Or more exactly, damned if I know what I think about that. I find it hard to believe -- as Dennett claims -- that once we've explained everything about neural correlates of (access) consciousness, there will be nothing left to explain. I also find it hard to believe Chalmers' story that neural correlates have nothing to teach us about phenomenology. But my arguments ultimately appeal to an interpretation of scientific data in the context of my own experience, so it's really a gut feeling of "neither seems quite right".
Given this, my stance is basically "I don't research hard problems; I research easy problems". Access consciousness -- whatever that is -- is interesting and useful to study. I choose that.
(N.B.: most of this should be written in the past tense, as I have now moved to a different field!)
Terminology doesn't change anything; assume everything is about phenomenological consciousness. According to Chalmers, the brain doesn't differentiate between access consciousness and phenomenological consciousness, because the zombie concludes that he has phenomenological consciousness based on an analysis of access consciousness.
It's been a while, but yes, that does ring a bell.
Ok, so you have a "Zombie" that has all the behaviors and physiological processes associated with access, but no qualia.
In principle, we might learn everything there is to know about access and still have no insight into qualia. Have I summarized that correctly?
Assuming so, my position with respect to "why study neural correlates" is:
1. Access (whatever that is), is still interesting and worth studying, if only to find out what it actually is, and even if it's "just" working memory or attention.
2. At the very least, we will have succeeded in distinguishing access from phenomenology. Our common human understanding of (phenomenal) consciousness will have advanced insofar as we will understand it to exclude access and all its second-order effects.
3. It remains to be proven that there is something more to explain than "just" the physiology and intentionality of brain processes with respect to access. If there is, Chalmers might be right. If not, the point goes to Dennett -- and think about the implications!
Because the education system in America is designed to be gamed at all levels, from students to teachers to school board admins, not to actually give anyone life-enriching value.
People like Descartes, Helmholtz, Ryle, Dennett, Searle were/are stuck in the framework that tries to explain (while it provides no explanation at all) consciousness as an emergent property of a complex enough computation (by neurons).
So Dennett, Hofstadter, Edelman et al, while really smart, will never get an inch closer to a true explanation.
The mechanism by which general anesthetics temporarily disable consciousness, partially or fully, should provide a nice hint about the direction a true explanation should take.
There is no doubt imo that the theory will be based on quantum mechanics. Roger Penrose goes even a step further and states that the theory needs a non-computational component. (see Orch Or : https://en.wikipedia.org/wiki/Orchestrated_objective_reducti... )
In any case, I'm still waiting on an Einsteinian admission from most scientists that it is "their biggest mistake" to consider that a "warm, wet and noisy" environment like the brain cannot host the necessary quantum phenomena. In the current state of affairs, they have been proven wrong already.
>There is no doubt imo that the theory will be based on quantum mechanics.
Eesh, this again ...
There has never been a proposed (serious, testable, much less supported) mechanism by which quantum mechanics could explain consciousness. This argument always boils down to "phenomenology happens in the mysterious place". Quantum mechanics is the 21st-century pineal gland.
>In any case, I'm still waiting on an Einsteinian admission from most scientists that it is "their biggest mistake" to consider that a "warm, wet and noisy" environment like the brain cannot host the necessary quantum phenomena. In the current state of affairs, they have been proven wrong already.
We are all convinced that quantum-mechanical phenomena can take place in microtubules. That debate is settled, but here's the thing: that was never really the issue. The issue was -- again -- that none of these theories explain how phenomenology emerges from quantum mechanics. All they say is "it does".
EDIT:
>People like Descartes, Helmholtz, Ryle, Dennett, Searle were/are stuck in the framework that tries to explain (while it provides no explanation at all) consciousness as an emergent property of a complex enough computation (by neurons).
This, however, is a statement on which we agree. This is why most consciousness researchers set their sights lower: they aim to explain access consciousness, and take no position on phenomenology. In other words, we're displaying humility by asserting knowledge of the easy problem, not the hard one.
Yes, the HN crowd surely loves to downvote these types of posts. You're the compassionate scientist who has seen this layman's argument too many times. The brain is mysterious, consciousness is mysterious, QM is mysterious, bingo, QM will explain consciousness. Well, that's not what I, as a layman, meant. What I meant was Orch-OR, and it is at least a good starting point.
>That debate is settled, but here's the thing: that was never really the issue.
It's still an issue but not in the way you see it. The behavior of microtubules is subject to phenomena like quantum coherence. Microtubules help move mitochondria in the axons. Mitochondria in the axons help suppress or enhance neurotransmitter release. So QM phenomena have a direct effect on the formation of thought. Please let me know if I'm talking nonsense :).
>In other words, we're displaying humility by asserting knowledge of the easy problem, not the hard one.
You've given up already. I believe we can find a plausible explanation for consciousness. The mainstream theory is an absolute non-explanation, and a dead-end.
>Well, that's not what I, as a layman, meant. What I meant was Orch-OR and it is at least a good starting point.
Very well, I shall look again, but color me (extremely) skeptical. Are you able to propose a mechanism by which quantum mechanics can explain qualia?
I have yet to see even a partial mechanism. The issue is that qualia specifically involve something non-mechanistic. You would have to first qualify what the substance of qualia is (its ontology ... and good luck with that!), how it emerges from quantum-mechanical phenomena, and whence it comes (where is the stuff of qualia when it's not manifesting?).
Again, I don't (yet?) see how any mechanistic explanation can explain this.
>Please let me know If i'm talking nonsense :).
It's not nonsense per se, but none of it addresses the key criticism. You must explain how that causal chain (microtubules -> mitochondria -> neurotransmitter) produces qualia.
Your argument is effectively that microtubules can influence the creation of thought. Great! So can neurons! So can drugs! So can a blow to the head! Why insist on microtubules?? Why not elsewhere?
>You've given up already.
I've also given up on proving the existence of God!
Less facetiously: phenomenology has the advantage of being self-evident. We're all pretty much convinced that it exists, contrary to God. Given that, and given the advances in materialist philosophy, it's not prima facie crazy to suspect that phenomenology is a physical process, and therefore amenable to scientific explanation. In this, you have a point.
However, I don't think science (or philosophy for that matter) currently has the ability to propose an explanation for how some process p implements qualia. We don't even know what such a theory would look like. As such, the quantum-mechanical argument really does seem like the pineal gland of the 21st century.
>The mainstream theory is an absolute non-explanation
Which mainstream theory? Are you referring to access consciousness? If so, then you are basing your argument on a false premise, namely that these scientists are investigating the same thing as you. They are not. They are interested in another thing we call consciousness, which may or may not be related to the phenomenological consciousness that you are interested in. It's still an interesting question.
If you're referring to mainstream theory of phenomenology (e.g. sensorimotor theory), it is at the very least no more absurd than quantum-mechanical theories.
>Yes, the HN crowd surely loves to downvote these type of posts.
Respectfully, have you considered the possibility that your ideas just don't add up? I mean no disrespect, and again will take another look (links welcome), but I notice that you have not addressed the central question of how a mechanism (let alone yours, specifically) can produce qualia. I suspect this is what motivates the downvotes.
>Very well, I shall look again, but color me (extremely) skeptical.
That's awesome, thank you! Ultimately it's beneficial to keep an open mind on things about which one is skeptical or even convinced the idea is rubbish (well, some are of course).
>Are you able to propose a mechanism by which quantum mechanics can explain qualia? Again, I don't (yet?) see how any mechanistic explanation can explain this.
No, I'm not that smart, by far (hehe). You're way too far up the explanatory hierarchy. A starting point is rather the laws of nature, of which Roger Penrose makes a good point that they are not understood well enough to make inroads toward an understanding of consciousness. At bottom, consciousness and qualia must be based upon something mechanistic. Penrose argues that QM is incomplete (missing a non-computational component, most likely); I would add that not only should GR and QM be unified (we need to find out where both are wrong), but the triad of information theory (which I consider the most fundamental), GR and QM should be unified. Maybe this would lead us to conclude (proto-)consciousness is pervasively embedded at the most fundamental level of nature. I think it must be. So why wouldn't a rock be conscious? Maybe because its composition and arrangement of atoms doesn't allow it to act as a facilitator of quantum information processing the way neurons do. Maybe neurons, in their composition, arrangement of atoms, biochemical state and interconnectedness, are especially apt at acting as facilitators harboring microtubules that store and process qubits. In any case, I haven't and can't answer your question, sorry.
>I've also given up on proving the existence of God!
I agree the non-existence of god can't be proven, and is not provable even in theory. But I've worked around it by arguing that the soul can't exist, considering the theory of evolution. At what point of the evolution exactly would god have turned us human by magically inserting a soul? And if this principal instrument of god doesn't exist, maybe the idea of god is also a bit nonsensical. Good enough for me! (And as I always joke, Thumbs up if you believe you have a soul).
>However, I don't think science (or philosophy for that matter) currently has the ability to propose an explanation for how some process p implements qualia.
Agree completely on that one.
>QM argument really does seem like the pineal gland of the 21st century.
Eesh, not that again ... :)
>Respectfully, have you considered the possibility that your ideas just don't add up?
I actually consider this the most likely possibility, yes.
>(links welcome)
Already mentioned in my original post. Enjoy diving deep in the rabbit hole of Orch OR! If you got heaps of time, watch a few youtube videos of Penrose (and Hammeroff) explaining the concept. It's really inspiring watching the most eminent scientist currently on the planet (imsvho).
Yes. Our experience of (or as) consciousness may be biased by having a brain. Consciousness may not require thought or senses in the way that we know it. The key is in the observer, and the observer effect that makes the brain aware of the observer (giving rise to self-awareness).
Consciousness cannot be proven, but it does affect the conscious being, so there are potential observable effects that might differentiate conscious existence from non-conscious existence (assuming it exists).
A rock may be conscious and the consciousness would observe being stone. Thought may not necessarily be a requirement for consciousness. It would not think that it is stone, it would not have any memories, it might not feel anything. It might be a rather dull consciousness, or perhaps it observes the forces that hold the rock together. Perhaps it experiences the forces on the rock as an 'otherworldly' sense that we cannot imagine. But it would not think about it, nor remember it, those are functions of the brain.
>Maybe this would lead us to conclude (proto-)consciousness is pervasively embedded at the most fundamental level of nature. I think it must be.
Why? You've presented no argument! This is just a declaration of faith!
>So why wouldn't a rock be conscious?
Indeed why not, but then again why would it be? This position is called panpsychism. It is hardly new, and hardly specific to QM. If you want to posit consciousness as a fundamental natural force, we need to be able to predict and measure it in all things, including rocks.
>Maybe because its composition and arrangement of atoms doesn't allow it to be a facilitator to quantum information processing like neurons.
Perhaps. Or perhaps a rock doesn't integrate information as much as a brain? That's what Giulio Tononi thinks [0], and at least his panpsychic theory has managed to collect some evidence (n.b. it suffers from exactly the same problem as QM with respect to mechanism).
Even if we buy the panpsychic argument (after all, why not!), you have made no argument for quantum mechanics as opposed to information-integration, or indeed anything else! None! Zero! Panpsychism does not imply quantum mechanics!
>In any case, I haven't and can't answer your question, sorry.
I appreciate the concession, but I'm afraid I must now insist upon a subsequent one. You must concede that your only reason for thinking QM is involved is that you don't see a reason why it wouldn't be. That's a bad reason.
>I actually consider this the most likely possibility, yes.
That contradicts your insistence on QM as the likely candidate for phenomenological consciousness. Either you think that assertion has merit, or you think it doesn't. Which is it?
>Already mentioned in my original post.
Yes, I was asking for things I might not already be familiar with. Am I to conclude that this theory hasn't made any progress in 15 years?
The search for simple forms of consciousness is aimless, like a search for simple smartphones. Is a feature phone a smartphone? Is a dumbphone a smartphone? Is a rock a smartphone?
Anesthetics cause that emergent behavior to cease, or something to that effect. The hard problem applies to computers too, but they don't rely on any quantum phenomena, which is enough to show that quantum phenomena are irrelevant and more likely a hindrance: if they were involved, they would introduce randomness, which leads to hard indeterminism.
Smart people have a tendency to think that their intelligence is enough to figure out anything. Turns out it's basically impossible to have an original thought in most fields without first getting caught up.
Actually, now that I think about this, I'd extend it to just about everyone, not just academics/engineers/etc. Imagine two people working on a plumbing problem. An hour into it, Bob waltzes in, looks at it, and says, "did you try __?" The two people who have actually been working on it roll their eyes and mutter, "yes, Bob, we've been working on it."
Sometimes Bob will get lucky and notice something they didn't, because they're too deep in the weeds, but I think people dramatically overestimate how likely this is.
It's important to recognize this tendency in everyone. When I have the instinct to say something like that, I switch gears and recognize that I'm not helping - I'm actively asking them to stop working on the problem and get me caught up.
Usually the only time I'm actually helpful in situations like that is if I have esoteric knowledge or experience that is uncommon and directly related to the problem at hand - "Oh, I ran into this 3 years ago, here's what we did to solve it". Your peers have already tried the obvious things.
Another rule of thumb for noticing when this is happening is the word "just". "Why don't you just..." is a belittling thing to say, as if they hadn't already thought of the obvious. Better is "why didn't X work?"
On the other hand smart people tend to avoid simple solutions to "hard" problems, that reduce their ego to a mechanical process.
For instance, I don't see a good reason why having consciousness of X is in any way fundamentally different, than having an ability to play StarCraft at MMR Y this instant in time. IMHO, conceptually they are the same and the only difference is the game being measured.
Information asymmetry in society will keep growing as information keeps growing. On the other hand, that six-inch chimp brain we have has no way to keep up, no matter what tech is produced within our lifetimes.
So the only thing worth doing is to point people at things you know and keep walking, sort of like two ants passing each other in the universe.
I continually go back to the story of Socrates and the Oracle of Delphi. 2500 years later and it’s still relevant today.
The Oracle of Delphi pronounced Socrates the wisest of Greeks; and Socrates took this as approval of his agnosticism, which was the starting point of his philosophy: ‘One thing only I know’, he said, ‘and that is that I know nothing’. Philosophy begins when one begins to doubt — when one begins to question the accepted wisdom of tradition. Particularly one's cherished beliefs, one's dogmas and one's axioms.
Puzzled by the priestess of Delphi’s statement, Socrates felt obliged to seek the meaning of her remark. By questioning others who had a reputation for wisdom, he came to see that he was wiser than they, because unlike them he did not claim to know what he did not know.
Funny you say that.. I've recently read a book about child psychology and it was constantly praising the progress of science (specifically neuroscience).. yet I've read all this stuff before, for example in Jung or even in the Stoics.
Ok maybe they didn't know. Or maybe it's science's job to methodically confirm knowledge, that has so far been rather anecdotal, and scientists can pat themselves on the back for it, why not.
But it somewhat annoyed me to read that book anyway. They just made it sound like some breakthrough research, which I think it wasn't.
The thing to remember about philosophy is that philosophy is more akin to mathematics than to science. It's the exercise of taking some postulates and forming a theory atop the assumption the postulates are true. Whether the philosophy you end up with actually reflects reality is a function of whether those postulates map to real things... The philosophy can (and, in history, many times, has) live independently of the postulates, describing an interesting, self-consistent world that just isn't real.
Democritus described matter as being made up of atoms, and he's first-order-approximation correct, but his mechanistic details of how they worked were extremely wrong. Descartes' famous "I think, therefore I am" proof rests extremely heavily on an assertion he makes earlier in the Discourse, where he combats the hypothesis that he's just a brain in a jar being fed lies by an evil demon by asserting that the God that would allow that reality would be a really shitty god, so he dismisses it out of hand. It's through scientific investigation that we came to discover which parts of Democritus' philosophy matched reality and which can be discarded (and we lack the scientific insight, as of yet, to have strong evidence for or against that evil-demon brain-in-a-jar hypothesis. ;) ).
There are a thousand, thousand algebras that are internally-consistent but ultimately uninteresting because they map to nothing we observe in the universe, and then there's linear algebra. There are a thousand, thousand philosophies that are internally-consistent hot garbage, and then there's Jung. To say "Jung described this, why did it take science so long to discover it?" is like saying "Complex wave functions already describe quantum mechanics; why did it take physicists so long to realize that?"
Now, what philosophy does gift us with (if it's grounded in strong logical consistency) is a roadmap of where to look next if we notice that things in reality do align to the postulates, because the philosopher has already imagined, in vivid detail, the consequences of those assertions. And that is extremely cool.
Galling can go both ways. Why is it that philosophers have spent hundreds or thousands of years thinking on a problem and produced no tangible results, and yet science put a man on the moon? Reproducibility.
Yours is literally a philosophical question. You are doing philosophy. Surely you see the relevance and importance of your own question...
More to the point, most of what you hold dear is precisely philosophical in nature. Let us list but two, which I draw directly from your comment:
1. The value of putting a man on the moon
2. The superior (epistemological!) quality of scientific knowledge.
That last one bears repeating. The entire reason you think science is better than philosophy is itself a philosophical stance. You are manifestly an empiricist, albeit one who does not understand empiricism and its rational context very well.
Respectfully, I leave you with the following quote by Gordon Fee:
Before you can say, ‘I disagree,’ you must be able to say, ‘I understand.’ It is axiomatic that before you level criticism you should be able to state an author’s position in terms that he or she would find acceptable.
Your position betrays an ignorance of philosophy that would appall history's greatest scientists.
> Your position betrays an ignorance of philosophy
And your entire reply rests on a misreading of my comment; note that I said "a problem", not "all problems". There are many problems on which philosophers have made absolutely no progress in centuries or millennia, because on those narrow questions the dialog is horribly imprecise and does not give rise to reproducible, testable things.
Almost everyone who replied to me assumed that I meant all of philosophy is useless, basically from missing the word "a" in my comment.
That's fair. Please accept my apology for this oversight.
Nevertheless, it makes your comparison with "putting a man on the moon" very puzzling. Why not compare this great achievement of science with a great achievement of philosophy?
Moreover, you place science and philosophy in opposition with each other, yet science is exactly a subset of philosophy. And while you might point to the higher epistemological quality of empiricism (as opposed to, say, metaphysics), you would have to concede that science is also tackling easier questions by a significant margin, precisely because they can be apprehended through epistemologically-objective observation.
Given the above, the charitable interpretation remains ignorance. One might even qualify it as "galling".
You're right, I probably don't know enough philosophy (but then again, who does...). What great achievement of philosophy should we put in the league of the moon landing or splitting the atom?
Also there is a larger conversation to be had, because we're really talking about Western philosophy in context. Buddhist philosophy has all kinds of things to say about what the mind is and isn't, but we don't carry that in our traditions because a lot of Buddhism completely nullifies our Western consumerism culture, and that's uncomfortable. So...
The founding of the United States of America was an experiment in the practical application of political philosophy. Without major philosophical disagreements between the Government of England and the colonies, the nation that sent men to the moon would not exist.
>What great achievement of philosophy should we put in the league of the moon landing or splitting the atom?
We could start with Aristotelian logic, and the idea that falsehood can lead to truth, but not vice versa. A formal definition of "true" and "false", and their semantic properties is the basis of literally all formal thinking. It should be noted propositional logic was incorporated into mathematics much later, and that this idea is inescapable in all intellectual pursuits. You can invalidate political, metaphysical, scientific and mathematical assertions on the basis of logic.
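The asymmetry described above (a false premise can imply anything, while a true premise can never imply a falsehood) can be sketched as a truth table for material implication; the `implies` helper below is a hypothetical illustration, not from the original comment:

```python
def implies(p: bool, q: bool) -> bool:
    """Material implication: p -> q is false only when p is true and q is false."""
    return (not p) or q

# From a false premise, anything follows ("falsehood can lead to truth"):
assert implies(False, True)
assert implies(False, False)

# From a true premise, truth follows, but falsehood never does:
assert implies(True, True)
assert not implies(True, False)
```

This is exactly the property that lets logic invalidate assertions: if an argument's conclusion is false while its reasoning is valid, at least one premise must have been false.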
We might also point to platonic forms, which provide the basis for all engineering and scientific modeling. It is the very concept that you are appealing to every time you add an error term to a model. Until we started seeing the world in these terms, we couldn't even express the engineering requirements and scientific questions needed to get to the moon. What do you mean by "the Saturn V fuselage is a circle of X centimeters in diameter, give or take Ɛ ... ?"
If you prefer something more modern, we could point to Gilbert Ryle's demonstration that Cartesian dualism is founded on fallacious reasoning. The guy rigorously demonstrated why mind-body separation was a logical contradiction, thereby showing that science could meaningfully study the mind. On that basis, we are now able to communicate with locked-in patients using brain imaging that recognizes the neural signature of consciousness.
Have you ever wondered what your political values are based upon? It's all but certain that you were educated and shaped by culture and institutions that explicitly espouse the ideas of John Stuart Mill.
I do see the trap you're laying, however. You're going to either (a) claim that one doesn't really need the first three ideas to pursue science effectively or (b) claim that ideas such as Mill's are epistemologically subjective and therefore of lesser value than going to the moon.
For (a), the onus would be on you to show how logical reasoning and the specification of systems (surveying, architecture, mechanics, etc.) can be expressed absent form (and you will note that human inventions absent the notion of form were not the product of specification).
For (b), you will have to concede that epistemologically-subjective problems are much more challenging than their objective counterparts, and that science is -- in this respect -- playing in easy mode. It's easy to spot errors and come to a consensus when the bridge falls over. It's harder to spot glaring problems in e.g. a political theory. More importantly, perhaps, you'll have to appeal to philosophy to convince us of the inferiority of such ideas. Along the way, you will appeal to a number of great ideas, including Aristotelian logic (ideally).
If you want to claim that empiricism should trump all other modes of knowledge when it is applicable, and that empiricism is therefore better than the other modes of knowledge ... nice try! How, pray, might we determine when empiricism is applicable? How can empiricism be more important or "better" (whatever that means) than the very things it depends upon?
>Also there is a larger conversation to be had, because we're really talking about Western philosophy in context.
Indeed we are, but I hardly see the relevance. We are talking about Western philosophy because the two are very different beasts. What we refer to as "(mostly) Western philosophy" is unique in its emphasis on the analytical method, and on uninterrupted chains of propositional logic. There is no equivalent movement in the East. The phrase "Eastern philosophy" uses the word "philosophy" in a very different sense. These thinkers are absolute treasures as well, but their methods are different: the emphasis is on holistic meaning, and (arguably, since there is an equivalent tradition in the West, in many ways) something we could label as "spirituality". If you want an easy/popular example (albeit only loosely related to philosophy), try comparing how Sun Tzu and Carl von Clausewitz think about war. The method is very distinct.
>Buddhist philosophy has all kinds of things to say about what the mind is and isn't, but we don't carry that in our traditions because a lot of Buddhism completely nullifies our Western consumerism culture, and that's uncomfortable. So...
So... what? It's an interesting question, but it has nothing to do with the value of Western philosophy itself. The irony is that the very thing you decry -- consumerism -- has its roots in liberalism, ergo in John Stuart Mill!
Are you referring to the (deep!) spiritual wisdom in Buddhism? Western philosophy is perfectly equipped to discuss these questions as well! There is nothing uncomfortable here! Have you read even one medieval philosopher? Start with Thomas Aquinas! You seem like a logical positivist, so perhaps you would enjoy an attempt at a formal, axiomatic proof of metaphysical phenomena? If so, you should read Leibniz's Monadology -- it's not easy reading, but it's great!
In the same way that Sun Tzu and Clausewitz differ in their methodology, so too do Aquinas and Leibniz differ from the Buddha. Yet there is overlap in their conclusions, and unique insights on their side as well!
It seems like you're falling into the trap of exoticism. If you spent even a fraction of the time you (apparently) dedicate to informing yourself on Eastern philosophy reading great Western philosophers, you would discover a treasure-trove that is on par with what the East has produced, and wonderfully original and unique.
>You're right, I probably don't know enough philosophy (but then again, who does...).
I don't even know where to begin with this one. Many people know a great deal about philosophy, just as many people know a great deal about science. We call the former "philosophers" and the latter "scientists".
P.S.: I'm not even a goddamn philosopher. I'm a scientist.
I must say that I've loved this entire chain of conversation. Some high quality discussion here.
I do think people's critiques of philosophy stem from a need to be pragmatic, which ironically is itself a philosophy. It's difficult to reconcile the lagged "observable usefulness" of philosophy with its actual usefulness.
That which is useful is generally ignored, as it's been so ingrained in the psyche, while that which is seen as not useful is usually controversial and hasn't yet become the "norm".
>I must say that I've loved this entire chain of conversation. Some high quality discussion here.
Thank you! I must admit I find the disparagement of philosophy tiring and irritating, especially when it comes from people (self-proclaimed empiricists) who really ought to know better. I am happy to hear my frustration has not gotten in the way of an interesting discussion :)
>It's difficult to reconcile the lagged "observable usefulness" of philosophy with its actual usefulness.
I don't think there is a lag in usefulness (observed or otherwise), when one knows what philosophy is, and appreciates the deep dependencies between it and its sub-disciplines.
Rather, I think people confuse usefulness with certainty. By that logic, I could discount the whole field of science on the basis of the inferential leap. Although science is an iterative process, we cannot be certain that it must converge on truth (though I think it probably does). Given that, the mathematician could look down his nose at science and say "Ha! These fools can't even prove anything!", and in so doing be just as much of a fool as the scientist dismissing the philosopher.
(For the avoidance of doubt: I understand you are not such a person!)
Philosophic ignorance results, among other things, in existential crises and gullibility. Is that observable and pragmatic enough? Another example is the fallacies that were first identified in (philosophic) criticism of scholasticism and later expanded upon.
So Titzer is doing philosophy here? And making a rather relevant point about philosophy’s difficulty in coming to consensus, such as whether consciousness is a physical phenomenon. So it seems you don’t have to be a certified philosopher to do it!
Well you said yourself that Titzer is doing philosophy, and I am assuming Titzer is not a certified philosopher. If these premises are correct, then the conclusion follows immediately. Nothing here depends on anything else in the thread.
No tangible results? You live in a world shaped entirely by political philosophy, which itself is almost inseparable from philosophy writ large historically. The modern political Western world is entirely built on Kantian ideals.
I think the biggest problem is hubris about known knowledge.
One thing I liked about the author was the acknowledgment that Freud made mistakes, but shouldn't be 100% dismissed either. I like relating this to modern atomic theory. How vital is an understanding of electrons to atomic theory these days? It's borderline impossible to imagine how atoms work without knowing anything about electrons.
Dalton developed modern atomic theory in 1804. Electrons weren't discovered until the 1890s. That's 90 years of people probably saying, "it's impossible to get tangible results from testing atoms". Freud's theories are almost 100 years old now. We've come a long way... but how much more advanced is atomic theory an extra 100 years after electrons were discovered? We are still stupid when it comes to psychology, and thus unable to really produce tangible results... yet. The sooner that is accepted, I truly think, the faster advancements can be made, because people won't get hung up on bad pop-science declarations from innocent studies that are doing their best to just discover some grain of truth.
You're right, his work helped stop further vital research into wandering uterus and methods to cure women's tendencies to hysteria and insurrection, like lobotomies. Oh, the good ol' days.
Again, he wasn't perfect by any stretch, but his work and push for psychoanalysis is the seed of our current forms of therapy. Yes, modern therapy still isn't perfect for war- and trauma-induced PTSD, but holy shit it's better than cranking up the voltmeter on someone, pumping them full of cocaine and tobacco and snake oil, or just locking them up in a padded room. And that doesn't include some of the archaic religious torture practices some places of the world still utilize to this day.
The dude was on the right track compared to the rest of his time. Give credit where credit is due. It's not like you invented the transistor. We all stand on someone else's shoulders. Hindsight is a whole hell of a lot easier than foresight.
Again, we don't think Dalton set us back in atomic theory, since he was the pioneer. Why the hate for any other pioneer, then?
If he was such a pioneer of psychology and psychiatry, why is Freud pointedly ignored in Psychology and Psychiatry departments? And seriously, was the idea of subconscious motivations that innovative for his time?
By definition philosophy has come to refer to the study of things that cannot be resolved by empirical inquiry. So, every time human advancements have ACTUALLY allowed us to answer a question, it ceases to be the realm of philosophy.
For instance, Plato and other Ancient Greek philosophers theorized about the basic building blocks of matter—what their shapes and natures were. That question has become one of physics.
That’s why philosophy is referred to as the Mother of the Sciences.
This hardly makes philosophy worthy of your description as lacking “tangible results.”
> Galling can go both ways. Why is it that philosophists have gone hundreds or thousands of years thinking on a problem and produced no tangible results and yet Science put a man on the moon? Reproducibility.
Someone should probably let you know that science is a subset of philosophy (or "philosophism" as you seem to think it's called).
Or you could argue that philosophy is a subset of Science, because Science is about constructing theories and exploring their implications. Science puts a heavy emphasis on reproducible results, while philosophy generally hasn't. Science has methods, one of which is a peer-reviewed publication model that other disciplines have come to adopt -- even philosophy. Is Science outside everything, then? I don't really think so.
So, I just fundamentally disagree that Philosophy contains everything that has something to do with thinking or believing stuff. It's an oversimplification to the point of being vacuously true.
> Although true, that's like defending all human speech by saying that philosophy is a subset of all human speech, which itself has science as a subset.
Not really. To use your analogy a bit, the GGP was basically rejecting human language in favor of English on account of it being so wonderful that English-speakers visited the moon. It's nonsense. If you reject philosophy, you reject science.
It doesn't take that much charity to interpret what the original poster was saying as, "I reject the subset of philosophy that is not science, on the grounds that the subset which is science has all of the value" -- at least a legitimate position, and not a tautologically false one. I guess the reason why one thing being good could make another thing bad is opportunity cost: time spent using nonscientific philosophical techniques to answer questions is time not spent using scientific techniques to answer them. Or even: trying to answer questions that scientific techniques can't solve incurs an opportunity cost, because you could be answering other questions that you had a chance of solving. (That would be a consistent belief if you thought that nonscientific thought couldn't make progress on questions, even while conceding that scientific techniques cannot make progress on all questions.)
> It doesn't take that much charity to interpret what the original poster was saying as, "I reject the subset of philosophy that is not science, on the grounds that the subset which is science has all of the value,"
> It's a pretty classic case of celebrating science without really understanding its foundations and dependencies.
Why is it that we need to constantly go here with accusations of ignorance? Do we really need to do that all the time? Somehow philosophy contains everything, all the time. You're ignorant! No you're ignorant! My thing contains your thing! No, my thing contains your thing!
Read my comment again. Why do some philosophical problems get turned over and over for centuries with no progress, while the empiricism of Science has absolutely revolutionized human life? Why should we be so afraid of some neuroscientist who hasn't read Searle? I mean, isn't he just going to stumble on it again? If it's true, he would, right?
Well that is closer to what I meant. Notice that I said "a problem" and not "all problems". Other replies pointed out, yeah, some problems graduate from Philosophy to Science (or something--that's their view). But in my view, some problems graduate from confusing and fuzzy terminology and muddled thinking of philosophy into organization, empiricism, and usually, progress.
Are all philosophical questions resolvable by Science? I think no. But there are so many questions that I feel are poorly informed by a cloistered set of Philosophical priests who speak another language they made up. Frankly, I don't feel bad when they are "galled" that a scientist takes a new look at a problem they've made no progress on.
What do you mean no tangible results? Pretty much everything around is derived from some philosophical idea that's been normalized throughout the population.
Your comment piques my interest, and I'd like to know more.
My first thought was about the Timaeus (geometrical structures of matter/elements), but I haven't read any treatments about Plato's theory of forms and the development of mathematics.
The CliffsNotes version is that Plato's forms/ideas are a first attempt at articulating what abstractions are. The progression of mathematics is a chain of ideas leading back to these early thoughts.
As you say, these topics have been carefully studied for thousands of years, but these two have done a lot to muddy the waters with confusing arguments, and I don't blame anyone for disregarding them.
The Chinese room argument certainly only exists to muddy the waters. The whole premise is a misdirection by having a human execute a Chinese-speaking program by hand, then asking if the human understands Chinese, completely ignoring the fact that it is the program that is responsible for the behavior, not the human. That would be like asking if neurotransmitters understand Chinese instead of focusing on the brain as a whole.
Philosophical zombies are a bit more interesting. However, the argument flirts a bit too much with dualism in assuming that something indistinguishable from a human can lack some "qualia" that a real human has. A more useful perspective would be from a human brain simulation angle.
> ignoring the fact that it is the program that is responsible for the behavior, not the human
This is called the systems response to the Chinese Room Argument. [0]
Searle's response [0] goes something like this:
You're missing the point. Suppose the man in the room memorised the program, so that he could answer the questions himself with no need of external aids. Now, the man and the system are the same, and yet by your account, the man doesn't speak Chinese, but the system does.
Personally I find the whole argument to bear no clear connection to the questions of consciousness. The question of whether a system consciously understands a problem is muddled, as it isn't clear a priori that problem-solving competence has any connection to consciousness. To use Dennett's term, a system can be competent without comprehension. A pocket calculator is quite unable to explain what it's doing, but is able to perform superhuman number-crunching. The argument may succeed in tying the reader in a knot, but I figure that's just because most people haven't thought much about what conscious comprehension really means.
Another problem with the Chinese Room Argument is that, if it really holds up, it ought to hold up just as well against the human species itself. If you're going to attack the idea that consciousness can arise from non-conscious components, where does that leave us?
This is not directed at you personally but I find the whole conundrum tiresome and frustrating. Systems reply, Searle's reply, virtual mind reply, and then Searle replies something like "but a virtual mind can not be really conscious". At the end we are just left with the question "do you agree or disagree with functionalism and computationalism" and all the arguments on both sides turn out to be just empty statements of agreement or disagreement.
We agree. The Chinese Room Argument offers little insight into consciousness. All it really does is to take someone with confused thoughts on how 'understanding' works (specifically the way intellectual competence interacts with consciousness), and to tie them in a knot.
It fails to demonstrate anything interesting about consciousness. It certainly doesn't demonstrate that computer systems can never be conscious in the way we can. Nothing in the argument applies any more or less to neurons than to transistors.
If the man acts as a virtual machine that faithfully runs the program, then it's still the program that understands Chinese; it doesn't matter on what machine it runs. In this case there are two minds: the man's mind and the program's mind.
Mind strikes me as a loaded term, being used to mean locus of computation or computation stream, but carrying the implication of consciousness. If you wish to make the case that computation and consciousness are in some sense intertwined, this needs to be done explicitly. The Chinese Room Argument does not do so. (Incidentally I'm of the opinion that they are indeed intertwined.)
If a person is manually executing an algorithm (the memorised Chinese Room algorithm, or naive matrix multiplication, or whatever), then we could distinguish between the algorithm and the person executing it. In that sense there are two computation streams at work. That doesn't show that a second consciousness has been brought into being, though. I wouldn't conclude that there are two 'minds'.
Your comparison to virtual machines is a good one, it's one that Dennett uses.
> If the executed algorithm is consciousness, then a second consciousness has been brought into being, same as for first consciousness.
A lot hinges on "if the executed algorithm is consciousness".
I can see the sense in that argument though, in that the 'first consciousness' is acting as the computational substrate for the 'second consciousness', the way the physical action of neurons acts as our computational substrate. I'm not convinced this maps to using a card system to have a conversation in a foreign language. It doesn't seem self-evident that doing so should be considered enough to demonstrate consciousness; it's only enough to demonstrate that the algorithm is effective in having a conversation.
It's not self-evidently the equivalent of an accurate real-time computer simulation of a specific person, for instance. I think you can make a strong case that such a system would amount to a conscious person and should be treated as such. (It would follow from this that shutting down a holodeck simulation of someone is morally fraught.) The only alternative would be to morally privilege neuron-based architectures over transistor-based architectures, which seems like an uphill philosophical battle.
I don't put much stock in the question of whether the card-based algorithm 'understands' the conversation it is having. Is it competent? Yes. Is it conscious? I'm not convinced it is. Asking whether it understands is to conflate these two questions.
Competence here is indistinguishability from a human in conversation. The most straightforward way to implement it is to make the algorithm have the same structure and work in the same way as the human mind. We can say that it understands and is conscious because it's not different from the human mind in structure and operation. A good thing about artificial algorithms is that they are transparent: we can show that they aren't only effective in conversation, but have everything to be had behind that conversation. It's due to the latter. Simply put, the algorithm isn't GPT, but AGI.
>It's not self-evidently the equivalent of an accurate real-time computer simulation of a specific person, for instance. I think you can make a strong case that such a system would amount to a conscious person and should be treated as such.
This was touched on recently in art. If you're interested, it's "Sword Art Online: Alicization". The shutdown problem is applicable to AI too. What's new is that the life of virtual people is shown at length, and questions are discussed as to what identity those people should have, what worldview, religion, philosophy, pride, dignity, justice.
> Competence here is indistinguishability from a human in conversation. The most straightforward way to implement it is to make the algorithm have the same structure and work in the same way as the human mind
I'm not sure that's the case. Simple chat programs can do a pretty good job simulating a human interlocutor. For a 'full' simulation, which we need by definition, we'd need something much more sophisticated (able to reason about all sorts of abstract and concrete things), but conceivably the solution might be very different from brain-simulation.
The thought experiment can easily be adjusted to close the door on my objection here: rather than a man in a room with an enormous card index, we have a pretty accurate real-time computer simulation of some specific person. That way the computational problem is defined to be equivalent to something we consider conscious. Laboriously computing that simulation by hand (presumably not in real time but instead over millennia) would change the substrate, but not the computational problem. If we're ok with there being an outer host consciousness and an inner hosted consciousness, the thought experiment poses no problem.
Of course, this isn't the position I started at, but it makes some sense that the real meaning of the thought experiment changes as we adjust the simulated process. If our man were looking up the best moves to play tic-tac-toe, it would be plainly obvious that we're looking at competence without comprehension. If he's instead simulating the full workings of a human brain, the situation is different. The foreign-language problem is somewhere between these extremes.
> A good thing about artificial algorithms is that they are transparent: we can show that they aren't only effective in conversation, but have everything to be had behind that conversation. It's due to the latter. Simply put, the algorithm isn't GPT, but AGI.
I'm not sure I quite follow you here. I agree that the depth and detail of the simulation is an important factor.
> What's new is that the life of virtual people is shown at length, and questions are discussed as to what identity those people should have, what worldview, religion, philosophy, pride, dignity, justice.
In contrast to Ex Machina, which had the computer as a sociopathic villain with only surface-level feigning of normal human emotion and motivation.
While we're vaguely on the topic, homomorphic crypto also puts a spin on things. We know it's possible for a host computer to be entirely 'unaware' of what's going on in the VM that it's running, in a cryptographic sense. Related to this, I've long thought that there's a sticky 'interpretation problem' with consciousness (perhaps philosophers have another term for it) that people rarely talk about.
If you run a brain simulator inside a homomorphically encrypted system, such that no one else will ever know what you're running in there, does that impact whether we treat it as conscious? Part of it is that the simulated brain isn't hooked up to any real-world sensors or actuators, but that's just like any old brain in a jar. Philosophically pedestrian. This goes far beyond that. Someone could inspect the physical computer, and they'd have no idea what was really running on it. They'd just see a pseudorandom stream of states. If there's consciousness inside the VM, it's only there with respect to the homomorphic crypto key!
If we allow that to count as consciousness, we've opened the door to all sorts of computations counting as consciousness, if only we knew the key. We can take this further: we can always invent a correspondence such that any sequence of states maps to a computation stream that we would identify as yielding consciousness. This looks like some kind of absurd endgame of panpsychism, but here we are.
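To make that "any sequence of states maps to any computation" worry concrete, here's a toy one-time-pad-style sketch (my own illustration, not from any of the works discussed): for any observed state stream and any computation trace we care to nominate, an XOR key relating the two always exists.

```python
# Toy illustration: for ANY observed machine states and ANY nominated
# computation trace, an XOR "correspondence key" relating them exists.
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Pretend these are the opaque, pseudorandom-looking states an observer
# reads off the physical machine.
observed_states = os.urandom(16)

# Nominate any computation trace we like -- say, one we'd call conscious.
target_trace = b"conscious steps!"  # 16 bytes, purely illustrative

# The correspondence always exists: key = states XOR trace.
key = xor(observed_states, target_trace)
assert xor(observed_states, key) == target_trace
```

The catch, of course, is that such a "key" is as long as the trace itself, so it contributes all of the information.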
Is there an alternative? I'm increasingly of the opinion that it seems like a non-starter to try to deny that transistor-based computers could ever be the substrate of consciousness. Short of that, where else is there to go?
>If there's consciousness inside the VM, it's only there with respect to the homomorphic crypto key!
But the entropy of the encryption key is the degree to which the consciousness is "hidden". This entropy is still massively lower than the entropy of the matter that makes up a brain. If we have some way to demonstrate the encrypted system is computing the mind program, say, by interacting with its input/output, then we can in theory demonstrate the system is conscious. The fact that the encrypted system's operation maps to a mind program with entropy equal to the key, rather than equal to the entropy of the bits in the mind program entails that the encrypted system intrinsically encodes the mind program. If the mapping were equivalent to just mapping the states of an arbitrary system to the mind program, the entropy would equal the much greater number of bits in the program. Comparing the entropies is the key differentiator.
> the entropy of the encryption key is the degree to which the consciousness is "hidden"
Seems fair.
> But the entropy of the encryption key is the degree to which the consciousness is "hidden". This entropy is still massively lower than the entropy of the matter that makes up a brain.
Sure, there are far more possible states for a brain, than possible 4096-bit keys (for example).
> If we have some way to demonstrate the encrypted system is computing the mind program, say, by interacting with its input/output, then we can in theory demonstrate the system is conscious.
Right, although if we further adjust the thought experiment we run into a sort of 'systems argument' problem.
Suppose the homomorphically encrypted system uses an encrypted channel to communicate with actuators, such that a decrypter module is needed to connect it up. (We need not encrypt the channels from the sensors.) In that case, the homomorphically encrypted brain simulator plus the decrypter module, adds up to what we call a conscious system. On its own, the homomorphically encrypted brain simulator does nothing of interest, or at least, appears to do nothing of interest.
> The fact that the encrypted system's operation maps to a mind program with entropy equal to the key, rather than equal to the entropy of the bits in the mind program entails that the encrypted system intrinsically encodes the mind program.
That's what we'd typically expect of a cryptographic system, but now we have a sliding scale. What if the key were so large that it outweighed the state-space of the machine itself? Do we conclude that the length of the key determines how conscious the system is?
We could say that the contribution of homomorphic crypto is just that it permits us to use vastly smaller (pardon the oxymoron) keys to scramble states and their progressions.
What if we define another cryptographic scheme such that the correspondence formerly represented by a very long key, is instead represented by just a few bits? (Finally a way to tie philosophy of mind to Kolmogorov complexity!) Or perhaps I'm misunderstanding the point about entropy here?
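To make the "very long key with a short description" idea concrete, here's a toy sketch (my own, purely illustrative): a 1000-byte key whose full description is just a seed and a length.

```python
# A long "key" generated from a tiny description (seed, length): its
# Kolmogorov complexity is far smaller than its raw length suggests.
import random

def keystream(seed: int, n: int) -> bytes:
    rng = random.Random(seed)  # deterministic, reproducible PRNG
    return bytes(rng.randrange(256) for _ in range(n))

long_key = keystream(42, 1000)          # 1000 bytes of key material
assert keystream(42, 1000) == long_key  # fully determined by a few bytes
```

Such a key is, of course, no longer statistically independent of anything generated from it, which is where the mutual-information consideration comes in.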
The point of introducing entropy was to give us a principled way to identify which systems intrinsically capture some process. This is to avoid pancomputationalism, the claim that every system computes and that we merely project particular meanings onto computational systems. If some operation is in a state space of 1x10^1000 bits and using some external system we can perform the operation in 100 steps (e.g. we did 100 guess-and-check steps), we know that system intrinsically captured approximately 1x10^1000 bits of the operation. If pancomputationalism were true, all systems would be equally computational in nature, and so no system would be better than any other at supporting the performance of any operation. But this is obviously false.
But the entropy considerations are just a practical way for us to identify computational systems with certainty. They aren't an identity criterion. Consider your homomorphically encrypted program where the key is longer than the state space for that program. Presumably we cannot tell this encrypted program apart from some random set of operations that computes nothing (I doubt this is true in practice, but let's go with it). How can we say this program is in fact computing something? The assumption that the program is homomorphically encrypted also says there is a magic string of bits that unlocks its activity. Further, this magic string is independent of (i.e. has zero mutual information with) the program in question. Essentially the key is random, and so it cannot provide any information about the program itself. So when the key is combined with the encryption scheme to produce the decrypted program, we know that the program was embedded in the encrypted system the whole time, not added by the application of the key.
The key point is that information doesn't just pop into existence, information requires upfront payment in (computational) entropy, a computation-like process that does something like guess-and-check over the state space of the information. If ever you have a string of information, you either got it from somewhere else or you did guess-and-check over the state space. In the case of the homomorphic encryption, we know the key is independent of the program and so the key does not secretly contain the program. Thus the program must already exist in the behavior of the encrypted system.
We don't know the key, but we know it exists by assumption. "Exists" here just means the upfront computational cost has already been paid for the relation between the hidden program, the encrypted system, and the decryption key. Indeed, we can in theory recover the encrypted program with comparatively zero computational work by using the key. This is in contrast to recovering the mind program from, say, the dynamics of my wall: no upfront computational cost has been paid, so I would have to search the entire state space to find a mapping between the wall and the mind program. Thus the wall provides no information about the mind program, i.e. it is not computing the mind program.
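The asymmetry being described can be sketched with a toy one-time pad (my own illustration): with the key, recovery of the hidden string is essentially free; without it, the ciphertext alone picks out no plaintext at all, because every plaintext of that length is reachable by some key.

```python
import itertools
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

secret = b"mi"                 # stand-in for the hidden "mind program"
key = os.urandom(len(secret))  # chosen independently of the secret
ciphertext = xor(secret, key)  # all an outside observer ever sees

# Holding the key: recovery is linear-time; the cost was paid up front.
assert xor(ciphertext, key) == secret

# Not holding the key: brute force over all two-byte keys reaches EVERY
# possible two-byte plaintext, so the ciphertext alone picks out none.
reachable = {xor(ciphertext, bytes(k))
             for k in itertools.product(range(256), repeat=2)}
assert len(reachable) == 256 ** 2
```

(Real homomorphic schemes aren't one-time pads, of course; this is only meant to illustrate the free-with-key versus search-without-key asymmetry.)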
> A more useful perspective would be from a human brain simulation angle.
>The Chinese room argument certainly only exists to muddy the waters.
Does not compute. :)
The Chinese room argument, as I understand it, is not about whether the human understands Chinese. The question is whether the room understands Chinese.
Indeed. If you reject that "the room as a whole" speaks Chinese, you reject functionalism. If you think philosophical zombies are conceivable, this is also a rejection of functionalism. Ultimately the acceptance or rejection of functionalism is (as far as I can tell) only a matter of dogma, and these two arguments serve to find out which dogma you (implicitly) subscribe to.
Personally I place the rejection of functionalism on basically the same level as the acceptance of dualism. I don't know what precisely has happened to make most serious researchers reject dualism, but perhaps non-functionalism will run the same course.
> The Chinese room argument certainly only exists to muddy the waters. The whole premise is a misdirection by having a human execute a Chinese-speaking program by hand, then asking if the human understands Chinese, completely ignoring the fact that it is the program that is responsible for the behavior, not the human. That would be like asking if neurotransmitters understand Chinese instead of focusing on the brain as a whole.
I really don't understand why people have such a hard time with this argument. Seems like there's some unaddressed materialist/eliminativist prejudice underneath it all.
The program isn't doing anything because it's just a bunch of symbols. You need something to "animate" the program, but that something need not comprehend the symbols it is given (hence the whole idea of an effective method). (Actually, Searle goes further when he describes computation as observer relative, and I would extend this to programs. That is, there is not an objective fact of the matter that a computer is a computer and that it is computing. If you looked at it, there's nothing in what's going on that says "oh, yeah, this machine is adding numbers". Kripke also gets into this with his "quaddition" argument.)
When you understand Chinese, reading a symbol or legal string of symbols leads your mind to form conceptual content with some intentionality or signification. We can have other thoughts in response and this can lead to the production of signs with some other signification. The program lacks this semantic component and merely performs what amounts to a syntactic translation of the input signs into output signs. This process is entirely stripped of any semantic element. And that's the point. Computers are highly systematized patterns of convention that permit the simulation of some of what we would typically expect of human beings or maybe some other animal, but strictly speaking, they don't even compute anything, much less understand.
Your neurons aren't doing anything except spiking in response to action potentials (among other physical processes). They're just producing outputs given some chain of inputs and have no semantic understanding either. Where does the understanding of Chinese (or English) come from then?
Also, your assumption that the program forms no conceptual content will depend on what exactly the program is doing. The Chinese room says it's carrying out a conversation in a manner indistinguishable from a human, but that could mean anything from running a very fancy GPT-3 to running a whole-brain simulation of a Chinese person.
In either case, the human running the program is a distraction. You might as well ask if the pencil or the book he's using to compute the instructions understands Chinese. If the program was running on a CPU, it'd be the same thing - the CPU doesn't understand Chinese, the program does.
>The program isn't doing anything because it's just a bunch of symbols. You need something to "animate" the program, but that something need not comprehend the symbols it is given (hence the whole idea of an effective method).
But the symbols aren't the interesting piece of a sequence of computation, but rather the structure being instantiated. There is a map/territory ambiguity here. The symbols are just placeholders into an abstract structure that describes the dynamics of some system. When the structure is reified, i.e. made actual by being implemented in a system, then it is the structure that has causal efficacy in the world and it is that which we interact with. Dismissing the idea of an implemented program understanding Chinese because "the program doesn't understand the symbols" is confusing the map for the territory.
>Searle goes further when he describes computation as observer relative, and I would extend this to programs.
I totally disagree. If I have a program that inverts an arbitrary matrix, this is not a function I project onto the program. This function is intrinsic to the sequence of operations it carries out. An intelligent alien studying the operation of the program would be able to figure out its function given enough time and effort.
>The program lacks this semantic component and merely performs what amounts to a syntactic translation of the input signs into output signs. This process is entirely stripped of any semantic element.
The CPU is performing syntactic translation, but the structure being implemented is an existing thing beyond CPU operations. And this is the misdirection in the Chinese room argument. To ask if the man understands Chinese is just to ask the wrong question.
You haven't really done anything here but ignorantly dismiss philosophy categorically (citing two relatively recent arguments made in the context of the modernist tradition). Frankly, I don't lose sleep over zombies personally because I don't find this to be a real problem. It is the result of modernist presuppositions. What I find rather philistine is the cockiness of science fetishists who fail to recognize their own metaphysical presuppositions, which typically seem to be, because of historical accident, preposterously crude, like materialism or dualism.
Funny, I've been going around repeating "Consciousness is just a feeling" for a while. My question is also "why has it evolved at all?" Keep reading for a hint.
Hunger, for example, is just a feeling.
Thirst is just a feeling.
But we don't build complex theories of the world around Thirst. Somehow we do with Consciousness, because I suspect it tricks us, playing recursively with our thinking.
Consciousness, which nobody really ever defines clearly, is probably just a name for a bunch of feelings we have.
It's clear why Thirst has evolved: to get us to find water. Not salty water.
Probably Consciousness does something like that. For example, it might be just a feeling of oneness to keep us intact, across peripherals (legs, arms) and time (now is really a continuation of whatever we were doing before; you need a feeling to enforce that).
>Consciousness, which nobody really ever defines clearly,
I think this is really the core of the problem. My non-scientific belief is that when we learn enough about the brain, we'll learn that "consciousness" is a good descriptor for the human experience of having a mind, but not really a meaningful word scientifically.
Stack depth is finite. If consciousness is cognitive recursion, we only can get so far down before the results are garbage. My working theory is that "max cognitive stack depth" is a measure of consciousness.
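The "max cognitive stack depth" metaphor can be made concrete with a toy sketch. This is purely illustrative and not a claim about brains; the names and the depth cap are made up.

```python
# A toy self-model: an agent models itself modelling itself, but only to a
# finite depth, beyond which the nested model degrades to a placeholder.

MAX_DEPTH = 3  # the hypothetical "max cognitive stack depth"

def self_model(depth: int = 0):
    if depth >= MAX_DEPTH:
        return "..."  # past this point, the results are unavailable
    return {"i_am_thinking_about": self_model(depth + 1)}

# self_model() nests exactly MAX_DEPTH levels before bottoming out:
# {'i_am_thinking_about': {'i_am_thinking_about':
#     {'i_am_thinking_about': '...'}}}
```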
The scarier concern is that consciousness is a useful intelligent-behavior bootstrapping tool, but once we encode everything it has helped generate into cultural DNA or literal DNA, it becomes an evolutionarily redundant, appendix-like organ that will atrophy in the coming generations (see Blindsight by Peter Watts).
That analogy doesn't work at all, since "the body contains the brain" does not describe a recursive relationship.
By contrast, you see people trying to "explain consciousness" by telling a story that assumes consciousness. When someone makes a statement like "Consciousness emerges from mechanism X in the brain", every observation that led to this statement originated in someone's consciousness.
It's less obvious than, but completely analogous to, how it's impossible to decide whether we live in a "base reality" or some sort of simulation - everything you could say about this reality that we perceive is contingent on that very reality.
It's not weird because where you expect it to be is where it is, if it were somewhere else (some distant ansible transmission) you would be used to that and think it weird to imagine it being in the body.
Although I understand what you're pointing at, I think you have focused too specifically on feeling or reason (i.e. thought). I am willing to go as far as to say that all our actions, thoughts and feelings are mechanical/predetermined - but this all doesn't cover "existence". That's where I believe consciousness truly lies. It doesn't think or decide what to think, nor does it decide how to feel and it does not plan or take actions. It does, however, experience it all. Your life-story is a roller-coaster and consciousness is that thing going on for the ride.
IMO, the true nature of consciousness sits at the same level as the nature of the universe. It touches on what it means to simply "exist".
Having said that, I'd be happy to know what others think.
I see what you mean. The "existence" part I suspect is a trick. I think "That thing going on for the ride", is actually just a feeling evolved for a bodily purpose.
The body is doing the ride, with all its chemical gradients pulling the levers, and the thing we call Consciousness is just "the feeling of the ride", which has evolved to keep some temporal/spatial unity. You could have legs, and memories, without being able to connect them to you.
For example, without that feeling of unity, the brain wouldn't know which subject all these things are related to.
Something would be thirsty or hungry, but it wouldn't know that it is the same thing, with those legs and memories, that it was referring to just a moment ago.
True, but that ability to relate a certain feeling to an internal issue (e.g. thirst or hunger) is somewhat of a learnt behavior. It was part of the childhood ride that has now long been forgotten.
This still, however, doesn't solve the issue that those signals "exist" somewhere from a certain point of perspective. I think sight highlights this to me the most clearly. Although vision serves a purpose in deciding motor function, it also simply exists. I don't "feel" that I see - I simply "see".
Going back to how you phrase it - who or what is "feeling"? Everything we do or think may be mechanical, but there is a distinction between I and my dog. I am not riding the dog-life roller-coaster, I am riding my own human-life roller-coaster.
If I cloned myself, I'd be happy to state that both versions would think and behave as me, but I would only exist in one of them.
I wouldn't describe thirst as a feeling, but maybe that's because we can obviously tie a physiological state to it. If consciousness were a signal/feeling too, what process would it be serving? The need for the brain to keep chunking the world and producing new memories and ideas?
This seems like a very accurate description, especially the observation about temporal continuity, which we take for granted, but on occasions it can feel like a fragile illusion.
> Consciousness, which nobody really ever defines clearly
"An organism has conscious mental states if and only if there is something that it is like to be that organism—something it is like for the organism." [1]
We don't know what consciousness is OR what it's for. There is no theory of what a brain is capable of with consciousness vs without or if this is even a valid question.
I don't think this is an accurate statement, and I also do personally have my own theory. I would be quite shocked if NO ONE had a theory of what consciousness is for.
You can claim non-cognizance on just about anything. The ability to reflect on first-order sensory inputs has clear evolutionary advantages. Either that is part of what you're calling consciousness, or it is not. If you fall into the latter camp then you're just debating the semantics of an English word.
The ability to reflect on inputs (or even on internal processes) doesn’t necessarily imply a subjective experience—which is what most people mean by consciousness even if they’re unclear about it. Taken generously, it would be appropriate to replace “consciousness” in the GP’s argument with “the subjective experience”. In so doing I think I’d be hard-pressed to argue with his claims.
I'm talking about the hard problem, which has nothing to do with whether a system can analyse/reflect on inputs. Non-conscious computers can do this. The truth may be that analog computation in physical systems like the brain results in consciousness, i.e. you get consciousness but it doesn't do anything; it's the result of something. The other option is that it enables functions that are not possible without it. We just don't have answers to these questions.
Decent interview, but surprised no mention or allusion to the hard problem[1]—saying consciousness is "just a feeling" doesn't do much to chip away at the core issue.
Oh? I thought he said clearly that consciousness (your hard problem i.e. the experience of qualia) is just a feeling (from the brainstem).
And then he explains that cognition takes place in the cortex, which isn't the seat of consciousness. Cognition is built on top of, and relies upon, the consciousness relayed from the brainstem for "drive".
So, with cognition, you can calculate that to satisfy your desire, you need to walk three blocks north and two blocks east to the store to buy food. The drive for this behaviour originates lower in the brainstem through the quality of hunger, which is referred from the body when a sensor in your tank registers empty.
Fair question, but my problem (and I could be misunderstanding his position) is that he doesn't provide any cohesive explanation or line of reasoning why there's subjective experience effectively accompanying these "drives" at all.
The hard problem would put this in the category of easy problems[1].
Perhaps I don't understand what the hard problem is?
Is it this thing of trying to find the little man inside of your head that is viewing the television screen that displays the images that go in through your eyes?
Could you help me to see what the hard problem is?
Consciousness is the ghost in the machine that seems to be unnecessary to the functioning and behavior of the machine.
All of your interactions with the outside world can be explained by the signals in your brain, even our discussion of consciousness can easily be explained by physical/measurable processes.
This conversation would still happen in a universe without any conscious entities. Of course you can build a machine that can recursively think about itself, and in turn speak about this self-awareness. This should all be able to happen without there being a subjective experience of that self-awareness.
So, the problem is figuring out where the little man is inside of your head that is viewing the television screen that displays the images that go in through your eyes? And experiencing the hunger that comes from your stomach? The problem is finding the "I" in the sentence "I feel"? Where is "I"? Is that the problem?
If that is the problem, how about the answer "nowhere"? There is no "I".
I suspect you won't like that answer. It will feel unsatisfactory, is that your reaction? I don't have the same reaction, but I'd like to understand how it is for you?
Ok, well all this stuff that you're coming up with is called the Hard Problem of Consciousness, and it's a big topic with many many words written on it. And it's what the thread parent was saying wasn't satisfactorily answered by the glib "just a feeling" answer.
Roughly, although I would say it's not so much "where" as "what".
Your answer that "there is nothing doing the experiencing" strikes me as obviously unsatisfactory, given that "I" experience things all day long every day, and "I" assume "you" do too.
Not all day long. Right at the start of the day, when I've just finished sleeping, when I'm awake, but I haven't yet figured out where I am, or what day it is, or what I've got on today, or anything.
It's a lovely, easy feeling.
Then it hits me. I'm in my house on the east side. I've got that meeting today. My girlfriend is still angry at me.
In those few moments before all that data is mounted, there is no "I".
>> "I" experience things all day long every day, and "I" assume "you" do too.
> Not all day long. Right at the start of the day, when I've just finished sleeping, when I'm awake, but I haven't yet figured out where I am, or what day it is, or what I've got on today, or anything. It's a lovely, easy feeling. ... In those few moments before all that data is mounted, there is no "I".
You have just described the "I" of awareness perfectly! That is the true, indeed only, "I", the one who is aware of all the temporary, passing phenomena, whether they be thoughts, feelings, sensations, perceptions, etc.
The fact that you're not conscious when you're not conscious doesn't prove that consciousness doesn't exist. On the contrary: the fact that you sometimes are conscious would seem to prove that consciousness does exist.
Oh man, who cares about the article, it's the consciousness thread! ;)
I also have my strong opinion, but anytime such heated discussion emerges it's worth remembering that disagreement is often more about mapping between some word and reality, rather than discussing the reality associated with that word.
I'm always surprised in these discussions that the ideas of Tesla, Planck and others don't come up. Someone once tried to convince me that consciousness arose from the brain by saying that if he removed a part, then consciousness would change and diminish. I answered along the lines of Tesla that perhaps the brain is like a radio receiver, and if you remove parts from the receiver it is not going to work.
My brain is only a receiver, in the Universe there is a core from which we obtain knowledge, strength and inspiration. I have not penetrated into the secrets of this core, but I know that it exists
― Nikola Tesla
All matter originates and exists only by virtue of a force which brings the particles of an atom to vibration and holds this most minute solar system of the atom together.
We must assume behind this force the existence of a conscious and intelligent mind. This mind is the matrix of all matter.
― Max Planck
It is known that you can remove parts of the brain, thus reducing the ability to think, sometimes abruptly (but not completely). Some such removals will be fatal to the ability to stay conscious at all.
You can remove the antenna from a radio and remove the ability to hear the ballgame. That does not mean the game was happening in the antenna.
Before getting hung up on the idea of transmission, understand this is just an analogy: breaking something by removing a part does not prove much at all.
“Since the cerebral cortex is the seat of intelligence, almost everybody thinks that it is also the seat of consciousness,” Solms writes. “I disagree; consciousness is far more primitive than that. It arises from a part of the brain that humans share with fishes. This is the ‘hidden spring’ of the title.”
This is a very cerebral approach (pun intended), not at all shared with many or even the vast majority of non-western philosophical schools. Until science begins to integrate (or reject) these models, we are a long way off from even approaching whether the mechanisms of consciousness can actually be 'known', let alone making affirmative assertions about it.
I must've read about 100 articles in a similar vein to this over the last 10-15 years. I feel like I've learned literally nothing from any of them. I read Descartes and later David Chalmers, and I feel nothing has been added to our understanding of the fundamental issues at all. And to be honest, I shouldn't be surprised or particularly disappointed. The Mind-Body problem has existed for millennia for good reason.
b) Affect is real, and not simply an imagined experience.
In other words, question the validity of the above. Are they in fact true? How do we know them to be true? Why is a feeling given more credence than a hallucination? What if all feelings are an imagined experience, one that is "programmed" in from birth by our genetic material? Just because an imaginative experience is encoded genetically, that shouldn't make it any more real, should it? What would humans look like to some alien species that is capable of cognition and consciousness (without being robotic, as our species' sci-fi literature imagines) but not affect and identity? Wouldn't they see us all acting as if suffering from some mass delusion?
You are using very strange definitions of terminology. Consciousness is just the word we use for the fact that a stream of experiences exists. To say it might be "imagined" is nonsensical - the experiences are there regardless, and their existence is the mystery.
You are misrepresenting my comment, which I wrote only because you said "I feel like I've learned literally nothing from any of them".
I indicated that affect, not consciousness, is imagined. Furthermore, I indicated (which you entirely overlooked) that consciousness is independent of affect. Take that as a working theory, and you just might discover something new.
Discard whatever you've read in the last 10-15 years, and start afresh; otherwise, you'll just be rehashing the same old same old.
I'm not misrepresenting anything. You asked "what if feelings are an imagined experience" and I responded directly to it. Feelings are a type of conscious experience. Consciousness is not independent of affect, affect is an aspect of conscious experiences. These are just the definitions of words. Nothing new is "discovered" by redefining words in this way.
I believe that consciousness is a simulation our brain is running. Our brain is a computer that runs human simulations.
This is why we anthropomorphize everything. Why we buy our dogs a little dog-house that looks like a people house. Dog doesn't care.
Why we play such different roles - a cruel boss may be a loving father a few hours later. Likewise, why when people adopt a nickname "What would X do?" they find courage, or get to act differently (common trick in sales).
It is why religions made god in man's image. Gods that fit human archetypes, roles or feelings (the father, the mother, lust, war) etc.
And of course, why people with multiple personality disorder get to have such vast personality changes and ups and downs.
What is the evolutionary benefit? Empathy.
If I can simulate what you feel, I get to understand what you are going through, I may be kinder to you. And this way we get to cooperate and build a civilization that is not based on swarm mechanics (like ants, or bees).
Consciousness will soon have its Galileo moment. And we will be shocked to discover there is nothing special about our current human-centric world of "consciousness".
Calling consciousness a simulation doesn't resolve anything. A simulation is the imitation of a process, what is the process that consciousness is imitating?
Also saying there's nothing special about human-centric world of consciousness is somewhat of a contradiction. Being human-centric is incredibly special; as far as we know the brain is the only structure in the solar system that exhibits consciousness, and may very well be the only structure among a very small percentage of solar systems in the universe.
If that isn't special, then nothing is special and the word has no meaning.
I'm also sympathetic to explanations in that vein. In particular that one of the main pieces of it is our simulation of ourselves - a recursive mirrored lens.
Yes, I am with you. I really like Joscha Bach's theory that our brains hallucinate ourselves, almost as a fantasy character that goes through life. (Check out his appearance on the Lex Fridman podcast, a mind-blowing talk by Joscha.)
Not sure what you mean about assuming a self-sustaining thought.
The point is that you can never experience anything outside your own point of view, which means that you can never know if anything outside of your own personal experience is at all separate or independent of it.
Sure, you can “trust” others when they say something to you, but you can never experience anything from someone else’s perspective (unless you could become them, but then you’d stop being you). All experience is subjective. We can all try to agree on what something (a physical process) means, but that is only an agreement, it is not “true reality”. So in that sense, you can never know what “true reality” is, except for whatever you subjectively experience.
The article assumes that consciousness is something that results from our brain's behaviour. (The big bang creating reality, and life, and therefore consciousness.) Further, it seems to almost reduce it to a conceptual feeling. (It also implies your consciousness does not exist before your birth and after your death.)
A different view may see the brain as merely a processing, storage, and interaction peripheral device to our consciousness. (With consciousness being either an everlasting fundamental building block of the universe (e.g. God, etc), or consciousness being things in a different non-physical universe with different laws (a mirror copy of our mind that is used to exert spooky action at a distance, etc)).
That is, consciousness may also exist in some form without the physical body, and be a source of physical reality, or exist next to it.
As opposed to a consciousness existing in the brain that cannot even be explained? Not really.
Consciousness as a fundamental building block of all reality is a very simple solution.
So is consciousness in a symbiotic dual-universe configuration. Yin-yang configurations are quite natural.
Don't confuse a life with consciousness. Memories may only exist in your brain. Consciousness goes two ways. Not only does your consciousness observe your senses, you also make your brain aware that you are observing, that is, your consciousness also gives input to your brain.
If consciousness is a part of the brain, then this becomes merely a stray feedback loop, which is quite unnecessary, and should not be required for life to exist.
If the brain is a (temporary) peripheral to consciousness, and biological life and consciousness enhance each other in a symbiotic configuration, then the interaction of consciousness with the brain makes absolute sense.
The brain by itself is merely reactive. It recognizes learned patterns, it executes learned behaviors, and it learns from all that.
Consciousness observes the brain (and by extension consciousness sees, hears, and feels). The brain reacts on the input of consciousness (and by extension, consciousness makes our body act).
Consciousness as a feedback loop in the brain would only explain self-awareness of the brain. It does not explain the observer (nor debatably free will or conscious choice).
It is not a problem with the statement itself, more with quantum physics.
Besides, you seem to hold the incorrect assumption that quantum physics has any sort of "reality" problem, likely referring to the collapse issue, which is not even a problem in quantum field theory; it is just an attempt to squeeze field interactions into an "understandable" interpretation involving particles.
This question is hard to answer, but appears irrelevant to the contention point. The topmost comment is nonsensical because it claims consciousness "exists" (whatever that means) outside physical reality, which directly contradicts the definition of physical reality which is roughly all(X): exists(X) === (X in physical reality).
I’ve found consciousness to be like a muscle and most of us have never done more than step foot in the gym.
If I tell you right now to be aware of yourself, you would experience a brief moment of consciousness. Try it right now. See yourself from an outside perspective, sitting in your chair, doing stuff that matters or maybe not so much. Procrastinating, if you’re like me.
It’s a powerful “feeling".
But as soon as you continue reading, you’re already back to your normal state — in your head, not aware of yourself at all.
You have the consciousness-stamina of a 100 pound weakling.
It’s extremely difficult to develop yourself so you’re conscious all the time. I’ve been working at it for years, and I’m still not there. However, I’m more conscious than not.
The goal is that the consciousness program is running in the background all the time, and doesn’t require difficult mental work to maintain. The effects and benefits of getting to that point cannot be overstated.
I don’t agree that consciousness is a feeling, although it shares some of the same features. Feelings for most people are similarly fleeting and dictated by outside environment. Most people do not have the skill to affect their feelings, or their consciousness, for long.
I think consciousness is being able to do something without knowing how. We can play chess, do math, run, etc but we don't know how we do it. Presumably it has something to do with our brains, something to do with neurons, but we don't have any insight into exactly how we do it...we just do it. If we knew how we do these things, there would be no mystery.
It is astonishing what an obvious phenomenon consciousness is, and how little we know and think about it. It’s a massive scientific mystery, neuroscience can’t even define it well.
It sometimes seems science, e.g. neuroscience has not truly left dualism behind. We still seem to talk about and study mind and body as separate entities.
I couldn't read past the second paragraph, it was so poorly framed. It failed to show how this white man suffered "under apartheid", as he was able to just move to England when things became too inconvenient for him. It talks about him being protest adjacent, but did he actually do anything? He surmounted his obstacle with.... privilege. I don't fault Solms in any way, but I do fault the author for trying to garner sympathy for their subject while failing to explain why he deserves it.
> “Since the cerebral cortex is the seat of intelligence, almost everybody thinks that it is also the seat of consciousness,” Solms writes. “I disagree; consciousness is far more primitive than that. It arises from a part of the brain that humans share with fishes.
It’s interesting that he says that, but also later on says we are embodied creatures, which can be taken as meaning that consciousness arises from our full body experience, not from one particular specific location of the body.
'Consciousness' is one of those viral memes which generates value for its users much like a pump-and-dump scheme, it gets more real and intriguing the more people you get on board, sorta like 'Race'.
You can tell because, like Race, the minute you start asking after details, or why it's even operative or a term of art in the first place, the coherency reveals itself as brittle spaghetti statements all the way down. It's fascinating that we're kinda okay building a science around a term that isn't simply or exhaustively unpacked in terms of all constraints on its definition, like anything we can describe mathematically.
But humans are fascinated by meaning (which is completely tied to notions of providence [I can't think of a human culture that doesn't have an idea of providence/final destination]), truth hiding beneath the surface, possibility lurking just around the corner. We know by looking at so-called animal life: YAGNI, as far as meaning goes. Human beings are addicted to revealing, to un-covering. Even though in reality there is nothing hiding, nothing "covered", no truth beneath the surface, no depth to any of the superficiality.
But perhaps, if our brains really are 'prediction machines', then past a certain threshold they have to start "making meaning", which is to predict truth (at least at a social level), once there is nothing more to process in physical reality. They do this merely because the machine cannot stop predicting, even when there's nothing more to predict. Hence Nietzsche: "Man would rather will nothingness than not will at all".
The upside for human animal life is that all this meaning-seeking activity makes us super unpredictable in response to inputs, more so than most if not all animals. And unpredictability creates more possibility, so we simply reap more rewards than other animals by sheer frequency of being batshit, but we equally create new and different problems for ourselves as well.
But its also why the vast majority of people cannot be scientists (or sociopaths, for that matter): their brains have a hard time sticking with utter simplicity; there always has to be more than there is.
This is how our form-of-life drifts through the universe. Gorillas sometimes eat their own shit. We don't know why. But is it any more or less valuable than our meaning-making activity? Maybe not.
Consciousness is a type of thought. When we zone out, we stop being conscious to an extent. Bugs don't experience those thoughts about themselves so they're not conscious (just a wild guess).
Consciousness ceases when we are focused on solving problems. It returns the moment we think about ourselves.
So, what is a thought? Why are there some types of thoughts that only humans can have? I do not know.
Probably part of a media campaign. The Royal Institution just put up a talk by Mark Solms https://www.youtube.com/watch?v=CmuYrnOVmfk "The Source of Consciousness". As pretty usual for Royal Institution talks, it's all a part of a book promotion blitz.
"How can it happen that a physical creature comes to have this mysterious, magical stuff called consciousness? You reduce it down to something much more biological, like basic feelings, and then you start building up the complexities. A first step in that direction is “I feel.” Then comes the question, What is the cause of this feeling? What is this feeling about? And then you have the beginnings of cognition. “I feel like this about that.” So feeling gets extended onto perception and other cognitive representations of the organism in the world."
Seems to me like:
consciousness = a feeling or narrative of 'feelings', which arises out of a function that takes in all perception.
Yes - for the origin of consciousness, look at the simplest organisms - protozoans like slime mould on a rock under the sea 4 billion years ago.
In the morning, the slime mould colony positions itself on the eastern side of the rock to maximise exposure to solar energy coming from the east.
As the day grows long and the sun moves overhead and then to the western horizon, members of the slime mould colony on the edges detect the variation in radiant solar energy and, at thresholds, issue the command to move.
The cells on the south west edge of the rock cannot just move towards the north and then the west, because all the other cells are in their way. So they have to initiate a communication. "Move". The neighbouring cells in the colony begin to move.
Eventually, the message propagates to the other edge of the colony and the whole colony moves. It keeps moving until the new position of the centre of the colony reaches equilibrium for the radiant energy of the sun in its then position.
We can say in the aggregate that the colony moved because it felt like it. To say that is to ascribe "consciousness" to a blob. But really this was just the result of sensors in individual cells registering a drop in radiant energy reception, followed by a communication to move, followed by a behaviour, terminated by a new homeostasis when the sensors again reached a threshold.
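The sense-threshold-propagate loop described above can be sketched as a toy simulation. This is a hypothetical, one-dimensional colony with made-up values, just to show how "the colony felt like moving" reduces to local sensors and a relayed message:

```python
# A minimal sketch: each cell has a light sensor; when an edge cell's
# reading falls below a threshold, it issues "move", which propagates
# neighbour-to-neighbour until the whole colony has received it.

THRESHOLD = 0.5  # made-up sensor threshold

def colony_step(light_levels):
    """Return True if the colony moves this step."""
    # An edge cell registers a drop in radiant energy...
    if light_levels[0] < THRESHOLD:
        frontier = [0]
    elif light_levels[-1] < THRESHOLD:
        frontier = [len(light_levels) - 1]
    else:
        return False  # homeostasis: no cell signals, nothing moves
    # ...and the "move" message propagates cell-to-cell.
    received = set()
    while frontier:
        cell = frontier.pop()
        if cell in received:
            continue
        received.add(cell)
        for neighbour in (cell - 1, cell + 1):
            if 0 <= neighbour < len(light_levels):
                frontier.append(neighbour)
    return len(received) == len(light_levels)

# Evening: the eastern edge has dimmed, so the whole colony moves.
assert colony_step([0.9, 0.8, 0.7, 0.3]) is True
# Midday equilibrium: all sensors above threshold, no movement.
assert colony_step([0.9, 0.9, 0.9, 0.9]) is False
```

The point of the sketch matches the comment's conclusion: "the colony moved because it felt like it" is just shorthand for sensor readings, a relayed signal, and a behaviour that terminates at a new equilibrium.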
It would be worthwhile to pinpoint which definition we are discussing here:
1. the state of being awake and aware of one's surroundings (fish, humans)
2. the awareness or perception of something by a person (fish, humans)
3. the fact of awareness by the mind of itself and the world (humans, ?)
Fuck Freud.
He was a con artist storyteller, not a scientist.
Edit: I accept all the downvotes. I refuse to teach Freud in college psychology except to say, Fuck Freud. We are better off without him. psychoanalysis = psychobullshit = painful holiday conversations about my profession.
Just because Freud wasn't very accurate in psychology doesn't mean he isn't important for other fields of work. His influence has spread far beyond psychology, whether you like it or not.
Robert Stickgold: "Psychoanalysis is about as useful to neuroscience as creationism is to evolutionary biology. Freud, who aggressively defended his theories in spite of a lack of evidence, would have been at home in the Trump administration."
Freud Is Widely Taught at Universities, Except in the Psychology Department
Here's a comment I read on Reddit a while back which argues that Freud wasn't entirely without merit, and that some of his ideas - even scientific ones - still stand today, with citations I find pretty convincing: https://www.reddit.com/r/AskReddit/comments/9caomv/philosoph...
Not sure why you're being downvoted. The consensus among most scholars is that most of Freud's ideas are BS. He's liked because he was closer to the truth than many of the other frauds of his day, and because he used techniques and methodology that were innovative for his time.
But the whole left-wing infatuation with him (e.g. see the work of Foucault, Deleuze and Guattari, Lacan, Sartre) needs to end. Somehow, many of our intellectuals still believe that most of the core concepts of psychoanalysis are legitimate (e.g. the id-ego-superego model or the Oedipus complex), and they just aren't. They don't exist!
I always thought Hofstadter's argument (I see he's discussed elsewhere in this thread) was that consciousness was a convenient illusion, which is kind of like the idea that it's a feeling here. And that the key insight is that consciousness arises through self-reference in a non-obvious way. It's kind of like bootstrapping an operating system. It's the only explanation I've ever read where it all made sense and you could see all the mechanics of how it worked if they were explained in-depth. Maybe psychoanalysis and philosophical arguments can point you in the right direction or develop your intuitions, though.
I think once we figure out what consciousness actually is, we will regard it as more than "just a feeling" but not the be-all and end-all of cognition or intelligence either.
My guess: Consciousness is your brain's logging facility. Conscious brains tag their experiences with symbols to help the brain organize and index them for reflection and introspection later.
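Taken loosely, the "logging facility" guess above is a tag-and-index data structure: experiences get recorded in order, symbolic tags point back into the record, and "reflection" is retrieval by symbol rather than by time. A minimal sketch of that metaphor (the class and all names are my own illustration, nothing established):

```python
# A loose sketch of the "logging facility" metaphor: raw experiences are
# tagged with symbols, and the tags form an index for later reflection.
from collections import defaultdict

class ExperienceLog:
    def __init__(self):
        self.entries = []               # chronological record of experiences
        self.index = defaultdict(list)  # symbol (tag) -> entry positions

    def record(self, raw, tags):
        """Store a raw experience and index it under symbolic tags."""
        pos = len(self.entries)
        self.entries.append(raw)
        for tag in tags:
            self.index[tag].append(pos)

    def reflect(self, tag):
        """Introspection: retrieve past experiences by symbol, not by time."""
        return [self.entries[i] for i in self.index[tag]]

log = ExperienceLog()
log.record("warmth on skin", ["pleasant", "touch"])
log.record("loud bang", ["startling", "sound"])
log.record("soft music", ["pleasant", "sound"])
```

Here `log.reflect("pleasant")` pulls back the warmth and the music while skipping the bang - the tags, not the raw stream, are what make later introspection cheap.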
I would be careful not to take software/CS metaphors too far. I don't mean to say one domain's concepts can never be applied to another domain, but developers hold certain assumptions about our domain that may not apply elsewhere. Even basic observations can rely upon more deep-rooted axioms.
I only say this because it's something I've been reflecting on lately - I've found that by being too immersed in the context of software development, I've often fallen victim to this mindset:
This doesn't explain anything at all about the will, active choices, and introspection, all of which are a huge part of one's consciousness and can even extend it further (for example, by integrating the concepts discussed in threads like this into one's own self-reflection and experiences).