Hacker News
Temporal circuit of brain activity supports human consciousness (advances.sciencemag.org)
416 points by hhs on April 9, 2020 | 143 comments



Man, I love seeing more research like this. My personal experience, as someone who has dealt with a variety of issues in psychiatrists' offices and rehabilitation rooms, is that a clear scientific understanding of what is actually going on under my skull provides a much firmer basis for any therapeutic approach. I'm really hoping that in my lifetime we will see connections made between the physiological elements of consciousness and the modern psychiatric plagues of depression, anxiety, and addiction that finally produce the targeted, universally effective therapies that are desperately needed.


I was recently talking to a registered psych nurse, and we got talking about Cognitive Behavioural Therapy.

I believe I've been self-administering a form of it for a couple of years, and I summarized my understanding of CBT as "moving more thinking from the amygdala to the prefrontal cortex", and she confirmed that with "in laymen's terms; yes".

It's not like the fields are completely isolated, I guess is what I'm getting at with that anecdote. It's hard to go from neuroscience to psychology, but that's always being looked at. I reckon most big advancements will start coming when we start understanding the connectome more, but it's not like all advancements will come from there, and it's not like people aren't working right now to bridge neuroscience and psychology.

Also I want to hang out with the laymen she does.


I'm not qualified to contest this, but I do remember a side blurb from "Principles of Neural Science" (Kandel, Schwartz, Jessell) that said overactive mPFCs are associated with autism, and below is some more research on it.

I don't think you're saying this overtly, but I have seen people from the Thinking, Fast and Slow crowd glorify their PFCs as arbiters of cognitive bias while forgetting that healthy social and emotional processing requires integrated functioning between all the neural correlates involved, including the amygdala. I would venture to guess CBT is effective because it stops PFC overactivity, which is the opposite of the nurse's guess. But as a layman here, I can't say one way or the other.

I remember a decade or two ago, the ACC+vmPFC combo was getting a lot of praise as this balancing force between the dlPFC and the amygdala saying strong ACC+vmPFC could be the clue to healthy brains. I think the answer will always be, "hey all these parts are important. Just meditate, exercise, and eat right. And don't believe your thoughts too much (CBT)"

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5192959/ https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4688328/


As a depression patient who's more or less healed, here's my take: - In depression you can get lost in these negative cognitive loops

- One of the feedback loops is between what you feel and what you think - a bad feeling induces you to think some bad thoughts, and as those thoughts are combined with the bad feeling, they are validated as true. Now the thought is associated with a bad feeling, and one will give rise to the other, just like the tinkle of the bell induced Pavlov's dog to drool. But in this case the drooling of the dog can also make the bell tinkle, causing more drooling...

- the amygdala response is about the bad feelings, and how the bad feelings can induce more physiological discomfort due to the amygdala kicking in and doing "what it's supposed to" - now your bad feelings are supercharged as well

- so, I would say it's not only about the cortex or the amygdala; in depression the negative thought patterns and the physiological response can get linked into this destructive loop of continuous feedback. Hence, it does not matter that you can rationally tell yourself the bad thoughts you had are not that serious, because they just launched a full-scale amygdala-based storm of bad feeling and anxiety

- my SSRIs kind of felt like they cut out this bad loop. I felt like my cognitive self was insulated from the physiological response, giving me space to unlearn both cognitive and emotional bad habits one at a time without the disruptive loop taking control


Do you mind sharing your approach to how you make that shift from "amygdala to the prefrontal cortex"? Is it similar to practicing mindfulness with a focus on the now?


No, the prefrontal cortex is about planning, reasoning, and inhibiting emotions (we do it all the time without realizing it), among a lot of other things. The amygdala is about reacting to fear, among other emotions.

CBT gives you a toolset of questions to ask yourself to understand (a) which perspective you're currently looking at things from and (b) which other perspectives you could use.

Learning the 10 cognitive distortions and recognizing them is a good start. Cognitive distortions arise mostly through emotional processes (e.g. the amygdala, but really the whole limbic system).

Mindfulness meditation is an emotion-based approach, as it mostly relates (for laymen like me) to scanning the body. Scanning the body gives marked improvements to the insular cortex. It also gives marked improvements to the PFC (the inhibition part, not the planning part).

This is all written way too short and my knowledge is a bit stale on it. I used to be really into this a couple of years ago. It was during the time when I studied psychology (I even published a neuroscientific literature review :D).


I love the spirit of your response, but I feel the need to disagree a bit and elaborate on your statement: >Mindfulness meditation is an emotion-based approach as it mostly relates (for laymen like me) to scanning the body

The REAL essence and power of mindfulness is becoming aware of the contents of your attention. For some reason, focusing the attention inwards on bodily somatosensory experiences tends to encourage that, but the two are not the same. Body-scanning is more a technique to help encourage the development of mindfulness rather than the end goal in itself.

The reason this distinction is so important and powerful is that the brain regions which are feeding the contents of your attention are the ones that get reinforced. When you combine mindfulness with practice in redirecting your attention, it becomes an insanely powerful tool to fundamentally reshape your reality by restructuring your brain.


There's some irony there given that excessive body-scanning and hyper-vigilance can be common symptoms related to anxiety.

Though the CBT stuff in general and being aware of your attention do seem empirically helpful, I just find that the body-scanning focus, as a common starting point, may not be the best.


The primary difference is that the kind of awareness cultivated during meditation is non-judgemental. Mindfulness helps put thoughts into perspective, where you can observe them rather than feel absorbed by them. So in this sense you can pay attention to your body without getting carried away by the stream of anxious thoughts.


And therein lies the difficulty of the process: shifting towards objectivity, in a sentient being that is primarily (if not entirely) a subjective experience. I appreciate CBT, but often feel saddened to see that this doesn't get addressed in most of the resources attempting to educate about the practice.

"Mindfulness and Psychotherapy" (by Germer, Siegel, Fulton) has helped me with these issues by giving multiple perspectives on the process to develop. It's a book for therapists, which is one of its strengths, since in the long run we are essentially trying to get people to be their own therapists.


Yes exactly. The "mindfulness" designation is a recursive one, where first you are mindful, then you are mindful of that which is mindful, and all the way down.


My problem with these terms is there's no chance of not being mindful. The word seems to refer at once to both immediate awareness of surroundings and a kind of meta self-awareness. You're in the present moment no matter what you do, but redirecting focus does seem to help get a grip on emotions.


Until you find the first turtle, then it's turtles all the way down.


I will send turteCore, but you have to ask for it. Ribbit.


Could you expand on the "practice in redirecting your attention". What kind of practice you do? Thx!


I'm not the person you replied to, but you might want to check out The Mind Illuminated by John Yates, a modern meditation guide (based on Buddhist practices) that delves deeply into mind's systems of attention & awareness.


There are some studies that seem to indicate decision making is inhibited when emotions are inhibited. That is, you can ask someone to explain what the rational choice is, and justify it, but they will not actually make that choice until an emotional prompt spurs them on. They can be very, very good at planning extensively but won't take action. The study I'm thinking of was on brain-damaged patients without emotion but perfectly intact reasoning.

So I wonder how that fits in with what you've experienced. Is analysis through CBT in fact opening up new information to change how you feel about certain things (rationality induced emotion, spurring change)? Or perhaps I am misunderstanding the conclusion (I've seen it presented as such elsewhere so I'm not the only one).

> His insight, dating back to the early 1990s, stemmed from the clinical study of brain lesions in patients unable to make good decisions because their emotions were impaired, but whose reason was otherwise unaffected

https://www.technologyreview.com/2014/06/17/172310/the-impor...


> Mindfulness meditation is an emotion-based approach as it mostly relates (for laymen like me) to scanning the body

Just to clarify, body scanning is just one type / approach to meditation. Many practices don't utilize it at all, or only do so in conjunction with other techniques.


I appreciate the clarifications. I did write it a bit too hastily. Sorry about that.


Funnily enough, many combat-vet friends of mine have gotten the most returns from two things which both seem to affect the brain in the same manner: cannabis and CBT.

On the former, I can't recall the source at the moment, but there is significant research showing that the primary reason cannabis is so well received by those with PTSD is that it reduces amygdala activity and increases PFC activity.


The anti-correlated behaviour of these two networks, and even their default mode vs. attention functions, reminds me of the attention schema theory of consciousness [1].

Specifically, the attention schema theory posits that some constant back and forth signal switching between internal and external models of the world results in the illusion of subjective awareness, in an analogous manner to how task switching provides the illusion of parallelism on single-core CPUs.

[1] https://www.frontiersin.org/articles/10.3389/fpsyg.2015.0050...
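The single-core analogy can be made concrete with a toy sketch: interleave two "models" through one round-robin loop, the way one core interleaves processes to fake parallelism. The names "internal"/"external" here are just illustrative labels, not terms from the paper.

```python
# Toy sketch of task switching producing apparent parallelism.
# "internal"/"external" are illustrative labels only.

def model(name, steps):
    """A 'task' that advances one small step each time it's resumed."""
    for i in range(steps):
        yield f"{name} step {i}"

def run_interleaved(tasks):
    """Round-robin scheduler: one 'core', tasks interleaved in the trace."""
    trace = []
    while tasks:
        task = tasks.pop(0)
        try:
            trace.append(next(task))  # advance this task one step
            tasks.append(task)        # re-queue it: the back-and-forth switch
        except StopIteration:
            pass                      # task finished, drop it
    return trace

trace = run_interleaved([model("internal", 2), model("external", 2)])
print(trace)
# → ['internal step 0', 'external step 0', 'internal step 1', 'external step 1']
```

The trace looks simultaneous even though only one "model" ever runs at any instant, which is the gist of the analogy.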


So, pseudo-BASIC for the consciousness algorithm:

  10 look at world
  20 look at my reaction to world
  30 goto 10
Which generates consciousness like frames per second generates motion. Or like the colored lines over this black and white photo generate a color image:

https://twitter.com/SteveStuWill/status/1248000332027715584/...


> 20 look at my reaction to world

who is the "my" you are referring to? you have an 'a priori' conflict.


By "my" I guess he's referring to an internal state that has been built up over previous interactions with the external world? So more like:

  10 receive information from world
  20 do operations on information
  25 update state based on operation result
  30 goto 10


I think it's more like:

  10 receive information about the world, body and reward signals
  20 evaluate the current situation and possible actions
  30 act
  40 goto 10

Emotion results in step 20 when we judge situations and actions in the context as good or bad. In this step two subsystems cooperate:

1. a system for fast reaction - works best when there is no time to reason, or when the action is repetitive, or when available information is uncertain

2. a system for slow, reasoning based reaction - works when we can build a mental model and imagine possible outcomes, is especially necessary when we encounter novel situations and have the concepts necessary to reason about it

System 1 is based on instinct and system 2 is learned. They are both essential, as they are specialised for different situations. Using system 2 all the time would be too expensive and probably impossible; we need to rely on instinct, which in turn relies on evolution to be fine-tuned.

Learning happens through the reward signal. We reevaluate situations and actions based on outcomes. Emotion is just a synonym for the value we assign to our current state with regard to our goals and needs.

Our goals include adapting to the environment in order to assure the integrity and necessities of life - the primary goal, then as secondary goals - being part of a social group, learning, mastery, conceiving children, curiosity and a few other instincts. We are born with this goal-program which is in turn evolved.
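The two subsystems in step 20 could be sketched like this. This is only a toy illustration: the reflex table and the dispatch rule are invented, not taken from any study.

```python
# Toy sketch of the fast/slow split in step 20.
# The reflex table and the dispatch rule are invented for illustration.

def fast_system(situation):
    """System 1: cheap instinctive lookup for familiar cues."""
    reflexes = {"snake": "jump back", "smile": "smile back"}
    return reflexes.get(situation)

def slow_system(situation):
    """System 2: expensive deliberate reasoning for novel situations."""
    return f"model '{situation}', imagine outcomes, then choose"

def decide(situation, time_pressure):
    # Use the fast system when an instinctive response exists and there
    # is no time to reason; otherwise fall back to slow reasoning.
    reflex = fast_system(situation)
    if reflex is not None and time_pressure:
        return reflex
    return slow_system(situation)

print(decide("snake", time_pressure=True))       # → jump back
print(decide("job offer", time_pressure=False))  # → model 'job offer', imagine outcomes, then choose
```

Note that system 2 is the fallback precisely for the novel cases where no learned reflex exists, matching the description above.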


Yeah, this "perception-action cycle" has been well known and taught in neuroscience for a long time. What's new, at least since I studied it, is the anti-correlated tick-tock of these two key networks that seems to be happening. Amazing how similar to a game engine consciousness seems to be panning out.


> The default mode network (DMN) is an internally directed system that correlates with consciousness of self, and the dorsal attention network (DAT) is an externally directed system that correlates with consciousness of the environment

DMN : self awareness :: DAT : non-self awareness. The distinction is at least partly hard wired. If there's a self, there's a "my".

Another name for consciousness is self-awareness, which requires a self. And what else could a "self" be but a neural construct? This article is a theory of its construction.


I would say the Kalman filter is quite appropriate, where the state is our model of the world and the 'page flip' is a state update based on observation. It's also the same as a Bayesian model update:

> state = kalman(state, observation)
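As a sketch of that loop, here is a minimal 1-D Kalman update with made-up numbers (nothing here is from the article; it just shows the state-in, observation-in, state-out shape):

```python
# Minimal 1-D Kalman update for the loop above:
# state = kalman(state, observation). All numbers are made up.

def kalman_update(mean, var, obs, obs_var):
    """Fuse one noisy observation into the current belief (mean, var)."""
    gain = var / (var + obs_var)           # how much to trust the observation
    new_mean = mean + gain * (obs - mean)  # shift belief toward it
    new_var = (1.0 - gain) * var           # fused belief is more certain
    return new_mean, new_var

# Start nearly ignorant (huge variance), then observe repeatedly.
mean, var = 0.0, 1000.0
for obs in [4.9, 5.2, 5.0, 5.1]:
    mean, var = kalman_update(mean, var, obs, obs_var=1.0)

print(round(mean, 2))  # → 5.05  (belief has converged on the readings)
print(var < 1.0)       # → True  (and is more certain than any single reading)
```

Each pass through the loop is exactly the "page flip": prior belief in, observation in, posterior belief out.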


With a Kalman filter the model always stays the same and is used to weigh how much to trust each sensor's report. Changing the model would change everything; maybe that's why changing your worldview has such an impact on the way you "see" things.


Taking this further, I suppose you could consider persistence of vision [1] as analogous to consciousness.

It's an artifact of the limitations of the system.

[1] https://en.wikipedia.org/wiki/Persistence_of_vision


A persistence effect has to exist in one way or another to get real-time self-awareness; if it weren't in the input alternation, it would exist at the reasoning level.


Except without the frames and colored lines. Lines 10 and 20 don't provide the experiences. They're just behavior. Somehow all the sensations have to be added in when looking at the world and looking at one's reaction to the world.


let experience$ = INKEY$


Calling it an illusion is not interesting. We define consciousness and subjective experience to match the very experience we understand.

There is literally no way for it to be an illusion; the definition itself precludes it. No matter how consciousness and subjective experience are implemented in the hardware of our brains, it is still a concept that we use to describe the experience, and the experience is real no matter what.


> No matter how consciousness and subjective experience are implemented in the hardware of our brains, it is still a concept that we use to describe the experience, and the experience is real no matter what.

What does it mean for something to be "real"? Physics says your car is not actually real. There is no "car" particle or field in the physics ontology, there is no physical experiment we can run to definitively test whether something is a car or is not a car, such that aliens that evolved on another planet would agree perfectly with your assessments. If physics is our best theory of what actually exists, then your car doesn't really exist.

Analogously, this is the crux of the hard problem of consciousness: is the qualitative experience of consciousness actually real, or is it reducible to third-party objective facts, like every other phenomenon we've encountered, and so the irreducible properties it seems to have are actually an illusion that is reducible to non-conscious particles and fields?

Given the way you've phrased your post, that the brain "implements" consciousness, I expect you might agree that such a reduction is ultimately possible. In that case, you too might be an eliminative materialist, which asserts that consciousness does not really exist.

That said, all materialists agree that phenomenal experience requires an explanation, it's just that they assert that explanation will come from neuroscience. Antimaterialists assert that no such explanation is possible.


And yet most people call it a car, consider it real, and at the same time don't see a problem in reducing it to its physical properties.

"is the qualitative experience of consciousness actually real, or is it reducible to third-party objective facts"

Why not both?

The "real" refers to our subjective experience. That there is something it is like to be me. Something it is like to be a bat. And at least under certain definitions, that's what we call consciousness. That something I know I experience, and that I doubt a computer is experiencing too.

Why would this be incompatible with reducing this experience to third party objective facts? We simply don't know but I don't see why we couldn't.


> > "is the qualitative experience of consciousness actually real, or is it reducible to third-party objective facts"

> Why not both?

Because those are mutually exclusive options. Either something is ontologically fundamental, or it's not. It can't be both.


After posting my comment yesterday, I read some of your other comments and I don't think we actually disagree.

The problem is probably the definition of "real".

I'm not arguing that consciousness is a fundamental property of the universe when I say it is real. The jury is still out but that's not what interests me the most. When I say "real" I'm talking about my subjective experience. It is real in the sense that it is something that I know I experience, regardless of the mechanisms involved.

Ultimately, what I'm interested in is finding out how it arises. Explaining it. Reducing it to its "third-party objective facts" if possible. Being able to look at a machine that mimics us and tell if that machine is experiencing something comparable to what we experience.



Will do, thanks!


Can you visualize a car?


Sure.


Consciousness is basically the only thing we can conclusively say is NOT an illusion, right?


Exactly! The illusion argument (let us call it ArgIllusionCons) that applies to consciousness can be applied with just a few extra steps to everything we perceive, including ArgIllusionCons itself.


Perhaps, but there's no guarantee that it's anything more than a transient state, that lasts at most until you next lose consciousness. The consciousness that you have today may have no relation to that of yesterday or tomorrow, apart from running on the same brain "hardware" and having access to the same memories.


No, you can safely assume that your thoughts exist in the moment that you have them. Consciousness is more than simple thoughts.


I think therefore I am.



I can see how that would lead to an illusion of continuous subjective awareness, but I don't think it supports the notion that subjective awareness is entirely illusory. I think therefore I am, the existence of qualia [1], etc.

[1]: https://en.wikipedia.org/wiki/Qualia


"I think therefore I am" assumes the conclusion. "This is a thought, therefore thoughts exist" is the valid, non-circular version.

The attention schema theory addresses the specific problem of how we apparently infer first-person subjective facts when no such concept exists in physics, the latter of which consists entirely of third-person objective facts. The answer is that we erroneously conclude that the facts we perceive are first-person, but this perception is a sensory trick, similar to an optical illusion.

The question of qualia is larger than this specific question, but subjectivity was probably an important problem to overcome for a materialist explanation of consciousness. Dennett has long held that what we call "consciousness" is very likely a bunch of distinct phenomena that all get muddled together, and the fact that we have started to pick it apart hints suggestively that he was right.


> The answer is that we erroneously conclude that the facts we perceive are first-person, but this perception is a sensory trick, similar to an optical illusion.

I'm extremely skeptical of answers that involve labeling difficult challenges to a theory as "illusions."


> I'm extremely skeptical of answers that involve labeling difficult challenges to a theory as "illusions."

So calling [1] an optical illusion warrants skepticism because it's attempting to dismiss the challenge of having to explain how water can physically break and magically reconstitute pencils? Don't you see the problem with this sort of argument?

The point is that integrating all of our knowledge leaves no room for first person facts. Additionally, every time we've tried to ascribe some unique or magical property to humans or life (like vitalism), we've been flat out wrong. No doubt there are plenty of challenges left to resolve in neuroscience, and no one is claiming that a materialist account of qualia is unnecessary.

[1] http://media.log-in.ru/i/pencilIn_in_water.jpg


> So calling [1] an optical illusion warrants skepticism because it's attempting to dismiss the challenge of having to explain how water can physically break and magically reconstitute pencils? Don't you see the problem with this sort of argument?

Yeah, but that's a bit of a straw man.

The kinds of claims-of-illusion that warrant particular skepticism are the ones that deny fundamental observations in defense of some particular (usually sectarian, for lack of a better word) philosophical perspective.


What makes an observation "fundamental"?


naasking, as I see it, Dennett (in Consciousness Explained) engages in a sleight of hand. He redefines consciousness as "a bunch of distinct phenomena that get muddled together", but that doesn't touch the mystery of qualia; it tries to simply deny that there is anything to explain, owing to the fact (Dennett claims) that the problem is mischaracterized from the start.


And your mind plays sleight of hand all the time, which Dennett clearly establishes in his work. Or do you actually see the physical blind spot that's a fundamental feature of your eyeball?

So why would you trust your direct perception over the mountains of evidence that clearly demonstrates that we can't trust our perceptions?


No literate, scientifically minded person disputes your point, but it doesn't address my point. My point is this: qualia as a phenomenon exists. Even if I think a red thing is blue, I am still experiencing some color, and the experiencing itself - aside from its accuracy - is what needs explaining.

So experience, aka qualia as a phenomenon unto itself, is in need of explaining - not any particular qualia, and not the presence or absence of any correlation between the qualia and objective reality, i.e. the "truthfulness" or "accuracy" of the qualia.


> My point is this: qualia as a phenomenon exists. Even if I think a red thing is blue, I am still experiencing some color, and the experiencing itself - aside from its accuracy - is what needs explaining

Dennett wouldn't deny that either. He would simply say that we have no reason to think the qualitative experience of our perceptions are anything other than a cognitive trick with a functional purpose. Certainly how this trick works should absolutely be explained, and I don't think any materialist would deny that.


It is what Dennett thinks. Actually, it's what Dennett thinks he thinks, because that idea of Dennett's is inherently nonsensical; it is self-contradictory.

Here's why.

To explain qualia as a "trick" is to void the ontological status of qualia itself. He can't do that. It doesn't matter if it's all an illusion or a trick; it doesn't matter what its ultimate epistemological status is. Qualia is experienced, and it's the experience itself, whatever its biological underpinning turns out to be (you can't have sight without eyes), which is relevant.

Yes, all experience could be fallible and illusory, but the fact of experience itself cannot be an illusion.

Experience qua experience is the thing no scientific theory of perception and cognition needs. So why does it exist? In other words, why are we not as not-conscious as rocks and chemical processes and planets and electrical activity, doing all we do, saying all the things we say? It's certainly possible.

Dennett, and I am inferring this - I haven't heard him say it - is an ontological positivist. Only those things which the methods of science reveal to exist are "real", and everything else is, as you say, some kind of illusion. Sounds good. But an illusion (which is some experience whose epistemology we have misconstrued) is not itself an illusion. Its ontological status as "a thing which does exist" is secure.


> Yes, all experience could be fallible and illusory but the fact of experience itself cannot be an illusion.

Sure it can, and it remains only to explain how and why this illusion works to fool us into making erroneous statements, like "the fact of experience itself cannot be an illusion".

> Experience qua experience is the thing no scientific theory of perception and cognition needs. So why does it exist?

It probably doesn't! Although I'm not as convinced as you that qualia are entirely non-functional.

> But an illusion (which is some experience whose epistemology we have misconstrued) is not itself an illusion.

What is an illusion? To my mind, an illusion is a perception or inference thereof that, taken at face value, entails a falsehood. So to call phenomenal consciousness an illusion is to say that the claims inferred from our direct perceptions are false, eg. "I have subjective awareness". There's nothing problematic about this that I can see.


>What is an illusion? To my mind, an illusion is a perception or inference thereof that, taken at face value, entails a falsehood.

But that is not the part of the illusion we're interested in. The part of it we're interested in is the part it shares with all other experiences. It was an experience. Stop. That fact can't be gainsaid.

What you're using to deny this is the epistemological status of the illusion experience. So that's things like "it was caused by brain cells XYZ firing" or "it did not accurately represent reality" or "it did not correspond to anything in reality at all". All those things could be true but they are beside the point being made.

Either one gets this fundamental idea or they don't in my experience (lol).


We're discussing the ontological status of phenomenal experience, so its illusory nature is very much relevant to this question.

No one, not even eliminative materialists, would deny that people have what they believe to be phenomenal experience. See Frankish [1]:

> Does illusionism entail eliminativism about consciousness? Is the illusionist claiming that we are mistaken in thinking we have conscious experiences? It depends on what we mean by ‘conscious experiences’. If we mean experiences with phenomenal properties, then illusionists do indeed deny that such things exist. But if we mean experiences of the kind that philosophers characterize as having phenomenal properties, then illusionists do not deny their existence. They simply offer a different account of their nature, characterizing them as having merely quasi-phenomenal properties. Similarly, illusionists deny the existence of phenomenal consciousness properly so-called, but do not deny the existence of a form of consciousness (perhaps distinct from other kinds, such as access consciousness) which consists in the possession of states with quasi-phenomenal properties and is commonly mischaracterized as phenomenal.

[1] https://nbviewer.jupyter.org/github/k0711/kf_articles/blob/m...


If we can't trust our perceptions, then there is no mountain of evidence to say the mind is playing a trick on us regarding consciousness. That's because the scientific evidence is empirical, which is knowledge based on perception. Dennett's argument risks undermining the foundation for scientific knowledge.


The question of knowledge is indeed tricky. Your specific objection has kind of already been answered by science: you can't trust your senses, so you build instruments to extend your senses into domains you can't sense, you translate that data into sensory data you know is somewhat reliable, and you adhere to a rigorous process of review, replication, and integration of all observations into a coherent body of knowledge. Eventually, you converge on a reliable, replicable body of knowledge.

And so far, this body of knowledge suggests strongly that we can't trust our perception of consciousness.


> And so far, this body of knowledge suggests strongly that we can't trust our perception of consciousness.

But perception is a part of conscious experience. We don't perceive consciousness independent of things in the world. They go hand in hand. So we know about the world because we have conscious experiences of perceiving the world.

What Dennett and others are trying to argue is that only the qualities of perception which are objective exist, even though those qualities are accompanied by the subjective qualities. So we know the shape of an object by color and feel. If you abstract the shape out and argue the colors and feels aren't real, then what status does our knowledge of the abstract shape have?


goatlover said: "only the qualities of perception which are objective exist,"

If you change "objective" to "scientifically validated" you have defined a position known as "ontological positivism", and I would say Dennett does subscribe to this.

The funny thing is, the school of thought which says consciousness is computation by our brains and any properly programmed computer can be conscious - a school known as "functionalism" - itself gives ontological status to a non-corporeal abstract thing, namely computation.

If computation can be abstracted from not just the brain but any substrate at all, then it exists. So computation can take place on an AMD chip and an Intel chip and a Turing tape and anything you'd care to rig up made out of anything whatsoever, so long as it could represent the computation of a Turing tape.


> If computation can be abstracted from not just the brain but any substrate at all then it exists.

Existence is tricky as a proposition, as Kant famously argued. Does the following make sense to you: the law of non-contradiction exists.

Computation has a similar logical character to other rules of logic. In fact, intuitionism ensures a 1:1 correspondence between the two. So computation is not a "non-corporeal thing" any more than any other form of logic. If you take the rules of logic to also be non-corporeal things, well, then this "problem" you speak of was present in functionalism from the start, and yet it doesn't seem to trouble anyone.


> and yet it doesn't seem to trouble anyone.

Do mathematical things exist independently of the minds that conceive of them? The ontological status of abstract things, right? My point is, materialists deny these kinds of things. That's disembodied spiritual bunkum, and it has no place in modern thinking.

Then, later in the day, they're perfectly happy to deal with things just as abstract and non-corporeal without feeling like they're cheating in any way.

The fact is the philosophy of science has not caught up to the advances in science as any good QM thread here will show.

>Does the law of non-contradiction exist?

The fact that neither of us can answer this (assuming we both agree on what it implies about the world, which actually, heh... I am not totally convinced of, but that's another matter) in the way you meant it is an interesting fact, in the same family of interesting questions as those raised in this discussion.

The quarks->atoms->molecules->neurons->brains->experience (consciousness) chain of causality, which has been the standard model of reality for a few hundred years now, is broken at both ends, by which I mean the descriptive philosophical ideation at both ends is to no one's real satisfaction.


> Do mathematical things exist independently of the minds who conceive of them? The ontological status of abstract things, right? My point is, materialists deny these kinds of things.

Sure, and they would have to provide some sort of naturalist account for mathematics. There are some proposals for this kicking around.

> is broken at both ends by which I mean the descriptive philosophical ideation at both ends is to no one's real satisfaction.

Indeed, there is no hole-free reduction along the chain you cite, but those holes are continuously shrinking. This is why I consider the special pleading around consciousness a god of the gaps. There are some very interesting puzzles around consciousness, but I think ascribing a special status to consciousness will ultimately be abandoned, just like vitalism.


Platonism has been an ongoing debate for centuries, so the status of numbers and logic do bother some people.


Agreed, but I was referring specifically to it not bothering functionalists.


Science is a set of evolving traditions about how to fix errors and it relies on the consciousness/perception of individual scientists. Consciousness/perception is error-prone but it does seem intimately connected with the correction of error, too, as we strive towards better understanding. Aren't we compelled to trust it in this regard? That things will seem to be more like they really are, including consciousness itself?


>"I think therefore I am" assumes the conclusion.

But the thought is self-referential. The thought is about itself thinking, and so the thought instantiates the sufficient case for a subject. And so there is no question-begging.

>The answer is that we erroneously conclude that the facts we perceive are first-person

But as long as these facts are presented or represented as being first-person, the sufficient case for first-person acquaintance has been established. Whether these first-person facts are ultimately grounded in third-person descriptions or phenomena doesn't make them illusions.


> But the thought is self-referential. The thought is about itself thinking, and so the thought instantiates the sufficient case for a subject.

You just assumed the existence of a subject again. Where is the proof that a thought requires a subject? One that isn't vacuous or doesn't just assume its own conclusion?

> Whether these first-person facts are ultimately grounded in third-person descriptions or phenomena doesn't make them illusions.

It does for the technical purposes of the consciousness debate. The terminology we're using, like "illusion", has a technical meaning for the debate between materialists and antimaterialists, wherein antimaterialists argue that a first-person fact cannot be reduced to third person facts, even in principle.

Obviously even materialists speaking informally would still use first person language and speak normally about their experiences.


>Where is the proof that a thought requires a subject?

I'm not sure what you're asking. If the thought is self-referential, then the subject is inherent in the self-reference of the thought, namely the thought itself. I am not assuming some kind of conscious subjectivity here. Merely that the content of the thought is instantiated in the case of self-reference. If this were not the case, then the thought could not self-reference.

>has a technical meaning for the debate between materialists and antimaterialists

I'm familiar with the usual suspects in this debate (e.g. Dennett, Frankish), and I don't find their usage of "illusion" particularly "technical". They use it to mean that phenomenal consciousness doesn't exist or isn't real. But it's this very usage that I take issue with.


> If the thought is self-referential, then the subject is inherent in the self-reference of the thought, namely the thought itself. [...] If this were not the case, then the thought could not self-reference.

What property of self-reference do you think entails the properties of a "subject"? Does a self-referential term in a programming language also have a subject?

I suspect you would deny they do, so why does a self-referential thought entail a subject when self-reference in other domains does not? Because the only answers I can come up with either amount to special pleading on behalf of thoughts, or admitting a superfluous concept into every self-reference that makes no useful distinctions and conveys no useful properties that I can see.
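To make that concrete (a minimal sketch of my own, not something from the parent comment): in an ordinary programming language you can construct genuine self-reference with nothing resembling a "subject" anywhere in sight.

```python
# A self-referential data structure: the list ends up containing itself,
# yet no notion of a "subject" appears anywhere in the construction.
xs = []
xs.append(xs)       # xs now holds a reference to itself
assert xs[0] is xs  # genuine self-reference, no subject required
```

The self-reference here is purely structural, which is exactly why I think adding a "subject" to every self-reference makes no useful distinction.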

> They use it to mean that phenomenal consciousness doesn't exist or isn't real.

And what do you think they mean by "isn't real" or "doesn't exist"? Because in this setting, such a claim to me means that phenomenal consciousness is not ontologically fundamental. That's a pretty technical meaning. Dennett certainly didn't mean that phenomenal consciousness doesn't exist as a concept deserving a philosophical or scientific explanation; clearly it does or he wouldn't have written about it.


>so why does a self-referential thought entail a subject when self-reference in other domains does not?

I have a feeling you're taking "subject" to be a far more substantial concept that I am. A subject at its most austere is merely a distinct target of alteration. That is, it is subject to change from external or internal forces. So to have a self-referential thought is to say you are subject to change: from not having a thought to having a thought, thus you are a subject of some sort (the thinking sort). Although a thought itself does not necessarily imply a subject, recognizing one is having a thought (a self-referential thought) entails subject-hood on its bearer.

But to address the specialness of thought: thoughts have content. For example, a thought about an apple intrinsically picks out an apple. The thought somehow contains the semantics of the object of its reference. This is in contrast to, say, a bit that gains meaning from the context in which it is used (e.g. night/day, heads/tails, etc). And so a self-referential thought intrinsically contains the necessary structure to entail subject-hood. Self-reference in other contexts does not have this intrinsic content requirement.

>Because in this setting, such a claim to me means that phenomenal consciousness is not ontologically fundamental.

Frankish is very clear that he is eliminativist about phenomenal consciousness. He is not intending to simply be reductive about phenomenal consciousness, i.e. that it is "real" but reduces to physical structure and dynamics. His claim is that we misrepresent non-phenomenal neural structures as being phenomenal. This can be cast as a sort of reduction, as you're referring to. But he is clear that this is not what he intends.

As far as Dennett goes, he seems to endorse Frankish's project, so I would take him to also be eliminativist about phenomenal consciousness. But reading his own work, he's a lot more slippery about whether he's eliminativist or not, so I'm not sure where he actually lands.


> But to address the specialness of thought: thoughts have content. For example, a thought about an apple intrinsically picks out an apple. The thought somehow contains the semantics of the object of its reference. This is in contrast to, say, a bit that gains meaning from the context in which it is used (e.g. night/day, heads/tails, etc).

I don't see any distinction beyond the complexity of the information content. That thought about an apple carries an implicit context consisting of eons of evolution in a world governed by natural laws, ultimately bootstrapping a perceptual apparatus that you've used to accumulate relational knowledge throughout your life.

That digital bit you speak of was generated within the context of a program, also governed by rules, that gives it some meaning within that program, although that context is considerably smaller than a human's. I honestly can't think of any non-contextual expressions besides axioms, so I don't accept the distinction you're trying to make.

And I'm not assuming anything about what you might mean by "subject". If a thought is like any other datum tied to a context, then I don't see that a subject is necessary, unless we explicitly define "thought" and "subject" to be mutually recursive. I just don't see how adding a "subject" in the context of every self-reference makes any meaningful distinctions, and so it appears entirely superfluous in that context.

Anyway, as a fun exercise, someone had asked in this thread about a formalization of this argument, so I played with expressing the self-reference and existence proof in Haskell via Curry-Howard. This is what I have so far:

    {-# LANGUAGE ExistentialQuantification #-}

    -- A value of 'Exists a' witnesses, via Curry-Howard, that some 'a' exists.
    data Exists a = Exists a

    -- A thought is modeled as something that, applied to itself, yields content.
    newtype Thought a = AThought (Thought a -> a)

    -- The self-referential step: the thought applies itself to itself.
    thisIsAThought :: Thought a -> a
    thisIsAThought this@(AThought x) = x this

    -- Packaging the self-application as an existence proof.
    thoughtsExist :: forall a. Exists (Thought a)
    thoughtsExist = Exists (AThought thisIsAThought)
I don't think this quite captures it, but expressive yet sound self-reference is tricky in typed languages.

> He is not intending to simply be reductive about phenomenal consciousness, i.e. that it is "real" but reduces to physical structure and dynamics. His claim is that we misrepresent non-phenomenal neural structures as being phenomenal. This can be cast as a sort of reduction as your referring to. But he is clear that this is not what he intends.

This doesn't read as consistent to me. Frankish's paper [1] might be the best bet for clearing this up. The way he breaks it down seems consistent with what I've been saying, that we seem to perceive certain subjective qualities but that these qualities aren't really there, they're a trick, and so we simply need to explain why we think we have them.

I admit that my simplified analogies don't perfectly capture the nuance between Frankish's "conservative realism" vs. illusionism, but most discussions don't get so detailed.

So this all seems to hinge on the meaning of "real". Phenomenal consciousness is "real" in the sense that it can drive us to talk about phenomenal consciousness, at the very least. But like belief in other supernatural phenomena, it's likely a mistaken conclusion. This seems to be essentially what Frankish says:

> Does illusionism entail eliminativism about consciousness? Is the illusionist claiming that we are mistaken in thinking we have conscious experiences? It depends on what we mean by ‘conscious experiences’. If we mean experiences with phenomenal properties, then illusionists do indeed deny that such things exist. But if we mean experiences of the kind that philosophers characterize as having phenomenal properties, then illusionists do not deny their existence.

[1] https://nbviewer.jupyter.org/github/k0711/kf_articles/blob/m...


>I don't see any distinction beyond the complexity of the information content. That thought about an apple carries an implicit context consisting of...

Sure, if our measuring stick is information, then there is no difference in kind, merely a difference in complexity. But the complexity difference between the two is worlds apart, thus substantiating the distinction I'm pointing to.

But information is a poor measurement here. The quantity of information in a system tells you how many distinctions can be made using the state of the system. But information doesn't tell you how such distinctions are made and what is ultimately picked out. For something to be intrinsically contentful, it has to intrinsically pick out the intended target of reference, not merely be the source of entropy from which another process picks out a target.

So in a structure that has intrinsic content, the process of picking out the targets of reference is inherent as well. This means that structural information about how concepts are related to each other is inherent, such that there is a single mapping between the structure as a whole and the universe of concepts. This requires a flexible graph structure such that general relationships can be captured. It's no wonder that the only place we currently find intrinsically contentful structures is in brains.

>I honestly can't think of any non-contextual expressions besides axioms, so I don't accept the distinction you're trying to make.

Do the thoughts in your head require external validators to endow them with meaning, or do they intrinsically have meaning owing to their content? If the latter, then that should raise your credence that such non-contextual expressions are possible in principle. But to deny the notion of intrinsic content because you can't currently write one down is short-sighted.

>Phenomenal consciousness is "real" in the sense that it can drive us to talk about phenomenal consciousness... This seems to be essentially what Frankish says

In the paper you link, Frankish is circumspect about his theory being eliminative about phenomenal consciousness:

>Theories of consciousness typically address the hard problem. They accept that phenomenal consciousness is real and aim to explain how it comes to exist. There is, however, another approach, which holds that phenomenal consciousness is an illusion and aims to explain why it seems to exist. We might call this eliminativism about phenomenal consciousness. The term is not ideal, however, suggesting as it does that belief in phenomenal consciousness is simply a theoretical error, that rejection of phenomenal realism is part of a wider rejection of folk psychology, and that there is no role at all for talk of phenomenal properties — claims that are not essential to the approach. Another label is ‘irrealism’, but that too has unwanted connotations; illusions themselves are real and may have considerable power. I propose ‘illusionism’ as a more accurate and inclusive name, and I shall refer to the problem of explaining why experiences seem to have phenomenal properties as the illusion problem.

So he seems to accept "eliminativist about phenomenal consciousness" as accurate, but with unwanted connotations. Later on, though, he takes a more unequivocal stance[1]:

>I do not slide into eliminativism about phenomenal consciousness; I explicitly, vocally, and enthusiastically embrace it! Qualia, phenomenal properties, and their ilk do not exist!

[1] https://twitter.com/keithfrankish/status/1182770161251749890


> Sure, if our measuring stick is information, then there is no difference in kind, merely a difference in complexity. But the complexity difference between the two is worlds apart, thus substantiating the distinction I'm pointing to.

I agree 100% that computational complexity can be used to make meaningful distinctions. It's not clear that that's the case here though. That is, I agree that the quantity of information is worlds apart, but if the information content is all of the same kind and requires no special changes to the computational model needed to process it, then I don't think the magnitude of information is relevant.

> For something to be intrinsically contentful, it has to intrinsically pick out the intended target of reference, not merely be the source of entropy from which another process picks out a target.

I don't think this distinction is meaningful, due to the descriptor "intrinsic". It suggests that an agent's thoughts are somehow divorced from the environment that bootstrapped them, that the thoughts originated themselves somehow.

The referent of one of my thoughts is an abstract internal model of that thing that I formed from my sensory memories of it. So if "intrinsically contentful" information is simply information that refers to an internal model generated from sensory data, then this would suggest that even dumb AI-driven non-player characters (NPCs) in video games have thoughts with intrinsic content, since they act on internal models built from sensing their game environment.

> But to deny the notion of intrinsic content because you can't currently write one down is short-sighted.

Maybe, but I'm not yet convinced that there's real substance to the distinction you're trying to make. I'm all for making meaningful distinctions, and perhaps "mental states" driven by internal sensory-built models is such a distinction, but I'm not sure "thought" or "information with intrinsic content" are necessarily good labels for it. "Thought" seems like overstepping given the NPC analogy, and per above, "intrinsic" doesn't seem like a good fit either.


hackinthebochs: Your point seems to be that a thing has to exist- have some ontological status, to have thoughts at all, even if that "thing" is only itself a thought or, more vaguely, a temporary convergence of "something or other." When you say "self" or "I" as in, "I think therefore I am" people drag in a whole load of qualities they attribute to a self or an I. I feel like that is what your interlocutor is getting caught on. I understand and take your point.


> we erroneously conclude that the facts we perceive are first-person

this doesn't make sense... the perceiving itself is what makes something first-person, not the object of perception


That's not the technical meaning in the consciousness debate. I invite you to read about Mary's room for an introduction to the distinction I was describing.


Oh okay... I thought you were denying the reality of first-person subjective experience as commonly understood, not a narrowly-defined technical term. That seems more reasonable then (though also less interesting).


It's still pretty interesting! Mary's room thought experiment is pretty short and simple, but it'll make you think pretty hard.


I hadn’t heard of that one before.

As a real human story, it would be fascinating. As an abstract philosophical thought experiment, it seems close to worthless. How is it actionable?

Why bother with a thought experiment, when there are real world examples? The book Fixing My Gaze is a great place to start, written by someone whose eyes were out of alignment as a child and only gained true stereoscopic vision as an adult.


These intuition pumps are useful philosophical tools to test the limits and cohesiveness of a particular idea. Mary's room is intended to challenge materialism because it's very difficult to explain using materialism how Mary could not learn something new upon seeing red for the first time, even in principle. So it's about the logical cohesiveness of the whole theory. That said, there are plenty of strong refutations of Mary's room, but intuition pumps typically just confirm your own biases, so materialists accept these refutations and antimaterialists are not convinced by them.


> intuition pumps typically just confirm your own biases, so materialists accept these refutations and antimaterialists are not convinced by them.

Yeah, I think that’s a good way to describe my complaint about philosophical thought experiments. They’re fun to argue over, but very rarely actually cause anyone to shift their positions, and as far as I can tell have zero practical value.

I think it’s interesting to compare it to e.g. Einstein’s thought experiments around relativity, like photons bouncing between mirrors on a fast-moving train. Those are useful because they help explore the ramifications of some mathematically rigorous physics proposals, and point us towards real world experiments. The underlying postulates are well defined.

Similarly, Einstein again, the EPR paradox is a very strange thought experiment around quantum mechanics that has turned out to be amazingly fruitful (if not in the way he wanted). Again, the underlying assumptions being tested are well-defined.

In contrast, the “Mary’s room” scenario tells us nothing useful and has no connection to the real world, because it’s not actually about the physics or physiology of colour at all. It’s about “qualia” which nobody can agree on how to define in the first place. There’s no starting point for the discussion, so there’s always an escape hatch for the point of view you support.


> In contrast, the “Mary’s room” scenario tells us nothing useful and has no connection to the real world, because it’s not actually about the physics or physiology of colour at all.

But it could be. One materialist response to Mary's room actually disputes the physicality of the entire arrangement. Consider that in order for Mary to be able to answer any question about the visual system of the human brain, she would have to be able to answer nearly every question possible; it requires ungodly amounts of data and computing power, from the quantum level up.

That amount of information in a finite space would collapse into a black hole, per the Bekenstein bound, so Mary's room is actually incoherent at its core. You can see shades of this in Dennett's response to Mary's room.
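As a rough illustration of the scales involved (my own back-of-envelope sketch, using the standard form of the bound, I <= 2*pi*R*E / (hbar*c*ln 2); the figures are not from Dennett or the paper):

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
C = 299792458.0         # speed of light, m/s

def max_bits(radius_m: float, mass_kg: float) -> float:
    """Bekenstein bound: maximum bits storable in a sphere of the
    given radius containing the given rest-mass energy."""
    energy = mass_kg * C**2  # E = m*c^2
    return 2 * math.pi * radius_m * energy / (HBAR * C * math.log(2))

# Even a 1 kg, 1 m sphere caps out around 2.6e43 bits: enormous, but
# finite, which is the finiteness this response to Mary's room leans on.
print(f"{max_bits(1.0, 1.0):.2e}")
```

The point is only that "all physical facts" is not free; any finite Mary confronts a hard information ceiling.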


ianmerrick: I never understand this. How is the fact of experience in any way debatable, or something which people disagree on? I am not asking about the epistemological details which appear to be a prerequisite to the experiences we have (neurons and neurotransmitters and brain networks and the physics of light and phosphors and the eye, etc.); I am talking about experience itself. How can it be confusing?

Often when I have this conversation it appears to me that somehow, impossibly, the other side suddenly gets fuzzy on this thing I call "experience".


Well, they don’t call it “the hard problem” for nothing.

I’m reminded of The Hitchhiker’s Guide to the Galaxy, where Deep Thought points out that they can’t begin to answer the ultimate question of Life, the Universe and Everything because they can’t even clearly state what the actual Question is.


  "This is a thought, therefore thoughts exist" is the valid, non-circular version.
Let us cast this argument in formal logic.

What are the axioms? What is the conclusion?

https://isabelle.in.tum.de


Fascinating, an actual idea about consciousness that I haven't heard before.


Does this mean the experience of pain, sound, color, etc. are illusions?


They are an illusion in precisely the same way your car or your day job are illusions: we don't admit any notion of qualia/cars/day jobs into our ontology of physics, and so qualia must ultimately be explained by appealing only to that ontology.


>the same way your car or your day job are illusions

But what work is calling these things illusions doing for you? That they're not fundamental units of the furniture of the universe doesn't mean they don't exist or play necessary causal or explanatory roles.


> But what work is calling these things illusions doing for you?

It serves to help distinguish that which is reducible to more fundamental ontological entities, from that which is irreducible and thus ontologically fundamental.

I agree that these concepts certainly fulfill useful causal and explanatory roles. Whether they are "necessary" has some room for debate.


How do we know when we have arrived at something that is irreducible?


In a similar manner in which we settle on the primitives of physics theories: parsimony in explaining the available data.


What about "parsimony in explaining the available data" indicates that it is irreducible though?


Every theory posits some axioms. These are irreducible by definition.

The challenge is choosing which axiomatic basis we ought to prefer given our incomplete information. This is answered by induction [1].

[1] https://en.wikipedia.org/wiki/Solomonoff%27s_theory_of_induc...
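To give a flavor of what that means (a toy sketch of my own; real Solomonoff induction is incomputable, and string length here is only a crude stand-in for description length): among hypotheses that fit the observations, weight each by 2^-(description length) and prefer the heaviest, i.e. the shortest.

```python
observations = [2, 4, 6, 8]

# Two hypothetical "axiomatic bases" that both reproduce the data so far.
hypotheses = {
    "2*n": lambda n: 2 * n,
    "2*n if n < 5 else 7*n": lambda n: 2 * n if n < 5 else 7 * n,
}

def fits(h):
    # Does hypothesis h reproduce every observation (indexed from 1)?
    return all(h(n) == obs for n, obs in enumerate(observations, start=1))

def weight(name):
    return 2.0 ** -len(name)  # shorter description => higher prior weight

best = max((name for name in hypotheses if fits(hypotheses[name])), key=weight)
print(best)  # the shorter hypothesis wins until new data separates them
```

Both hypotheses fit all the data seen so far; the induction step is the tie-break by simplicity, and a fifth observation could overturn it.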


What you're describing is "the best way to do science, so we can make some progress". That's all well and good, but what other people are talking about is the nature of reality, in pursuit of which those crazy people are perfectly willing to doubt axioms.

As well they should, and as is their right, since a set of axioms is effectively a set of ground facts selected to make logical reasoning across a domain possible, nothing more.

That doesn't make them true in the big sense of True; it makes them expedient, productive of theory, generative, a lot of wonderful things, maybe even strongly implied by all the evidence, but not a priori true. They're dubitable.


> That's all well and good but what other people are talking about is the nature of reality which, in pursuit of, those crazy people, they are perfectly willing to doubt axioms.

Solomonoff induction does doubt and change axioms. It's a fundamental part of the whole process in fact.

> That doesn't make them true in the big sense of True, it makes them expedient, productive of theory, generative, a lot of wonderful things, maybe even strongly implied by all evidence, but not apriori true. They're dubitable.

Logic is used to make distinctions. Two theories with differing axiomatic bases will make different distinctions, but if they make the same predictions in all cases, then they are logically the same, i.e. there is a fundamental isomorphism between them. In this case, it literally doesn't matter if one is "actually really true" and the other is a mathematical dual of some sort.

For instance, polar and Cartesian coordinates are completely equivalent. A theory cast in one might be easier for us to work with, but even if reality really used the other coordinate system, it quite literally doesn't matter.

In the case where the two theories do differ in their predictions, we should epistemically prefer one over the other, and Solomonoff induction shows us how to do this rigorously.
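The coordinate-equivalence point can be made operational (a small sketch of my own): any "prediction", such as the distance between two points, comes out identical whichever representation you compute it in.

```python
import math

def to_polar(x, y):
    # Cartesian -> polar: one half of the isomorphism.
    return math.hypot(x, y), math.atan2(y, x)

def to_cartesian(r, theta):
    # Polar -> Cartesian: the other half.
    return r * math.cos(theta), r * math.sin(theta)

p, q = (3.0, 4.0), (-1.0, 2.0)
d_cartesian = math.dist(p, q)
# Round-trip both points through polar coordinates, then measure again.
d_polar = math.dist(to_cartesian(*to_polar(*p)), to_cartesian(*to_polar(*q)))
assert math.isclose(d_cartesian, d_polar)  # same prediction, different "axioms"
```

The two descriptions differ only in form; no experiment distinguishes them, which is the sense in which "which one reality really uses" doesn't matter.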


A physicist (a woman, I forget her name) has lately questioned the parsimony assumption, or more accurately the whole beauty assumption: roughly, that the most beautiful or parsimonious theory is correct. Just a data point for this conversation, not an argument.

Re: Solomonoff, if you're chucking away unnecessaries, which is what I understand Solomonoff to be doing (I had to be reminded what his theory was, truthfully), then that's all well and good; let's chuck. But you are still left with the problem of whether the axioms are true. That's a different thing entirely, except to the extent we define true operationally, as having predictive power over the things we understand and know about in the way we understand them.

We moderns are all deeply enmeshed with scientism, which is an ism that says logic and reasoning and the scientific method etc. are the only valid tools for acquiring certain (indubitable) knowledge. What if it's just not true? Then what?


So, is this to say we approach the irreducible best with induction?


You can only really "experience" the model your brain makes of the world, not the world directly. You end up assuming the world exists and there's no Descartes' Demon.


Right... we are just colonies of cells which behave in programmed ways to generate predictable responses from other cells in other parts of the colony who have their own jobs to perform in maintaining the homeostasis of the colony. The illusion of self is useful because it ensures that the collection of cells entrusted with executive functions act in the interest of the entire colony by perceiving it as a unified whole.


Another way to put it: the experience is real, but it might be misleading.

A hallucination is an experience of something that doesn't exist, but the experience itself does.


Misleading in some respects, but overall helpful. Like a computer desktop.


Sounds like the temporal circuit is acting like a clock, as in a computer chip's clock. Pretty cool.

> We demonstrate that the transitions between default mode and dorsal attention networks are embedded in this temporal circuit, in which a balanced reciprocal accessibility of brain states is characteristic of consciousness. Conversely, isolation of the default mode and dorsal attention networks from the temporal circuit is associated with unresponsiveness of diverse etiologies. These findings advance the foundational understanding of the functional role of anticorrelated systems in consciousness.


This feels right in terms of how I experience things when I've had bad migraines and noticed parts of my capability going away temporarily, such as understanding speech or being able to read words, or missing visual areas entirely. Things get jumbled or confused, and all the while I am aware of these things happening yet unable to control them. It feels like there are separate parts of me, like modules that go offline, but the one that is constant is the sense of "me", the consciousness part. These episodes are few and far between, but I am still thankful to have this different perspective on our bio-mechanical nature. It also makes me feel closer to my pets, in the sense that awareness or consciousness doesn't seem to correlate with cognitive ability or intellect; but that is just my guess.


Relating to this... it makes me very curious about the impact of Neuralink or similar tech on humanity once it becomes more advanced. If it is true we have these specialized processing areas while our awareness is another specialized area, it would follow that all sorts of capabilities the new tech brings would simply be incorporated and assumed, just as we assume our ability to remember a name or add numbers in our head. It will be a crazy future that I hope to see.


> Here, we conservatively use the term “unresponsiveness” instead of “unconsciousness” to allow for the possibility that covert or disconnected consciousness could occur in the absence of behavioral response.

Conservatism is very wise! Given what they say in that quote, I'm very confused why they think it's justified in the intro to suggest they've identified two systems responsible for consciousness. Shouldn't they replace every use of the word "consciousness" with "responsiveness"? They're relying on a purely behavioral understanding of consciousness.

Descartes famously thought that consciousness lived in the pineal gland, and similar arguments have tended to generate some well-deserved criticism from philosophers of mind. Pointing at a physical thing and saying it's the source of conscious experience should come with pretty extraordinary evidence.


>They're relying on a purely behavioral understanding of consciousness

They're relying on the fact that consciousness has physical manifestations in behavior. The alternative is epiphenomenalism. While that may be a philosophically interesting position, it's useless scientifically, and so it's fair to assume consciousness has some physical artifacts in a scientific context.


I don't think we're forced to choose between behaviorism and epiphenomenalism.

But my point is more internal to the paper. They make claims about the physical basis for consciousness and seem to believe they've generated evidence for it, but they also explicitly say they've only gathered evidence about responsiveness.

EDIT: To be clear I'm objecting to the semantics (which I consider important), not the potential value of the research.


The brain is the most complicated structure in the known universe. The probes currently available to science (fMRI and GSR) are both gross measures of cortical activity. They're enough to start to explore apparent structural and (gross) electrical correlations between brain areas and (gross) alterations in "consciousness", in this case unconsciousness induced via propofol and ketamine. Fair enough.

However, it irritates me when I hear scientists loosely throw the word "consciousness" into these studies and here's why.

In these studies, consciousness is always implicitly defined operationally as the electrical activity in some identified networks: DAT and DMN and frontoparietal and sensorimotor, etc. But the concept of consciousness has another life in philosophy, where in works by people like Patricia Churchland and others it references something more subtle: the mystery of why there should be anything we call experience at all.

Experience itself doesn't seem to be necessary to the working of any machine, including our brains. We don't think our TVs have any experiences despite (being capable of) accurately representing all human visual experiences. The reason we don't think they experience what they're displaying is because we know how they work and we know there's no ghost in the machine. Adding on "experiences" to an explanation of how TVs work is gratuitous and unnecessary.

But that's not the case with humans; just the opposite. Experience is absolutely foundational.

Descartes tried to boil his world down to what he could know with absolute certainty and arrived at his famous "Cogito ergo sum" formulation, but actually, he skipped a step; that step is simply: "There is experience."

Experience is perfectly gratuitous to any explanation of brain activity, since all that activity, like an electrical storm, could take place in exactly the same way without it. We (our brains) could be, and most scientists believe are, very complicated but purely mechanical machines. They could be exactly as they are with no more awareness (to say nothing of feedback loops) than a blender.

But that account leaves the problem of experience or consciousness completely untouched. That would be O.K. except we know we have it.

The mystery of consciousness is not totally defined by questions like "can I make you unconscious or conscious?" or "can I cause you to have this or that illusory experience by stimulating your brain?". The mystery of consciousness is why there is anything like experience at all.

So whenever I read a paper that makes some confident assertion about consciousness, it gets under my skin. It's electrical activity and perhaps human behavior and speech they are actually examining, not consciousness. I hear these papers gratingly assuming the consequent with respect to the biggest mystery there is. They are implicitly saying "this is consciousness, this pattern of electrical activity in the brain and here is what we have discovered about consciousness". That's one perspective, but to philosophers, both academic and non-academic, it's a form of punting on the real question.

Consciousness is to brain science what AGI is to AI. Researchers just love to make assertions and grand predictions.

Actually, the correlation between the two is closer than that, since strong AI claims that consciousness can be captured in a computer; Kurzweil and his Singularity concept are in this school of thought.

He and people like him claim that not only does experience arise as a direct result of brain activity but any substrate- including general purpose computer platforms- will similarly give rise to the same experiences if only they are programmed in a particular way, specifically, if the computations are functionally equivalent to the brain's computations.

Are badly programmed computers therefore experiencing chaos? Well, why not? Are simpler computers, like a thermostat which "experiences" temperature changes, also somehow dimly conscious? If that seems like a straw man argument to you, you should know Marvin Minsky bought it and so do a lot of other scientists whether they realize it or not.

All of this is just a non-starter to people like me. You don't get to skip a step because it keeps your theory neat or provides the promise of immortality by uploading your "you" to a machine.

Consciousness, understood in this way, is a genuine mystery which for now at least I don't think we have the conceptual tools to even define much less make pronouncements about.


I think we're a long way from a good understanding of how consciousness works, but I also think a lot of people are going to subscribe to a sort of consciousness-of-the-gaps idea no matter how much progress is made in understanding the actual mechanisms. Even if we fully understood it and could reproduce it, there would be scores of people who would flat out refuse to see the evidence and would simply assert that the ineffable "experience" does not exist within beings for which they don't want to acknowledge it. The very concept of p-zombies illustrates this a priori refusal to admit any possible evidence whatsoever of consciousness. Another person could simply decide that I am in fact a p-zombie and lock themselves in a closed system of thought out of which there is no path to demonstrating that I "experience" anything at all.

I think if you want to put forth a hypothesis that there is some ghostly ineffable part of consciousness called "experience" that cannot ever be touched or measured by scientific means, then you have a self-defeating argument that cannot be supported. You might as well go full solipsism. There's nothing stopping you.

Consciousness is a genuine mystery at this point, but I think some people will still see it that way even if we solve it, and this is clearer to me every time I see people trash any kind of effort or progress made by science in understanding the brain, claiming that it is not in fact progress at all.


On the other hand, I think that many people are emotionally invested in believing that science must be capable of solving the hard problem of consciousness even though there is no reason to assume that science is.

It is perfectly possible that the hard problem of consciousness is in principle and forever beyond the reach of scientific investigation.


Just to toss off one more worthwhile idea to you since it seems like you're interested in this topic. p-zombies is not the most challenging scenario strong-AI deniers are likely to face. Brain cell replacement is.

With p-zombies you have two observers outside the system arguing about the system's inner life. With brain cell replacement, you have the subject directly and quite authoritatively experiencing the system in question and reporting back.

It seems many times more likely some of us will live to see this, but you just never know. Newtonian mechanics had it all locked up save for a few details and look what those details held.

Every brain science / cog-sci paper, it seems, has some alternative-amputating conclusions to pronounce about consciousness.

They sort of have to do that because of the funding model they live under. Positive results only! It's not the researchers' fault; I don't fault them. I just adopt a highly skeptical, wait-and-see, there's-probably-more-to-the-story attitude toward science generally, and that, plus the more concrete counter-arguments I mentioned in my other comments, makes me a very dissident observer of this field.


"The very concept of p-zombies illustrates this a priori refusal to admit any possible evidence whatsoever of consciousness. Another person could simply decide that I am in fact a p-zombie and lock themselves in a closed system of thought out of which there is no path to demonstrating that I "experience" anything at all."

This is a good point and makes the problem interesting in an additional way. We (I) assume non-human animals, dogs and cats for example, are not p-zombies. It's like something to be a dog. How far down do we want to go? Frogs? I'll bite; it's like something to be a frog:

https://www.youtube.com/watch?v=w8IY2eTBqd8

But here's a counter to the p-zombies argument, OK?

The p-zombies argument is usually taken to mean there comes a point where what has been created is so indistinguishable from "real" people, à la Ex Machina, that arguing over it is a form of ideologically motivated perversion.

Let me turn that round and say that the p-zombie argument is (accidentally) making the following strong claim- it is impossible to build a machine which in every way acts human but has no experience.

That's a very very strong claim on this universe. I wouldn't take the bet, because someone's going to do it.

But if someone is going to do it, how can we tell when they have or they haven't? The Turing Test is outdated (as I see it) and anyway already passed for some judges (re: ELIZA).

To me, this circles back to the original problem. We can't distinguish between an actual zombie, which someone will very probably eventually create, and "real" experience-having artificial intelligence; and why is that?

The issue is just another form of the basic problem- we don't have the conceptual framework to get our minds around what experience is.

Our basic assumptions may be off. Instead of quarks et al. being the basic building blocks of matter, and matter of brains, and brains of consciousness, some people take experience to be the most basic building block of the universe.

This was my conclusion and I thought it would just brand me as an eccentric so I never pushed it, but now I see it's being kicked around by people with careers.

Another assumption is that experience/consciousness is comprehensible to the level of scientific causality/reality we're aiming at, (let's just shorthand it to "ultimate reality"), because there are separate, distinct things in the first place.

But what if separate things are not a fact about ultimate reality? What if they're more like a hardwired perceptual compulsion we can't escape? Then we might very well find truly insoluble mysteries on the foundational tier of our conceptual scaffolding, because none of the "things" we think about are real in the first place. Things which don't exist don't have to "add up".

So this would mean our minds and ultimate reality are just not made for each other, even as that reality directly impinges on our personal daily lives in ways we can and do readily experience and talk about.

It seems like the most far-fetched and deflating hypothesis possible, but consider that we'd merely be joining the rest of the animal kingdom in this regard.


The thing is, if you're an atheist (and I write from one of the non-USA countries in which being an atheist is entirely unremarkable) then experience (or qualia) and consciousness itself are still very mysterious, but it's hard to avoid the conclusion that it must all be a side effect of processing or information somehow.

Daniel Dennett has some good stuff on this (see Consciousness Explained, etc.). It's not that he knows the answers, but his point is that consciousness might not be exactly what we think it is; there are lots of thought-traps around it, so we have to carefully unpick some of our assumptions about it to get anywhere. E.g., what he calls the Cartesian theatre is one very powerful misconception (too long to explain here).

Also, I always like to drop this Iain Banks quote into these kinds of discussions (from A Few Notes on the Culture):

Certainly there are arguments against the possibility of [strong] Artificial Intelligence, but they tend to boil down to one of three assertions: one, that there is some vital field or other presently intangible influence exclusive to biological life - perhaps even carbon-based biological life - which may eventually fall within the remit of scientific understanding but which cannot be emulated in any other form (all of which is neither impossible nor likely); two, that self-awareness resides in a supernatural soul - presumably linked to a broad-based occult system involving gods or a god, reincarnation or whatever - and which one assumes can never be understood scientifically (equally improbable, though I do write as an atheist); and, three, that matter cannot become self-aware (or more precisely that it cannot support any informational formulation which might be said to be self-aware or taken together with its material substrate exhibit the signs of self-awareness). ...I leave all the more than nominally self-aware readers to spot the logical problem with that argument.

Edit: changed 'cant really avoid the conclusion' to 'its hard to ...'


Never take the arguments of a side from their opponent's mouths.

The arguments I offered have nothing to do with any of the three he claims they all boil down to.

If you think I made one of these three, please tell me which one so I can clarify the argument.

Assuming it's a side effect of processing, known as an epiphenomenon, immediately commits you to answering the questions: does a badly programmed computer have a form of consciousness? Does a thermostat have a primitive form? Is it specifically impossible to create AI which emulates human thinking to the last detail but has no consciousness, i.e. really is just an empty machine with zero experience? Is that an impossible task which could not be achieved by anyone, by any means?

Suppose I debate with someone who has a computer programmed to be conscious. Here's what I'm going to do. I'm going to very, very slightly change the programming so that whatever output it's producing, which my opponent claims proves the computer is conscious, starts to degrade.

I'm going to do that, then ask my opponent: still conscious? I'm going to do this, and I'll guess my opponent will say "less so, perhaps", which would be his best reply.

Then I'm going to repeat until I get a "probably not" and then a "no" from him, which by his own hypothesis has to happen.

Then I'm going to diff the conscious program and the unconscious program and ask him if he really thinks those slightly altered lines of code are the difference between consciousness and a humdrum computer.

Because that's where this goes, this idea that a certain type of computation is consciousness.

It also goes to consciousness being granted to a machine like a Turing Tape. You may not think that squishy biological matter should be bequeathed with a "magical" property which hosts consciousness, but tell me, how do you feel about a Turing Tape?


> I'm going to very, very slightly change the programming so that whatever output it's producing, which my opponent claims proves the computer is conscious, starts to degrade.

[...]

> Then I'm going to diff the conscious program and the unconscious program and ask him if he really thinks those slightly altered lines of code are the difference between consciousness and a humdrum computer.

Is that not equivalent to giving a human being alcohol, observing that they become progressively less conscious, and asking if you really think that a few centiliters of alcohol is the key to consciousness?


It is somewhat, yes. Or I could cause gradual cell death in someone's brain. Same idea.


Doesn't that kind of reasoning lead you down a path towards panpsychism or panexperientialism?

Either there is a phase transition of consciousness or there is not. If there is, we have no idea where it is because we can't prove that another being has subjective experience the way we can ourselves. If there isn't, then something roughly panexperientialist follows and even, say, a gas cloud has (very occasionally and very limited) experience. But which is it?

The problem to science itself seems to be that we can't make any comparison of the "what it is like to be" sense of experience. I experience things right now. I can't tell you what it's like with the kind of certainty that is usually associated with science. I can't even tell my future self with that certainty, because memory is a sense in itself and when I recall something, I'm just experiencing something with the sense of memory.

If whatever experience is can't be "frozen", then science has nothing to work on, apart from trying to get at it from the objective side of things. But it seems like it's very easy to get sidetracked, hence the argument that Dennett just redefines consciousness as executive function and then proceeds to explain the latter in a materialist framework.


My replies to each of your comments, in order:

Panpsychism or panexperientialism can't be right because they're not weird enough, to paraphrase Bohr. Would it surprise anyone to find out that, in our exploration of the brain, we come across something as weird and upsetting to standard theory as QM is to physics?

__________

If we do a gradual, over a long time, brain cell by brain cell replacement of a living human's brain, that human's self-report is our best bet to get around the impenetrability of the subjective experience of other minds. It is also the biggest challenge to people like me and could point strongly to consciousness as a thing supportable by machines.

__________

I agree that this is a problem that science, as it is right now, can't deal with. But that doesn't mean it's not real. The Big Scientific Inquiry, the spirit of science, seeks to explain and understand everything. Many really dramatic upheavals come out of corner cases in science; the things that are slightly off or not accounted for in an otherwise productive theory.

_______________

It's not important to anyone's brain research, but it is important to society because making a mistake about what is and is not conscious has the potential for huge negative repercussions.

When Dennett dismisses the issue and effectively assumes the consequent of the argument he's supposedly engaged with, not only is he making an error but the consequences of that error are far-reaching into how we act towards one another.

What I am arguing, to the extent I am arguing for anything, is that people like Pat Churchland have a point, and it's not an "academic" one; it's substantive. We are making a mistake if we ignore it.


>if you're an atheist [...] you can't really avoid the conclusion that it must all be a side effect of processing or information somehow

Why? I'm an atheist-leaning agnostic, but I think that the hard problem of consciousness might well turn out to be impossible for scientific investigation to tackle.

I cannot think of any valid logic that would show that "there are no gods" implies "consciousness is a side effect of processing or information somehow".


Well yes, OK. I guess I'm jumping from 'being an atheist' to 'general distrust of the so-called supernatural'.

Do you mean 'impossible for scientific investigation to tackle' because it's just too complex (in the same way we can't predict the weather very accurately), or do you mean more like: because you suspect there is some outside-of-known-physics involvement that we won't ever be able to get a grip on?


Not because it is too complex, but because I suspect that there may be something to consciousness that is outside of knowable physics. There is no reason to assume that scientific investigation is in principle capable of getting a grip on all of reality. That does not mean that consciousness is some mystic woo-woo, it just means that scientific investigation may in principle be limited. Consciousness might well turn out to be impossible in principle to tackle using mathematical modeling, reproducible experiment, theories of physical mechanisms, etc. - but that would not mean that consciousness is not real. It does not require scientific inquiry to show that consciousness is real. Subjective experience is immediately obviously real, as subjective experience.


I agree that what you say is possible, but it's also possible that consciousness does lie inside known physics, so I reckon it's worth people investigating that angle, as formidable as it seems.

I've edited my comment above to be a bit less absolutist


I wouldn't say it's impossible, although honestly I cannot even begin to imagine how consciousness could lie inside known or even knowable physics. But if people want to try, more power to them. I'm open to my suspicion being wrong.


I personally feel largely the same. We need measurement to do science, and if the best we have is the Turing Test that's not good enough.


All true enough, and I think any honest scientist would say that the most we can hope for is to notice a few patterns in the wallpaper on Plato's Cave. There's no reason to think that any real insight beyond that is possible.


ELI5 "anti-correlated" here? What I envision is the temporal circuit acting like a computer clock, and the other as input/output. But "anti correlated" makes it sound like that's not the case?


Not ELI5 exactly, but the article does a pretty good job of explaining in the first paragraph:

> The default mode network (DMN) is an internally directed system that correlates with consciousness of self, and the dorsal attention network (DAT) is an externally directed system that correlates with consciousness of the environment... the DMN and DAT appear to be in a reciprocal relationship with each other such that they are not simultaneously active, i.e., they are “anticorrelated.”

The "temporal circuit" the paper describes is the neural architecture that facilitates the transitions between these two networks.
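If it helps, here's a minimal numeric illustration of what "anticorrelated" means here (my own toy sketch, not data or code from the paper): two signals in perfect anti-phase have a Pearson correlation of -1.

```python
import math

# Toy sketch (not the paper's data): two "networks" oscillating in
# anti-phase, the way the DMN and DAT are described as behaving.
n = 1000
dmn = [math.sin(2 * math.pi * t / 100.0) for t in range(n)]
dat = [-x for x in dmn]  # active exactly when the DMN is not

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(pearson(dmn, dat))  # ≈ -1.0: the two signals are "anticorrelated"
```

In real fMRI data the anticorrelation is nowhere near -1, of course; the point is only the sign.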


Your description, parsed back into electronics-land, sounds to me like: "temporal circuit" is a clock signal, DMN ticks on the rising edge, DAT ticks on the falling edge.
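Taken completely literally (purely a toy sketch of that electronics framing, nothing from the paper), it would look like:

```python
# Hypothetical clock-signal framing: the "temporal circuit" is a clock
# line; the DMN fires on the rising edge, the DAT on the falling edge,
# so the two networks are never active on the same tick.
def simulate(cycles):
    events = []
    clock = 0
    for _ in range(cycles):
        clock ^= 1  # toggle the clock line
        events.append("DMN" if clock == 1 else "DAT")
    return events

print(simulate(6))  # ['DMN', 'DAT', 'DMN', 'DAT', 'DMN', 'DAT']
```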


Totally fascinating. It makes me wonder what kind of dysfunction would result from both the DMN and DAT being active at the same time, and especially what my subjective experience of that would be if it were happening to me.


I am not sure if this is accurate but intuitively, in strong psychedelic experiences it feels that both the DMN and DAT are active at the same time, which leads to, among many other things, a clearheaded view of mental processes that are hard to observe otherwise. One example would be observing emotions and how they affect your state of mind, while at the same time being totally detached from them. Some studies [1] propose that this happens because of an increase in connectivity between various parts of the brain, which could also be the thing that leads to ego dissolution.

[1] https://www.cell.com/current-biology/fulltext/S0960-9822(16)...


I may have found a partial answer to that, or at least a track to explore. I read some research articles from Robin Carhart-Harris on psilocybin/psilocin ("magic mushrooms") last year. The effect of psilocybin on the Default Mode Network (DMN) seemed to be a critical part of his research, so I searched for observations on the antiphasic nature of the DMN and the Dorsal Attention Network (DAN), and I found something rather interesting [1]:

"The following example may help to illustrate what is meant by competition between conscious states—and the loss of it in primary consciousness. Functional brain imaging has identified distinct brain networks that subserve distinct psychological functions. For example, the DMN is associated with introspective thought and a dorsal frontoparietal attention network (DAN) is associated with visuospatial attention and is a classic example of a “task positive network” (TPN)—i.e., a network of regions that are consistently activated during goal-directed cognition. If the brain was to be sampled during a primary state (such as a psychedelic state) we would predict that the rules that normally apply to normal waking consciousness will become less robust. Indeed, we recently found this to be so when analysing the degree of orthogonality or “anti-correlation” between the DMN and TPN post-psilocybin. Post-drug there was a significant reduction in the DMN-TPN anticorrelation, consistent with these networks becoming less different or more similar (i.e., a flattening of the attractor landscape). The same decrease in DMN-TPN anticorrelation has been found in experienced meditators during rest (Brewer et al., 2011) and meditation (Froeliger et al., 2012). Moreover, decreased DMN-TPN inverse coupling is especially marked during a particular style of meditation referred to as “non-dual awareness” (Josipovic et al., 2011). This is interesting because this style of meditation promotes the same collapse of dualities that was identified by Stace (and Freud) as constituting the core of the spiritual experience. The DMN is closely associated with self-reflection, subjectivity and introspection, and task positive networks are associated with the inverse of these things, i.e., focus-on and scrutiny of the external world (Raichle et al., 2001). 
Thus, it follows that DMN and TPN activity must be competitive or orthogonal in order to avoid confusion over what constitutes self, subject and internal on the one hand, and other, object and external on the other. It is important to highlight that disturbance in one's sense of self, and particularly one's sense of existing apart from one's environment, is a hallmark of the spiritual (Stace, 1961) and psychedelic experience (Carhart-Harris et al., 2012b)."

[1] https://www.frontiersin.org/articles/10.3389/fnhum.2014.0002...


>We demonstrate that the transitions between default mode and dorsal attention networks are embedded in this temporal circuit, in which a balanced reciprocal accessibility of brain states is characteristic of consciousness.

I read it as: the temporal circuit manages the anti-correlated relationship between the two networks to produce consciousness. Almost like the temporal circuit is a function whose goal is to return 1 given two anticorrelated inputs that add up to about 1, and the computation required to arrive at that solution is what consciousness is. Weird unresponsive stuff happens when the sum of the inputs is above 1.

Consciousness is an emergent side effect of trying to keep two input systems synchronized.
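To make that concrete (purely my toy restatement of the analogy, not the paper's actual model; the tolerance value is an arbitrary assumption):

```python
# Toy model: a "temporal circuit" that reports a conscious-like state (1)
# only when its two anticorrelated inputs trade off against each other,
# summing to about 1. Anything else is the "weird unresponsive stuff".
def temporal_circuit(dmn_activity, dat_activity, tol=0.1):
    balanced = abs((dmn_activity + dat_activity) - 1.0) < tol
    return 1 if balanced else 0

print(temporal_circuit(0.7, 0.3))  # 1: balanced reciprocal activity
print(temporal_circuit(0.9, 0.8))  # 0: both networks active at once
```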


In general, activity in the DMN decreases when a person is engaged in a task; the more demanding the task, the greater the decrease. The attention and salience networks do the opposite. A classic experiment here would contrast the BOLD signal in an fMRI study between easy and hard blocks, or between active periods and rest periods. The anti-correlation of the DMN and task-activated networks is a very robust result, seen over and over again.


Language is a Virus. Consciousness is an OODA Loop. (You could write an alternative set of lyrics to the Laurie Anderson song.)


I'll be the asshole here: this is crap. They took some weak correlations (anti-correlations, whatever) between conscious brains and 'unconscious' brains induced via ketamine. The rest of the paper just creates new jargon for large, vague regions of the brain, plus tons of jump-the-shark assumptions about how they work together.

Most notably, there's no mention of STDP, nor the cingulate gyrus, nor any reference to the fine-structure internals of the brain.



