I have a conjecture that we understand language primarily through proprioception. When we see and hear someone talking, we mirror and model their muscle movements, and we understand the speech fundamentally in terms of our own body moving. I suspect proprioception plays a role in abstract thinking, too. Conceptual metaphors are movement-based, as if our hands were manipulating thoughts: we pick ideas apart, reassemble them, sort through them, toss them out.
You should read "A Thousand Brains" by Jeff Hawkins (if you haven't already done so). A large part of the theory is that the brain evolved to orient and move in a 3D world, and that this is the basis of our abstract thinking.
Personal anecdote: a lot of your nervous system is indeed simulating other personas in your head. And the closer you are to someone, the deeper the mirroring (children, spouse).
After some catastrophic events, whenever I smiled, the shape of my smile was altered and took on how my ex smiled (a very different facial structure), triggering hallucinations of her face onto mine along with various other emotions. It still happens, at various intensities, whenever I mirror her smile.
Different people have different kinds of cognition. For instance, this Joseph Conrad quote from the article: “My job as a writer of fiction is to make you see, to appeal as accurately as possible to the visible, and that is everything.” is amusingly wrong to the many aphantasic people on Hacker News.
Blind people understand language perfectly well, which rules out your conjecture.
[ETA: Yes, this is a response to "see and hear", which should perhaps be "see or hear", and to "we mirror and model their muscle movements", which implies more than just vocals/sound.]
I must have expressed it poorly, because my conjecture seems to apply equally to blind people. Looking back, maybe it's because I wrote "see and hear"? Just an artifact of my perspective as a hard-of-hearing person who relies heavily on lipreading. I suspect when blind people hear speech, or read with braille, they mirror the muscle movements involved, as if they were speaking the words themselves. Sight isn't required for that. Neither is hearing. The deaf-blind can learn sign language through feeling the motion of the signs. I suspect it all works the same way, hearing language, seeing language, feeling language acted out physically - the audience understands it by acting out the same motions themselves, internally.
I assume you mean by auditory information. Echolocation pertains specifically to the use of echoes to determine location, which would not help at all in this case, given the granularity required.
Even assuming it is auditory information, I am unsure whether it enables mimicry as much as visual information, which, apart from being our predominant sense, gives a clear clue as to which movement led to which action/sound.
Well, there is often a good correlation between things which make socialising difficult and things which appear to make aspects of spatial reasoning difficult.
We learn to read aloud, and there's usually a long transitional period of subvocalization. And even people who can read silently, if they want to very carefully understand something, often subvocalize it - even moving the lips silently. My conjecture is this process never goes away entirely.
> talking on the phone
Same mechanism. We hear the sounds and imagine ourselves making the muscle motions to produce such sounds.
This comment is disrespectful and dismissive and goes against the community guidelines. Please refrain from personal attacks on other community members.
Every "Perception for VR" class I give focuses on understanding proprioception. Designers ignore it until they experience spatial planning just with proprioception:
• Start by closing your eyes and touching your nose. Why can you do that?
• Continue by trying to touch two fingers on opposite hands behind your head. Why can't you do that? (because your proprioceptors are saturated).
I just touched two fingers behind my head with eyes closed (although does that really matter, the hands being out of sight either way?). So I still don't get the point about proprioception.
I tested this out too. I noticed a few things where GP's explanation is probably abbreviated:
I noticed that finger-to-nose I was able to do on the first try, every time: 100% accuracy, with no "do a little circle to figure out the last 0.5 in". With touch-fingers-behind-head, by contrast, I could "feel" the uncertainty, and touching fingers successfully on the first try, while not impossible, had maybe a 10-20% success rate. I was able to improve it by doing a little circle (radius of perhaps 5 mm?) with both fingers once I knew I was about in the right place, since I was usually within about a finger's breadth. But the whole process was just a lot less bulletproof than finger-to-nose, which I think was GP's point.
N=2: the nose is a hit 100% of the time with pinpoint accuracy, but fingers behind the head only hit if I repeat the movement. Going much slower increases accuracy, but it's never as good as hitting the nose. At normal or increased speed I usually miss by a finger width.
> Continue by trying to touch two fingers on opposite hands behind your head. Why can't you do that?
I can, easily, every time, any pair of fingers, and also behind my back and many other positions. Really, I don't care for this assumptive style of explication.
I can readily believe it is an issue for many or even most people (you have investigated and know far more about this topic than I). I am able to do this from a combination of spacing out on this sort of thing since childhood and a lot of athletic training, but surely this isn't that rare.
Not only am I unable to touch two fingers behind my head with my eyes closed, I'm not able to do it in front of my head. I can easily touch my nose with my eyes closed. Is there something going on where there's one less degree of freedom with the nose, which makes it easier?
Seems a reasonable hypothesis to me. I noticed I can improve or reduce accuracy in many different positions by bending my arm/finger/hand at an unusual angle, until it seemed my mind was a bit less certain exactly how twisted or crooked the final configuration was. The uncertainty was strongest along the axis I expected (the most bent angle), and accuracy was best along the most direct axis.
Fascinating, I never knew there was a word for this; proprioception is the word! It is definitely a thing about the body, time, and space that I have thought about. Thank you for posting!
The author is using the word “proprioception” differently from its established meaning. It’s not just a general idea of where you are in the world. That includes so many other senses. It’s specifically the sense of the body’s position in relation to itself. Is your arm straight or bent? This is a proprioception question. Having the feeling of being in a shack is not proprioception.
I agree proprioception is important in fiction, but I think of senses as being more immediate outputs of our sensors: touch, pain, temperature, sight, hearing, smell, taste, balance, etc. Proprioception seems like a more downstream integrated signal inferred from sight, balance, touch, and hearing.
Not really. You actually have a unique set of sensory neurons that tell you the spatial orientation of your body by measuring muscle/ligament stretch:
https://en.m.wikipedia.org/wiki/Proprioception
Edit: yes, your general "body sense" is integrated over multiple senses, including sight, touch, inner ear, ... but my point is that proprioception is an independent signal in that mix.