
A small correction: the cited study used MEG, not fMRI, as its imaging modality.

The conclusion itself doesn't look very surprising to me. We already know that sound processing in general is similar in both hemispheres, while speech processing is strongly lateralized. By continuity, there should be a boundary where speech-like sounds begin to sound like speech and are therefore processed differently between the hemispheres. This study seems to estimate that boundary.




Thanks for the correction!

I misremembered because my professor's group also did a lot of fMRI work, and in the seminars we mostly talked about those studies.

The relationship between speech/language and the brain is fascinating. There are resident linguists at major hospitals who are consulted before neurosurgery. Speech sounds are processed faster than other sounds in our brain. Rearranging sentences from active voice to passive voice, silently in your head, leads to clearly visible activity in fMRI, distinct from non-linguistic mental actions. And so on.





