Hacker News

> Humans can’t process multiple audio streams simultaneously.

Yes they can: they can process two, and no more than two. Being able to listen to a conversation where multiple people are talking at the same time isn't processing different audio streams. It's assigning a set of sounds, heard over time, to subsets representing different theoretical sources, grouped by timbre, subject (meaning), or relative volume. People have very little trouble doing this with a small number of potential sources (probably another seven-plus-or-minus-two thing).

Even in large crowds, we can often pick out a few individual voices of interest (if distinctive enough in timbre, position and movement (judged by volume and pitch changes), or accent/wording/subject), and follow those voices while disregarding the rest.

edit: we might suck at multitasking, but that's not a problem a computer would have. I can't follow multiple speakers at once if they're not interacting with each other, and they're speaking quickly and overlapping, but that's a problem with context switching. With a computer I can just route the separated voices to systems that will handle them. Everybody gets their own waiter.
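The "everybody gets their own waiter" idea can be sketched roughly as below: assume some source-separation front end has already split the audio into per-speaker streams (here just faked as lists of utterances), then route each stream to its own independent handler so nothing has to context-switch between speakers. The speaker names, utterances, and the `handle_speaker` worker are all hypothetical stand-ins.

```python
import queue
import threading

def handle_speaker(name, utterances, results):
    # Each "waiter" serves exactly one speaker, in order, independently
    # of the other speakers' streams.
    for u in utterances:
        results.put((name, u.upper()))  # stand-in for real processing

# Pretend these came out of a source-separation step (assumed, not shown).
separated = {
    "alice": ["hello", "how are you"],
    "bob": ["one coffee please"],
}

results = queue.Queue()
threads = [
    threading.Thread(target=handle_speaker, args=(name, utts, results))
    for name, utts in separated.items()
]
for t in threads:
    t.start()
for t in threads:
    t.join()

processed = sorted(results.queue)
```

Since each handler only ever sees one voice, overlapping speech stops being a context-switching problem and becomes a fan-out problem.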




