"the demo's facial expressions were controlled by button presses, not facial analysis"



I'm honestly surprised by that, since using a simple webcam to read your expression and pick the matching avatar expression that way seems like a much more natural choice, and probably the least technologically complex part of the entire setup. I guess you can't always count on being in an environment with a webcam pointed at your face... but if you have an Oculus on your head, my guess is a webcam isn't far away.
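
For what it's worth, the "simple webcam" version really is simple. Here's a minimal sketch using OpenCV's stock Haar cascades, where set_avatar_expression is a hypothetical stand-in for whatever the demo's button presses drove:

    import cv2

    def set_avatar_expression(name):
        # Hypothetical hook; stands in for the demo's button presses.
        print("avatar expression ->", name)

    # Pretrained cascades that ship with opencv-python.
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    smile_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_smile.xml")

    cap = cv2.VideoCapture(0)  # default webcam
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
            roi = gray[y:y + h, x:x + w]
            # High minNeighbors trades sensitivity for fewer false smiles.
            smiles = smile_cascade.detectMultiScale(roi, 1.7, 20)
            set_avatar_expression("smile" if len(smiles) else "neutral")
        cv2.imshow("webcam", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()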


Except the Oculus is occluding your face.


Not disagreeing with you (because it's definitely not just a simple webcam), but there's been progress in this area:

https://www.youtube.com/watch?v=rgKkEnaaSDc


Nice! I hadn't seen this before. Interesting approach to capturing the facial expression. I'd bet the first commercial implementation of something like this will be based on cameras inside the helmet combined with IR illumination. We're going to want those anyway for gaze tracking, and they might be able to do double duty. This is clever, but from a practical standpoint I bet capturing expressions will be easier with cameras.
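
The gaze-tracking half is well-trodden: under IR illumination the pupil shows up as the darkest region of the eye image, which is the basis of classic "dark pupil" tracking. A rough OpenCV sketch, where eye_ir.png is a placeholder for a frame from an in-helmet IR camera:

    import cv2

    # Single IR eye image; placeholder filename.
    img = cv2.imread("eye_ir.png", cv2.IMREAD_GRAYSCALE)
    blur = cv2.GaussianBlur(img, (9, 9), 0)
    # A low threshold isolates the dark pupil; 40 is a guess that
    # would need tuning per camera and illumination setup.
    _, mask = cv2.threshold(blur, 40, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    pupil = max(contours, key=cv2.contourArea)
    (cx, cy), r = cv2.minEnclosingCircle(pupil)
    # Mapping (cx, cy) to a gaze direction needs a calibration step.
    print("pupil center:", (cx, cy))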


You can definitely see where this is going. My first thought was the same: 'we need eye tracking for foveated rendering anyway, so we can get realistic eyes for free.' And if you can do that, you can track the eyebrows and the muscles around the eye (it doesn't need great fidelity), and I wonder whether that gets you all the way to the rest of the face as well. Can you smile or frown without it tugging on the parts closer to the eyes, which future headsets will be able to observe?
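
A toy version of that inference, guessing a coarse full-face expression purely from eye-region signals a headset could plausibly measure. Every signal name and threshold here is invented for illustration:

    def guess_expression(brow_raise, eye_openness, cheek_raise):
        """All inputs are hypothetical 0..1 signals from eye-region tracking."""
        if cheek_raise > 0.6 and eye_openness < 0.5:
            return "smile"     # a real smile squints the eyes and lifts the cheeks
        if brow_raise > 0.7 and eye_openness > 0.8:
            return "surprise"  # raised brows, widened eyes
        if brow_raise < 0.2 and eye_openness < 0.6:
            return "frown"     # brow lowering tugs on the eye region
        return "neutral"

    print(guess_expression(brow_raise=0.1, eye_openness=0.4, cheek_raise=0.7))
    # -> smile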



