I'd like to know what goes into the loss in accuracy. At first you'd be tempted to blame the extra background or the quality of the camera.
But after a little thinking, I bet it's more because of the unknown viewing angle, possible variation in screen size, and where the user is sitting / where the camera is mounted.
Is it users who don't track well, or a combination of them and their hardware setup? If someone doesn't track well, would another person with a different eye color sit down in the same position and track better?
The loss in accuracy relative to a custom hardware eye-tracking setup comes primarily from the fact that we're using a very different eye-tracking method. The "gold standard" for eye-tracking involves shining an infrared LED at the user's eye and measuring the reflection.
Since we're trying to use no custom hardware at all, we have to use machine learning techniques to extract features from the visible-light image of the user's eye. This is a lot more difficult than locating the bright spot in a reflection, so there's a lot more that can go wrong.
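To make the contrast concrete, here's a rough sketch of what the visible-light side can look like: find an eye region with OpenCV's bundled Haar cascade, then estimate the pupil center as the darkest blob in that region. This is just my illustration, not our actual pipeline, and the threshold value is a guess that would need per-user tuning:

    # Sketch of visible-light pupil localization (illustrative only):
    # detect an eye region with OpenCV's bundled Haar cascade, then
    # estimate the pupil center as the darkest blob in the patch.
    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")

    cap = cv2.VideoCapture(0)          # default webcam
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("could not read from webcam")

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    eyes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    for (x, y, w, h) in eyes:
        roi = gray[y:y + h, x:x + w]
        roi = cv2.GaussianBlur(roi, (7, 7), 0)
        # The pupil is usually the darkest region of the eye patch;
        # a threshold of 40 is a guess and would need per-user tuning.
        _, mask = cv2.threshold(roi, 40, 255, cv2.THRESH_BINARY_INV)
        m = cv2.moments(mask)
        if m["m00"] > 0:
            cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
            print("pupil estimate:", x + cx, y + cy)

In practice, glare, skin tone, eye color, and head pose all break the dark-blob assumption, which is exactly where the machine learning has to pick up the slack. With IR, the corneal glint is by far the brightest point in the frame, so a single threshold finds it reliably.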
70 pixels? that ought to be good enough to implement an optical alt-tab.
i would agree to arbitrarily invasive monitoring of my web activity in exchange for that.
details: special behavior for, say, the vestigial enter key to the right of the space bar on my old MBP. When it's depressed, trigger exposé and start tracking my point of fixation. As it's released, return from exposé with focus on the window i was looking at.
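if you wanted to prototype the binding itself, something like the sketch below would do (pynput for the key events; the exposé and gaze calls are hypothetical placeholders, since the real window-manager and tracker APIs are platform-specific, and Key.enter here is just a stand-in for that keypad enter key):

    # Rough prototype of the press-to-expose, release-to-focus binding.
    from pynput import keyboard

    def enter_expose():
        # hypothetical: trigger the window overview and start gaze tracking
        print("expose: showing all windows, tracking fixation")

    def focus_gazed_window():
        # hypothetical: ask the tracker which window was fixated, focus it
        print("expose: focusing the window under the point of fixation")

    def on_press(key):
        if key == keyboard.Key.enter:   # stand-in for the vestigial enter key
            enter_expose()

    def on_release(key):
        if key == keyboard.Key.enter:
            focus_gazed_window()

    with keyboard.Listener(on_press=on_press, on_release=on_release) as ls:
        ls.join()

the hard part is obviously focus_gazed_window: mapping a fixation point to a window rectangle while the overview is up.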
or maybe aapl would pay? it's got more wow factor than spotlight or time machine.