A major problem here isn't just the interface technology, but that we have such a poor understanding of how the brain works (my girlfriend is working on her PhD in Neuroscience and I hear about this almost daily). Consequently, our interfaces will be crude and clunky at best for a long time.
We've barely got the faintest idea of how to measure some things that would seem really basic. Memory? Perception? Forget it. When using one of these you're basically training your brain to stimulate huge areas in the hope of getting some readout, and it turns out we aren't great at that. Even playing a game as basic as Pong is very hard with this type of interface, and it isn't as if we're going to get significantly better at it anytime soon.
It's about as graceful as swinging your entire leg forward to press a small, delicate button. We can hit things, but when there are lots of small, delicate buttons, the resolution isn't going to get much better, since we're still smashing blindly.
To be fair, we can make huge strides once the interface technology improves in resolution. I'll trust your girlfriend's expertise, but I do know from my own background in machine learning that we can extract a surprising amount of signal without understanding the neurophysiological processes that generate the data.
A huge problem right now with the products coming out of Emotiv and similar companies is that the noise level is enormous and the sensor resolution is very poor. As we get better at removing the noise from the data (and come up with better sensors!), we can train better and better models to differentiate these various activities.
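To make that concrete, here's a toy sketch of the usual pipeline: band-pass filter the raw signal, pull out band-power features, train a classifier. The data is synthetic and the band/classifier choices are my own assumptions, nothing specific to Emotiv's hardware:

    # Toy "denoise, then classify" pipeline on synthetic single-channel data.
    # Band choices, trial length, and classifier are assumptions, not Emotiv specifics.
    import numpy as np
    from scipy.signal import butter, filtfilt
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    fs = 128  # Hz, in the ballpark of consumer headsets

    def bandpass(x, lo, hi):
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        return filtfilt(b, a, x)

    def features(trial):
        # log band power in theta, alpha, beta -- one crude feature per band
        return [np.log(np.var(bandpass(trial, lo, hi))) for lo, hi in [(4, 8), (8, 13), (13, 30)]]

    rng = np.random.default_rng(0)
    n_trials, n_samples = 100, fs * 2
    t = np.arange(n_samples) / fs

    # "Activity A": broadband noise. "Activity B": same noise plus a 10 Hz rhythm.
    trials_a = rng.normal(size=(n_trials, n_samples))
    trials_b = rng.normal(size=(n_trials, n_samples)) + 1.5 * np.sin(2 * np.pi * 10 * t)

    X = np.array([features(tr) for tr in np.vstack([trials_a, trials_b])])
    y = np.array([0] * n_trials + [1] * n_trials)
    print("CV accuracy:", cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean())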
From a non-scientist's point of view, one issue I can see humans having with these devices is the lack of feedback. I've tried a few of them, and it takes a few seconds to know whether it's working or not. That's a pretty unnatural feeling for a human, since we normally know almost immediately whether an action of ours had an effect. Like you said, there is a huge amount of noise that makes this very difficult, and much of the feedback lag comes from trying to smooth out that noise and find a pattern.
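You can see that tradeoff even with a toy smoothing filter: the harder you smooth, the later the system notices you actually did something. A quick sketch (just an exponential moving average, not what any particular headset actually does):

    # Why smoothing the noise adds feedback lag: heavier smoothing (smaller
    # alpha) reacts later to a genuine change in the underlying signal.
    import numpy as np

    rng = np.random.default_rng(1)
    true_signal = np.concatenate([np.zeros(100), np.ones(100)])  # "intent" starts at t=100
    noisy = true_signal + rng.normal(scale=0.8, size=200)

    def ema(x, alpha):
        out = np.zeros_like(x)
        for i in range(1, len(x)):
            out[i] = alpha * x[i] + (1 - alpha) * out[i - 1]
        return out

    for alpha in (0.3, 0.05):
        smoothed = ema(noisy, alpha)
        # first sample after the true change where the smoothed signal crosses a threshold
        detected = 100 + np.argmax(smoothed[100:] > 0.5)
        print(f"alpha={alpha}: acted at t={detected} (true change at t=100)")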
Hopefully one or two smart discoveries can give us a nice little leap forward until we have a better brain model to work with, bringing the technology to at least a semi-usable level, since as it stands it's pretty much a novelty at best.
God, not another EEG-based BCI article (especially about Emotiv, whose core competency as a company is mostly being a hype machine). This stuff is trash from a neural-interfacing perspective; your keyboard has much higher throughput and better fidelity, and it's just as much a "brain interface" as this is (it translates brain activity into computer signals!).
EEG mostly detects scalp, eye, and face muscle activity, plus some very gross brain states (awake/alert vs. drowsy). Subjects can get OK performance in demos because this muscle activity gets conflated with "brain activity", but it's absolutely not a "neural interface".
A keyboard requires fingers or at least some sort of functional limb, however. This doesn't. It doesn't really have to be called a neural interface, if that offends you.
For anyone else hung up on this, I recommend mentally substituting the term "scalp-eye interface" or "hands-free interface" and focusing on the possibilities instead of the limitations.
I won't argue with your feelings toward Emotiv, since I don't especially trust their demos to be well controlled for various muscular cues.
That said, there is good work being done that puts a lot of effort into actually measuring the non-muscular signal from EEG and training on that instead. EEG can capture signal from alpha waves (roughly tracking alertness, as you describe), and researchers have identified numerous specific responses to watch for. One example is the P300 response, the reaction elicited when a person sees the object they were watching for (a target).
There are numerous other techniques, many of which predict a type of pattern by looking for activity in the corresponding area of the brain. With good sensors and correctly written algorithms, EEG can absolutely be a neural interface, although a very noisy one. Done poorly, it can be entirely swamped by the physical motions you describe.
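To give a flavor of the "watch for a specific response" approach: the classic trick is to average epochs time-locked to target vs. non-target stimuli and compare the window a few hundred milliseconds after the stimulus. A rough sketch on synthetic data (the window and amplitudes are made up; real recordings need far more care):

    # Rough sketch of P300-style target detection: average stimulus-locked
    # epochs so noise cancels and the event-related response remains.
    # All data is synthetic; the window and amplitudes are assumptions.
    import numpy as np

    fs = 128
    t = np.arange(fs) / fs  # 1-second epochs
    rng = np.random.default_rng(2)

    def epochs(n, target):
        noise = rng.normal(scale=2.0, size=(n, fs))
        if target:
            # small positive bump peaking ~300 ms after the stimulus
            noise += np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
        return noise

    targets = epochs(40, target=True)       # stimuli the subject was watching for
    nontargets = epochs(200, target=False)  # everything else

    window = (t > 0.25) & (t < 0.4)
    print("targets:    %.2f" % targets.mean(axis=0)[window].mean())
    print("nontargets: %.2f" % nontargets.mean(axis=0)[window].mean())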
When I was a wee lad, probably 1980 or so, I was a regular at a market-testing group for video games. It was usually just whatever new game was coming soon, but once it was a headband that you wore, with electrodes, that would control your Atari 2600 "by thought". I remember playing River Raid with it at the testing; you could go left and right and fire.
I don't think it actually read your brain activity; I think it was more muscle control dressed up. But it sorta worked.
With "wetware" on the rise, and with the recent passing of Leslie Nielsen, I think this is an appropriate time to remind people of the movie Forbidden Planet. Besides being one of Nielsen's few (old!) bits of "serious" acting, the movie has an interesting point: computer interfaces must distinguish between unconscious thoughts and purposeful instructions. Otherwise the machine will start executing instructions directly from the subconscious ("id") whether you really wanted to do those things or not.
Well of course I expect people to examine the idea critically instead of lifting it straight from a 1956 movie and applying it to next-gen interaction design problems. It's just an idea I think people should consider.
I'd love to play with one of these and just watch the inputs during various mental states: resting, relaxing, watching TV, playing music, listening to music, drunk, high, bored, focused, reading, panicking, etc. I don't know what I would do with the data; I'd probably end up posting it online to see if someone can do something with it.
Idk, I feel like with practice you could get pretty good at using it. But it has a $300 price tag here, so it looks like I'll be waiting for someone else to go after those ideas.
> "It can be a transformational experience," Garten says, of the moment users first don a headset. "For the first time, you’re consciously interacting with your own brain."
So I was at WonderWorks < http://wonderworkspcb.com/index.php > in early November, and they have a little game that works based on your brain waves. You put on a headband with metal contacts, and it moves a ball forward based on how calm you are. You sit down across from someone and relax as much as possible, and whoever is the calmest gets the ball closer to the goal.
I have to say, it was very cool. There wasn't as much shock and awe as I'm sure the headsets in this article must inspire, but you can put that down to the primitive interface I was using. It was still extremely interesting, because I really felt like I was "consciously interacting with my own brain". Making something happen just based on how calm you are... I couldn't get the smile off my face afterwards.
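For the curious, games like that usually just map some "relaxation" measure (often alpha-band power) onto the ball's motion. A toy version of the scoring loop, with the headset readings replaced by random stand-ins:

    # Toy "calmest player wins" loop: the difference in relaxation scores
    # pushes the ball toward one goal or the other. get_relaxation() is a
    # random stand-in for a real headset reading.
    import random

    def get_relaxation(player):
        return random.uniform(0.0, 1.0)  # pretend: higher = calmer

    ball = 0.0  # 0 = center; +10 and -10 are the two goals
    while abs(ball) < 10:
        ball += 0.5 * (get_relaxation("A") - get_relaxation("B"))
    print("Player", "A" if ball > 0 else "B", "wins")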
Maybe I've been living in a cave, but I had no idea that tech like NeuroSky and Emotiv was so cheaply available. Anyone with experience on which one is better?
I can't speak for the NeuroSky, but I reverse-engineered the protocol and crypto used for the Emotiv to create the open source Emokit ( https://github.com/daeken/Emokit ) a couple months back. While no one has really done anything with it yet, it opens up some fun stuff. The Emotiv has the benefit of more sensors, and it isn't much more expensive. I'd love to see someone do something really cool with it.
I just found that 20 minutes ago and was so excited that I ordered an Emotiv right away. Great to see that you're an HN user, I'm looking forward to playing with the Python library.
Awesome. Feel free to hit me up on #emokit on irc.freenode.net. One note: the new headsets may have a different key than the older ones. This has not yet been confirmed, but if it is the case, I'll get it working again regardless.
1) Their API is Windows-only (they have Linux support in beta, though) and is expensive as hell, and 2) The API to get raw access to the EEG data is $750 bare minimum. Silly, considering the consumer and developer headsets are identical, outside of the AES key used.
Do you have a writeup anywhere detailing your progress and/or what is possible using the API you have created so far? I'm super interested in this; I just want a little more to go on than what's in the readme.
For example, it's not even clear to me whether I need to buy the regular headset ($299) or the developer headset ($500).
No real writeup, but as far as what's possible: all data coming from the headset (gyro, EEG, contact quality, and battery meter) is accessible from Emokit (although the latter two are not merged into my branch yet). This means that you can do effectively anything you want with it. Also, you only need the regular headset, although it's compatible with the developer headset as well.
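To give a sense of what "effectively anything" looks like: each packet is basically a gyro pair plus per-sensor values, so something like head-tilt cursor control is only a few lines on top of the stream. The sketch below uses simplified, made-up field names and a fake packet source rather than the actual Emokit objects, just to show the shape of the data:

    # Illustrative only: a fake packet source with made-up field names, not
    # the real Emokit API. The point is the shape of the data -- a gyro pair
    # plus per-sensor EEG values -- and how little code sits between that
    # and something like head-tilt cursor control.
    import random
    from collections import namedtuple

    Packet = namedtuple("Packet", ["gyro_x", "gyro_y", "sensors"])  # hypothetical

    def fake_packet_stream(n):
        for _ in range(n):
            yield Packet(gyro_x=random.randint(-20, 20),
                         gyro_y=random.randint(-20, 20),
                         sensors={ch: random.randint(4000, 4400)
                                  for ch in ("F3", "F4", "O1", "O2")})

    cursor_x = cursor_y = 0
    for pkt in fake_packet_stream(100):
        cursor_x += pkt.gyro_x  # tilt left/right -> horizontal movement
        cursor_y += pkt.gyro_y  # tilt forward/back -> vertical movement
    print("cursor ended up at", (cursor_x, cursor_y))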
>We’re still a long way from real wetware (direct brain-computer connections) . . . but last week an NYU professor had a digital camera implanted in his head. It’ll be many years (if ever) before that goes mainstream, but the line between the mind and its tech is growing finer.
So, really, here doesn't come the wetware. But the author is correct to disclaim "if ever." The simple reality is that medicine isn't anywhere near the point yet where sticking something in transcranially is anything apart from a major, expensive, and risky operation (the NYU prof's camera implant is transdermal, but stops well short of the cranium). The popularity of breast augmentations and the like may prove the societal acceptance of elective surgery, but until medicine has advanced to the point where you can do a brain job as routinely and safely as a boob job, it's not going to happen.