This is an awfully contrived title for an article that could be summarized as "people can find out whether or not you recognize something shown to you by monitoring electrical activity along the scalp."
If you manage to hack out a list of all the 4-digit numbers you recognize, it's trivial to brute-force which of those numbers are for your cards or for some other security PINs.
Also, it has other practical uses - think of it as a better-than-polygraph test for questions of the type "have you seen this person?" or "does this account password belong to you?".
These articles are funny, coming up with all the negative possibilities of future technology. Elysium (new movie) showed another one, allowing a full download of people's brains, also to ill effect.
I can't wait until this technology is improved, so I can "search" my own brain to find all the stuff I seem to forget. I'm sure it's locked in my unconscious somewhere...
>If you manage to hack out a list of all 4-digit numbers that you recognize, it's trivial to bruteforce which of those numbers are for your cards or for some other security PINs.
While correct, I think you're missing something huge.
I'm not sure how long it takes for your brain to recognize a 4-digit number as something you "know". Absolute fastest would be something around 32/second, as I believe that's about as fast as your brain can view an image (movie frame rate). However, I'm reasonably sure it's much slower than this, and we haven't even factored in how long it takes the computer hooked up to your brain to recognize a change. So for the purposes of this argument, I'm going to say about one PIN per second.
So, for a 4-digit PIN, you could spend 10,000 seconds "hacking" the mark's brain and then try whichever combinations showed a recognition pattern. Or you could just brute force all 10,000 combinations, likely much faster than one per second, without needing physical access to the mark and without all sorts of crazy hardware.
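For scale, here's the back-of-envelope arithmetic under that one-PIN-per-second assumption (the brute-force guess rate below is equally hypothetical):

```python
# Back-of-envelope timing for the two attacks on a 4-digit PIN.
# Both rates are assumptions: 1 stimulus/s for the EEG scan, and a
# purely hypothetical 10 guesses/s for a dumb online brute force.

PIN_SPACE = 10 ** 4              # 0000-9999 is 10,000 combinations, not 9,999

eeg_seconds = PIN_SPACE / 1      # one PIN shown per second
brute_seconds = PIN_SPACE / 10   # hypothetical guess rate

print(f"EEG scan of the full space: {eeg_seconds / 3600:.1f} hours")
print(f"Plain brute force:          {brute_seconds / 60:.1f} minutes")
```

Even with generous assumptions for the headset, the dumb brute force wins by orders of magnitude, which is the parent's point.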
Now you just show your mark each symbol to check whether it's part of the password, which would drastically and usefully reduce your search space (unless it's a password that uses almost all ASCII characters, but those are extremely rare...).
No, the proposed method can't check whether a symbol is part of the password; it can only check whether it's part of a password/something the person has ever known. All alphanumerics would naturally be included.
The point with PINs is that if your PIN is '8243', then that number will provoke a "recognition" response much different from, say, '8244', which (to you) is just a random number with no specific associations.
I absolutely agree that the technology has plenty of use cases and that the paper the article's written about presents an interesting new perspective on security. I was, however, annoyed that the article's title and opening paragraphs seem to take something relatively elementary (it's a feasibility study!) and frame it as some sort of Inception-esque mind-hacking situation.
In other words: I agree about the value of the idea. I think, however, that there's a huge disconnect between the information and how it's being presented.
I completely agree here. Good information, and lots of potential when you think about how it could be combined with other machine learning algorithms currently being used with "real-time fMRI". Indeed, I don't think we're far off from closing the gap between neuronal activity and behavioral patterns, but we need to keep in mind that the brain is incredibly complex, highly variable between individuals, and, with the technology currently available, usually presents quite a poor signal-to-noise ratio.
Perhaps it's more practical and has better temporal resolution, but it depends on how much you care about spatial resolution too (where EEG is quite poor).
That's the point of this article. The experiment routes around the lack of good spatial resolution in the data. It's like a much more sophisticated, much less easy to game polygraph.
A polygraph measures physiological responses that are related to the EEG spike but are a less direct measurement. As a result, a polygraph is much easier to mess with by playing around with your physical state. For example, you can confuse the polygraph by doing things like clenching your toes and fingers, as well as performing other Jedi mind tricks.
Well, the sci-fi-future scenario is valid, though. Imagine a world in which people use such devices regularly. It's not hard to envision some social media application or game extracting information without you being aware of it.
This relies on an unsuspecting victim wearing a complicated nonstandard headset and then looking at a series of images / numbers slowly enough to register each of them consciously.
In what world would the victim not become suspicious?
(I appreciate things may change in the future, and if brain control headsets become common then a malware model (ad popups, for example) could provide a plausible vector for this attack.)
It's my understanding that the headset is in fact standard:
(from the actual paper) "The experiments are implemented and tested using a Emotiv EPOC BCI device"
(from the hyperbole article) "For $200-300, you can buy an Emotiv"
> In what world would the victim not become suspicious?

I think this result is framed as "if BCI-controlled gaming takes off, it doesn't take much to harvest personal data from gamers".
Also, I wonder what are the implications for interrogation methods (think CIA, not local police). They didn't test what happens if the victim is actually trying to resist, maybe even if the victim has had guidance on how to resist. I would love to know.
I apologise - I meant "nonstandard" as "my mum doesn't have one".
resisting this sort of thing is easy, just think "loud" alternative thoughts and close your eyes so you don't see the stimulus. Sing a song in your head. Anything.
The research (both in this paper and the previous one at Usenix Security 2012) is overhyped bullshit. The experiment was: remember this PIN to enter at the end of the experiment, and then we show you numbers and look for a recognition signal. Or they check that you recognize an image of your bank.
This is just image/text recognition research from 1980s and '90s neuroscience, regurgitated as security publications with far shittier experimental methodology and consumer equipment.
At no point did they actually demonstrate getting access to secrets you knew (e.g. your real PIN), and they certainly didn't demonstrate they could do so surreptitiously. There is no reason to believe you could actually do this, and these experiments tell us nothing we didn't already know from real experiments done by actual clinical researchers: you can use the P300 signal to tell whether someone recognizes a specified stimulus.
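That last point - the P300 signal - is the whole mechanism, and it's also why poor spatial resolution doesn't matter much: you average many stimulus-locked epochs and look at the amplitude around 300 ms. A minimal sketch on synthetic data (the sampling rate, bump shape, epoch counts, and threshold are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
FS = 128                  # Hz; sampling rate assumed for illustration
EPOCH = FS                # one second of samples per stimulus

def synth_epoch(target: bool):
    """One second of fake EEG; 'recognized' stimuli get a bump near 300 ms."""
    x = rng.normal(0.0, 5.0, EPOCH)          # background noise (uV)
    if target:
        t = np.arange(EPOCH) / FS
        x += 8.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))  # P300-like bump
    return x

def erp_amplitude(epochs):
    """Average the stimulus-locked epochs, then read the 250-350 ms window."""
    erp = np.mean(epochs, axis=0)            # averaging cancels the noise
    return float(erp[int(0.25 * FS):int(0.35 * FS)].mean())

recognized = [synth_epoch(True) for _ in range(40)]   # e.g. the victim's PIN digits
unfamiliar = [synth_epoch(False) for _ in range(40)]  # everything else
print(erp_amplitude(recognized) - erp_amplitude(unfamiliar) > 3.0)  # prints True
```

The averaging step is the reason repeated presentations are needed - a single noisy epoch tells you almost nothing, which also bounds how fast any such "scan" can go.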
This implies the possibility that "something you know" may be only as secure as "something you have."
As people integrate and evolve to include technology, the security aspects of bio-technical interfaces are going to get really interesting and damn important.
"Thought crime" will soon have a much darker and more dangerous meaning. Of course the NSA will want to tap everything people are thinking, just as they're already tapping all human communications "to keep us safe". I don't think it's a stretch to think they'll want to do that too, if nothing changes and people keep letting them do anything they want in the name of "national security".
Wow, I wasn't aware that EEGs are this cheap. Does anyone know how well these $200-300 thingies play with Linux, and how easy it is to hack around with them generally?
I'd love to log my brain activities while learning, reading or playing poker :D
Edit:
Seems like the Emotiv EPOC has an SDK that supports Linux, and there's also an open-source library called Emokit that was built by reverse engineering the device's communication :D
Turns out they aren't actually that cheap. To get a real EEG from Emotiv, it's $750 just for the device. The $300 version doesn't seem to actually be an EEG; they call it an EPOC and don't exactly explain what it is, but they do mention that it will not give you access to raw EEG data, which is what you need for any sort of legitimate experiment. On top of that, if you want to use the SDK, licensed properly, you need to pay an additional $500 or more. So if you want to play with an EEG and its API, the minimum price you're really paying is $1,250 - far from the $300 mentioned in the article.
In addition, these cheaper consumer EEGs don't produce research-grade data, so while they are good for messing around and experimenting, if you want to get serious, you'll need to upgrade to a more expensive headset.
Quickly browsing the GitHub page of the open-source library, it seems you can extract the raw data from the EPOC, which would turn the $300 one into a decent enough device.
Granted, you'd have to write the unfolding algorithm and infrastructure stuff yourself (even though I'd guess someone has probably done this already).
Seems like a neat enough toy to add to my xmas wishlist. Time to build a light version of the "Ready Player One" cyberworld :P
I used to play poker semi-professionally and could see this being a very useful device for identifying tilt (and shutting down the poker client, or at least sounding an alarm of sorts), or for generally wearing while grinding to see what helpful info you can extract when comparing against your hand database.
This is pretty common for how Emotiv presents itself. If you look through their site and write ups about their Epoc headset, you'll find the same kind of overhyped and misleading information.
It's cool that home BCI is so cheap now; I just wish they weren't trying to capitalize so heavily on it.
This is how it will go down. First, the government is going to own these companies. Then they are going to declare the technology illegal to use in private hands. Third, they will train operatives that can only be certified by government agencies to use these devices.
Sensationalist title designed to gain unjustified views. An accurate title would be "$200-$300 buys you an off-the-shelf polygraph test". Same principles; this has been known as a "lie detector" test for years, and it's defeatable.
It seems completely different from a lie detector. Classic polygraphs, in essence, measure stress responses; this measures [the success of] pattern recognition. You can't use it for many yes/no lie-detector questions, but it has the potential to be much more accurate (and less spoofable) for questions like "Do you remember this face?" or "Have you seen 'ox9j$lkjew' before? It's a password to a child-porn site we found on your computer - wondering if you have used it..."
Assuming something like this actually works some day, I wonder if you could avoid it by having your secret be something that can't be encoded visually - e.g. haptic feedback or a gesture rather than a password.
Neat idea. The debit-card PIN bit does not seem feasible though, at least in a brute-force setting - finding out a 6-digit PIN, showing each number for one second, takes > 11 days in the worst case.
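The "> 11 days" figure checks out; as a quick sanity check of the worst case at one stimulus per second:

```python
# Worst-case scan of every 6-digit PIN at one stimulus per second.
SPACE = 10 ** 6            # 000000-999999
days = SPACE / 86_400      # 86,400 seconds in a day
print(f"{days:.2f} days")  # ~11.57 days
```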
In any case, showing PINs that way wouldn't work - most people have muscle memory for their PINs but would not recognize them when written down.
I recently got a new card and remembered the PIN spatially. After a few times of typing it in I realised that, though I was typing the digits of the new PIN, I was subvocalising the digits of my old PIN. It was a really odd sensation.
Having said that, I would recognise both PINs as both a string of digits and as a spatial sequence... so that would probably just be another attack vector.
> I realised that, though I was typing the digits of the new PIN, I was subvocalising the digits of my old PIN.
I trained myself to do this on purpose; subvocalising a different number. If I'm drugged out in a hospital bed and someone asks for my CC PIN, I want them to get an incorrect number.
You get a bunch of positives and check/brute-force afterwards. This system couldn't distinguish my credit card PIN from my office alarm code, but it can give you a shortlist to try.
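A toy sketch of that two-stage idea - the classifier verdict is faked with a set lookup, and '8243'/'1379' are made-up PINs:

```python
# Stage 1: flash every candidate and keep the ones that light up the
# recognition signal. Stage 2: try the shortlist at the terminal.

def shortlist(candidates, recognized):
    """`recognized` stands in for the EEG classifier's per-stimulus verdict."""
    return [c for c in candidates if recognized(c)]

known = {"8243", "1379"}   # made-up: card PIN and office alarm code
flagged = shortlist((f"{n:04d}" for n in range(10_000)), lambda p: p in known)
print(flagged)             # ['1379', '8243'] - but which is which? Just try both.
```

The signal only says "this number is meaningful to you", so the attacker still has to burn a few real attempts on the flagged candidates - fine for a handful, hopeless against lockout policies with many positives.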
No, since all isolated digits would have similar responses. The attack vector is not "is x your PIN?" but it's "is pattern xyzw meaningful to your brain whatsoever?"
My reading of the article is that if you show someone something that is significant to them, such as "Is the first digit of your PIN the number 1?", then it'll trigger a measurable response - and indeed the first graph in the article is "1st digit PIN".
So I'm not sure where you're reading that it wouldn't work using the single digit approach.
But that is when you are specifically interrogated and know that you are. With the tinfoil hats, I was referring to their ability to do this to you from a distance without your knowledge (which I guess may work at some point...).
Maybe we can sue God or something for the misconception? I'm waiting for his HN post where he'll say "we learned something from this 0-day and have improved the security of your brain". Maybe a sheep as the reward for the scientists! :)