Should be clear: if my understanding of this is correct, this is the computer reading letters directly from the VISUAL CORTEX. So this isn't the computer reading a mind so much as a computer tapping the visual processing conduit. You probably couldn't "think of a letter" and have the machine figure it out. Something similar but cruder (horizontal lines vs. vertical lines, using direct electrode implantation) was achieved in cats about a decade ago.
What is impressive (if this article is not fraudulent or overinterpreting) is that it's a) done in humans, which realistically shouldn't be too much of a stretch from cats, and b) done non-invasively using MRI. We're NOT entirely sure what we're measuring with fMRI - it's supposedly increased blood flow to the brain, but what that has to do with voltammetric activity is not 100% sussed out.
Aside: When I was in grad school there was this brilliant girl who somehow got sidetracked and burnt out in the lab she was in, and started disappearing for weeks to isolate psychoactive compounds from desert cacti. For her qualifying independent proposal, her presentation was basically two PowerPoint slides that said "test out LSD in cats". Naturally, she failed, but she had this amazing hypothesis about how LSD works, and I understand why she wanted to do it in cats... and I'm 99% sure she failed to communicate this to her committee. She did, however, get a nice severance package and got to attend Albert Hofmann's 100th birthday party.
the vertical/horizontal line thing (http://web.mit.edu/bcs/schillerlab/research/A-Vision/A5-2.ht...) is cool b/c it demonstrates an actual neural correlate of consciousness and gives this concrete example of how information from the retina gets turned into higher-level abstractions as it moves into the cortex.
The visual cortex is not just processing what we are seeing through our eyes; there is also feedback from higher brain regions back to the visual cortex. That's why your visual cortex activates when you are dreaming.
Things get messier, though. For example, the images taken from a cat's visual cortex are much clearer because the cat is anesthetized, so its visual cortex is only processing data from the eyes. If the cat were awake, the feedback would make the image much harder to read out.
Sure, I guess I should have mentioned there's some feedback; that's why I said you "probably couldn't"... I don't think we know how closely the activated visual cortex corresponds to the imagery we see when we are dreaming.
NSA denies mind-reading allegations: "We do not 'view' Americans' thoughts, they're stored in a secure database that only non-sentient neural networks can access". What do you think? Share your thoughts with CNN with our free app.
A more apt comparison would be that in 20 years it's discovered that all baseball caps of the previous decade had the NSA's brain scanners in them, which NewEra denies the whole time.
The researchers 'taught' a model how small volumes of 2x2x2 mm from the brain scans - known as voxels - respond to individual pixels.
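A minimal sketch of what "teaching a model how voxels respond to pixels" could look like (the sizes, noise level, and linear form here are my own stand-ins, not the paper's actual method): fit, per voxel, a weighting over stimulus pixels using ridge regression.

```python
import numpy as np

# Hypothetical encoding-model sketch: learn a linear map from stimulus
# pixels to voxel responses. All sizes and data here are synthetic.

n_trials, n_pixels, n_voxels = 500, 100, 1200
rng = np.random.default_rng(0)

stimuli = rng.integers(0, 2, size=(n_trials, n_pixels)).astype(float)  # flattened 10x10 images
responses = stimuli @ rng.normal(size=(n_pixels, n_voxels)) \
            + rng.normal(scale=0.5, size=(n_trials, n_voxels))         # fake voxel data

# Ridge solution W = (X'X + lam*I)^-1 X'Y, one weight map per voxel.
lam = 1.0
weights = np.linalg.solve(stimuli.T @ stimuli + lam * np.eye(n_pixels),
                          stimuli.T @ responses)
```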
There's got to be millions of neurons per 8 mm³ of brain matter. I'd be interested to see what the images looked like before the prior knowledge was introduced.
I think that's part of what makes this so incredible. The current model uses only 1,200 voxels. With the higher resolution scanner mentioned at the end of the article, they will be able to use 15,000. With that in mind it seems this approach could have a lot of potential for further improvement.
While that's something to think about, keep in mind that the level of correlation between those voxels is ridiculously high. Simply adding more voxels isn't necessarily adding useful information. Even more so, fMRI is based on the BOLD effect [1], which is highly blurred across a fairly large area. While there is potential for improvement, there are a number of pretty fundamental limitations to this technology.
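A toy illustration of that correlation point (a 1-D stand-in, not real fMRI physics): if the measurement is a spatially blurred version of the underlying activity, as with BOLD, neighboring voxels end up highly correlated, so more voxels doesn't mean proportionally more independent information.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Independent underlying activity vs. a blurred "BOLD-like" measurement.
rng = np.random.default_rng(1)
activity = rng.normal(size=10_000)
measured = gaussian_filter1d(activity, sigma=4)   # spatial blur

r = np.corrcoef(measured[:-1], measured[1:])[0, 1]
print(f"adjacent-voxel correlation: {r:.2f}")     # close to 1
```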
Yeah. Although images are directly reflected in some way in the brain, it's mostly in certain layers of the visual cortex and it sure as hell isn't on the order of 2x2x2mm.
From what I understand, both reconstructions involve setting up models of brain activity for vision, learning the parameters by machine learning from patients, and then using Bayesian inference to determine what is being seen.
While incredibly cool, we are still a long way from reading thoughts, and even longer if we're not allowed to learn the parameters for that subject first. Right now, we can only kinda reconstruct what someone is seeing, but that's really not much better than a camera.
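For what it's worth, here's how I read the Bayesian step, as a hedged sketch and definitely not the authors' actual code: score each candidate image by the likelihood of the observed voxel pattern under the learned encoding model, add a log-prior over images, and take the argmax.

```python
import numpy as np

def log_likelihood(voxels, image, weights, noise_sigma=0.5):
    """Gaussian log-likelihood of the voxel pattern given a candidate image."""
    predicted = image @ weights                    # encoding model's prediction
    return -np.sum((voxels - predicted) ** 2) / (2 * noise_sigma ** 2)

def decode(voxels, candidates, weights, log_prior):
    """Pick the candidate image with the highest (unnormalized) posterior."""
    scores = [log_likelihood(voxels, img, weights) + lp
              for img, lp in zip(candidates, log_prior)]
    return candidates[int(np.argmax(scores))]
```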
I'm really looking forward to my brain powered keyboard. I was close to buying an Emotiv headset a few times to attempt a build, but I don't think the resolution was there, nor was I able to build the machine learning end.
I'm considering doing this. Have you tried playing around with simple neural nets? Andrew Ng's machine learning Coursera course is really phenomenal and drops you into doing neural nets using Octave, which makes the understanding really easy. After doing it in Octave and writing some simple discriminators, I was able to really rapidly write neural nets in several languages - I even wrote one in Python - a language I don't 'know' - to play around with on Quantopian. Needless to say, the neural net lost a lot of virtual money, but I figured out why. =)
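The kind of toy net that course has you build, ported from Octave to numpy: one hidden layer, sigmoid activations, plain gradient descent, trained as a discriminator on XOR. (Layer sizes and learning rate are arbitrary choices of mine.)

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(10_000):
    h = sigmoid(X @ W1 + b1)                       # hidden layer
    out = sigmoid(h @ W2 + b2)                     # output layer
    d_out = (out - y) * out * (1 - out)            # gradient at the output
    d_h = (d_out @ W2.T) * h * (1 - h)             # backprop into hidden layer
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0, keepdims=True)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0, keepdims=True)

print(out.round(2))                                # usually lands near [0, 1, 1, 0]
```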
Not long ago, lasers, phones, and computers were all very large. MRI machines today are very large, but one day... I'm not saying this approach is the best, or that there aren't alternatives that would be easier to adapt into something consumer-grade.
One thing I do know: in the not-so-distant future, HATS will come back into fashion, and with that I hope nobody is allowed to patent the use of hats to contain sensors of any kind. But I have hope that the whole patent area will be in a far better state of play by then.
I also suspect a whole new area of social issues will arise in the form of thought Tourette's, be it having Siri search for porn or downloading the latest X-ray filter for Glass - it will be interesting times. Me, I'm still waiting for a grammar-nazi app that fixes the mistakes instead of complaining about them. We all have our dreams, and to think "beer" and have a robot fetch you a cold one is still a dream. But we're getting closer.
How groundbreaking is this? On that note, what is the state of the art for brain-computer interfaces, invasive or non-invasive, with which the user can actually input data into a computer?
As far as I understand the method described in the article, it could eventually be employed as an alternative to eye tracking for computer input, i.e., instead of determining which letter the user's eyes are looking at with cameras pointed at their face and computer vision, you would scan the user's visual cortex directly. One can immediately think of applications this would have even outside of the assistive technology market, e.g., for mobile input.
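A back-of-the-envelope version of that input scheme (everything here is my own stand-in, not from the article): calibrate per user, then classify each live voxel pattern against a per-letter template.

```python
import numpy as np

def calibrate(voxel_samples, labels):
    """Average the calibration scans for each letter into a template."""
    labels = np.asarray(labels)
    return {c: voxel_samples[labels == c].mean(axis=0) for c in set(labels)}

def read_letter(voxels, templates):
    """Return the letter whose template is nearest to the live scan."""
    return min(templates, key=lambda c: np.linalg.norm(voxels - templates[c]))
```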
It would be really cool if we could develop this to the point that it would work in humans and with a minimal amount of hardware. Imagine the possibilities, coupled with wearable computing: we could digitize SO much information, from landmarks to museums to captchas and more... all without an obnoxious camera.
The reason a small camera is obnoxious is not the hardware. I think people would raise much the same objections to being recorded without permission if it was a human eye plus brain scanner.
(I'm not making a statement about whether these objections are right or wrong, I'm just saying this technology will not change the debate)
If I'm understanding the description correctly, they are just training it to recognize what the image is closest to and taking a slice of a YouTube video that most closely matches it.
I imagine if they used a more efficient method or trained it more, they could do way better. It seems like most of the data to build an accurate picture of what they are seeing is already there.
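My rough reading of that reconstruction step (the frame library and encoding model here are placeholders, not the real pipeline): predict a voxel pattern for every candidate YouTube still via the learned model, then return the frame whose prediction best matches what was actually measured.

```python
import numpy as np

def reconstruct(measured_voxels, frames, encode):
    """frames: (N, n_pixels) array of stills; encode: pixels -> predicted voxels."""
    predictions = np.stack([encode(f) for f in frames])
    errors = np.linalg.norm(predictions - measured_voxels, axis=1)
    return frames[int(np.argmin(errors))]          # closest match, not a true inverse
```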