Wristband Lets the Brain Control a Computer with a Thought and a Twitch (scientificamerican.com)
89 points by 0x4542 on April 1, 2018 | 40 comments



Can electrodes recording muscle activity be a better input device than a keyboard and a trackpad? Only if humans can control individual motor units separately. All of the evidence indicates the opposite: our nervous system is hard-wired to recruit motor units together in an orderly fashion.

You can make an argument that a wristband input device is more portable or has entertainment value. But the idea that this is going to give you more degrees of freedom needs some serious evidence.


There have been a lot of discussions on Hacker News in the past about EEGs and other potential input devices, but these non-invasive devices lack the throughput required to serve as a full keyboard replacement. EEGs, for example, may allow you to transfer on the order of a few bits per second. I have always been interested in the potential of humans to "learn" (with the help of some sort of feedback training) how to control these macroscopic oscillations in a way that gives them a higher bit rate.

Most people conclude that it's not possible and that in-vivo solutions (e.g. invasive BCIs) are required for anything interesting. IIRC, there is research out there that suggests humans, with practice, can learn to "control" the amplitudes of certain large scale oscillations (alpha, theta, etc.). I think this has been done in certain ADHD studies, but it's been a while since I read about it.
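
For a rough sense of scale, here's a back-of-the-envelope sketch using the Wolpaw information-transfer-rate formula that's standard in the BCI literature. The speller accuracy and timing numbers are illustrative assumptions on my part, not figures from the article:

    # Wolpaw ITR: bits conveyed by one selection among n targets at a given
    # accuracy, then scaled by selections per second.
    # All numbers below are illustrative assumptions.
    import math

    def wolpaw_bits_per_selection(n_targets, accuracy):
        if accuracy >= 1.0:
            return math.log2(n_targets)
        return (math.log2(n_targets)
                + accuracy * math.log2(accuracy)
                + (1 - accuracy) * math.log2((1 - accuracy) / (n_targets - 1)))

    # A typical EEG speller: ~30 targets, ~90% accuracy, one selection every ~4 s
    print(f"EEG speller: ~{wolpaw_bits_per_selection(30, 0.90) / 4:.1f} bits/s")

    # Touch typing at 60 WPM, ~5 chars/word, ~1.3 bits/char of actual information
    print(f"Keyboard:    ~{60 * 5 / 60 * 1.3:.1f} bits/s")

That works out to roughly 1 bit/s for the EEG speller versus several bits per second (and far more raw key-event bandwidth) for a keyboard, which is why "a few bits per second" is about the ceiling people quote for non-invasive rigs.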


The funny thing about this whole discussion is that invasive BCIs are also quite low-bandwidth. Even a Utah array under ideal circumstances recording 50ish neurons is still much worse than a mouse or a keyboard.

People seem to assume that it’s possible to get a higher bandwidth connection by tapping into neural activity directly, but it’s not obvious to me that should be true. After all, our whole nervous system is optimized for controlling our bodies. I’m not saying it’s impossible to build a neural interface that’s better than the one already plugged into your brain (your body), but I am saying nobody has come close yet.
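
To put a number on "much worse than a mouse": the usual HCI yardstick is Fitts's-law throughput, the index of difficulty log2(D/W + 1) divided by movement time. The movement times below are illustrative guesses on my part, not measurements from any particular study:

    import math

    def fitts_throughput(distance, width, movement_time_s):
        # Index of difficulty in bits, divided by how long the movement takes
        return math.log2(distance / width + 1) / movement_time_s

    # Same pointing task: move 400 px to a 40 px target
    print(f"Mouse (~0.8 s per move):    ~{fitts_throughput(400, 40, 0.8):.1f} bits/s")
    print(f"BCI cursor (~3 s per move): ~{fitts_throughput(400, 40, 3.0):.1f} bits/s")

Even giving the implanted cursor a fairly generous movement time, it lands around 1 bit/s, while an ordinary mouse comfortably clears 4.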


"Rainbows End" by Vernor Vinge has all the younger people (and those grown-ups who are hip) "wearing". Their clothes are (in some unspecified way) I/O devices, so that they learn from quite young how to communicate and control things through imperceptible movements. Teaching her grandfather how to "wear" is part of how the main protagonist gets caught up in the events of the novel. The novel is pretty vague about how much bandwidth is involved, but the convenience is the thing - you are wearing clothes all the time (or at least during waking hours if you prefer to sleep naked) and so if you've learned it then replying "Yes!" to a friend's question about whether you want them to save you some apple pie becomes no harder than nodding is, except it works at a distance.


There are stroke victims who are trapped inside a body that is no longer operational. Sometimes all they can manage is to move one or two appendages. If this computer can free these kinds of shut-ins, it will be life-changing.


If anyone is looking for more information about locked-in syndrome, the book "Consciousness and the Brain" by Stanislas Dehaene has some chapters about a few of these patients. Dehaene details how researchers use fMRI to communicate with some of the patients. It's a very interesting read.


Hello, if anybody on this project is reading this then you really really need my help. I am quadriplegic and this looks like it would make my life about 14.3 million percent easier (roughly).

My email address is in my profile and I beta test new devices from an accessibility perspective professionally and just as part of making my life easier!

Very cool advances, would love to be involved.

Thanks, Stuart.


I smell a hype cycle revving up.

There are no actual images of the device, nor any videos of it being used. Just a handful of stills that supposedly show it in use, plus descriptions of how it works.


I have personally used a similar device and administered research on the ability to train yourself to control interfaces using EMG readings at a muscle site. A challenge to making this attractive as a consumer product is that it required an adhesive electrode pad, and it doesn't work so well if you are hairy where the electrode needs to go. If this isn't resolved with the new armband, that could be why they aren't showing much.


Looks to be a similar device to the Myo Armband: https://www.myo.com/


Indeed, Thalmic Labs (YC W13) had an article about the Myo in the same publication nearly five years ago:

https://www.scientificamerican.com/article/forearm-gestures-...


Yes, quite similar. I wonder how they differentiate themselves.


From the article:

> Previously, researchers have been able to detect up to 25 individual motor units. Reardon claims to have greatly surpassed that, recording about 100 individual units

So basically, they have achieved much greater throughput, and that number is going up.
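
How much that buys depends on how the units get used. Two toy models (my assumptions, not anything the article claims): if one unit fires at a time and acts as a selector, information per choice grows only as log2(N); if each unit can be driven as an independent on/off channel, capacity grows linearly with N:

    import math

    for n in (25, 100):
        print(f"{n:3d} units: selector model = {math.log2(n):.2f} bits/choice, "
              f"independent-channel model = {n} bits/update")

So 25 -> 100 units is either a modest gain (about 4.6 -> 6.6 bits per discrete choice) or a 4x gain, depending on how independently people can actually drive the units, which circles back to the top comment's objection.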


Hopefully it's actually useful. I pre-ordered the Myo, used it a couple of times, and have left it in a drawer since. It doesn't work reliably enough, even with simple gestures, to take over a basic task in your workflow.

I had hoped to use it to switch work spaces or bring up frequent apps, but it was easier to do things manually rather than have the gesture fail 50% of the time.


I had a similar experience with the Myo. I found that using the conductive gel they use on heart monitors helps considerably.


The world's most advanced BCI - the hand!

But seriously, I don't understand the problem they're trying to solve here. Nobody is going to learn to stimulate their muscles in new ways for an interface - an interface is supposed to make things easier. And if they're not using new stimulation patterns, or firing individual motor units, they're just moving their (arms|eyes|faces|...), which can be detected easily enough with a camera.

Not to say that the technology isn't interesting, but I think until we understand the brain much better than we do currently, the only use for BCIs is precisely the market they dismiss in the introduction - the severely disabled.


One of my dreams is to be able to move freely -- take walks! -- while editing text in my head. I'd absolutely spend hours and hours to learn to have a keyboard's functionality with, say, fingertip movements in the air. I don't know how a display would work. Of course the real dream is editing the text directly in the mind, some kind of thought-files, but I've imagined that glasses might be best as they're the least intrusive and possible in my lifetime. But even some kind of physical monitor might work: hold with one hand and write with the other.

Isn't that a pencil and notebook?

Yes, so far that's the best for walks, but I'd love to have a computer's functionality, be able to code, pull up files, copy and paste, etc., while moving, as opposed to stopping to jot something down and then moving again.

Slowly walking is just so great for thinking.

I'd even thought of a walking harness for my laptop, but it's just so absurdly awkward and even dangerous... :)

So what about a smartphone or tablet?

A great new input method might do the trick. I already use Swype on iOS despite the many (often embarrassing) mistakes I make. It's the second-best for walks after a pencil and notepad.

But a wrist sensor that let me type with little finger movements? I'd be so so so into that.


Hi. I've got something to show you. I have the same problem you describe, and that's the inspiration for what I've built. I can let you try it if you'll give me feedback. How can I reach you? My contact is my username @opdig.com.


Have you considered learning stenography? The Open Steno Project is very DIY, and I'm sure they'd be thrilled to help think up an even more mobile design for a steno writer (though AFAIK they're already very usable when walking around with a harness). When it comes to novel input methods, I really think it's a great idea far too few people know about.


Sounds like you'd be interested in this:

https://www.tapwithus.com/

It's probably junk if not vapourware; at best maybe a stepping stone towards our VR-ified future. I ordered one just to check it out.... so we'll see :)


I use a Twiddler while walking. It took a couple of weeks to learn to type with one hand, but eventually it becomes less of a headache. I don't use it with any device; I just use it to write down my thoughts while I walk the dogs and get some air. When I get back home I dump what I've written. You can easily use it for other things as well, such as a specific chord that triggers a voice recording, or other computational tasks.
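
If anyone's curious what "a specific chord" amounts to in practice, it's basically a lookup from the set of keys held down to an action. A hypothetical sketch (the key names and actions here are made up for illustration; the Twiddler's real chord tables live in its own configuration tool):

    # Map each chord (set of keys held together) to an action name.
    # Entirely hypothetical bindings, just to show the shape of the idea.
    CHORD_ACTIONS = {
        frozenset({"thumb", "index"}):  "start_voice_recording",
        frozenset({"thumb", "middle"}): "stop_voice_recording",
        frozenset({"index", "middle"}): "new_note",
    }

    def dispatch(pressed_keys):
        # Return the action bound to the currently held chord, if any
        return CHORD_ACTIONS.get(frozenset(pressed_keys))

    print(dispatch(["thumb", "index"]))  # -> start_voice_recording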


It sounds like speech-to-text dictation is honestly your best bet for walking-and-writing. I can't imagine much that will help with walking-and-coding because of the concentration required. I'll trip on stuff even without any impediment to my vision. Just my guess!


Right, nobody is going to learn how to type on a keyboard when they could just write with a pen. /s

Interfaces are supposed to make things easier, but they are not required to improve every possible application. If this interface makes just some things easier or possible at all, it will be a win.

Using a camera would require mounting it somewhere where it can see the hand at all times, which precludes mobile uses. There's also the problem of exhausting yourself by using gestural interfaces for too long, where just minimal muscle twitches would probably be more comfortable.


Interfaces are not necessarily supposed to make things easier. They're supposed to let the user achieve things they otherwise could not do at all (there is no easing of "can't do it"), or things that would otherwise take decidedly more effort than the effort of learning the interface (there is an easing, but it's weighed against the cost of change/learning). If you can't meet the bar of those criteria, you should always revert to the principle of least surprise.


> (there is no easing of "can't do it")

On the contrary, if something is impossible then making it possible most certainly makes it easier!


"I am able to do X." "It is now easier for you to be able to do X." "I am unable to do X." "It is now easier for you to be unable to do X." The latter makes no sense. Moving from the impossible to the possible isn't an easing, it's a fundamental change in the nature of a thing.


Depends on your definitions, I guess. If you define "easier" as "decreases the difficulty of doing X" then it makes sense as you've reduced an infinite difficulty down to a finite difficulty.


Note you changed the nature of the problem by turning "infinite difficulty" into "finite difficulty." Subtract all the difficulty you want from "infinite" and you still end up with infinite, unless of course you subtract infinite difficulty from it... which means you qualitatively changed it from infinite difficulty to merely difficult.


Hmm, you're right about keyboards. My gut still says that learning a new interface is an unlikely thing to expect of consumers, but it'll obviously need more thought.

My issue certainly isn't with interfaces needing to make everything easier; I was questioning whether this benefits any substantial use case at all. But I'll concede my point until and unless I refine my argument about people being unwilling to invest effort in a new interface.


The same could have been (and was) said about the personal computer in the 70s! While this exact wristband may not replace keyboards, I think biological interfaces are definitely worth researching. All it takes is a kid creating a beautiful painting or piece of music with a new tool to start the snowball effect.

That said, the theremin was a new musical interface and didn't ever take off because it produced a strange sound quality and did not map well to preexisting human motions. Furthermore, music creation was already well understood.

Indeed, machines need to be built for the human, not the other way around. I think it's a trade-off between two factors: how easy the interface is to learn and what it enables us to do. For a (contrived) example, an interface that allows humans to teleport will be learned regardless of how steep the learning curve is. Teleportation is a "zero to one" technology; it more or less competes with nothing. A new kind of keyboard, on the other hand, must compete with the existing solution for keyboards, so it must be easy to learn, otherwise there would be no incentive to switch.


I'd say your comment puts us at stage 2.

New ideas pass through three periods:

1) It can’t be done.

2) It probably can be done, but it’s not worth doing.

3) I knew it was a good idea all along!

Sir Arthur C. Clarke


Sure, but the point is only good as a quip. We can probably make nuclear powered vacuum cleaners, but it's not worth doing.


Except that they exist already...anytime you plug in a vacuum cleaner that is run off of electricity generated by a nuclear power plant.


Learning a new interface isn't especially novel for consumers; most of the interfaces you take for granted were adopted in the last 30-40 years. The necessary tipping point is that the reward for learning a new interface must, from the consumer's point of view, exceed the cost of the learning effort. If it does not, then you should revert to the principle of least surprise.


Mobile computing, I'd assume.


"An interface is supposed to make things easier"

I disagree with this, but I'm struggling to counter it with something pithy. The problem is that there are so many ways to compare interfaces. The best I can do is point at humanity's adoption of the keyboard, which was a monumental shift. There are so many angles that you could write a book on the topic, and I'm sure plenty have been written.

The keyboard predates the computer, but they're so closely connected that it's hard to pick their relationship apart. It wasn't just a happy coincidence that typewriters were lying around with the perfect input method when we invented the computer. Babbage built his computing machines without any obvious keyboard-like device (AFAIK), but Turing laid the foundation for general-purpose computing while working to crack the Enigma, a machine with a keyboard.

So during the World War II era, it was inevitable that the typed character was the future, but you'd have struggled to convince people that there was a point in learning how to use a keyboard. You don't need a typewriter to write a letter to someone, and besides, it's so much uglier and more impersonal than your own handwriting. Women could learn to type and get jobs as secretaries, but this also meant that their bosses didn't have to learn. My grandfather was an accountant during this time, and just like any other accountant, he did his work on specially ruled sheets of paper, pen in hand. Even in academia today, we have stuff like LaTeX, but people don't actually work in that, right? I claim ignorance, but I'm assuming people still do everything by hand and then figure out how to format it on a computer.

So what defines easy? None of that was easy. In many examples, it's so much harder, not to mention more expensive. Most importantly, no interface comes naturally. Typing is a learned skill, and even if you're young enough to have grown up with keyboards, you still sucked at one point. The same is true of handwriting. In my mind, "easy" only comes into play when you're comparing two interfaces that use the same fundamental principles. If I build a QWERTY keyboard that requires you to apply ten pounds of pressure to register a keystroke, you can point and say, "that is supposed to be easier."

I guess my point is just that you have to weigh the potential use cases. A full-size keyboard is best for programming, or for writing an internet comment, but it's not ideal for keeping a journal while hiking in the mountains.

To me, it seems like the handheld computer is still yearning for the right input method, and the need grows stronger as the computers get more powerful. Forget comparing the latest smartphone processors to the computers that got us to the moon; they're faster than the ones we bought a few years ago and still have in our closets! But they aren't that useful, and it's so clearly because of that goddamn little touchscreen keyboard (or, depending on how you look at it, our own stubby thumbs). We've tried so many things, even returning to handwriting, but they haven't worked. So we keep trying.

The way I see it, we're blocked and the only clear way forward is to devise a novel character input method, which is what these guys are trying to do. I hope that works out. If not, maybe we'll eventually be driven to completely discard the typed character in search of something more expressive and natural for human-computer interfaces. Of course, we can just give up and admit that this 19th century invention is the best we'll ever do.


I wonder if the same technology could be used on the neck or spine effectively.


It probably can. With some training, the 'ear wiggling muscle' is often functional even for quadriplegics. You can read more here, from the lab I RA'd for in college: https://research.engineering.ucdavis.edu/rascal/publications...


I can control my computer with my brain using my fingers.


Definitely read that as Twitch.tv.

Oh the ideas...



