My journey to a cochlear implant (spottedsun.com)
40 points by veb on Dec 6, 2012 | hide | past | favorite | 20 comments



Many people assume these sound much more like hearing aids than they do. At best, they sound like a steampunk old-time tube radio set with considerable static. See

http://www.utdallas.edu/~loizou/cimplants/cdemos.htm

for some demos. Ones with few channels sound positively terrifying to me.
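Those multi-channel demos are typically built with a noise vocoder, which is also a rough mental model for what the processor does: split the sound into a few bands, keep only each band's envelope, and use that to modulate band-limited noise. A minimal sketch in Python/NumPy (the band edges, envelope window, and FFT-based filtering are my own illustrative choices, not any manufacturer's algorithm):

```python
import numpy as np

def vocode(signal, fs, n_channels):
    """Crude noise-vocoder CI simulation: split the signal into a few
    frequency bands, keep only each band's slowly varying envelope, and
    use it to modulate noise confined to the same band."""
    n = len(signal)
    freqs = np.fft.rfftfreq(n, 1 / fs)
    edges = np.geomspace(100, 8000, n_channels + 1)  # arbitrary band edges
    noise = np.random.randn(n)
    sig_f, noise_f = np.fft.rfft(signal), np.fft.rfft(noise)
    win = fs // 100                                   # ~10 ms envelope window
    smooth = np.ones(win) / win
    out = np.zeros(n)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_mask = (freqs >= lo) & (freqs < hi)
        band = np.fft.irfft(sig_f * band_mask, n)     # band-limited signal
        env = np.convolve(np.abs(band), smooth, mode="same")
        carrier = np.fft.irfft(noise_f * band_mask, n)
        out += env * carrier                          # envelope-modulated noise
    return out
```

Running speech through this with 4 channels versus 16 gives a feel for why the few-channel demos sound so rough: all the fine spectral detail within each band is thrown away and replaced with noise.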


Well, you have something like 21 electrodes wound around the hearing nerve. Of those, usually four or so do not work. The rest is all you have available. Simplistically put, you can encode only about 17 distinct frequencies.

There are some more intricacies involved, but the general message holds: frequency resolution is severely impaired. Time resolution and loudness levels are diminished, too. In fact, many CI patients cannot perceive music any more.

Funnily enough, though, they can usually enjoy music they knew from "before".

Still, it is a wonder of modern medicine to give hearing to people with nothing more than a working hearing nerve. CIs are true nerve-computer interfaces.


> "Ones with few channels sound positively terrifying to me."

I am told (one of my family members works with cochlear implants) that the ideal candidate for a cochlear implant is a young child who has had severe or total hearing loss. This is for two reasons: if you have never heard sound before then the fidelity is far less of an issue, and children can adapt far easier than most adults.


But I do see how it could be terrifying to go from having no sound in your head to suddenly having this sort of sound there (a sense you can't easily get away from).


I know a kid who passed out and hit his head and became deaf as a result. He now has a CI. I've talked to him a little bit about it and how it sounds. He described it as "tinny."

I never thought about asking him about static and what not.


That is fascinating, it makes me wonder if you couldn't use an array of multiple implants to achieve a better level of quality.


There are a number of issues around quality: the number of electrodes is not always the most important factor. The processing algorithm is also vital to good quality, and that can be improved in situ. There's a lot of research that has gone into algorithms that are best suited to speech, music, etc.

More importantly, many patients receiving implants do not have a healthy cochlea, in the sense that many of their auditory nerves may be damaged. In this case the electrodes stimulate the surviving nerves, and additional electrodes may not provide much additional benefit.

Again, the audio quality really comes down to the algorithms: because each patient is different, the processor can require intensive tuning and individual work to get it sounding reasonably decent. Far more research goes into these audio processing algorithms than into increasing the number of electrodes (not least because improving the processing benefits both new and existing patients).


There are approximately 3,000 nerve endings in a working cochlea that need to be stimulated by a few (8, 10, 12, 24) electrodes. So no matter what, it's not going to sound like a fully functioning cochlea does. You can't get the resolution. The algorithms are important, but so is the number of electrodes. Ideally you want 3,000 electrodes perfectly aligned with the nerve endings.

It must be a difficult choice for parents to implant their children. Children make the best candidates, but the technology is changing at such a fast rate. 24 electrodes today could be 3000 a few years from now. Once they are in, you can't replace the electrodes, only the signal processor.


This would be true for a consumer electronics device, but this is something that's inserted inside the body and is expected to function for many, many years. 24 electrode implants have been in the market for over twenty years. The major, major strides in implant technology have generally happened outside the body - speech processing packs have gone from shoulder worn packs to behind-the-ear models.

You need to bear in mind that we're not dealing with traditional technology here: we're dealing with implanted medical devices. This is not a situation where Moore's law applies. It is vastly cheaper to spend R&D developing speech/music processing algorithms than trying to cram more and more electrodes into implants which require extensive testing and certification by the likes of the FDA.

To be honest, it's not a difficult decision at all. If you're happy with your child having an implant[1], you would be crazy to wait a few years 'just in case' the technology improved. Children implanted at an early age can often adapt far more rapidly than those who are older.

But it's a moot argument, because like I say these implants are not like smartphones that get improved every year. It is simply too costly to not only re-certify, but to re-train surgeons and audiologists on the new devices. Developing improvements to the external processing units is many factors cheaper than upgrading the implants themselves.

[1] There are some parents who do not necessarily approve or want their children to get implants, but not for the reasons you suggest. This mainly happens with parents who themselves are deaf, or part of the deaf community. This is a whole other ethical kettle of fish though!


The problem is stimulating the nerves. While these implants are marvelous, they are still pretty crude in the stimulation portion.

The implant has an electrode array that is shoved into the cochlea. In diagrams of the ear, the cochlea is the thing that looks like a snail shell.

The cochlea itself wraps around the auditory nerve, and normally works with the auditory nerve as a pressure transducer. Pressure changes in the cochlea result in nerve stimulation.

Different frequencies of sound can propagate to different lengths of the cochlea and stimulate the nerves in the respective areas. You can think of the nerve bundle as a piano keyboard wrapped in a spiral. Different frequencies correspond to different positions. So having a bunch of electrodes matters for granularity, but positioning in the cochlea is just as important.

With the implants though, they don't try to stimulate via pressure, they just send electrical impulses through the electrodes. The resulting electric field is what stimulates the nerve.

So it's pretty different from normal functions.
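The piano-keyboard picture above is commonly modeled with the Greenwood place-frequency function, which maps position along the cochlea to the frequency that place responds to. A small sketch (the human constants A, a, k are the standard published values; the 22 electrode positions are purely illustrative):

```python
import numpy as np

def greenwood_hz(x):
    """Greenwood place-frequency map for the human cochlea.
    x is position as a fraction of cochlear length:
    0 = apex (low frequencies), 1 = base (high frequencies)."""
    A, a, k = 165.4, 2.1, 0.88
    return A * (10 ** (a * x) - k)

# Illustrative: 22 electrode sites spread over the basal two-thirds
# of the duct (real insertion depths vary by device and patient).
positions = np.linspace(0.33, 1.0, 22)
freqs = greenwood_hz(positions)
```

With these illustrative positions, neighbouring electrodes land roughly a few semitones apart, which matches the coarse "piano with most keys missing" picture: position along the array, not just electrode count, sets which frequencies can be represented.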


  > With the implants though, they don't try to stimulate via
  > pressure, they just send electrical impulses through the 
  > electrodes. The resulting electric field is what
  > stimulates the nerve.
Yes, it is true that electrical stimulation skips a few steps in the signaling cascade, but I don't think that matters, in principle, since the end result for both cases is the generation of action potentials.

In normal hearing, the pressure waves affect perception by mechanically deflecting the stereocilia (hairs) on the hair cells, which open ion channels, which lead to the production of action potentials. Electrical stimulation, on the other hand, opens the ion channels directly (either on the hair cells themselves or a synapse or two upstream), bypassing the mechanical action of the stereocilia entirely. This is actually a good thing, since in sensorineural hearing loss it is often the hairs or hair cells that are damaged or malformed. By interfacing with the nervous system so peripherally, the vast majority of the neural processing in the auditory system is preserved, as opposed to an auditory brainstem implant or stimulation of cortical auditory regions.

Of course there are technical limitations to electrical stimulation: the spatial and temporal spread of the electric field is only a very rough approximation of the pressure waves caused by sound, even with sophisticated acoustic models of the cochlea. But with smaller electronics and increased numbers of channels it should be possible to make the match closer, and perhaps someday indistinguishable for most individuals. These are differences in degree not in kind.
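The spatial spread mentioned here is often modeled, to first order, as exponential current decay with distance along the cochlea, which is why adjacent channels interact. A toy sketch (the decay rate and electrode pitch are assumed numbers for illustration only, not measured values):

```python
import numpy as np

# Toy model of channel interaction: each electrode's current decays
# exponentially along the cochlea, so neighbouring channels overlap.
decay_db_per_mm = 3.0    # assumed spread rate; real values vary widely
spacing_mm = 0.75        # assumed electrode pitch
n_electrodes = 22

x = np.arange(n_electrodes) * spacing_mm    # electrode positions (mm)

def excitation(active_idx):
    """Relative excitation along the array when one electrode fires."""
    dist = np.abs(x - x[active_idx])
    return 10 ** (-decay_db_per_mm * dist / 20.0)

e = excitation(10)
# Under these assumed numbers, the site one pitch away still sees
# roughly three-quarters of the current, blurring the place code
# compared with acoustic stimulation.
```

Packing in more electrodes without reducing this spread mostly adds overlapping channels rather than independent ones, which is one reason extra electrodes give diminishing returns.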


My friend has a daughter who was born deaf, and she and her husband decided to implant her. It was a really hard decision, but she was 4 and still not walking, due to balance problems and because she didn't hear her surroundings. Well, the decision paid off. She is 5 right now, she is walking and running, and she has started saying a few baby words, so the doctors are pretty sure she will be talking soon. I have been in her life for about 4 years and I must say that since she got the implant she has become a much, much happier child.


My wife received a cochlear implant in October, so we are on this journey right now. It has definitely been frustrating, but we have seen progress and we are optimistic about the future. If you are interested you can read about the implant activation process here:

http://hegavememyears.blogspot.com/2012/10/what-i-saw-at-act...


> "From their studies, they found that there were four generations of my family with the A7445G mutation and the pattern showed that it is a maternal mutation, i.e. it is carried out by the females of the family and passed from one generation to the next."

Isn't that always the case with mitochondria, since they only come from the mother?

I am not a geneticist, so this is a real question.


Yes. In the comments, the author mentioned that they know this is redundant, but only if you already know it is.


There's a really cool experimental history behind understanding the cochlea, especially concerning George Zweig, once a particle physicist. A lot of this work was done in the 70s and is, IMHO, some of the best examples of biophysics you can find.

One of the relevant papers you can find on Google is: http://symposium.cshlp.org/content/40/619.full.pdf


Startup request: create a service that could transcribe speech just from video, without the audio. Ideally a mobile app.

Some of the use cases are somewhat creepy, but there are quite a few legitimate ones. In particular, there are a lot of recordings where the sound is broken/terrible and you could "restore" those.

You could start it yourself, but later you could make some kind of marketplace.


That's probably very hard, and then you have the "Bad Lip Reading" channel on YouTube :)


> "The disbelief came when it was time to listen to singular words. With the screen turned off, I got 0%. With the screen turned on, I got 8%. That’s crazy!"

It's really amazing how pervasive (putatively Bayesian) multisensory integration of information is in the brain. Even individuals without impaired hearing have their auditory perceptions strongly affected by other inputs. The McGurk effect is a great example of this:

http://en.wikipedia.org/wiki/McGurk_effect
http://www.youtube.com/watch?v=G-lN8vWm3m0


I really get a kick out of these personal stories that pop up on HN. I'd never find them otherwise.



