Hackers Can Kill Diabetics w/ Insulin Pumps? Facts vs. Fear Mongering (hanselman.com)
163 points by phsr on Aug 5, 2011 | 50 comments



As a diabetic (something few of the posters seem to be) I find this discussion quite interesting, if a little wrong in some of its assumptions.

The first is the idea that turning off the pump will cause the wearer to expire. In most cases, not true. If you want to kill your target (and let's wave our magic wand and enable the hack, skipping the tech problems mentioned), you are going about it the wrong way. Don't turn off the supply---turn it up, way up. You need to create an overdose based on the size of the individual and their tolerance to insulin. Now, without knowing the details of the pump industry, I'd guess that there are built-in limiters concerning overdoses. This makes the problem far more challenging, even if you know the individual in question. How often do you discuss with your diabetic friends just how many units it would take to kill them? At a guess, even if you know they are diabetic, this is probably not part of normal conversation.

There is also the assumption that the wearer never checks his equipment. In the single photo in the article, I notice a screen crowded with information. Again jumping over the problems listed both in the article and here, the hack would have to adjust the display so as not to warn the victim. Given the inability to decipher the signals transmitted, this seems problematic at best.

No, I think the best method of attack is the one with a hammer---'Wow, you wear a pump, huh? Can I see it?' (victim looks down to pull up shirt) villain applies Maxwell's hammer as solution.


I helped watch a friend's kid last weekend who has a remote-controlled insulin pump. The remote control refuses to dose without a recent blood-sugar test. Kid wants to eat? No, you need to wait; we need to do a finger-prick test first ON the remote so it knows your glucose level.

That, and the remote needs to establish an insulin baseline every few hours.

It's unclear whether the dose limiting is also hard-coded into the pump or is on the remote side only.

It does seem like a "one in a billion" attack but, given time and repeated access to the pump radio, it seems possible, to say the least.

I assume that an adult diabetic is very aware of how glucose levels affect their ability to function and would notice when they start to drop off unexpectedly.


Even if you could manipulate the pump control itself, you still aren't disabling the feature that shows the blood glucose level (which the person is certainly monitoring), or the alarms that most likely go off when the values are too far outside of center.

The most likely outcome of a hacker gaining control is that the user sees the insulin pump is screwing up, then just takes it off and uses manual injections or sees their doctor.


The notion that you can kill a person with diabetes by hacking their insulin pump is absurd. I can't think of an insulin pump that does not have a setting to limit the maximum bolus. In addition, the setting typically has a sane value and is enabled by default. Further, when a pump is set up with a doctor/nurse practitioner, this value is set to a number that is tuned to the person with diabetes. There is also feedback when the pump is delivering insulin. I know this is the case with Animas and Medtronic pumps.

So even if someone got in range, had your serial number, knew the protocol and attempted an insane dosage, the worst that would happen is that someone didn't notice the delivery feedback and got hit with the max bolus. While this would be the worst-case breach, it is not lethal. Within an hour, the victim will feel hypoglycemic, check their blood glucose and correct it.


I'm much less concerned about vulnerabilities which will allow people to kill me than about vulnerabilities which will enable people to steal my data or money.

There are far more people who want to steal my data or money than who want to kill me, and if somebody does want to kill me and can get within range of me, then there are several thousand other ways to do it.


You're assuming mass harvesting in the first instance (theft), and someone with a personal grudge against you in the second (murder).

I wouldn't be worried about Mysterious Assassin out to kill Pavel Lishin, Diabetic With A Pump. I'd be worried about Teenage Sociopath, war-driving past a clinic.


It could happen. Teenage sociopaths exist. But they're very, very rare. You're falling victim to the Columbine fallacy here. TV news makes a poor reference for risk management.

But yes: if this exploit is easy, eventually some diabetic is going to be murdered through it. Far more diabetics will kill themselves via poor diet choices, however.


And even the Columbine sociopaths were targeting people they knew, rather than random strangers. Random murderers targeting random strangers are even rarer.


But I would hope that even Teenage Sociopath is smart enough to figure out that this is equivalent to just shooting up the clinic with a gun.

Or, even if you think he's less likely to get caught, the world is full of undetectable opportunities to kill random people. Poison the fruit at the supermarket. Sabotage the train line. It doesn't happen very often.


Two points. First: why argue about this at all? Securing known-in-advance endpoints isn't impossible. Why not just do it, and stop arguing?

Second: The relevant risk metric isn't "For a given person, is someone going to try to kill them today?" The relevant metric is "For all such remote-controlled implanted devices (including not just "insulin pumps" but any potentially dangerous device) placed in all human beings across the entire lifetime of both the implanted devices and the human beings, what are the odds someone will be attacked via this vulnerability and physically harmed?" to which the answer is trivial: 100%, rounded to the nearest thousandth of a percent.

Don't argue about it, fix it.

(Oh, and narrowing it down to "technically-skilled teenage sociopaths" is cognitively hazardous, even in casual debate. That's not even close to the full threat model and encourages subconscious dismissal of very real threats not sourced from that narrow group.)


The gentleman who wrote this post takes an approach I'm not comfortable supporting: the signal and commands haven't been successfully reverse engineered yet, so this isn't a real threat.

A little bit about my background: 10+ years successfully (legally) reverse engineering software technology that required both client software and packet manipulation in industries that have been very proactive against it.

Seeing as these medical devices are hardware items issued to unique individual recipients, the issue could easily be fixed with a 1024-bit (or larger) public/private key pair between the devices, unique to each issuance.

However, this does nothing to protect the many millions of individuals, using today's devices, potentially exposed to the threat described by Jay Radcliffe.


It's a little less severe than that. First the device has to support remote management, then the device has to have remote management turned on, and finally the attacker would have to have the device's serial number (which seems to be used as a security mechanism) in order to successfully send the device commands.

Also, if you don't like needles, don't watch the YouTube video at the bottom of the post :|


This is assuming you only want to control the pump. If the individual is unable to view any information on their monitor, or their monitor is displaying improper data, it may cause other serious health issues in high risk patients.

Not to mention that some devices may be controlled by the monitoring device and it may require a constant stream of good data.

I agree that not all setups and individuals are at risk but some most likely are.


RSA is a little too computationally intensive for these devices, I'm afraid.
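For what it's worth, public-key crypto isn't the only option. A sketch of a lighter-weight alternative: a per-device symmetric key with an HMAC tag and a monotonic counter on each command. Everything here is illustrative (the key provisioning, packet layout, and function names are assumptions, not any vendor's actual protocol):

```python
import hmac, hashlib, struct

# Hypothetical per-device secret, provisioned at manufacture/pairing time.
KEY = b"per-device-secret-provisioned-at-pairing"

def make_command(counter: int, bolus_units: float) -> bytes:
    """Remote side: serialize the command and append an HMAC-SHA256 tag."""
    body = struct.pack(">Qd", counter, bolus_units)
    tag = hmac.new(KEY, body, hashlib.sha256).digest()
    return body + tag

def verify_command(packet: bytes, last_counter: int):
    """Pump side: check the tag, reject stale counters (replay protection).
    Returns (counter, bolus_units) on success, None on any failure."""
    body, tag = packet[:-32], packet[-32:]
    expected = hmac.new(KEY, body, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return None  # forged or corrupted packet
    counter, units = struct.unpack(">Qd", body)
    if counter <= last_counter:
        return None  # replayed packet
    return counter, units

pkt = make_command(counter=42, bolus_units=1.5)
assert verify_command(pkt, last_counter=41) == (42, 1.5)
assert verify_command(pkt, last_counter=42) is None            # replay rejected
assert verify_command(bytes([pkt[0] ^ 1]) + pkt[1:], 41) is None  # tamper rejected
```

A single HMAC-SHA256 per command is orders of magnitude cheaper than an RSA operation, which is why symmetric authentication is the usual answer on constrained hardware.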


I'd have liked a medical approach to this FUD.

Can anyone with more insight than me (medical background perhaps? Or 'experienced' diabetic, since I think this leads to a specific background just as well) tell me what attack vector this could open?

I don't want to play this down, the argument just doesn't match with what I (think to) know, so - please educate me.

Isn't the maximum dose limited by the pump? And the models I've seen seem to take a long time to inject anything (with a stepper motor, on these things).

What could you do to the 'victim'?

Suppressing the basal/ongoing rate would send them to a high level of blood sugar, something that I'd expect leads to a very clear reaction: the person, if experienced, will feel nasty, check the pump (maybe the battery died and you didn't hear the alarm; maybe something went wrong with the injection needle), measure glucose level again and, depending on the result, apply a 'fast' insulin via direct injection. Am I glossing over something here?

Injecting a large(r) amount of insulin would, with a delay that seems to be related to the type of insulin used, send the person into dangerously low levels of blood sugar. Unless this hits all at once, though, I'd again expect the person to _know_ that there's something wrong if they start craving every food they can imagine. Probably you'll feel like shit and start shaking, and so on. I assume this is the more dangerous route, but again the first reaction is probably 'Fuck diabetes, what's going on with my levels', a check of the current sugar levels and direct countermeasures (if it's not too bad: juice, fructose, etc. Otherwise you probably again have an injection nearby).

After typing all this I DO wonder what happens if someone causes this in your sleep though...

So - can someone tell me how wrong I am and tell me about the purely medical dangers?


Dropping an unexpected dose of insulin would be much more dangerous than simply disabling the pump.

Disabling the pump would result in an increasing level of blood glucose over a fairly long period of time (likely a day or two). I would be very surprised if a 'victim' didn't notice the issue with their pump long before there were any detectable side effects.

Dumping an unexpected shot of insulin into the victim's system would crash their blood glucose over a fairly short period of time (an hour or two). One of the first side effects of hypoglycemia is confusion, which would reduce their chances of noticing something is wrong.

Long story short: I wouldn't worry about "DoS'sing" the pump, but I would worry about triggering extra insulin "dosings".

My experience with insulin pumps is limited to dealing with patients who are having some sort of blood glucose related emergency (as an EMT).


I'm unsure where I stand on this subject, but this excerpt from Jaron Lanier's "You Are Not A Gadget" seems relevant:

"There are respectable academic conferences devoted to methods of violating sanctities of all kinds. The only criterion is that researchers come up with some way of using digital technology to harm innocent people who thought they were safe. ...

"If the same researchers had done something similar without digital technology, they would at the very least have lost their jobs. Suppose they had spent a couple of years and significant funds figuring out how to rig a washing machine to poison clothing in order to (hypothetically) kill a child once dressed."


The threat model is totally different, so the metaphor fails horribly. I could, theoretically, do some horrible thing to a chemical plant, but if I'm going to put that much physical effort into it, driving a truck bomb up to it is way easier. Preventing that attack is infeasibly expensive. So the obscure stuff is hardly relevant anyhow, when the straightforward stuff works fine.

I could, theoretically, hack a medical device to do something horrible... in which case I might be able to kill someone untraceably, from the other side of the country, with no consequences to myself, and possibly dozens or thousands of people at a time, when all it would have taken to protect against this is a programmer adding one line of code to check for the buffer overflow, or using one of the languages designed to prevent buffer overflows.

The two things are so different that they just aren't comparable. We aren't going to secure our electronic devices by not spending time thinking about how to secure them, and that thinking can't help but also manifest as ways to attack them.


Don't people get paid to look for security vulnerabilities in pretty much any engineering field? There are people who work full-time on thinking up ways that a terrorist could potentially rig a chemical plant to release deadly gases, for example.


I suppose there are, but I'm not so sure they have fancy open conferences devoted to chemical plant terrorism.

Like I said, I don't know where I stand.


Would you feel more comfortable if a few smart people figured out the same things and didn't tell anyone about it, but instead used it for profit or to cause harm to innocent people?


As per the other post:

tl;dr

Scott's most relevant points:

1. "This is a key fob that looks like a car alarm beeper that some pump users use to discreetly give themselves insulin doses. However, I feel the need to point out as a pump wearer myself that:

Not every Insulin Pump has a remote control feature. Not every remote-controllable insulin pump has that feature turned on. Mine does not, for example."

2. "all he requires to perpetrate the hack is the target pump's serial number. This is like saying "I can open your garage door with a 3rd party garage door opener. Just give me the numbers off the side of your unit..."

3. If you are a diabetic on a pump who is concerned about this kind of thing, my suggestion is to turn off your pump's remote control feature (which is likely off anyway) and turn off your sensor radio when you are not wearing your CGM. Most of all, don't panic. Call the manufacturer and express your concern. In my experience, pump manufacturers do not mess around with this stuff. I'm not overly concerned.

--

Also - someone asked how much entropy is in the serial IDs on these units?

Even if entropy is low, how are you going to randomly select a person and know their serial ID? You'd have to know what units are distributed to what hospitals/doctors, at exact times, in exact shipments, and then from the sample delivered know the exact unit given to any person at any particular time.

Sure, if you know a "set of id's" you could try each one sequentially until you finally get a hit - but even then, you must somehow ensure the person being targeted has remote connection turned on. I'm pretty sure walking up to them and saying "oh, hai 'dere! ... plz turn on ur remotz connetz'n 4 me?" [ said in this voice - http://www.youtube.com/watch?v=xh_9QhRzJEs ] - is going to make them pretty suspicious.

There are a lot of "ifs" in there and frankly, if your aim was to kill them, it would be a lot faster to do it some other way, because your chances of actually getting all these things to line up perfectly are pretty slim.

I'm a bit of a sceptic on this 'hacking' - not that it isn't great that it has been uncovered - but you're dealing with minute hardware where every single ms of processing power counts. Simple encryption should be utilized [but then this might be easily hacked anyway?], but for units placed inside the body [pacemakers and the like], splitting the unit's resources between keeping the patient alive vs. encryption for wireless protocols seems to weigh more heavily on the former than the latter, given how unlikely these 'attacks' are going to be for the majority of the world.


Would you trust your life on a computer not being able to count from one to ten billion? From one to ten million? From one to a thousand?

Computers can, in fact, trivially do all of these things. Counting to large numbers quickly is what they do best. Accordingly, if "Guess my large number" is sufficient to remote control the machine, then that's a pretty critical finding.

And there is no work that the attacker can do which will make his life more difficult. Trivial inspection of any machine establishes the upper bound of how hard it will be to compromise. Any attack he can use to reduce entropy only makes that number shrink, potentially radically. We would worry if you could shrink a 2048-bit key even by a bit, because it suggests a hidden systemic weakness. The second serial number examined is likely to shrink the keyspace - which will not be 2048 bits to begin with - by tens of bits.

There are classes of attackers for whom killing a single named individual is not a goal. "Oh drats, we were only able to kill fifteen people chosen at random from this hospital, Superdome, or session of Congress" would not be a failure condition for them, or a victory condition for the public.


To be perfectly clear: a quick Google confirms (at least one type of) insulin pump has 8 digit serial numbers. It also appears the first digit is 1 or 0.

Serial numbers usually have a check digit, so it is likely we are down to only 6 digits.

That's easily brute-forceable.
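Back-of-the-envelope, using the 5,000 guesses/sec rate assumed elsewhere in the thread (an assumption, not a measured figure):

```python
# Exhausting a 6-decimal-digit serial space at an assumed guess rate.
keyspace = 10 ** 6   # 6 decimal digits
rate = 5000          # guesses per second (assumed upthread)
print(keyspace / rate)  # 200.0 seconds, i.e. under four minutes
```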


Strictly speaking, isn't "guess my large number" sufficient to break most encryption protocols that don't rely on security by obscurity? It's just that the numbers are normally larger by dozens of orders of magnitude.


Yes, but you are ignoring the time factor. Assume I'm going to die of natural causes in 60 years. If it takes a minute to guess a 6 digit number, that is really bad. If it takes 1000 years to guess a number that is larger by dozens of orders of magnitude, odds are pretty good I'm going to die of natural causes before the attacker guesses the right number.


The "Just give me the numbers..." counter-argument isn't valid.

Right now, a hacker can kill a specific person, within 30 days, given the following assumptions:

  - that person is wearing an insulin pump with the
    remote control feature turned ON

  - the serial number is 32-bits or less

  - the attacker can test 5000 serial numbers per second
    for at least 8 hours per day, every day
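As a sanity check, the 30-day figure follows directly from those assumed numbers:

```python
# Worst case implied by the assumptions above: full 32-bit serial space,
# searched at 5,000 guesses/sec for 8 hours per day.
keyspace = 2 ** 32                   # serial number is 32 bits or less
guesses_per_day = 5000 * 8 * 3600    # 144,000,000 guesses per day
print(keyspace / guesses_per_day)    # ~29.8 days
```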

So, given those assumptions, here's a scary scenario: Let's say a hacker wants to kill you, and knows where you live. He builds a transmitter and plants it next to your house, for example behind your air conditioner. The device is configured to 1) detect when you're there, then 2) try to guess your serial number every second you're within range, then 3) kill you.

If the attacker then retrieves the device (so it doesn't fall into the hands of law enforcement), there would be absolutely no way to prove he killed you.

Obviously, this is an incredibly unlikely sequence of events. Nevertheless it IS possible, which is very irresponsible of the medical industry.


So, given those assumptions, here's a scary scenario: Let's say a hacker wants to kill you, and knows where you live.

Most premeditated murder is perpetrated by someone very familiar with the victims.

The device is configured to 1) detect when you're there, then 2) try to guess your serial number every second you're within range, then 3) kill you.

Better yet, just get the serial number.

If the attacker then retrieves the device (so it doesn't fall into the hands of law enforcement), there would be absolutely no way to prove he killed you.

Put yourself in the shoes of the prosecutor. How are you going to explain all this to the jury? In how many ways will the defense be able to attack the delicate task of explaining the technical details?

Obviously, this is an incredibly unlikely sequence of events. Nevertheless it IS possible, which is very irresponsible of the medical industry.

The "alibi machine" aspect of this scenario actually makes it more likely.


Right now, a hacker can kill a specific person, in 5 seconds, given the following assumptions:

   -- He is standing next to the victim.
   -- He has a hammer.


Yes, but doing it with the insulin pump:

    - Makes it look like a medical emergency
    - Doesn't splatter you with bodily fluids
    - Can be executed in a way that gives you an alibi


> - Makes it look like a medical emergency

A critical insulin overdose with the pump log full of remote access entries will look like murder.

Anyway, dropping poison in their drink while they're in the bathroom is easier, cheaper, more practical and gives the same "benefits".


I disagree. This doesn't give you the alibi, since you have to be there in the same room to drop it. With the wireless mechanism, you could never be in the same room as the victim that day. You might never be within 50 feet of the victim. You might not be spattered with blood, but you might leave physical evidence of your presence at the scene. Logs can be electronically erased, which you can't do with metabolites of poison in the bloodstream.


My way doesn't sound like a bad episode of CSI.


Yeah, it's more like a typical episode of Dexter. (Not Dexter, but that week's killer/victim.)


The police can also look for someone with a motive to kill you, and filter that by who might have the hacking expertise. Do a search of this guy's premises for such a transmitter or equipment to build it, and you have a prime suspect.


I've never encountered a community as poor at cost/benefit analysis as computer security. You see it every time some new "irresponsible" loophole is gleefully broadcast by some smug cracker. There are far, FAR more economical and efficient ways of getting away with murder. I mean, several orders of magnitude easier.


Excuse me for being unrealistic, but I like to think that I should not be able to kill someone with a GNU Radio setup and a cheap laptop.

I am not actually afraid that people are going to start doing this; however, such flaws and failures in security thinking are systemic. Bad security is not limited to insulin pumps, but insulin pumps are a great way of getting the public's attention and (hopefully) getting programmers to consider the impact their laziness could have on the world.


I like to think that I should not be able to kill someone with a GNU Radio setup and a cheap laptop.

If you want to kill someone, there are considerably cheaper options available at your local big-box store's home and garden center.

Seriously, though, I do agree with your concern about systemic problems in security thinking. Given the vastly more concentrated effort required, I don't think it's a problem that one could theoretically kill with GNU Radio and a laptop, versus any of the hundreds of tools more readily repurposed as a murder weapon, but such exploits are best addressed while they are infeasible.


I've never encountered a community as poor at cost/benefits analysis as computer security.

As the potential cost of attempting murder involves the risk of getting caught, it's entirely reasonable to expect murderers to go to great lengths to conceal their actions, even if it involves highly technical means. If such highly technical means are actually inexpensive and widely available, then this raises the level of concern with regards to the cost/benefit analyses.

In short:

Cost factors "pro" murder through wireless control of medical equipment

    - getting caught is very expensive, so obscure and 
      invisible methods are attractive.  
    - time and materials costs are low for a suitable expert
    - the method enables an alibi


You are still missing my point I am afraid:

"Even if entropy is low, how are you going to randomly select a person and know their serial ID? You'd have to know what units are distributed to what hospitals/doctors, at exact times, in exact shipments, and then from the sample delivered know the exact unit given to any person at any particular time."

A sufficiently low entropy serial number would mean you don't need to know those things. Because it is low entropy.


sure

1) if a person has diabetes [in this context]

2) if you know where this person lives

3) if you know a person uses a remote insulin delivery pump

4) if you know the model of the device

5) if entropy of serial numbers, for this specific device, is low enough

6) if you are able place a device within range to where this person is living for a sustained period

7) if you are able to ensure remote wireless control is on for this period

8) if you are able to then hack the device [edit: remove - in order to find the exact serial number of the device]

9) if you are able to then change the insulin delivery such that it injects too much or too little

10) if the person is unaware of such a change through external bio-identifiable changes [ http://en.wikipedia.org/wiki/Diabetes_mellitus#Signs_and_sym... ]

11) if the person consequently is unable to reach medical assistance within such a period in order to receive more or less insulin, resulting in their death

12) if you are able to remove any evidence of tampering with their device without being caught

yes, then this is a concern and low entropy is relevant. But pragmatically and realistically - if you can do all this - then you could execute a murder anyway :P


Point 5 is the only one that I am concerning myself with. If it is low, I consider it a security failure; if it is high, then I don't care. I don't give a shit about insulin machines and murder plots; I am concerned with the technical implementation of security. I'm not sure what you are getting hung up on.

Also, point 8 makes absolutely no sense.


Read my original comment

"but you're dealing with minute hardware where every single ms of processing power counts. Simple encryption should be utilized [but then this might be easily hacked anyway?], but for units placed inside the body [pacemakers and the like], splitting the unit's resources between keeping the patient alive vs. encryption for wireless protocols seems to weigh more heavily on the former than the latter, given how unlikely these 'attacks' are going to be for the majority of the world."

So your solution is to increase security to the point that it compromises the functionality of the device itself? High security, poor battery life? High security, high replacement cost?

"Point 5 is the only one that I am concerning myself with." - and delivering insulin isn't important ?

Get realistic - security loopholes are only as important as what you are trying to practically protect and at what cost with what risk. This is what I am trying to make evident.

The assumption that every single manufacturer in the medical industry hasn't considered the security of remote devices seems a far stretch to me, given the prominence of medical litigation and the fact that you're dealing with someone's life. Is a high security cost, high device cost, low battery life and therefore low adoption for patients and community accessibility acceptable? No, it's not.

The world is not based on everyone wanting to kill each other because the entropy of serial numbers [which neither you nor I know anything about] is low, so that they can hack insulin devices and kill someone. That said, it needs to be fixed, with a balance against risk and all these other factors.


Read my original comment:

  "2. "all he requires to perpetrate the hack is the target pump's
  serial number."

  Do we know how much entropy is in those? They could very
  well be sequential or date derived.

As you can clearly see, I am objecting to the apparent assertion that requiring the serial number should be considered a mitigating factor if we don't know anything about the entropy of these serial numbers. Without additional information, we should not be comforted by this.

Allow me to be perfectly blunt to get my point across once and for all: I don't give a shit about insulin. I don't give a shit about insulin pumps. I care about misconceptions about security, and improper security implementation. This article serves as nothing to me other than a vehicle to discuss these things.

Most importantly: I am more concerned with your apparent suggestion that "a serial number is probably a sufficient shared secret" than I am with anything in the story. Serial numbers, as a general rule, make terrible shared secrets.


"more concerned with your apparent suggestion"

Where have I made any such "apparent suggestion" in any of my comments? I haven't - I've stated, and at least I believed I made quite clear, that the risk and practicality of using this hack is negligible. I haven't stated that it does not exist or that it should not be fixed. To the contrary - it should be fixed.

You're focusing on a singular aspect in a vacuum. "Improper security implementation" - yes, in this singular vacuum, you're correct, and that's great - it's a concern. But what point is there in focusing on security implementations in a vacuum when you're dealing with real devices on real people and the practicality of exploiting such improper implementations? The entire BlackHat conference is about exposing hacks in vendor-neutral software and devices that affect the real world. As I stated:

"Get realistic - security loopholes are only as important as what you are trying to practically protect and at what cost with what risk. This is what I am trying to make evident."

I'm focusing on the practicality in the real world, as is the entire point of the BlackHat Security Conference. Arguably any device which opens itself to wireless communication could be hacked - and a device like this should have some cryptographic system requiring two separate keys - but at what practical cost is my point.

As Hanselman says in his article, the easiest way to resolve this is just to build in upper and lower limits of insulin delivery. At least then you can't kill someone - but I acknowledge that even controlling it is a concern.

[peace, not trying to get all up and hot in here :)]


Even if entropy is low, how are you going to randomly select a person and know their serial ID?

The real concern isn't random killing. It's targeted killing. Most premeditated murder is targeted, is perpetrated by someone for a specific motivation, and is perpetrated by someone close to the victim.

The sort of medical emergency caused by an insulin pump malfunction doesn't raise the suspicion of today's typical police officer. Likewise, the likelihood that someone has the right medical and technical knowledge to investigate such a crime is much lower.

People covering up a murder by faking a random crime are operating in an area of their own ignorance, versus the expert's knowledge and practice. A hacker killing someone through medical equipment would be operating as the expert, while the officials and professionals whose job it would be to catch him may well be the newbies.

It would be easy to concoct an alibi with a murder technique like this. This is the most concerning thing about it.


To be honest, the facts are kind of boring, and the fear-mongering is kind of dramatic.

I can definitely see how, if they didn't care about the abstract concept of truth, people could prefer to pay attention to the fear-mongering.


You're right, it /would/ make for good TV drama: an evil nursing home wants to kick some malingering diabetics off its rolls in order to make space for newer and more lucrative patients. The management cooks up a scheme to hack the patients' insulin pumps in order to kill people quietly in their sleep. Our crack team of hackers is called to the murder scene when a potential victim's relative, visiting one morning, notices them behaving strangely alongside an anomalously low blood sugar reading!

Yeah, it sounds a bit far-fetched even for TV.


This was the plotline of a 3rd season episode of Law & Order, "Virus". http://www.imdb.com/title/tt0629490/


The Weekly World News once reported that a new, deadly computer virus could make your computer explode. Seems sort of prophetic now, given the media frenzy. http://books.google.com/books?id=9ewDAAAAMBAJ&pg=PA40...



