CVE-2015-3459 – Hospira Lifecare PCA Infusion Pump (nist.gov)
152 points by aburan28 on May 4, 2015 | 82 comments



I worked for Hospira last year on the Plum A+ and 360 infusion pumps, but not the PCA. I'm a little surprised such a blatant security hole wasn't caught, considering the weight of the regulatory environment we worked under. I'd never worked in that kind of environment before, and I left shortly after a successfully defended audit of our software development and tracking process and systems. (Although how successful is the development and tracking process if a year later this "bug" comes out?) My guess is that because we were beaten over the head day in and day out with "if this software delivers the wrong medication, the patient is probably going to die" and "if you make a mistake in the development process and in change tracking, we can lose the ability to make these devices", the idea of defending against malicious intent was de-emphasized or overlooked.


Question unrelated to the stupidity of supporting telnet as a protocol on these devices: did you guys use formal verification techniques? http://en.wikipedia.org/wiki/Formal_verification


Formal verification would not help if the formal or informal design specs called for unauthenticated telnet. It would prove that the entire stack was bullet-proof against memory violations and mathematically pure, and that it would with 100% certainty allow untrusted users to access administrative functions.


>Formal verification would not help if the formal or informal design specs called for unauthenticated telnet

This is why I qualified my question by saying as much.


If someone would explain the downvoting to me, I'd love to understand... honestly.

1. My question was unrelated to the poor decision the OP's former employer made to use telnet.

2. Formal verification is the gold standard in correctness, often used for can't-fail things like missiles and satellites.


Probably your tone. Half your post is you telling them they're stupid, and the other half is a patronizing link to very well-known concepts (implying they'd be stupid not to use them, except there are reasons why very few people actually do). It can read like the average know-it-all post plus gratuitous insults.


>Half your post is you telling them they're stupid

No, I was saying my question was unrelated to a stupid decision their former employer made...

>and the other half is a patronizing link to very well-known concepts

The link was NOT for the benefit of the OP to whom I was asking the question; it was for readers here who might not know what I was referring to in the question.


> I was saying my question was unrelated to a stupid decision their former employer made

"Former employers" don't usually make decisions in a vacuum -- chances are that engineers were involved at some point, and if it wasn't OP maybe it was a colleague or even a friend. Engineers are as lazy as anyone else, and telnet is an easy protocol to support. Also "let's not talk about my opinion of X" is just a passive-aggressive way of stating your opinion without accepting a possible critique of it, which in itself is off-putting. If you really didn't want to talk about the subject, you wouldn't have mentioned it right away. The only safe use of "let's not talk about X" is as a way to move on after a topic has been discussed and agreement could not be reached.

> The link was NOT for the benefit of the OP to whom I was asking a question, it was for reader

That might well be, but 1) both OP and HN readers will likely be familiar with the concept and/or can google it themselves, and 2) playing to the peanut gallery is patronizing in itself.

Note I'm not criticizing your positions, I'm just describing how your tone might come across as patronizing. And that's more than enough meta for a week, I think :)


If I had to guess I would say it relates to the opinion of some engineers that formal verification is only possible in relatively simple systems, and even then only as a mathematical curiosity. As true or not as that may be, the current 'collective wisdom' is more oriented towards automated testing than formal verification, and anything contrary to the popular view gets some natural resistance.


"Beware of bugs in the above code; I have only proved it correct, not tried it." — Donald Knuth


There are no hard rules but this seems a little gratuitous: "Question unrelated to the stupidity of supporting telnet as protocol on these devices"


If you have references showing that formal methods are used for missiles and satellites, I would love to see them. [I work on formal methods tools and I am currently working on a use case from the space industry, but that one is not documented yet.]


I have heard about this from two sources:

1. Papers/case studies that the master's of software engineering program director at my university provided while I was in grad school. I can try to dig up some of these later if you are interested.

2. Friends who either worked on satellites or aircraft.


This is most likely the problem: the number of cases where patients die from drug delivery mistakes is probably in the thousands, if not tens of thousands. The cases where someone deliberately administers a dose via a machine are likely to be zero (because I have not heard of any).

So any cost-benefit or risk analysis is going to worry about the drug.


If you require certification of software by some standard, companies will build software that passes the certification standard.

But in my experience, the assumption that a security, quality, or whatever certification process correlates with actually secure, high quality, etc software does not have a lot backing it.

There's a big difference between dotting the i's and crossing the t's according to ISO-9001, and actually caring about software quality, and it seems like the standards make it harder to actually care and focus on delivering a good product.


It sounds similar to the idea of teachers "teaching to the test", as opposed to providing an all-round education and critical-thinking ability.


Here's the bad news: after spending a decade working in the medical device industry, I can say that security is not always a high priority in medical device designs.

I don't think the situation is likely to get better until the FDA starts requiring security audits as a condition of approval -- and for 510(k)s too, not just for PMAs.


> I can say that security is not always a high priority in medical device designs.

Agreed. But it's because generally it's not (yet?) a problem.


Is anyone else dissatisfied with the amount of information we get in many security bug reports? Almost every time I read a CVE or other vulnerability report, I find myself with more questions than answers. There's so little information given.

Is telnet on by default? Is this device normally plugged into a network, and for how long? How common is this device? Has it been on the market for long?

Without this kind of information it's very difficult to assess risk, or otherwise form an opinion.


Telnet is on by default. It is a busybox shell. This device is normally connected to a network via wifi. There is an additional Ethernet port on the back. It is safe to say every patient using one of these has physical access. The wireless encryption keys are stored in plain text.
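
(To make concrete how low a bar this is: a minimal sketch, using only Python's standard library and a made-up address, of what "unauthenticated telnet" means in practice. There is no login step at all; you are dropped straight into the shell.)

  import socket

  # Hypothetical pump address; 192.0.2.x is a reserved documentation range.
  PUMP = ("192.0.2.10", 23)  # port 23 = telnet

  with socket.create_connection(PUMP, timeout=5) as s:
      print(s.recv(1024).decode("ascii", "replace"))  # busybox prompt, no login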


On the other hand, for potentially significant security holes, answering those questions would make attacking the devices easy. I imagine more detail will come out in a postmortem once the affected devices are updated. I may be overly optimistic, though...


I've got a full advisory written up. It is unclear if the vendor will be patching the device. When I get that cleared up I'll have the full advisory for everyone. In the meantime, if you need something answered urgently, you can contact me directly.


Hopefully not a literal post mortem.


I was thinking this is bad, but this thing is running a 2.4 kernel, so it's probably just a wired ethernet port on the back of the device. Limited but dangerous potential here.

Nope, it's a thousand times worse. This device has WiFi. You don't even need to plug a cable into the back.

I can't find how the wireless is configured in their manual[1], but they clearly mention Wireless LAN.

[1]: http://www.meql.com/Manuals/Abbott-Hospira-PCA-III-Mednet-Us...


tl;dr - drug pump has no authentication on telnet port

This is the brochure page for this family of products (http://www.hospira.com/en/products_and_services/infusion_pum...) and the one that stands out a mile is

- Robust wireless capabilities enable remote drug library updates

The product itself (let's assume it's top-notch medical hardware; I am afraid I cannot comment usefully) works fine, and has been battle-tested and built by veterans. But the second they add network access they are hit by a double whammy... Actually, I have rewritten this three times, and really I can find no excuse for this. Selling to moronic sysadmins? ("hey, easy-to-use web front end, with passwords or even client certs"). I can only assume the argument runs like this:

- if we make it hard to do X over the network, X does not happen, and the chances of someone dying are y%. If we bollox security, then X is sooo easy to do, and we just save that y% of lives.

Now that is "tragedy of the commons" in sheeps clothing, but I can see it may be the only excuse you have


This is completely unacceptable. There really should be more pentesting of medical gear and industrial control systems in general.


Related: Karen Sandler's LC2012 keynote, which discusses the challenges of finding anything out about life-sustaining medical technology: https://www.youtube.com/watch?v=5XDTQLa3NjE


It's surprising that the FDA doesn't require that as part of their certification procedure.


I can think of many words to describe my reaction to this, but "surprise" is not one of them. I have never seen a regulatory body impose any meaningful security testing. Even supposedly "secure" standards are often self-certified, not tested.


If you take that too far, you end up with medical gear that is so locked down the owner cannot make any changes themselves without asking for permission.


I was under the impression that medical devices were already locked down like that due to e.g. FDA regulations. Not having security either seems like a "worst of both worlds" situation.


With no security, people get access on their own...


...at the risk of bricking devices and legal threats? Sure. Both of which are concerns with some teeth for such mission-critical devices as... your Xbox.

Jailbroken medical devices sound like a great way for a doctor to lose their license, a developer to end up with manslaughter charges, and patients to end up unable to tell their doctors about their full medical situation, lest their doctors run screaming from the impending liability lawsuit.

I think the correct answer to "let people modify their own devices" is "fix the regulations" not "intentionally weaken security in matters of potential life and death."


I'd argue that the very fact that it's physically possible to hack these sort of devices is a huge red flag.

There shouldn't be pentesting, because there shouldn't be anything to pentest.

With something like a medical device, you keep the outside communication part airgapped from anything that could cause harm. If that means you have to duplicate things, then you have to duplicate things.


Implantable devices typically have to have a wireless interface of some sort. The alternative is to put a physical port on someone's body, which is a great way to cause infections.


So you have a wireless interface for the non-able-to-kill-you stuff.

But if you're having to change the code of something that's been implanted, you've already done something horribly wrong.


For example, I think pacemakers can set the range of acceptable heart rates by radio. So now, you have to have logic checking that it doesn't get set to 0. But what happens if there's a buffer overflow in that checking?

Edit: here's a paper demonstrating similar attacks: http://www.secure-medicine.org/public/publications/icd-study...


See, that's an example of something that I do not think should be settable via radio. Maybe inductive or sonic communication.

Alternatively, you have the software check the set range, but the hardware (or a second non-connected processor) also checks it separately to make sure it's sane anyways.
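
As a toy illustration of that two-independent-checks idea (the names and limits are invented, and real firmware would be C on two physically separate processors, but the shape is the same):

  # Hypothetical absolute bounds, baked into both processors at manufacture.
  HARD_MIN_BPM, HARD_MAX_BPM = 30, 180

  def radio_side_parse(requested_bpm: int) -> int:
      # First check, in the exposed (and possibly compromised) radio stack.
      if not HARD_MIN_BPM <= requested_bpm <= HARD_MAX_BPM:
          raise ValueError("rejected at the radio interface")
      return requested_bpm

  def safety_core_apply(bpm: int, current_bpm: int) -> int:
      # Second, independent check on the isolated safety processor. It
      # trusts nothing from the radio side, so a bug or overflow in the
      # first check cannot push an unsafe value through on its own.
      if not HARD_MIN_BPM <= bpm <= HARD_MAX_BPM:
          return current_bpm  # refuse, keep the last known-good setting
      return bpm

The point being that a single bug has to exist in two codebases that share no memory before an unsafe value reaches the pacing hardware.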


Pacemakers and defibrillators are configured to match an individual patient. Part of that configuration includes what parts of the heart to stimulate. This configuration may need to change as the patient's condition changes.

You also need to be able to turn off the pacemaker entirely to monitor how the heart operates without stimulation. You may need to turn off the defibrillator during certain medical procedures...


But... but... I want a datajack at the base of my skull!


Me too, I've been searching for an excuse to get the Motoko haircut...


> There shouldn't be pentesting, because there shouldn't be anything to pentest.

On its face this seems like the right approach, but you end up with security-through-obscurity protecting (yet another) critical system. If you don't hire pentesters, how can you be sure there is no way to maliciously interact with the device? Because you didn't intend for there to be? It's a good thing vulnerabilities are never unintentional, right?

It's also not very resilient to a change in the software's requirements; maybe it's okay for a pacemaker not to have a password when you need to open up a patient's chest to connect to it. Maybe those extra microseconds you save mean less blood loss and pain for the patient undergoing surgery, an undeniable win. But this operation turns out to be expensive (by every conceivable measure), so the next model ships with bluetooth, and now we have a problem.

See also:

http://blog.ioactive.com/2013/02/broken-hearts-how-plausible...


Perhaps I was overstating things. I'm not saying "don't hire pentesters". I'm saying "don't implement security in software when it can be done in hardware". If all interfaces that could cause damage are not physically connected to an external interface, the chances of anything being able to coerce them into doing something bad are... slim, at best. Not none, never none. But slim.

Certainly slimmer than just stuffing everything in the same basket and calling it a day.

And your second part is exactly why medical devices do not, or at the very least should not, have changes made to them after the fact.


  > And your second part is exactly why medical devices do not, or at
  > the very least should not, have changes made to them after the fact.
I don't think this is self-evident. There are already a number of devices on the market that basically require physician tuning post-implantation. Deep brain stimulators and cochlear implants spring immediately to mind, and there are likely many other examples. There are many reasons for this: there may be too many parameters for the physician to realistically adjust intraoperatively, assessing efficacy may require activities that are impossible to perform in the OR (e.g., observing gait), the parameters may be time-varying, it may not be possible to have the patient awake during surgery, the patient may be pre-lingual, etc.

Even worse, both DBS and cochlear implants have relatively short loop-latencies between the physician "turning the knob" and observing the effect (seconds). There are emerging medical implants where the loop-latency may be in the hours-to-weeks range. That's pretty much going to require tuning post-surgery which in turn basically requires a wireless interface of some kind.

Finally, updating the firmware of a medical device (e.g. to give new capabilities) should certainly not be done lightly, but it is far and away preferable to going under the knife again to receive a new device.

Bottom line, the right thing to do is get security right, not limit what physicians can do with the devices.


Security necessarily comes in layers. Some will fail; hopefully not all of them. Adding protections at the hardware part of the stack is a fantastic idea, but insufficient for something lives are literally, directly depending on.

Medical devices do need to be accessed. Diagnostics need to be performed to make sure they're functioning. Sometimes batteries even need to be changed. Insulin pumps and the like need to be given more medicine, and the dosage may need to be adjusted. Et cetera.


Diagnostics are a good example of something that can be separated from the actual safety-critical stuff. Have a dumb output from the safety-critical processor that the radio interface can read.

If you're changing batteries or medicine, you're going to be accessing the device physically anyways, and so can change settings / dosages then.

Et cetera.
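
A miniature sketch of that one-way "dumb output" (pure illustration, invented field names): the safety core publishes a snapshot, and the radio side only ever holds a read-only view of it.

  import types

  def publish_status(battery_pct: float, dose_ml_per_h: float):
      # Called only by the safety core; hands the radio side a frozen view.
      return types.MappingProxyType(
          {"battery_pct": battery_pct, "dose_ml_per_h": dose_ml_per_h})

  status = publish_status(87.0, 2.5)
  print(status["battery_pct"])    # the radio side can read the snapshot...
  # status["dose_ml_per_h"] = 99  # ...but any write raises TypeError

In real hardware the equivalent would be a register or serial line that is physically read-only from the radio processor's side.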


A battery change on a pacemaker or an ICD is required every six years or so, if not less often. You may need to change settings far earlier than that.


Things like pacemakers often have radio interfaces, because the alternative is to cut someone open.


I didn't say "don't have a radio interface". I said, "don't have a radio interface to the able-to-kill-you parts". There is a distinction.

Also, how does that work? Water is a pretty good attenuator of radio waves.


A lot of them are more "inductive" than "radio" and work by magnetic coupling rather than electromagnetic coupling.

Put a coil outside the metal body of the device and not too far from the skin. Then have another coil that you place on the skin, in the same region as the under-skin coil. Run a 1kHz sine wave through the external coil and you'll make SOME voltage on the internal coil. That allows you to charge.

For bonus points you can also run a communication protocol at say 100kHz (gotta get a couple of decades of frequency difference) and since you're charging the device, you can afford to "waste" a lot of power to get the signal out.

I don't do this stuff myself, but my dad did for quite a number of years.
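
The back-of-the-envelope version, for the curious (generic symbols, not tied to any particular device): drive the external coil with a current I_1(t) = I_0 sin(ωt), and the voltage induced across the implanted coil is

  V_2(t) = M \frac{dI_1}{dt} = \omega M I_0 \cos(\omega t)

so the induced amplitude scales with both the coupling M and the drive frequency ω. And since the power carrier (~1kHz) and the data carrier (~100kHz) sit two decades apart, simple filters on the implant side separate the two cleanly.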


Yep. NFC is wonderful for that sort of thing. And better than radio since it's inherently shorter range (O(r^-4) versus O(r^-2), for the same reasons as active radar). But still not perfect.
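
Spelling the radar analogy out (a standard scaling argument, nothing device-specific): a one-way link pays the spreading loss once, while a coupled/round-trip link pays it twice,

  P_{one-way} \propto 1/r^2, \qquad P_{round-trip} \propto (1/r^2)(1/r^2) = 1/r^4

which is why the usable range of NFC-style coupling collapses much faster than a plain radio link as you back away from the device.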


They also need to be "able to kill you" so that the patient can be defibrillated if their heart stops beating. If you do that when their heart is functioning, it tends to have the opposite effect.


Right. But that functionality can be airgapped from the readout parts.


Some sandboxing could be done, but if an attacker roots a pacemaker or an insulin pump it will be extremely difficult if not impossible to prevent them from convincing the device to perform its intended function at an unintended time.


Airgapped. Not just sandboxed. As in two separate processors, with no overlapping RAM / etc.

You can't do your example, because there's no way to get at the internal clock from the radio.


The problem here is that the medical community (especially hospitals) is increasingly pressured to increase productivity to offset cost increases. One way is to automate some of the dreary stuff, like monitoring of medical devices in a hospital ward. If you've been in a relatively modern hospital lately, you'll find they have wall-mounted dashboards which can monitor all the vitals and devices for patients without the nurse needing to walk around and check rooms. This lets them respond to changes as they come up, as opposed to needing to wait until they make their rounds to the room.

This sort of thing needs a network. I'm not familiar with the details, but I thought that in the US, HIPAA covered some of this stuff... not sure how stringent it is, or if it covers scenarios like this... perhaps it doesn't at all (I am thinking of something like PCI, where certain data must be encrypted, etc.).


"Robust wireless capabilities enable remote drug library updates" [1]

What do they mean by 'robust'? You can rely on it to be there whenever you want to hack someone to death?

[1] http://www.hospira.com/en/products_and_services/infusion_pum...


I mentally translate 'robust' to 'fancy' -- the dictionary meaning I grew up with being so rarely used correctly now.


So is the procurement process in medical organizations so feeble that nobody notices these, or is there such widespread apathy about this state of affairs that it all gets ignored, or is there some kind of culture of secrecy that results in even these life-threatening cases being kept quiet between the hospital and the equipment salesman?


I wonder how long it's going to be before we see state-level actors exploit these kinds of security holes.

To be sure, there are already a lot of ways to incapacitate or hurt someone. But abusing software bugs in a person's medical devices might be a less traceable way to disrupt their activity.


Why bother with medical devices? Just lock on their driver-side brakes when they're on the highway. Bonus: car accidents are common enough that it'd be unlikely to be noticed.


You'll have to be more subtle than that. A crash investigator is going to notice asymmetric skid patterns and brake disc damage.


For plausible deniability on extrajudicial actions, I thought "drug overdose" and "murder/suicide" were the preferred means.

That's assuming the agency committing the act doesn't have readily handy poisons that degrade within hours and are thus untraceable. Combine with car accident and you have a large mess to uncover, further decreasing likelihood of discovery.


Malicious hackers are going to have so much "fun" with always-connected self-driving cars that get OTA updates that can "upgrade" their engines, or brakes, etc.

Also, I imagine the CIA would largely be made redundant, as the NSA could just take over the assassinations from there.


> I wonder how long it's going to be before we see state-level actors exploit these kinds of security holes.

Sounds like a political nightmare for little gain. I would be more worried about ransom-oriented terrorists.


> state-level actors

> ransom-oriented terrorists

Sadly, INTERSECT on these groups yields a non-trivial set of results.


It's essentially a device for automatic delivery of painkillers when the patient presses a button. Why on earth would such a device even require network connectivity? They advertise "wireless capabilities for remote drug library update". But "wireless capability" is a whole new engineering domain, and the manufacturer seems to have failed to even begin to address it properly.


Just because you cannot envision a particular use case does not mean that it is not a valuable one. First off, a large hospital will have drugs that are "formulary", meaning drugs doctors are allowed to prescribe and the in-house pharmacy will fill. These lists need to be updated; imagine you have thousands of these pumps on premises. Now imagine you also have to do inventory management. Now imagine you have these pumps wired in with your nurse call system (for when infusion is complete or the device is empty).

Edit: usually you pick the drug from a list on the interface when programming it. I'm not familiar with this pump but I've used many like it.


Why on earth would such a device even require network connectivity?

Convenience features sell. It looks like these devices might just push down delivery limits over the network, but the connectivity of medical devices is in the process of exploding. The market forces for software in general don't magically not apply to medical software/devices.

Of all the software out there that sends telemetry back to base, medical devices are probably one of the easiest to justify and least morally dubious types of user monitoring around.


I don't think we have any of our equipment in the OR networked other than comms devices; phones, video, et al. I do know of some dental tools that use bluetooth to sync drills with a nurse's cell so they can change the settings for the docs, but 'hacking' that wouldn't lead to any unrecoverable harm... unless, of course, you could turn the drill on via bluetooth.


Of all the bad things that can happen to a patient connected to an infusion pump, being hacked is way way down the list.


Not that this isn't a nasty security hole, but I should hope that this sort of equipment is firewalled at least.


"Hope", yes. But if there are products with unencrypted no-auth telnet, I'm not sure we can assume everyone is following industry-standard best practices :)


Glucose meters and pumps similar to this one have already been found on Shodan :-/ The majority aren't directly on the Internet but it happens. There has actually been an increase in the number of hospitals that are showing up online.


> Not that this isn't a nasty security hole, but I should hope that this sort of equipment is firewalled at least.

Won't do you much good if an attacker makes an open Wifi point with the same SSID as the hospital's network.


>Won't do you much good if an attacker makes an open Wifi point with the same SSID as the hospital's network.

For that to work the hospital's SSID would have to be open as well. And if that were the case, faking the SSID would be a complete waste of time because you could just connect to it yourself.


You vastly overestimate the difficulty of spoofing a password-protected wifi network.

Even without that, acquiring access credentials is hardly rocket science. (e.g. grab one of these pumps or any other wifi device lying around and read the password over the ethernet/serial port)


I don't think you understand how WPA works. It's not like the client just sends a password. A shared key is mutual authentication. If putting up a network with a target SSID leaked data, WiFi would be completely broken.


http://www.renderlab.net/projects/WPA-tables/

WiFi security is a pretty soft layer of additional security and definitely should be considered cheaply penetrable for purposes of defense.

But like I said, you can just harvest the PSK off a device without cracking anything. An unprotected shared secret on all nodes is not very secret at all.
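
(For context on why those tables have to be precomputed per network name: WPA2-PSK salts the passphrase with the SSID when deriving the pairwise master key. A minimal sketch using Python's standard library; the SSID and passphrase are invented:)

  import hashlib

  # WPA2-PSK: PMK = PBKDF2-HMAC-SHA1(passphrase, salt=SSID, 4096 iterations, 32 bytes)
  ssid = b"HOSPITAL-WARD-3"      # hypothetical network name
  passphrase = b"correct horse"  # hypothetical passphrase

  pmk = hashlib.pbkdf2_hmac("sha1", passphrase, ssid, 4096, 32)
  print(pmk.hex())

So precomputed tables only help against common SSIDs, and a genuinely long random passphrase does push brute force out of reach; which is exactly why harvesting the stored PSK off a device is the path of least resistance.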


A link to some precomputed tables doesn't mean anything. WPA2 AES-CCMP with a long shared secret is effectively unbreakable. I have not seen any research in academia or industry that comes close.

Care to share some? If not, please stop spreading misinformation.


> For that to work the hospital's SSID would have to be open as well.

It's not something I've messed with, but I can't imagine it'd be that hard to make an access point that is "closed" but accepts any password given to it.


Was anyone else reminded of that 'Homeland' episode?



