I've been in Healthcare IT for a decade and a half or so, and I think there's definitely a reckoning coming with regard to the lapses in security.
I think honestly, the only thing that's kept this from being a problem with greater consequence is that to date it hasn't been clear that there's a real path to monetization of health data. It's been a lot more profitable to chase down credit card #'s and mass email/password combinations that lead to banking access.
I've long wondered when a solid monetization strategy for health data would show up and we'd see a quick rush to target these datasets. Trends that I see that make me think we're getting closer:
1. Systems are increasingly net-connected, obviously. In some ways this is improving security (at most hospitals I've been in, you could plug into any ethernet port in the building and land on a network where pretty sensitive data is sent in the clear), but it's also exposing these systems to a much larger number of interested attackers.
2. Patients have accounts now. Health data is suddenly a not-insignificant source of email/password combinations for patients (previously just employees). Pretty reasonable to expect that health systems may be the source of future Gawker-style breaches for collecting poorly protected user credentials that can be used elsewhere.
3. The uptick and cost-effectiveness of encryption-ransomware, personal and corporate. It's been interesting to see cases where the data itself isn't monetized because it has some broad market value, but rather by threatening the owners with its release or exposure. I won't be surprised if healthcare organizations or individual patients find themselves victims of extortion via threats either to publish sensitive health data or to destroy it.
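On point 2: "poorly protected user credentials" usually means passwords stored with fast, unsalted hashes (or in the clear). A minimal sketch of the safer pattern using Python's stdlib scrypt, purely illustrative and not any particular patient portal's scheme:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a slow, salted hash; store (salt, digest), never the password."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive with the stored salt and compare in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("guess", salt, digest))                         # False
```

A Gawker-style dump of scrypt hashes is far less reusable elsewhere than the fast unsalted MD5/SHA1 hashes those breaches actually exposed.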
There is (finally) an extra-linear increase in attention to this issue in Health IT, but there's also quite a large backlog of security debt and an enormous number of deployed systems that were built for a different reality than the one that currently exists.
What bothers me is that I was under the impression that if you have a HIPAA compliant information system (software or hardware), none of these criticisms would be true, yet we know that this is essentially the norm and that healthcare providers routinely ignore the problem. What's going on such that hospitals have the worst of both worlds: expensive devices subject to incredible amounts of regulation to safeguard patients, yet demonstrably insecure systems? I'd hate to think what would happen if we had no regulations whatsoever, but on the other hand, the current effective infosec status of the healthcare industry doesn't seem far off from what it would be with no regulations at all.
Did I miss something in HIPAA about "don't make it easy as hell for any random person to come in and steal patient data or command other HIPAA compliant systems to act as an agent?"
> What bothers me is that I was under the impression that if you have a HIPAA compliant information system
The idea of a "HIPAA compliant information system" is largely empty marketing speak (less so in terms of the Transactions and Code Sets rule than the Privacy and Security rules). HIPAA and its implementing regulations do not establish specific standards for information systems in privacy/security terms; they set standards for what organizations holding PHI must do. Most of the technical features of software related to those functions are unspecified, and, to the extent that there are requirements, whether the software as used is compliant depends heavily on the relationship between the organization's policies, the specific functions it performs, and how the software is used.
At most, software has features which facilitate compliance with some parts of HIPAA, but you can't just drop in a piece of software and achieve turnkey HIPAA compliance.
I think (right now) there isn't a way to really mass-target people illegally if you have their medical data. Blackmail requires some non-zero amount of effort per user.
Compare this to getting a password dump from a social networking site and trying those emails/passwords in an automated way against sites like paypal/gmail/banks etc.
EDIT: from the article
"TrapX also found a bug called Citadel, ransomware that’s designed to restrict a user’s access to his or her own files, which allows hackers to demand payment to restore that access"
> I think (right now) there isn't a way to really mass-target people illegally if you have their medical data. Blackmail requires some non-zero amount of effort per user.
I'm saying that's not really true anymore. These systems have patient email addresses and their medical histories. It doesn't seem to me a big leap to automate ransomware. Hell, there are plenty of encryption-based ransomware schemes active right now, and those require access to the target's local PC. This doesn't even require that. One massive healthcare breach, and anyone who can pull it down over BitTorrent can start going to town extorting.
I think it's something that's been a practical possibility for a few years now. It just hasn't happened yet.
>I think honestly, the only thing that's kept this from being a problem with greater consequence is that to date it hasn't been clear that there's a real path to monetization of health data. It's been a lot more profitable to chase down credit card #'s and mass email/password combinations that lead to banking access.
Stealing CC#s is easier, but pretty much everything you'd need to engage in wholesale identity theft is present in your average ADT (admit/discharge/transfer) transaction.
Many people throw hospital security into the pile of "well, lots of people don't care about infosec!" In my opinion, this stance is incorrect.
I've performed security assessments against many different industries, including banks, large enterprise, barely-funded startups, nuclear power facilities, law firms, hospitals, and more. In each of these fields, you see the "good guys" and the "bad guys" in terms of IT security strength. In hospitals, though, the whole field is terrible. The best of the best -- high-tech facilities that actually care about security -- are still doing terribly compared to the average large enterprise.
Health records are becoming more valuable, and not just because of blackmail. Insurance fraud and identity theft are feasible if you've stolen someone's health records, and the information stored within is only getting broader.
Hospitals wouldn't let their medical tech slip this far. They shouldn't let their security slip, either.
There are many forms of extortion. Until recently, health insurance coverage and care were a primary lever of institutional extortion in the U.S. (Just one example: roll over and take it at work, or you'll lose your health care.)
The ACA was supposed to help address that but, as it enters its third year, I see significant signs of it being weakened.
So... it's not just black hat bad actors. The proliferation of this data, by means legal, or gray, in addition to black, threatens a broad and diverse swath of those who have... less than perfect statistics.
Not just in health care, but in general, I've observed a lot of apathy towards security because corporations and large institutions serve a very significant role in diluting responsibility and repercussions. No one person -- especially those with actual power to deploy resources against problems -- is really on the hook. Even in the C suite. Hell, it seems that "taking the blame" has become a significant component of such roles: you parachute out, and someone picks you up to play a repeat performance a year or two down the line.
No one really answers. And there is a lot of pressure -- below the surface -- to continue making this information more transparent and readily accessible. "Black hats" become a straw man for an industry that, in many respects, is on the whole actually trying to move in the opposite direction.
With such discontinuity, is it any wonder that the results are a disaster?
Forget about hacking devices. We once walked into a hospital to talk to nurses and show them a tablet app to access notes, lab results, etc.
The scary part was that the server room with the PACS and everything else in the building was unprotected with unlocked doors and nobody particularly caring we were there.
Imagine walking into a random law firm and walking into the server room with the ability to copy all data from all the clients.
Most PACS get data from devices and send it to workstations unencrypted. The security model implemented on them is usually no more than a white list of IP addresses the PACS will talk to. (To be fair, many of the servers have better security available now, but hospitals haven't taken advantage of it). Combine that with insecure, unsegmented networks, and hackable WiFi, and you don't need physical access to the server room.
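That allowlist-only model is easy to sketch, and the sketch makes the weakness obvious. (Hypothetical addresses; real PACS traffic is DICOM, and this toy shows only the access-control logic, not the protocol.)

```python
# Minimal model of the allowlist-only "security" many PACS rely on.
ALLOWED_PEERS = {"10.0.12.5", "10.0.12.9"}  # hypothetical modality/workstation IPs

def accept_connection(peer_ip: str) -> bool:
    """The only check performed: is the claimed source address on the list?"""
    return peer_ip in ALLOWED_PEERS

print(accept_connection("10.0.12.5"))   # True: "trusted" workstation
print(accept_connection("10.0.99.77"))  # False: rejected
# But on a flat, unsegmented LAN an attacker can simply assign themselves
# 10.0.12.5 (or ARP-spoof it) and pass this check -- and the image data
# then travels in cleartext anyway, so even passive sniffing works.
```

Source IP is an identifier, not an authenticator; without network segmentation or transport encryption, this is barely more than a speed bump.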
Back in the early-to-mid 90s, the root DNS servers were also left in unlocked rooms at some universities, according to stories I read around that time.
Also, for .info domains when they first came out, I happened to be at the headquarters of the registry (Afilias, when it was in the US), and they had their servers sitting out in the middle of a quite unprotected office, like you would any tower computer. The janitor could have taken it offline with a vacuum cleaner. (This was the early 2000s, iirc.)
“I appreciate you wanting to jump in,” Rick Hampton, wireless communications manager for Partners HealthCare System, said, “but frankly, some of the National Enquirer headlines that you guys create cause nothing but problems.”
This right here is the problem. Researchers uncovered serious vulnerabilities in the hardware; the ones described in the article could all be used to kill. And the response to that? "Shut up, you're scaring people."
Those headlines should be scaring people because they are scary!
You won't convince these guys, they're mostly jackasses. The only thing that will get through to them is something that pulls their negligence and horrid practices out into the light.
You need headlines to break them. They will not do the right thing until they have to, and the FDA isn't going to get that rolling.
I'd want good disclosure practices if someone were targeting my products, but I also wouldn't stubbornly defend terrible products with "Well, no one has been hurt yet!"
Basically, they would log on from their control server in Eastern Europe to a blood gas analyzer; they’d then go from the BGA to a data source, pull the records back to the BGA, and then out. Wright says they were able to determine that hackers were taking data out through medical devices because, to take one example, they found patient data in a blood gas analyzer, where it wasn’t supposed to be.
Not to minimize the problems with the BGA or with other devices, but this points at least as much to a problem with the "data source", which is left unidentified in TFA. One reason the BGA wouldn't be worried about protecting PII might be that... it should never have PII in the first place. There's a HIPAA violation somewhere, but I don't think it's in the BGA, and the BGA isn't the only host that's assuming a safe network.
Target, Home Depot, etc. have been justifiably criticized for operating their POS devices as if firewalls could possibly be sufficient to protect a large network. Hospitals might consider themselves more noble than mere stores, but it doesn't make a difference to a hacker.
Few people care about IT security; it's not just in healthcare. It doesn't matter how high the stakes are, for some reason people just don't feel threatened. To see the pattern think of the attitude in almost every context you can think of:
* Business
* Individual citizens, who seem to care little about the confidentiality of their personal information
* Government
* Developers of most software. Even RSA was hacked.
* Even national security organizations: The OPM hack; Snowden walking out of the NSA with all that secret data; CIA leaders taking home top secret information, etc.
Perhaps it's human nature. On the other hand, when it comes to physical security, people often tend to overreact.
First, HIPAA requirements are in part IT requirements. Every single hospital works really hard to comply with these, because there are huge fines if they don't. Some of the problems with hospital IT security might be due to defects in the already quite onerous HIPAA specification.
The second is that hospitals are extremely low-margin institutions. Most hospitals (even the really big ones) just break even, especially if they're teaching hospitals or serve poorer areas. IT security doesn't really produce any revenue.
I agree it's a problem. This needs a systemic solution: who would pay for IT fixes? Reimbursement is declining and government payment sucks. Most hospitals are in crisis mode as it is.
Hospitals are low margin/break even because they choose to be - but that's another topic.
>IT security doesn't really produce any revenue.
Hospitals (and the health care industry at large) have been drowning in billions-with-a-B federal and state dollars since ARRA, all in the name of "meaningful use" of health information technology. HIT is very much a revenue model for hospitals.
> Hospitals (and the health care industry at large) has been drowning in billions-with-a-B federal and state dollars since ARRA.
For the US healthcare industry, billions-with-a-b (for the period since ARRA in 2009) in public funds isn't enough money to drown in. It's not even enough money to get noticeably damp -- it's a $3.8 trillion (annually) industry (something like, IIRC, 40% of which is public funds, so it's not even all that big of a delta in public funds).
Meaningful use for EHR is completely different from IT security practices. Additionally, payments for implementing new EHR systems are declining to zero; it was a temporary measure. I think you don't really understand what is implied by "meaningful use" measures.
It's also pretty misleading to use hospital payment data from 2011, pre-ACA, as reimbursement structure has changed massively in just a few years, massively altering hospital financial models.
In a sense the name says it all: HIPAA's about Portability and Accountability, less about security.
I worked on a few security consulting projects in healthcare. The HIPAA Security Rule is way more vague about actual controls than a rational person would assume; much more so than analogous regulations on financial data (e.g. PCI-DSS). The HITECH amendment added a lot of breadth regarding which parties must comply, but did little to prescribe specific controls. Most providers, contractors, etc. use a framework called HITRUST that attempts to identify and map actual security controls to HIPAA, but even that is not super actionable.
One of the hardest problems to solve is the immediate criticality of patient data. You absolutely cannot have someone die because a nurse or doctor forgot their password and couldn't look up medical history. Makes practitioners resist adoption, and you end up with "break the glass" (emergency security bypass) functionality on a lot of sensitive systems/data.
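The "break the glass" pattern is usually an access check that can be overridden but never silently: the override always demands a reason and always emits an audit event. A minimal sketch, with a hypothetical record store, ACL, and user names:

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.WARNING)
audit = logging.getLogger("phi_access")

# Hypothetical record store and access-control list, for illustration only.
RECORDS = {"pt-1001": "medical history ..."}
ACL = {"dr_lee": {"pt-1001"}}

def read_record(user: str, patient: str, break_glass: bool = False,
                reason: str = "") -> str:
    """Normal path checks the ACL; the break-glass path never blocks care,
    but forces a stated reason and logs an audit event for later review."""
    if patient in ACL.get(user, set()):
        return RECORDS[patient]
    if break_glass:
        if not reason:
            raise ValueError("break-glass access requires a reason")
        audit.warning("BREAK-GLASS %s user=%s patient=%s reason=%r",
                      datetime.now(timezone.utc).isoformat(),
                      user, patient, reason)
        return RECORDS[patient]
    raise PermissionError(f"{user} may not view {patient}")

print(read_record("dr_lee", "pt-1001"))  # normal, ACL-approved access
print(read_record("nurse_kim", "pt-1001", break_glass=True,
                  reason="ER, patient unresponsive"))  # logged override
```

The design trade is exactly the one described above: availability always wins at the moment of care, and accountability is deferred to audit review.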
HIPAA does require the data to be stored in a secured way. It also requires a certain level of security to get to the data. Don't let the name fool you.
Your second point is extremely valid. You can't really restrict a medical professional from looking up health information; it could cause loss of life. For the most part, a large organization can't be expected to lock down access to an individual patient's data.
What you can do is log the access (GxP regulations). This means that you will know who accessed the data after the fact.
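If the logs only matter after the fact, they at least need to be tamper-evident, or an insider can quietly rewrite them. One standard technique is hash-chaining entries so any retroactive edit breaks verification of everything downstream. A minimal sketch (field names are illustrative):

```python
import hashlib
import json

def append_entry(log: list[dict], user: str, patient: str) -> None:
    """Append an access event whose hash covers the previous entry's hash,
    so editing an old entry invalidates the rest of the chain."""
    prev = log[-1]["hash"] if log else "genesis"
    body = {"user": user, "patient": patient, "prev": prev}
    serialized = json.dumps(body, sort_keys=True).encode()
    body["hash"] = hashlib.sha256(serialized).hexdigest()
    log.append(body)

def verify_chain(log: list[dict]) -> bool:
    """Recompute every hash from 'genesis'; any mismatch means tampering."""
    prev = "genesis"
    for entry in log:
        body = {"user": entry["user"], "patient": entry["patient"], "prev": prev}
        serialized = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(serialized).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

log: list[dict] = []
append_entry(log, "nurse_a", "pt-42")
append_entry(log, "dr_b", "pt-42")
print(verify_chain(log))           # True
log[0]["user"] = "someone_else"    # tamper with history after the fact
print(verify_chain(log))           # False
```

This doesn't stop anyone from reading records; it just makes the "who accessed what" evidence hard to falsify later.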
> HIPAA does require the data to be stored in a secured way.
But it's extremely unspecific about what that means.
> It also requires a certain level of security to get to the data.
Less so than you probably think. It has vague high level standards, under which are lower-level implementation specifications (which still tend to be somewhat vague) which may be required or "addressable", which basically means that organizations are required to review whether they are appropriate; the only technical implementation specifications that are required in the security rule are having unique user IDs and having emergency access procedures.
The idea of setting up an open hardware lab for these devices, and some kind of bake-off, is awesome.
Also, I wish someone could do a medical device network security system -- it really isn't the core competency of any of the hardware vendors, and yet is something you can't get wrong. The public protocols (DICOM, HL7, etc.) are at best baroque and don't include the details which matter to security. I wish this didn't have to be a company -- it really could be something funded by NIH or a consortium of device vendors or medical institutions -- but it probably has to be in order to be effective. There's a need for an open standard for medical device security over top of all this, but rather than just publishing a standard, it would be easier to provide working end to end network from device to information system.
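For a sense of what these plaintext feeds carry, here's a toy parse of an HL7 v2 PID segment (made-up sample patient; real HL7 parsing needs a proper library, since the escaping, repetition, and versioning rules really are baroque):

```python
# Hypothetical HL7 v2 PID (patient identification) segment.
RAW = "PID|1||12345^^^HOSP^MR||DOE^JANE||19700101|F"

def parse_segment(segment: str) -> dict:
    """Pull a few well-known PID fields out of a pipe-delimited segment."""
    fields = segment.split("|")
    return {
        "segment": fields[0],                     # PID
        "patient_id": fields[3].split("^")[0],    # PID-3: identifier (MRN)
        "name": fields[5].replace("^", " "),      # PID-5: patient name
        "dob": fields[7],                         # PID-7: date of birth
    }

print(parse_segment(RAW))
# Everything here -- MRN, name, date of birth -- crosses the wire in the
# clear on many hospital networks: classic HL7 v2 over MLLP has no
# built-in encryption or authentication, which is why a security layer
# has to be bolted on around the protocol rather than found inside it.
```

That absence of any in-protocol security is exactly why an end-to-end secured network from device to information system would be worth more than another paper standard.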
My university IT (which also runs a huge hospital network) seems to have no idea how to secure their data. Their solution is to encrypt everything, as if disk encryption and VPNs are enough to prevent data theft. They rolled out a campus-wide VPN requirement recently based on a Juniper Networks system - the exact same system that led to the Anthem data breach that lost 80 million patient records (because Juniper had a slow patch cycle after Heartbleed). No two-factor auth on the VPN, so any one of 50,000 employees with phished credentials could give an attacker VPN access. Meanwhile all the actual patient databases are old, leaky systems they seem uninterested in upgrading. Sheer lunacy.
You don't have to hack the system. You can literally just look in trash cans and you will probably find some medical records; I seriously doubt most people shred the documents. Criminals can also do inside jobs, and that happens without anyone even knowing. As a nurse you can probably download all patient records; I don't know if anyone is required to scan some temporary access code to get them. I do know there are logs, but they are more for compliance than anything, IMO. In fact, in many IT organizations, I seriously doubt anyone even looks at the logs. They are there for compliance, mostly meaning that when shit happens, there is evidence. Aftermath. This is why AI can help with intrusion and anomaly detection (understanding context). It is like talking about virus and malware detection...
Also, a lot of the two-factor auth out there has the option to "Remember This Computer," so if the computer is hacked you are doomed. I am interested in what constitutes frequent versus infrequent data access, because then you can classify behaviors. Logging and encryption in transit and at rest are required, but I doubt most of the data is actually encrypted in transit (probably just proxied over port 80 until it reaches the terminal screen).
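Classifying frequent versus infrequent access is a reasonable starting point for the anomaly detection mentioned above. A toy sketch with made-up daily access counts, flagging users far outside a robust (median/MAD) baseline:

```python
from statistics import median

def flag_outliers(daily_counts: dict[str, int], threshold: float = 10.0) -> list[str]:
    """Flag users whose access count sits far above a median/MAD baseline.
    Median and MAD are used instead of mean/stdev because a single huge
    outlier would otherwise inflate the baseline and hide itself."""
    counts = list(daily_counts.values())
    med = median(counts)
    mad = median(abs(c - med) for c in counts) or 1.0  # avoid divide-by-zero
    return [u for u, c in daily_counts.items() if (c - med) / mad > threshold]

# Hypothetical record-access counts for one day.
accesses = {"nurse_a": 40, "nurse_b": 35, "dr_c": 50, "dr_d": 45, "tech_e": 900}
print(flag_outliers(accesses))  # ['tech_e']
```

Real systems would model per-role and per-shift baselines, but even this level of "actually looking at the logs" would catch the bulk-download insider case.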
This is, to a large extent, scaremongering. While there are some valid points made in the article, it fails to differentiate between security problems that can be exploited by trolls or single, untrained individuals and ones that take a powerful team working on behalf of a government or similar group to exploit. It's the difference between the hospital being defended against your average thief and being defended against a strike squad of ninjas. Despite this, the article does make good points about the lack of worry over the problems they found. Even if these vulnerabilities are overhyped, they are real, and the lack of focus on them is chilling. The real underlying problem stems not from an industry that leaves bugs in applications designed for high security, but from the fact that the industry doesn't realize that security needs to be the default, whether or not you see exploits being used.
Medical services is one of the worst sectors at making the right investments in technology.
There are many gaps, some of which stand out significantly compared to others.
* As highlighted in this article, security is a huge issue. Given the sensitivity of the data, the sad state of the infrastructure is indefensible.
* A lot of the infrastructure is still paper based. The digital revolution is way behind its time in this sector.
* Medical science, which is supposed to be revolutionizing care, has most of its spend going to regulation rather than technology. The advancements in science are excruciatingly slow. Drug discovery has slowed down tremendously.
Not to mention, the poor patient-experience and lack of reach of medicine to the poorest of society.
It's pretty funny that on one hand you say security is a big issue, but on the other you're frustrated they haven't moved away from paper fast enough. Paper-based systems, when used correctly, can be very secure. That being said, they aren't without their issues, e.g.
http://www.databreachtoday.com/800000-penalty-for-paper-reco...
Security is of paramount importance, but never at the cost of digitization.
Imagine two scenarios:
1. A patient's diagnosis is delayed or incorrect because doctors cannot view blood reports for the next 21 days, or a doctor makes a mistake because he cannot process data the way a computer would.
2. The ICU system on which someone's life depends gets hacked.
I don't think you could choose one over the other.
A lot of the time, we trade off between convenience and security. However that is a prudent and deliberate choice.
What I see more and more happening is a reckless and inadvertent choice.