> Surely it was NASA's bad security practices that cost them $41,000 - do they really think if he hadn't found the problem, they wouldn't ever have had to fix their insecure systems?
Why is blaming the victim so uniquely acceptable when we're talking about hackers?
> Why is blaming the victim so uniquely acceptable when we're talking about hackers?
"Don't blame the victim" comes from the context of sexual assault. Blaming the victim for dressing provocatively is unjustifiable because dressing provocatively is not misconduct.
Maintaining insecure internet-facing servers is professional misconduct. Someone else's wrong doesn't retroactively make your wrong right.
I left my car door unlocked once in my apartment's parking garage, and someone stole my iPhone out of it. Am I engaging in professional misconduct? Maybe the garage is, for failing to take adequate security precautions, but not me.
> Am I engaging in professional misconduct? Maybe the garage is, for failing to take adequate security precautions, but not me.
Depends on what you mean by "professional misconduct." For lawyers like you (and me), the analysis by a bar-association ethics committee might be fairly strict. A lawyer's smartphone likely contains non-public client information such as contact info and perhaps even documents. In that case, leaving the phone (or worse, a laptop) in an unlocked car might be regarded as a failure to take prudent precautions to protect client confidences. That in turn could mean exposure for the lawyer, even though the thief was clearly the most culpable party.
ADDED: The same negligence-based analysis might apply to security professionals as well: even though others might be equally or more culpable, a failure to take prudent measures in accordance with "industry standards" (a vague term that would have to be [expensively] litigated) might lead to civil liability. See, e.g., The T.J. Hooper, a 1932 case in which the court held that a tugboat's failure to have reliable radios on board was negligent, even though that was not the prevailing practice among other tug operators [1]. It's a case with which all first-year law students (presumably) become familiar.
> But it doesn't absolve the attacker of any responsibility for damages caused, either.
stephen_g was questioning the assumption that he caused any damage at all.
> Surely it was NASA's bad security practices that cost them $41,000 - do they really think if he hadn't found the problem, they wouldn't ever have had to fix their insecure systems?
I don't think anyone was talking about absolution, only whether or not the article was misleading.
Even if some "hacker" hadn't located the security issues, but some employee from NASA had located the security issues, it still costs NASA money to fix the code.
Also read "That doesn't excuse messing around in other people's systems of course." stephen_g isn't blaming the victim. But blaming the intruder as if he created the lack of security is just as disingenuous. Imagine a business that's generally open to the public at specific hours. The owner installs the doors himself without any consideration for locks. When reviewing security camera footage from non-business hours, he discovers that someone opened the door, walked in, looked around, and left. Sure, this person is indeed trespassing. But blaming the trespasser for the cost of putting locks on the door is asinine.
> By entering through a router in Dulles, Va., and installing a back door for access, he intercepted DTRA e-mail, 19 user names and passwords of employees, including 10 on military computers.
If the security issue was found internally, you can just patch it, do a cursory check to see if there is evidence of intrusion, and go on your way. When you find the security issue as a result of being hacked, you not only have to patch the flaw but also investigate what was compromised, what may have been modified, etc. That would entail taking down all of the potentially affected systems for evaluation, which means a big hit to productive work while the investigation is underway.
Valid point. I did miss the installed backdoor. But I disagree about the "cursory check" - at the point of finding a vulnerability on a production system, one must assume compromise and be exhaustive. And that means it's costly, regardless of who found it first.
"Cursory" was perhaps the wrong word. In any case, taking down your company's network for forensic investigation is extremely costly - it's not something you'd do unless there was evidence of an intrusion. It would have been a whole lot cheaper to take care of this incident if the problem was found internally, rather than performing damage assessment and control after the fact.
> In any case, taking down your company's network for forensic investigation is extremely costly - it's not something you'd do unless there was evidence of an intrusion.
It's not something you would do even if there was. The sensible first response when you find a vulnerability is to take a snapshot of the existing system -- you want to do this before patching the vulnerability in any event, in case the patch causes serious problems and has to be rolled back. Then you can conduct your investigation against the snapshot without having to disable the production systems.
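A minimal sketch of that first response, assuming the affected box is a libvirt/KVM guest managed with virsh (nobody here knows what NASA actually runs); the guest name "webfront-01" is made up purely for illustration:

    import datetime
    import subprocess

    def snapshot_guest(domain: str) -> str:
        """Capture a point-in-time snapshot of a libvirt/KVM guest before patching.

        The snapshot doubles as a rollback point if the patch misbehaves and as
        a frozen copy the investigation can examine while the production guest
        keeps serving traffic.
        """
        name = "pre-patch-" + datetime.datetime.utcnow().strftime("%Y%m%dT%H%M%SZ")
        subprocess.run(
            ["virsh", "snapshot-create-as", domain, name,
             "--description", "state captured before patching the reported vulnerability",
             "--atomic"],  # all-or-nothing: never leave a half-made snapshot behind
            check=True,
        )
        return name

    if __name__ == "__main__":
        # "webfront-01" is a hypothetical guest name, used only for illustration.
        snap = snapshot_guest("webfront-01")
        print(f"snapshot {snap} created; patch the live guest, investigate against the snapshot")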
Which is why I think you're making a very strong argument for why attributing "mitigation costs" is a farce: you could easily find a company that would take down its network and incur very high costs unnecessarily. The overreaction is neither the fault of the attacker nor under the attacker's control.
> The sensible first response when you find a vulnerability is take a snapshot of the existing system ... without having to disable the production systems.
Which would involve taking the system down to conduct the snapshot. What gets put back in place will depend on the severity of the breach, perceived threat, sensitivity of data, etc. They had no way of knowing exactly how sophisticated the attack was until the cops finished their investigation - is this some script kiddie or the Chinese military? I'm not going to worry about a foreign intelligence service if I'm serving up web pages for an eCommerce site, but I would if I were working for NASA. Just because you patch the vulnerability in question doesn't mean you've denied the attacker access to your network...
If they suspected additional backdoors had been added during the breach, the affected systems would need to be rebuilt entirely, patched, then have data selectively restored from backup (you don't want to reintroduce any malware that was saved to a backup). What other systems were accessible from the one that was hacked? Are there rootkits sending beacons home on any of them? Is there reason to preemptively take them down and rebuild them? What if one of the affected systems is a mail server/file server/etc.?
No, I don't blame NASA for overreacting. The kid pulled back technical details for a space station. The Russian government would have done the same (and may even have already been in there). NASA took steps that they thought were sensible, and they ate the costs. The kid ended up getting 6 months of house arrest and 2 years probation.
> Which would involve taking the system down to conduct the snapshot.
a) Not necessarily. Modern virtualization systems support live snapshots.
b) Taking the system down for a matter of minutes to make an offline snapshot is still dramatically less expensive than taking the system down for the duration of the investigation.
c) You still need the snapshot as a rollback point in case patching the vulnerability doesn't go well, so the cost is the same whether you need it for an investigation or not.
> I'm not going to worry about a foreign intelligence service if I'm serving up web pages for an eCommerce site, but I would if I were working for NASA.
You're arguing yourself into a corner. If the system is just "web pages for an eCommerce site" then taking the system offline is an overreaction. If it contains some vital national security information then you're here:
> The Russian government would have done the same (and may even have already been in there).
At which point it doesn't matter whether you know an intrusion has occurred, you have to treat it as though it had, because the likely adversary is sophisticated enough to evade cursory detection and the system is important enough to justify the expense of being thorough.
> At which point it doesn't matter whether you know an intrusion has occurred, you have to treat it as though it had...
Which gets you into benefits/trade-offs territory; I think it's probably an overreaction for NASA to take their internet-facing servers and all computers on the connected networks down to investigate for possible intrusions every time a new security patch comes out. I don't think it's unreasonable for them to do so when they have credible evidence that they've been hacked. (Of course, I don't work for NASA, so I have no idea what procedures they have in place.)
The problem is that this situation sometimes (not always) demands that we apportion blame to both the perpetrator and the victim.
The perpetrator is always to blame, and their blame is obviously not mitigated by anything the victim does.
If the victim has a duty to protect the information lost, and the victim is negligent, they have an orthogonal culpability as well.
On message boards, it really seems like people have trouble holding these two thoughts together in their head at the same time. Of course, nerds like us on message boards have a cluster-headache of a persecution complex as well, which is ironic given that we're all making bank making everyone else's jobs disappear.
Oh those poor government agencies with their millions of dollars in funding. Who will defend them from those teenage hackers that are causing them "trillions" in damages™?