
> With Stuxnet as a "blueprint" downloadable from the Internet, he says, "any dumb hacker" can now figure out how to build and sell cyberweapons to any hacktivist or terrorist who wants "to put the lights out" in a US city or "release a toxic gas cloud."

Is this really the case? I'm nothing close to a security specialist, so maybe I'm talking out of my ass. Still, everything I've read on the subject says that Stuxnet was especially impressive in that it exploited not one but several previously unknown OS-level vulnerabilities, on top of the embedded-systems attacks it used. With those zero-days now discovered and hopefully patched, how much more value does Stuxnet offer?

(Neither of these questions is rhetorical; I'm hopeful that one of our resident experts can fill me in.)




To answer that, ask yourself this. If you somehow got access to the root account on the central control computer of a nuclear power plant, could you cause a meltdown? Most likely, the answer is no, because you don't know how to control a nuclear power plant. You can run any command, but you have no idea which command would cause any real-world damage. If you're just doing it for the lulz, you could go for the ever-popular "rm -rf *" and cross your fingers, but if you want to be sure to cause damage, you're going to need some domain-specific knowledge. Not just knowledge about nuclear power plants in general, but also knowledge about how to manipulate the control systems at your specific target nuclear power plant.

In other words, even if you have an easily-exploited attack vector available to you, you still need to know a lot about your target in order to cause damage. Contrast this to guided missiles, which don't really need to know anything about the building that they are blowing up other than its GPS coordinates.

For this reason, I'm skeptical that "any dumb hacker" will ever be capable of causing something like a nuclear meltdown via virus infection.


On modern reactors, it would be hard even for the plant operators to cause a meltdown -- there are just too many passive safety measures in place. You'd probably need to start by walking around the reactor smashing bits of machinery first.

On the other hand, triggering the reactor to automatically shut down would be pretty easy -- and if you shut down all the nuclear reactors in the US for a few weeks, you'll certainly have made a significant impact.


I'm reminded of a fairly recent -- or recently released -- demonstration where researchers (in Idaho, IIRC) programmed a large engine/generator to self-destruct. It was then pointed out that spare inventory of such items is practically non-existent and replacement lead time is several months. I believe that replacement is also increasingly dependent upon China; North America essentially doesn't make the item, or its critical components, any more.

Knock out several of those in critical locations, and you start to grind the U.S. economy to a halt.

You don't need to go nuclear. And destroying the engine was a fairly simple task of pushing it well outside its performance envelope.

EDIT: Looks like pnathan already cited this event -- see the last link in this comment:

http://news.ycombinator.com/item?id=3035909

It's a bit older than I remembered. The article is dated September, 2007.

I note incidentally that one "catastrophic" scenario they describe carries an estimated "cost" of $700 billion, a damages figure that now pales in comparison to what the U.S. economy has been through in the last few years. A bit of a lesson of its own regarding the rhetoric that surrounds the topic.


Part of Langner's point is that if you can insert code into the PLC, which Stuxnet shows you how to do, you don't need any insider information to create a damaging payload. Just stop the PLC from running after a given date, or for thirty seconds every half hour, or whatever. The PLC is there for low-level, real-time control of actuators that direct some physical process, and once the control stops, the process will go on in some unwanted way.
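To make the idea concrete, here is a rough sketch in ordinary Python (not real PLC code such as STL or ladder logic) of the kind of dumb, time-based payload described above; the constants and function names are purely illustrative assumptions, not anything taken from Stuxnet:

    import time
    from datetime import datetime

    KILL_DATE = datetime(2025, 1, 1)   # stop driving outputs entirely after this date
    BLACKOUT_SECONDS = 30              # or suppress control for 30 seconds...
    PERIOD_SECONDS = 30 * 60           # ...out of every half hour

    def run_normal_control_cycle():
        pass  # stand-in for the real-time loop that drives the actuators

    def outputs_enabled(now):
        # Return False whenever the payload wants the control loop suppressed.
        if now >= KILL_DATE:
            return False
        seconds_into_half_hour = (now.minute * 60 + now.second) % PERIOD_SECONDS
        return seconds_into_half_hour >= BLACKOUT_SECONDS

    while True:
        if outputs_enabled(datetime.now()):
            run_normal_control_cycle()
        # else: silently skip the cycle, so the actuators simply stop being updated
        time.sleep(0.1)

The point is that nothing in this logic depends on what the actuators actually do; the attacker only needs the (now public) technique for getting code onto the PLC.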

Of course there are safeguards to prevent catastrophes, but even stopping some part of the automation in an industrial plant could easily cause serious problems such as damaged equipment and downtime for debugging.


You do need insider knowledge to know that the PLC's cessation of function would cause your intended effect. Stuxnet didn't stop the PLCs, it changed their operation. How did they know what to change it to?


"force cooling pump off" would probably do the job


But mechanical systems have safety checks built into them. For example, some years ago I wrote code that helps monitor and control gates on a reversible interstate highway. Theoretically, the main traffic-center software could command all of the gates open, causing head-on collisions. In reality, in addition to many software checks, there were real-world checks and balances involving predictable manual procedures, on-site human intervention, and so on, so much so that a Stuxnet-like attack would be extremely difficult, if not nearly impossible.


Even systems as common as traffic lights work this way. Even if the software were to command "green" in all directions, the electrical circuits are such that it's physically impossible. You'd have to actually rewire the signals to make this happen.


Exactly my point. Although even fail-safe systems can fail: I've personally seen a T-intersection where all the lights were green (which was freaky to see and caused a small traffic jam). But your point still stands, as I'm not sure how the lights got into that configuration; it could have been worker error.

Computer-controlled systems that can have disastrous real-world consequences almost always have built-in checks to avoid these failure states.
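As a toy illustration of that kind of independent check (in a real signal cabinet this role is played by a dedicated hardware conflict monitor, not application software, and the phase names here are purely hypothetical):

    CONFLICTING_PHASES = [("north_south", "east_west")]  # assumed conflicting pairs

    def command_is_safe(requested_greens):
        # Reject any command that would show green to conflicting phases.
        for a, b in CONFLICTING_PHASES:
            if a in requested_greens and b in requested_greens:
                return False
        return True

    # The controller applies a command only if the independent check passes.
    assert command_is_safe({"north_south"})
    assert not command_is_safe({"north_south", "east_west"})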


Just because the OS or control-system vendors (e.g. Siemens) have released patches for the zero-days doesn't mean those patches have been applied. In an environment where none of the machines have internet access, applying OS patches takes non-trivial technician time, and companies or plants are often lazy.


There can be an excruciatingly large monetary cost to patching some SCADA systems.

Imagine you have a plastics plant running the ACME SCADA system. Your plant has molten plastic running through the facility 24/7. It's a lights-out facility, and you're making $1M per day. You schedule six days a year for maintenance, three days every six months, to do a look-see at the pipes. That's roughly one day to spin the plant down, one day to audit the pipes, and one day to spin the plant back up. Each maintenance window costs you $3M in lost profit, plus the cost of auditing and the process cost of spinning down and back up.

Now, your IT guy comes to you and says, "we gotta patch! ACME SCADA's got a hack out against it". Now remember, your ACME system is running the plant. If you power it down without the proper procedure, the pipes freeze with plastic, and your facility needs to be replaced.

What's the risk of you being hacked? You're a plastics facility, making Widgets for economists and their lectures. No one really cares about Widgets. Anyway, you're in the badlands of Boondockia, USA.

Your expected cost of patches must be below the expected cost of being hacked for you to apply the patches.
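A quick back-of-the-envelope version of that comparison (the $1M/day and three-day spin-down figures come from the scenario above; the breach probability and facility replacement cost are made-up assumptions):

    daily_profit = 1_000_000                 # $1M/day, from the scenario above
    patch_downtime_days = 3                  # unscheduled spin-down, patch, spin-up
    cost_to_patch_now = patch_downtime_days * daily_profit

    facility_replacement_cost = 200_000_000  # assumed: cost if the pipes freeze solid
    p_hacked_before_next_window = 0.001      # assumed: chance of being hit in Boondockia
    expected_cost_of_waiting = p_hacked_before_next_window * facility_replacement_cost

    print(f"patch now:        ${cost_to_patch_now:,}")            # $3,000,000
    print(f"wait and risk it: ${expected_cost_of_waiting:,.0f}")  # $200,000

With those (made-up) numbers, waiting for the scheduled maintenance window is the economically "rational" choice, which is exactly why patches sit unapplied.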

---

Those are the sorts of requirements SCADA owners have to deal with. It's not simply a question of laziness.


If maintenance is being done every six months, it seems (from an admittedly naive perspective) that there is no reason zero-days should live in the wild for more than about six months. Fine, don't bring the system down just for IT patches, but once you're bringing it down anyway, update all the systems while you're at it.


It depends on the level of control the electronic control system has over the system as a whole.

Electric power grid security is an area of national concern in the US. I read a report to Congress (publicly available) a few years back that suggested the power grid was being hacked in quite a few ways. Googling "electric power grid security" returns a plethora of results, all of them reporting problems.

Here are a few reports. I haven't evaluated them for reliability or accuracy.

A 2009 report that kicked off a lot of talk http://online.wsj.com/article/SB123914805204099085.html

This one is old http://www.wired.com/science/discoveries/news/1998/06/12746

This one is 'new', as of Jan '11. http://www.gao.gov/new.items/d11117.pdf

Here's a blog on it: http://smartgridsecurity.blogspot.com/

Lockheed sez they are going to work on it. http://www.bloomberg.com/news/2011-06-30/lockheed-promises-e...

In 2010, we got some national guidelines. http://www.nist.gov/public_affairs/releases/nist-finalizes-i...

A video of some congressional testimony: http://www.youtube.com/watch?v=JIPQRKAmCWo

China is frequently cited in this business http://www.uscc.gov/researchpapers/2009/NorthropGrumman_PRC_...

And McAfee has a report on China going after energy companies. http://www.mcafee.com/us/resources/white-papers/wp-global-en...

Explode a generator! (this can be mitigated) http://articles.cnn.com/2007-09-26/us/power.at.risk_1_genera...



