From the man who discovered Stuxnet, dire warnings one year later (csmonitor.com)
68 points by tokenadult on Sept 25, 2011 | hide | past | favorite | 23 comments



SCADA (Supervisory Control and Data Acquisition, the usual term for industrial control) systems were, at least until 9/11/01, not typically designed with security in mind.

Symptoms of the problem include the ability to DoS a SCADA network simply by flooding it with packets, and the lack of authentication and encryption in protocols such as IEC 61850 (an increasingly popular SCADA standard).
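To make the "no authentication" point concrete, here is a sketch of building a Modbus/TCP "Write Single Register" request by hand (Modbus is mentioned further down as a protocol worth studying). Nothing in the frame identifies or authenticates the sender; the specific register address and value are invented for illustration.

```python
import struct

def modbus_write_single_register(transaction_id: int, unit_id: int,
                                 address: int, value: int) -> bytes:
    """Build a Modbus/TCP 'Write Single Register' (function 0x06) frame.

    Note: the protocol carries no credentials or signature -- any host
    that can reach the device's TCP port 502 can emit a valid write.
    """
    pdu = struct.pack(">BHH", 0x06, address, value)
    # MBAP header: transaction id, protocol id (always 0), length, unit id
    mbap = struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, unit_id)
    return mbap + pdu

frame = modbus_write_single_register(1, 1, address=0x000A, value=100)
print(frame.hex())  # a 12-byte frame; nothing in it says who sent it
```

All the "security" a plain Modbus device has is whatever network segmentation sits in front of it.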

There are two halves to the problem of hacking a SCADA system.

First, you must be able to exploit the software. E.g., Siemens Step 7. That is standard IT hacking. Not a "problem".

Second, you must be able to exploit the installation. Let me explain a bit more.

In software terms, what you are given to work with is a list of hex values denoting inputs, and then a list of hex values denoting outputs.

So - without knowledge - what you will see is conceptually like this:

    READ 0x1
    READ 0x2
    IF 0x1 + 0x2 > 314159 THEN
      WRITE 0xA, 100
    ENDIF

What do those numbers mean? There's no context until you know what those read/write registers are plugged into. And those could be different for each installation.

The second part isn't always brought out in the Stuxnet discussions. Part of the effort to understand Stuxnet was decoding how the registers mapped to the installation.
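The register-map point can be made runnable. Below, Python stands in for PLC logic: the control scan is identical everywhere, but what "WRITE 0xA, 100" physically does depends entirely on an installation-specific map. Both maps here are invented for illustration.

```python
def scan_cycle(registers: dict) -> dict:
    """One scan of the pseudocode logic above, over a register table."""
    a = registers[0x1]          # READ 0x1
    b = registers[0x2]          # READ 0x2
    if a + b > 314159:          # IF 0x1 + 0x2 > 314159 THEN
        registers[0xA] = 100    #   WRITE 0xA, 100
    return registers

# Two hypothetical installations running the exact same logic:
plant_a_map = {0x1: "inlet flow", 0x2: "outlet flow", 0xA: "relief valve %"}
plant_b_map = {0x1: "spindle rpm", 0x2: "feed rate",  0xA: "coolant pump %"}

regs = scan_cycle({0x1: 200_000, 0x2: 150_000, 0xA: 0})
print(regs[0xA])  # 100 -- but *what* was just driven to 100 depends on the map
```

Without the map, an attacker (or an analyst) sees only hex addresses; with it, register 0xA becomes a valve or a pump. That mapping is what had to be reverse-engineered for Stuxnet.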

For the interested reader, I refer you to the Symantec white paper. It is of quite high quality and good technical detail. The SCADASEC mailing list contains useful discussion by people involved in the industry, and they really bring out the differences between SCADA security and IT security. And for the really interested reader, I recommend reading up on PLC programming and digging up protocol standards for MODBUS and DNP3.


The article might tout the idea of cyberweapons slightly too much, but I think Stuxnet indeed qualifies as one.

I'm somewhat worried about these things. The problem I see is that we are becoming ever more leveraged on, and dependent on, technology. And these technologies are increasingly interdependent. A successful attack on one technology can potentially bring entire systems down in unanticipated ways.

The recent power outage in San Diego and nearby areas serves as a good reminder. You don't actively think about power; it is something you take for granted. Only when the power is lost do you realize how dependent everything is on it: traffic lights stopped, ATMs didn't work, credit and debit cards didn't work, freezers and fridges stopped, and so forth. From modern times to the dark ages in an eye-blink, instant paralysis.

I don't think nuclear power plants as targets are that interesting. Just turning off the traffic light system would be enough to bring an entire US urban area to its knees.

New networks of complex dependencies are being created all the time. The smartphone boom is going to create one, and people will start relying on its existence. If iPhone and Android keep dominating the market, they will create a more homogeneous mass of devices, providing a more consistent attack surface and more potential for widespread damage. I don't see how smartphones could avoid the same problems PCs were/are experiencing. Waiting for the first smartphone "UNIX worm".

Wireless features are getting added to cars. Yet another potential complex network. War-driving could soon take on completely new meanings.


Regardless of whether copies of Stuxnet or variants of it can be used successfully in the future by "any dumb hacker", Pandora's box has been opened: the code and concepts are out there, and it only takes one hacker, competent or not. The threat is real, even if some believe the author might be "drumming up biz" or overdramatizing cyberweapons and cyberwarfare. What gets me is the following. Regardless of Stuxnet, in this article or any other past article that has discussed network security, what rings true here and will continue for the foreseeable future is this: "Most engineers are aware of the problem, it's just that they don't get the budget to fix the problem. The risk is just discounted. As long as management doesn't see an immediate threat, there is a tendency to ignore it because it costs money to fix." So even if there were a holy grail of a security measure, I fear it will take a real cyberterrorist attack of some sort to really get it implemented. There is a measure of complacency in government and business, regardless of whether copies of Stuxnet are out there or not.


Some Siemens PLCs have a default superuser password hard-coded into the firmware. Siemens hasn't released an update to remove it.

Control systems should be on separate networks, but even those networks are susceptible to wandering USB keys and contractor laptops.


A better title would have been "Computer consultant will say anything to stay relevant and drum up business".


I remember reading about how Stuxnet works a while ago, and it didn't sound like the kind of thing you could 'drag and drop'. Whoever made it had to know specific things about the hardware being used in Iran, the network there, and the machine code for the motors that were affected. Then they had to hide the whole thing in a virus in such a way that it took a while for the experts to figure out what was actually happening and where.

The idea could be used, sure: find some important place that you want to damage in some way. Find out if it has computers that hook into some kind of specialized hardware. Work out how that hardware can be damaged via those computers. Find out how those computers are vulnerable. Write an overly complicated virus that hides what it's really doing, set it loose, and hope it makes it all the way to those specific computers and delivers its payload. ...it kind of sounds like a single-use case, really.

More likely, Stuxnet is just encouraging people (governments, whatever) to consider attacks that target systems not directly run by computers. The motors being damaged in Iran, if I understood correctly, weren't being run by computers, just programmed by them. (I'd be surprised to learn that no one had thought of that before.)


Tsk tsk. These people think that because they uncovered an idea (which wasn't even theirs), they also uncovered a whole new range of possibilities for humanity. It's not even research material; it's ideas and concepts. And they're not concepts about physics, just pure 1+1 logic: no magic, no extremely hard-to-think-of ideas. In fact, having read about Stuxnet, I just wondered, "wait, hasn't this been done for ages anyway?" And most knowledgeable people probably thought that too.


Attacking industrial systems is not something that 'just any' hacker can do. Not necessarily because of the skill involved, but because getting access to such a system, or to the software that runs on it, may not be trivial (unlike, say, getting a pirated -- or legit -- copy of Windows).

The gist of this guy's argument seems to be that industrial systems aren't being patched fast enough to plug the holes that Stuxnet used. Until these systems are all fully patched, anyone looking at Stuxnet can exploit those holes without needing access to the software these systems are running (to find new exploits).


> With Stuxnet as a "blueprint" downloadable from the Internet, he says, "any dumb hacker" can now figure out how to build and sell cyberweapons to any hacktivist or terrorist who wants "to put the lights out" in a US city or "release a toxic gas cloud."

Is this really the case? I'm nothing close to a security specialist, so maybe I'm talking out of my ass. Still, everything I read on the subject said that Stuxnet was especially impressive in that it exploited not one but several previously unknown OS-level vulnerabilities, on top of the embedded-systems attacks it used. With those zero-days now discovered and hopefully patched, how much more value does Stuxnet offer?

(Neither of these questions is rhetorical — hopeful that one of our resident experts can fill me in.)


To answer that, ask yourself this. If you somehow got access to the root account on the central control computer of a nuclear power plant, could you cause a meltdown? Most likely, the answer is no, because you don't know how to control a nuclear power plant. You can run any command, but you have no idea which command would cause any real-world damage. If you're just doing it for the lulz, you could go for the ever-popular "rm -rf *" and cross your fingers, but if you want to be sure to cause damage, you're going to need some domain-specific knowledge. Not just knowledge about nuclear power plants in general, but also knowledge about how to manipulate the control systems at your specific target nuclear power plant.

In other words, even if you have an easily-exploited attack vector available to you, you still need to know a lot about your target in order to cause damage. Contrast this to guided missiles, which don't really need to know anything about the building that they are blowing up other than its GPS coordinates.

For this reason, I'm skeptical that "any dumb hacker" will ever be capable of causing something like a nuclear meltdown via virus infection.


On modern reactors, it would be hard even for the plant operators to cause a meltdown -- there are just too many passive measures in place. You'd probably need to start by walking around the reactor smashing bits of machinery first.

On the other hand, triggering the reactor to automatically shut down would be pretty easy -- and if you shut down all the nuclear reactors in the US for a few weeks, you'll certainly have made a significant impact.


I'm reminded of a fairly recent -- or recently released -- demonstration where researchers (in Idaho, IIRC) programmed a large engine/generator to self-destruct. It was then pointed out that latent inventory on such items is practically non-existent and replacement time is several months. I believe that replacement is also increasingly dependent upon China; North America essentially doesn't make the item, or critical components, any more.

Knock out several of those in critical locations, and you start to grind the U.S. economy to a halt.

You don't need to go nuclear. And destroying the engine was a fairly simple task of pushing it well outside its performance envelope.

EDIT: Looks like pnathan already cited this event -- see the last link in this comment:

http://news.ycombinator.com/item?id=3035909

It's a bit older than I remembered. The article is dated September, 2007.

I note incidentally that for one "catastrophic" scenario they describe, with an estimated "cost" of $700 billion, the damages figure now pales in comparison to what the U.S. economy has been through in the last few years. A bit of a lesson of its own regarding the rhetoric that surrounds the actual topic.


Part of Langner's point is that if you can insert code into the PLC, which Stuxnet shows you how to do, you don't need any insider information to create a damaging payload. Just stop the PLC from running after a given date, or for thirty seconds every half hour, or whatever. The PLC is there for low-level, real-time control of actuators that direct some physical process, and once the control stops, the process will go on in some unwanted way.

Of course there are safeguards to prevent catastrophes, but even stopping some part of the automation in an industrial plant could easily cause serious problems such as damaged equipment and downtime for debugging.
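Langner's "no insider information needed" payload can be sketched in a few lines. Nothing here touches a real PLC API; it just shows that a disruptive trigger needs no knowledge of what the registers mean, only a clock. The 30-seconds-every-half-hour schedule comes straight from the comment above.

```python
# Purely illustrative: a trigger condition for halting whatever the PLC
# controls for 30 seconds out of every 30-minute window, keyed only off
# the wall clock. No knowledge of the controlled process is required.
def should_halt(epoch_seconds: int) -> bool:
    """True for the first 30 seconds of every 30-minute window."""
    return epoch_seconds % 1800 < 30

print(should_halt(1805))  # True: 5 seconds into the second window
```

The point is exactly the one made above: interrupting a continuous physical process is damaging even when the attacker has no idea what the process is.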


You do need insider knowledge to know that the PLC's cessation of function would cause your intended effect. Stuxnet didn't stop the PLCs; it changed their operation. How did they know what to change it to..?


"force cooling pump off" would probably do the job


But mechanical systems have safety checks built into them. For example, some years ago I wrote some code that helps to monitor and control gates on a reversible interstate highway. Theoretically, the main traffic center software could command all of the gates open, causing head-on collisions. In reality, in addition to many software checks, there were real-world checks and balances involving predictable manual procedures, on-site human intervention, etc. so much so that a Stuxnet-like attack would be extremely difficult, if not near impossible.


Even systems as common as traffic lights work this way. Even if the software were to command "green" in all directions, the electrical circuits are such that it's physically impossible. You'd have to actually rewire the signals to make this happen.
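The conflict-monitor idea above can be modeled in a few lines. This is a toy, not how real signal cabinets are wired (real conflict monitors typically force the whole intersection into flashing red); the approach names and the single conflict pair are invented for illustration.

```python
# Toy model of a hardwired interlock: even if the controller software
# requests green in all directions, conflicting approaches are denied.
CONFLICTS = {("NS", "EW")}  # approach pairs that must never both be green

def apply_interlock(requested: dict) -> dict:
    granted = dict(requested)
    for a, b in CONFLICTS:
        if granted.get(a) == "green" and granted.get(b) == "green":
            granted[a] = granted[b] = "red"  # fail safe, not fail permissive
    return granted

print(apply_interlock({"NS": "green", "EW": "green"}))
# both approaches forced to red, whatever the software commanded
```

The design choice to note: the interlock sits below the software, so compromising the controller's code doesn't buy you the all-green failure state.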


Exactly my point. Although even fail-safe systems can fail, as I've personally seen a T-intersection where all the lights were green (which was freaky to see and caused a little traffic jam). But your point still stands, as I'm not sure how the lights got in that configuration; it could have been worker error.

Computer-controlled systems that can have disastrous real-world consequences almost always have built-in checks to avoid these failure states.


Just because the OS or system (e.g. Siemens) vendors have released patches for the zero-days doesn't mean they have been applied. In an environment where none of the machines have internet access, applying OS patches takes non-trivial technician time, and companies or plants are often lazy.


There can be an excruciatingly large monetary cost to patching some SCADA systems.

Imagine you have a plastics plant running with ACME SCADA system. Your plastics plant has molten plastic running through the facility 24/7. It's actually a lights-out facility, and you're making 1M per day. You schedule six days a year for maintenance, three days every six months, to do a look-see at the pipes. This takes about one day to spin the plant down, one day to audit the pipes, and one day to spin the plant up. This whole process costs you 3M in lost profit, plus the cost of auditing and the process cost of spinning up/down.

Now, your IT guy comes to you and says, "we gotta patch! ACME SCADA's got a hack out against it". Now remember, your ACME system is running the plant. If you power it down without the proper procedure, the pipes freeze with plastic, and your facility needs to be replaced.

What's the risk of you being hacked? You're a plastics facility, making Widgets for economists and their lectures. No one really cares about Widgets. Anyway, you're in the badlands of Boondockia, USA.

Your expected cost of patches must be below the expected cost of being hacked for you to apply the patches.

---

That's the sort of requirements which SCADA owners have to deal with. It's not simply a question of laziness.
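The tradeoff above is just an expected-value calculation. The figures below reuse the hypothetical numbers from the comment (1M/day profit, a three-day spin-down/patch/spin-up window); the breach probability and breach cost are invented to show how the arithmetic comes out against patching.

```python
# Back-of-the-envelope: patch cost vs expected loss from not patching.
daily_profit = 1_000_000
patch_downtime_days = 3                  # spin down, patch, spin up
cost_to_patch = patch_downtime_days * daily_profit

p_hack = 0.001                           # perceived chance of being hit (invented)
cost_if_hacked = 500_000_000             # pipes freeze, plant replaced (invented)
expected_loss_unpatched = p_hack * cost_if_hacked

print(cost_to_patch, expected_loss_unpatched)
# 3,000,000 vs 500,000: at these numbers, *not* patching "wins" --
# which is exactly the incentive problem described above.
```

Management isn't being irrational given its own inputs; the argument is over the inputs (especially the perceived probability of being hit).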


If maintenance is being done every six months, it seems (from an admittedly naive perspective) that there is no reason a zero-day should live in the wild for more than about 180 days. Fine, don't bring the system down just for IT patches, but once you're bringing it down anyway, update all the systems while you're at it.


It depends on the level of control the electronic control system has over the system as a whole.

Electric power grid security is an area of national concern in the US. I read a report to Congress (publicly available) a few years back that suggested the power grid was being hacked in quite a few ways. Googling "electric power grid security" returns a plethora of results, all of them reporting problems.

Here are a few reports. I haven't evaluated them for reliability and accuracy.

A 2009 report that kicked off a lot of talk http://online.wsj.com/article/SB123914805204099085.html

This one is old http://www.wired.com/science/discoveries/news/1998/06/12746

This one is 'new', as of Jan '11. http://www.gao.gov/new.items/d11117.pdf

Here's a blog on it: http://smartgridsecurity.blogspot.com/

Lockheed sez they are going to work on it. http://www.bloomberg.com/news/2011-06-30/lockheed-promises-e...

In 2010, we got some national guidelines. http://www.nist.gov/public_affairs/releases/nist-finalizes-i...

A video of some congressional testimony: http://www.youtube.com/watch?v=JIPQRKAmCWo

China is frequently cited in this business http://www.uscc.gov/researchpapers/2009/NorthropGrumman_PRC_...

And McAfee has a report on China going after energy companies. http://www.mcafee.com/us/resources/white-papers/wp-global-en...

Explode a generator! (this can be mitigated) http://articles.cnn.com/2007-09-26/us/power.at.risk_1_genera...


Security professionals habitually grossly underestimate the negative externalities they wish to impose on others.



