A Cyberattack 'the World Isn’t Ready For' (nytimes.com)
400 points by darod on June 22, 2017 | 241 comments




> Finally, before they left, they encrypted her computer with ransomware, demanding $130 to unlock it, to cover up the more invasive attack on her computer.

Anyone else having a hard time swallowing this? They broke in and were totally undetected when they planted a... rootkit (unclear from the article). While undetected, they silently stole user credentials... Then, to cover it all up, they ransomware'd the computer?

It's like breaking into a house and putting a bug in the wall, and then to cover the tracks you smash in the front door and leave the water running in the sink.

If the attacker was completely undetected, why intentionally jeopardize that?


No, it makes perfect sense. If you want to land an advanced persistent threat but your entry is detectable, a distraction is ALWAYS a great psychological tool. The very best and long lasting victories are made when you convince the loser that they've won.

So the premise is that they clear the ransomware and think it's over. But it's not.


And the standard way to clear the ransomware is to re-image the machine and restore from backups. So the infection has to be hidden below the operating system level: BIOS, SSD/HDD firmware, etc.

Unless of course you don't provision machines, or keep backups. In which case hiding on a machine being "cleaned" would be simples.


If you capture user credentials it doesn't really matter because you can just come back a month later. Especially if they had domain admin access and created a golden ticket. Also if you infect a user as opposed to a machine your infection will be back the moment the user logs in.


Would it not also be standard procedure to update credentials after being compromised? How long would those credentials even typically be valid for?


Ahh well, if you only compromised passwords then it depends on the domain policy. Often something like 90 or 180 days, but sometimes only 1 month. But the ticket is different: an attacker can sign and create their own ticket and determine how long it is valid. Obviously an attacker is going for the maximum time, which I think is indefinite for golden tickets (can't quite recall now). Also, it's hard to remove the trust in such a ticket once it's been created; it used to be so bad that the entire Forest (with all domains) had to be rebuilt, but afaik it's easier (but not trivial) now.
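For illustration of the password side (the raw value below is invented): AD stores maxPwdAge on the domain object as a negative count of 100-nanosecond intervals, so converting what you pull off LDAP into days is just arithmetic. A minimal Python sketch:

    # maxPwdAge as pulled off the domain object via LDAP; this value
    # is hypothetical and corresponds to a 90-day policy.
    max_pwd_age = -77_760_000_000_000

    # 10,000,000 hundred-nanosecond ticks per second, 86,400 seconds per day
    days = abs(max_pwd_age) / 10_000_000 / 86_400
    print(f"Domain password expiry: {days:.0f} days")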


What does golden ticket mean here? Is it some kind of Microsoft credential? You should be able to detect any kind of leftover account; a backdoor would seem to be better.

And on this whole area, I would not encrypt someone's machine - unless you are trying to scare someone, it would be better for them to have never known.


"The very best and long lasting victories are made when you convince the loser that they've won"

saved!


"The very best and long lasting victories are made when you convince the loser that they've won." -- that's catchy, but I don't agree. Often the result will be "damn that was close, better enhance the defences". Whereas the ignorant just carry on.


yeah except that they were found out because of this "distraction," so it seems to have been a mistake


Why is it hard to swallow? Really it's a classic magician's trick - misdirection.

Think of it this way: if you are responsible for security, you never assume that you're fully protected. If you're a smart hacker, it should be the same thing: you never assume that you won't be detected.


> It's like breaking into a house and putting a bug in the wall, and then to cover the tracks you smash in the front door and leave the water running in the sink.

I think it's more akin to breaking into a house, putting a bug in the wall, and then stealing some cash in case the homeowner had a silent alarm or nosy neighbors or some other way you could be caught.


It's really like breaking in to steal their passport and spare keys out of the safe, but deciding you're going to make off with some cash, a few TVs, and smash up the place so it's harder to figure out what you stole and find good forensics.

If your shoe prints only led to the safe and back out, it would be obvious what you stole; if they're all over the house and the whole place is a wreck, then it's harder to tell what you were (actually) there for.


It's really just used as a distraction. But I like your Home Alone reference. However, the Wet Bandits weren't very good at their job.


The wet bandits had a great scam. They just didn't account for an improbably skilled child to be alone in a house that should be empty.


You're right! They did have a good scam going.


It's a classic distraction tactic. You create some big, visible threat/situation to focus their attention on, while your real threat happens unnoticed because they are too busy dealing with the distraction.


It's like breaking in to plant a bug in the wall, but you had to break a window to get in. They'd notice the broken window, so you steal the TV so they won't be suspicious of a break-in without an apparent motive.


So we are investing dollars in the NSA so they can make tools that bring us damage? I think there should be a limit on how long they can keep their "secrets". Snowden already showed that information can get out, so any information will be leaked at some point. Society has much more to gain when software defects are repaired instead of used for hacking.

I think that all these NSA problems show bad management. They should be reorganized, or maybe even abandoned. They cost more than they deliver, and have even cost us our privacy. Probably they are still breaking US and international laws on that. Breaking up the NSA could allow the FBI and (open) security companies to take over cybersecurity.

I suspect that we will soon have leaks of CIA tools too. But thanks to WikiLeaks, companies can prepare for these future problems.

We can go deeper into who got these tools and who is using them. Some may even argue that the CIA leaked the NSA tools to weaken the NSA. Or worse, that some in the NSA want to create cyber chaos to push for more control over the internet in the future.

The article mentions the popular political scapegoats, and as usual this is just speculation. To solve the NSA's problem we have to demand very concrete evidence; otherwise we are just being played with.

Article: > The Shadow Brokers resurfaced last month, promising a fresh load of N.S.A. attack tools, even offering to supply them for monthly paying subscribers — like a wine-of-the-month club for cyberweapon enthusiasts.

This shows how we are being played with. The NSA could already have published the security details of all leaked tools, so we could all have protected our computer systems. We could have prevented Wannacry.


>This shows how we are being played with. The NSA could already have published the security details of all leaked tools, so we could all have protected our computer systems. We could have prevented Wannacry.

NSA did exactly what you said and went to MS months before wcry hit, once it was clear what shadowbrokers had, in order to patch the vulnerability that wcry exploited: https://technet.microsoft.com/en-us/library/security/ms17-01...

unfortunately, not everyone keeps their systems totally up to date for various reasons.
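If you just want a rough check on a single box, a sketch (KB4012212 is the March 2017 security-only update for Windows 7; the KB number differs per OS version and servicing model, so treat it as an example):

    import subprocess

    # List installed hotfixes and look for the MS17-010 fix.
    out = subprocess.run(
        ["wmic", "qfe", "get", "HotFixID"],
        capture_output=True, text=True, check=True,
    ).stdout

    print("patched" if "KB4012212" in out else "possibly missing MS17-010")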


> unfortunately, not everyone keeps their systems totally up to date for various reasons

Because patching requires a quick risk calculation. Should I patch to the bleeding edge and get the latest security fixes but risk a regression bug, or do I wait a bit so I can run a full regression test?

On my machines, sure I like to stay on the latest and greatest. But I'm sure there are plenty of companies that got bitten because some critical software they rely on didn't play well with the latest OS upgrade. Blame game notwithstanding, it comes down to a business disruption risk.

Of course, the right answer is to test the patches as they come out in a non-production environment, and go from there based on results. But I can see where some companies wouldn't have the resources devoted to do that on a frequent basis, which is unfortunate.


Similar to this, I didn't patch my OS X laptop for something like three months because the darn update was a firmware update. It said something along the lines of "Make sure your laptop doesn't lose power, and make sure you have a full backup before starting." Uh-huh. I do have a full backup, but clicking that "Later" button was far more valuable than dealing with the 0.001% chance of disaster.

And I'm a security guy. If I don't care enough to update every day, what do other people do? (There's not much reason to keep a personal laptop up to date when you're not interesting enough to be targeted and not in the habit of running random internet programs. Or rather, a few months of lag is fine, as long as you're paying attention to what the updates are for.)

I think we just don't like to acknowledge the fact that updates sort of suck, yet we bash people and shame them for not doing them. I mean, we don't have many other tools to force them to update, but then surprise, surprise when it turns out people are just concealing the fact that they don't update at all. Even in corporate settings.


Platforms should maybe get better at differentiating security patches from other types of updates, to help make the choice easier. But, I suppose, at a certain point any sufficiently old setup is simply no longer supported because it becomes too difficult to align the security patches with old versions.


This. It still amazes me that MS hasn't done a better job of this.

I'd be interested to hear a heavyweight enterprise sysadmin's take on this, but my experience and read is that it's "toss it at the wall and see what sticks".

Which is crazy when MS knows all of the following things: (a) which files the patch changes, (b) how it changes those files, (c) which programs a customer uses, & (d) which MS files get loaded by which customer programs.

In days where we can do ISA-to-ISA conversion on the fly, you'd think it wouldn't be rocket science to be able to say "Warning! This patch may affect operation of commonly used programs X, Y, and Z". Or at least have an admin tool to provide that information.
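A rough sketch of the kind of warning that admin tool could give, using psutil to see which running programs currently have a patched file mapped into memory (the file names stand in for a hypothetical patch manifest; you need admin rights to inspect other users' processes, and kernel-mode files won't show up this way):

    import psutil

    PATCHED = {"ole32.dll", "msxml6.dll"}  # hypothetical "files this patch changes"

    for proc in psutil.process_iter(["pid", "name"]):
        try:
            # Base names of every file mapped into the process
            mapped = {m.path.replace("/", "\\").rsplit("\\", 1)[-1].lower()
                      for m in proc.memory_maps()}
        except (psutil.AccessDenied, psutil.NoSuchProcess):
            continue
        hits = mapped & PATCHED
        if hits:
            print(proc.info["pid"], proc.info["name"], sorted(hits))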


Microsoft does a pretty good job at this, but some patches disrupt certain configurations. The latest round, for example, impacts printing for many people on Windows 7.

This is part of what I do. We do a risk assessment -- SMB and Kerberos get patched no matter what, everything else depends on a test cycle and may be deferred for up to 6 mo.


>Microsoft does a pretty good job at this, but

What? With the large monolithic patches that Microsoft has moved to, they have got worse, much worse, recently. It does make it easier to patch everything at once quickly, but if one thing goes wrong, you have to back it off and lose all protections.


Unless you were doing external vulnerability assessment, that granularity was a false sense of security. Rolling back sometimes re-introduced old bugs.


Granularity is impractical when applying patches is optional, as it drastically increases the number of applied patch combinations to QA.

But my point was more towards MS programmatically alerting customers as to what programs and subsystems patches might affect.

As far as I've seen, they give you a file list, some brief notes on what the patch is for, and assurance that they internally QA'd it.

But I can't see why there's any technical reason that my system can't warn me that an MS library called into frequently by a particular program I use every day is modified in this patch.

Which is something I care about almost as much as "MS QA passed this patch" (side note: thanks so much to all the unloved, unknown internal QA folks out there, keeping things from breaking!).


I mean... they're trying pretty damn hard with Win10.

The LTSB version is designed to essentially stay the same with regards to applications/APIs/etc. while still offering up security updates as fast as they have them.

http://windowsitpro.com/windows-10/understanding-long-term-s...


That sounds like a problem perfect for a startup to tackle.


Also the security patch channel has been incredibly abused to deliver non-patches.


Yes, the final analysis showed that Windows 7 systems were the most affected by WannaCry. And they were targeted for more than a year by Microsoft with the multiple "upgrade now to 10" ads delivered through the security update channel. And I personally had computers which didn't work under Windows 10 but worked correctly under Windows 7, so I really understand those who disabled auto-updates.

Not to mention that afterwards there were also Windows 7 patches that kept one CPU core 100% busy for days!


Windows 10 has tons of security patches in it. Yes, it also has some UI changes that are less than great, but, seriously, stop using Windows XP already.


Windows XP is a red herring. No one uses it anymore, and WannaCry didn't even successfully spread from that version of Windows.

Windows 7 is the new holdout, and people aren't eager to swallow 10.


Except in countries that used XP extensively like China. Non-Microsoft parties had to create and release patches that protected against wannacry.


Why? The only reason XP came up was because Microsoft went out of their way to patch it themselves.


Because XP patches only worked on valid licenses. A ton of XP in China is pirated. So Qihoo 360 created custom patches for all those pirated versions of Windows. Weird Alice in Wonderland situation.

https://www.engadget.com/2017/05/15/pirated-windows-china-ru...


So you are just going to ignore the fact that Microsoft abused the security patch channel to automatically "upgrade" people to windows 10?


OS updates should come through the normal security patch channel! That's how OS X does it, and that's how Chrome OS does it.


OS updates should come through the normal security patch channel!

Except when you are "updated" from a full, paid version to a spyware/adware-ridden version.

Seriously: I think Windows 10 is great, technically and usability wise.

But MS need to learn that they can't have it both ways:

Paid xor ads. (I can think of two exceptions: inside the store app and inside settings for OneDrive.)


Still, Microsoft misused the channel. They made it intrusive, misleading and impossible to control more than once.


Well, it's not how most Linux distros do it. If your update has breaking changes, it's not a security update.

There is no good reason to excuse Microsoft for maliciously disguising updates as security patches in order to manipulate the non tech-savvy into switching to their new unprecedentedly invasive OS. Especially when, as you said, the UI is worse.

The entire UX of Windows 10 is worse; every time I have to use it for something I get physical anxiety from claustrophobia.

I really don't like that I have to install untrusted 3rd party software on my computer in order to prevent my operating system from automatically ruining my user experience and spying on me.


Microsoft deserves ALL the blame for people still running XP. They broke so many hardware drivers with the XP->Vista change that people basically got stuck forever.

Then, not having learned their lesson, they pulled similar crap with the Windows 7->Windows 8 transition which pissed people off so badly that they refused to go to Windows 10 and are currently suing Microsoft for attempting to shove it down people's throats.

Insecure old versions of Windows are Microsoft's own damn fault.


I recall the driver change was because under XP, drivers ran with kernel privileges.

Moving them out of the kernel is a great engineering decision; infinite backwards compatibility isn't feasible. Sometimes breaking changes are needed.


Drivers still run inside the kernel. What was changed in Vista is that the kernel tries to detect when kernel data structures that should be immutable are changed, which some drivers do.

The idea of non-privileged drivers is neat, but in general it is not worthwhile, because the driver has to somehow access the hardware, which for a significant number of device/platform combinations leads to access to arbitrary memory locations.

Edit: a perfect example is GPU drivers, which have long been composed of a small privileged kernel driver with all the complex logic in userspace. In many cases the interface between these two components could be abused to get code execution in kernel context (in 2k/XP times there was even an RCE in kernel context triggered by displaying a properly crafted image in IE).


As I recall, the issue they meant to address wasn't security but stability. Apparently a majority of BSODs were caused by faults in the driver taking down the kernel with it.

This means you aren't preventing drivers from having full access. You just need to prevent more unintended side-effects.


And you can do things like give people specific permission to access the kernel when doing the driver install. ie. "This driver does not adhere to Vista driver standards. Do you wish to install as an XP driver?"

Suddenly, people can run that business critical, single, old driver as unsafe while running the other drivers safely.

Alas, some manager at Microsoft decided it was more important to get his numbers up this quarter so he could get his bonus. In so doing, Microsoft orphaned a bunch of people on XP just like they orphaned a bunch of people on VB6.

Microsoft made its own bed; now it has to lie in it.


You're giving people too much credit. A lot of people, and companies, totally ignore their software. There's no risk calculation; they're not thinking about it at all.


Most of the time a security patch doesn't break software. There are exceptions and disasters have of course happened but you're kind of conflating general updates, upgrades and security patching. You can even canary test patches against percentages of your userbase since it's really simple with a Windows environment. Patching windows itself is probably the easiest thing to patch in an enterprise environment. The real reason these companies/organizations don't patch Windows is because they don't prioritize it and/or they're lazy.
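Canary rings don't need much machinery either; a sketch of deterministic bucketing (the helper name and percentage are made up):

    import hashlib

    def patch_ring(machine_id: str, canary_pct: int = 5) -> str:
        # Hash a stable machine ID into [0, 100) so assignment is
        # deterministic across runs, then patch the low buckets first.
        bucket = int(hashlib.sha256(machine_id.encode()).hexdigest(), 16) % 100
        return "canary" if bucket < canary_pct else "broad"

    print(patch_ring("DESKTOP-7F3A2B"))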


You speak in certainties about vagaries.

There are plenty of companies where the exceptions to "most of the time it doesn't break" are a big problem. Possibly bigger than the cost of a security issue (I don't know, because this is both hypothetical and I'm an outsider to the issues at play).

To not acknowledge that others would value a different part of the risk curve lacks perspective.


I mean, in my line of work Windows Update suddenly running and rebooting means fire risks. I wish I could connect the computer to the network so I could monitor equipment remotely, but it's too much risk. I hope utilities take the same measures. Certainly some work computers are vulnerable to all manner of viruses from not having gotten updates in years.


> I mean, in my line of work Windows Update suddenly running and rebooting means fire risks.

With all due respect, if that's the case you should not be running Windows.


I agree in spirit, but there's always a balance. And to clarify, I meant "risk" in the "failure analysis" sense. I didn't intend to imply that such risks should go unmanaged. Disconnecting from the internet is part of that risk management, but of course it is multi-layered.

I can't have an Emerson control system for a small reactor getting reconfigured every other week, and LabView on an un-networked Windows computer is perfectly fine.

I would not use a PC (with any OS) to control a 10 kg reactor though, at least not directly. I think it'd be okay to use a PC to coordinate discrete controllers as long as they couldn't change state without a command (i.e. latching valves and the like), and as long as there was a backup safety that didn't have a computer in the loop.

Safeties that do things like shut off furnaces if temperature sensors break, or valves that shut off flow if it becomes too high or if a flame is detected, are common - analogous to a fuse on a circuit board. You sure hope not to use them, but they'll suffice for unexpected situations.

But there's definitely a risk that has to be managed, and connecting infrastructure and industrial equipment to the internet is not managing it very well!


I believe it was compounded in this case by the fact that Microsoft did not release the patch to all users. That OS was no longer supported so they only made the patch available to customers who pay for long term support.


What OS?

All OSes vulnerable to wannacry are still under support.

Contrary to popular belief, wannacry will not spread to windows XP, it will just crash it. It will run on it if you fire the executable manually, but it will not infect an XP host through SMB.


Oh? I had heard that NHS didn't have access to the patch but I'd be happy to be proven wrong. I was under the impression that XP was the main vulnerable OS under WannaCry


But the point is that if Microsoft is not getting paid, they are, as you say, users and not customers.


The point is that people could have died because Microsoft decided not to release a zero day patch widely. In fact, once WannaCry got bad enough, they did release the patch.


What about Stuxnet? Did they go to companies and tell them to patch against that once it was out? We don't even know how many more times these things have happened in the underground and been used by criminals. We have just one example of the NSA doing this, and even that wasn't sufficient because the "horse was already out of the barn."


> I think that all these NSA problems show bad management. They should be reorganized, or maybe even abandoned. They cost more than they deliver, and have even cost us our privacy.

I.e. the average citizen.

Q: Is the NSA there to support the average citizen, or are they there to support the bureaucracy / power structures?

Q: depending on the previous answer, does your security or privacy matter to the NSA?

>Probably they are still breaking US and international laws on that.

Do the people who enforce the law get punished when they break it? On the whole, I don't think that's a clear "Yes". More of a "maybe, sometimes".


I'll see your "maybe, sometimes", and raise you a "rarely, if ever".


In practice, the penalty is not only rarely applied; but worse, those seeking to improve the situation are scared, threatened or imprisoned.


>Do the people who enforce the law get punished when they break it?

In a democracy they do ...


Democracy is technically orthogonal to this issue. If the rule of law is firmly established in a country, people who do illegal things get punished. This is as possible in well-organized dictatorships as in democracies.

I will grant that there is a strong correlation between (the presence of) rule of law and democracy, but it's debatable in which way a causal relationship might flow or if they might both be caused by something else. (For example, economic prosperity seems to help)


Surely a dictatorship is necessarily contrary to rule of law as at least one person is not bound by the law?

Meanwhile, in a democracy, I can see you could have a group not under the rule of law _iff_ any actions they take do not impact anyone who is going by the law, including an impact such as "gives an advantage or benefit that those under the law can't get". Otherwise the government, whilst possibly being "by the people", would no longer be "for the people" but instead "for the preferred classes".

You could maybe finagle something that some people could do that was illegal for others but doesn't give the first group any benefit. But I don't think you can practically; so, practically, democracy would be predicated on rule of law even if not technically required.


A dictator could conceivably alter the law such that whatever he/she does is not illegal, possibly even changing the laws as he/she goes. If you're a dictator, changing the law need not be a long and difficult process.

I'll grant that in practice it is very difficult to have rule of law without democracy. It is interesting to think about as a "gedankenexperiment" though. :)


An example of a dictatorship with rule of law is Singapore under Lee Kuan Yew.


In what way is he a dictator if he can be challenged and defeated in the courts? FWIW the Wikipedia page, on cursory view, only refers to him as part of political parties and having been elected; perhaps you can flesh it out a bit for someone not familiar with Singaporean history, thanks.


His party has been the ruling party of Singapore since before independence; until the 80s it won all seats in parliament. He was prime minister for 30 years, and the current prime minister is his son.

Lee Kuan Yew was a benevolent dictator, except to his political opponents. Dictatorship is a spectrum :)


No. Rule of law is an inseparable part of democracy, you can't have one without the other.


Can you expand on this? I'm willing to grant that it is true in practice, but it is not at all clear to me that a benevolent, lawful dictatorship is impossible in theory.


A dictator can be benevolent, but he's out of the Law's reach, or he wouldn't be a dictator.

If everybody must follow the same laws, and nobody can change the laws at will, you will get a democracy. If some people can steer lawmaking at will, or the law does not apply to them, it doesn't matter if you vote on public representatives; they won't be subject to your will.


In theory, yes. In practice, no.


> Some may even argue that the CIA leaked the NSA tools to weaken the NSA. Or worse, that some in the NSA want to create cyber chaos to push for more control over the internet in the future.

Some may argue a lot of things that may or may not be true.

This will probably always be an issue so long as there are classified operations... Everyone who isn't privy will have their own interpretation of what really happened.


> I suspect that we will soon have leaks of CIA tools too.

All of the stuff recently leaked by Wikileaks (including today) was from CIA.


But they purposefully didn't release the tools or the source code for the tools, despite receiving them.


The NSA did contact the appropriate vendor in time to patch WannaCry.

People didn't (or couldn't) update their systems with the appropriate patch.


I've just been waiting for someone to send them a bill.

How much money have their tools cost folks in time, money, hardware, and personnel, even in just the last year? How much money have their unpatched exploits cost?

How's that all compare to their (mostly undocumented) annual budget, anyway?


So, in summary: the NSA hoards zero-days (not sure if just Windows), builds some scary tools to exploit them, and of course doesn't let MS know about them, because zero-days are incredibly valuable to the spooks. And because perfect security is impossible, these tools get out. Perhaps this was obvious from the WannaCry episode, but this article really hammered it home for me.

Why people run any systems on windows is beyond me (not that others are more secure, but windows is a bigger target)


>Why people run any systems on windows is beyond me

Windows gets hacked because it is the most used desktop OS. If everybody started using Linux, it would get hacked as often as Windows. Quoting your comment again:

>because perfect security is impossible


When I worked at a web hosting company, it certainly felt like Linux got hacked as much as Windows.


The solution is not that everyone on Windows switch to Linux, just that some of them do. You said it right there: Linux is more secure, right now, not because it has fewer holes in it but because the incentive to find those holes is much less than with Windows. Seems like a good enough reason to switch for me, just don't expect it to last forever.


Farmers have known for centuries that diversification increases security, for a small cost in efficiency.


And THAT is my main objection to GMO / seed standardization


Routers sound like the bigger target to me. Take over the NAT/Firewall and you can use it to exploit every machine behind it.


Which is why everyone should still be running a firewall on their machines and stop relying completely on the router. It seems way too many people (and companies) rely on a single point of defense. This is especially true of any machines that may be connecting to public networks, and home networks where you might let your friends use the wifi.


> Why people run any systems on windows is beyond me

I'm guessing you've never been near a large company that has a large investment in Windows and applications that run on Windows.


Honestly, when it comes to cyberattacks, I think the truly worst area right now is "Internet of Things" devices and similar Internet-connected devices like point-of-sale systems. At least Windows (and other computer OSes) have slowly become more secure over the last 20+ years. As far as I know, all the major OS players are very good at patching discovered exploits quickly. Malware spread by social engineering (e.g. tricking someone into executing malicious code) is the most common way things spread these days.

Many IoT companies, in contrast, have root-level exploits that are laughably trivial to hack, and some don't seem to care that much at all when exploits are discovered (https://www.trustwave.com/Resources/SpiderLabs-Blog/Undocume...). Windows is imperfect I'm sure, but I'd rather connect a Windows machine to the public network than any so-called "smart" appliance at this point.


Like it or not, pretty much all enterprises run on Windows software. Not just desktop, either. Active directory and Exchange are pretty much the standard for all enterprises.

And non-techies in these places couldn't live without Office. I once got an email with an attached Word document that contained an embedded Excel spreadsheet with exactly two cells -- an IP address and a hostname -- for a DNS entry I needed to update.


As the Four Yorkshiremen would say: Luxury!

At least you could copy and paste. In some work environments, you would have been sent a JPG screenshot of the IP address and name.

Now enjoy the discussion of images in mail messages in https://news.ycombinator.com/item?id=14567074 (-:


That seems impressively competent for your average office user being able to embed a spreadsheet. What was wrong with them?


I think you can just copy and paste cells from Excel into word


This isn't a really good reason to not run Windows; they have similar 0-days for any *nix system as well.


Do they?

I have watched very closely for 20 years now for any chatter or evidence of a FreeBSD/OpenBSD zero-day vuln that was weaponized or sold or used for state-sponsored covert action, and I am not aware of any.

None of the leaked information from Snowden et al. gave any evidence of their existence either.

Technically speaking they should exist, but I'm not seeing them ...



That's a samba vuln, not a FreeBSD vuln in particular ...

Further, if you were worried about attackers or hardening a system, etc., you wouldn't be running samba (or anything like it).

I'm talking about a remote-root vuln in the FreeBSD kernel or in any of the default system daemons that were bundled with an OS release (like openssh, crond, syslogd, etc.)


In all honesty, there are probably zero-days for both BSDs (the software was written by humans, after all) and great reasons to prefer them that don't involve this argument. I'd finger the "security ethos" of OpenBSD in particular.


Because the NSA is keeping those secret so they can still use them. All these recent 'leaks' are just misdirection so when OpenBSD becomes the defacto standard PC OS they still have a way in ;)


> Why people run any systems on windows is beyond me

games


Not the case in enterprise ;)


> games

> Not the case in enterprise ;)

tools


>tools

that's a funny way of spelling "subsidies"


More like tools + a ransom with I.P. laws.


Same here. I have a Windows PC which I use only for games and media processing. Steam for Mac isn't bad, but the vast preponderance of popular new (and old) games remain Windows exclusives.


I feel silly saying this, but I sometimes imagine a scenario in which the attackers are not motivated by money but instead are aiming to simply cause as much destruction as possible, like some kind of "cyberterrorism".

Imagine if the creators of WannaCry had decided to brick everything they could, instead of _just_ holding data for ransom. What then?

Ben-Oni (from the article) says he sees it as "life-and-death". I agree. We're simply not prepared for a well-coordinated attack. I think it will take a true catastrophe before anyone really understands just how vulnerable the Internet is.


Have you seen the Live Free or Die Hard [1] fire sale? Also, the original Deus Ex has the New Dark Age [2] ending. Both are similar to what you've described.

1: http://diehard.wikia.com/wiki/Fire_Sale

2: http://deusex.wikia.com/wiki/Deus_Ex_endings


Have you seen live free or die hard? It was a heist flick and a crappy movie in general.

It utilized scary "cyberweapons" as a plot point but the goal was to steal.


I'm imagining a future where systems have become so complex it's impossible to isolate and deactivate a worm. Malware that may not be effective anymore, but persists on the Internet hopping from unmaintained system to unmaintained system forever.

https://www.youtube.com/watch?v=jSospSmAGL4


Check your web server's access logs. I still see evidence of Code Red [0] on a regular basis.

[0]: https://en.m.wikipedia.org/wiki/Code_Red_(computer_worm)
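If you want to count them, the probes are easy to spot: Code Red requested /default.ida followed by a long run of N's (X's for the Code Red II variant). A minimal scan, log path assumed:

    import re

    pattern = re.compile(r"GET /default\.ida\?[NX]+")

    with open("/var/log/nginx/access.log") as log:
        hits = sum(1 for line in log if pattern.search(line))

    print(f"{hits} Code Red probe(s) in the log")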


When alien archeologists study the third rock from Sol, they will find no life save for IRC C&C bots chattering back and forth. Feels like the end to an interesting sci-fi novel.


That would be Stanislaw Lem's most depressing but aptly named masterpiece, "Fiasco".

https://en.wikipedia.org/wiki/Fiasco_(novel)

http://garethrees.org/2012/05/31/fiasco/

2. Radio static

A radiolocation map of the planet showed hundreds of transmitters of white noise, which merged into shapeless blotches. Quinta was emitting noise on all wavelengths.

In the Cold War theory: “What came to mind was an image of “radio warfare” taken to the point of absurdity, where no one any longer transmitted anything, because each side drowned out the other… All bands of radio waves were jammed. The entire capacity of the channels of transmission was filled with noise. In a fairly short period of time the race became a contest between the forces of jamming and the forces of intelligence-gathering and command-signaling. But this escalation, too, penetrating the noise with stronger signals and in turn jamming the signals with stronger noise, resulted in an impasse.”

Other hypotheses considered: “The noise was either the scrambling of broadcast signals or a kind of coded communication concealed by the semblance of chaos.” [It’s a consequence of the Shannon–Hartley theorem that the maximum information is transferred on a channel in the form of white noise.]
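(For reference, Shannon–Hartley gives the capacity of a channel of bandwidth B with signal power S over Gaussian noise power N as

    C = B \log_2\left(1 + \frac{S}{N}\right)

and a signal that actually achieves that bound is statistically indistinguishable from white noise, which is the point of the bracketed note.)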


Sounds like American politics on the Internet right now



Malware does that now. There are enough unpatched/vulnerable systems on the internet at any given time that nothing is ever really eradicated.


That's actually a major plot element in Peter Watts' ßehemoth trilogy.


Nitpick: isn't it β instead of ß?


Crud, it is. Well spotted, and thanks for the catch!


That seems pretty inevitable, not unlike biological pathogens.

Actually, I can imagine a future where we use AI to eliminate biological pathogens, but digital pathogens proliferate.


That's already the case last I checked - plug any machine into the internet directly and you'll still get scanned for vulns from the early 2000s.


> I feel silly saying this, but I sometimes imagine a scenario in which the attackers are not motivated by money but instead are aiming to simply cause as much destruction as possible, like some kind of "cyberterrorism".

That's pretty much how viruses worked before people figured out how to monetize them. A whole industry was built to defend against bored, malicious teenagers.


A fear of mine, one that I think wouldn't be too hard to realize (relatively speaking), would be interrupting bid/ask sizes in markets, since it's largely machines trading now with no humans able to step in like a floor specialist. Affecting the bid/ask sizes could cause a massive oversupply of equities and cause prices to crater, which could induce more machine selling simply because that's how the algorithms are written. The market could crash 10% before anyone knew what was happening. There are circuit breakers built to prevent this, but high leverage in ETF securities, etc. could exacerbate the event. In effect, causing an '87-like crash from a hack. The '87 crash was also largely machine driven.


What you're describing happened to Ethereum this week [1]. On one exchange (GDAX), a large sell order pushed the price down by 30% or so, which in turn triggered a lot of (market) stop loss orders, pushing the price farther down. In the end, a couple of thousand ETH traded hands for $0.10 while the price at other exchanges was around $300.

I share your fear. At the same time, it's also important to remember that most other instruments trade on much deeper order books, with orders of magnitude greater liquidity and massively better controls (e.g. circuit breakers).

[1] https://news.ycombinator.com/item?id=14611333
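A toy model of the cascade mechanics (every number invented): one market sell walks down a thin book, and each price level it clears arms more resting stops, which fire as further market sells.

    # (price, size) levels from best bid down; stop price -> resting stop size
    bids = [(300 - i, 50) for i in range(300)]
    stops = {295: 500, 290: 800, 280: 2000, 250: 5000}

    def market_sell(size):
        while size > 0 and bids:
            price, avail = bids[0]
            traded = min(size, avail)
            size -= traded
            if traded == avail:
                bids.pop(0)
            else:
                bids[0] = (price, avail - traded)
            # every stop at or above the last trade price now fires as a market sell
            for stop in [s for s in stops if price <= s]:
                size += stops.pop(stop)
        return price

    print("last trade:", market_sell(2000))  # far below 300 once the stops cascade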


I think it will take a true catastrophe before anyone really understands just how vulnerable the Internet is.

Or how bad an idea it is to connect anything and everything to that Internet, particularly if it does anything important or potentially dangerous. If the Internet is one of the best ideas humanity ever had, the Internet of Things may prove to be one of the worst.

My personal nightmare involves a vulnerability in a popular model of remotely connected and semi-autonomous or autonomous vehicle. I don't think Western governments have any idea how much harm something like that could do or how plausible it actually is, and I don't think the auto industry executives care enough to stop it.


It doesn't need to be on the internet to be catastrophically exploited. Most buildings have zero defense against tailgating, let alone sophisticated covert entry. Most organizations contain people who can be tricked, bribed, or accidentally hire an adversary.

Disconnection can stop drive-by malware, people trawling for additions to their botnet collections. Someone who wants to launch a coordinated attack will have no problem getting behind the firewall or across the air gap at enough interesting networks to cause serious harm.

We have to actually write secure software.


> Most organizations contain people who can be tricked, bribed, or accidentally hire an adversary.

I've been thinking about this all week. I discovered a fairly big vulnerability in our software the other day that allows anyone in the company to access data they shouldn't - not national-secret-level data, but enough that it could be somewhat valuable. We also have a number of people of a certain nationality that's somewhat hostile to the west; many of those people are programmers.

How would you differentiate incompetence that led to the vulnerability from maliciousness that intentionally caused it?


So you're saying because you employ people from a country that's hostile to the west you trust them less than people from your own country?

Sounds like the vulnerability isn't your primary problem.


Hostile and known for subterfuge. Most are probably alright but one in particular also had a run at politics with a fair bit of financial backing from this country.


> My personal nightmare involves a vulnerability in a popular model of remotely connected and semi-autonomous or autonomous vehicle.

The CAN bus is a fundamentally insecure system. Devices accept that you are the device ID you say you are. The only way for a device to vote you out is for it to see its forged ID go out on the bus and then trash the bus. Not remotely failsafe.

Increasingly vehicles are networked systems. Devices need to act like it - encrypt data between themselves and authenticate each other. Without subsystem-level access controls (should the head unit be talking to the brake controller?) there is just too much attack surface.

You can no longer be secure on a "friendly bus", this is now a mini-WAN as far as attack surface, and has been since wifi/bluetooth/cellular basebands were put on the bus. Firmware updates need to be cryptographically signed (or jailbreakable with a user-selectable root CA cert).

Everything else is vehicle manufacturers whistling past the graveyard. The CAN bus is dead, it passed away probably 10 years ago, it's just zombie companies who refuse to re-engineer appropriately when they can just ignore the problem instead (recalls don't happen right?). It's cheaper just to put the new head unit in.
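To make the "devices accept that you are the device ID you say you are" point concrete, a sketch with python-can on a Linux virtual bus (the vcan0 setup is assumed; 0x7DF is the standard OBD-II broadcast request ID, and mode 0x01 PID 0x0D asks for vehicle speed):

    import can  # pip install python-can

    # setup: ip link add dev vcan0 type vcan && ip link set up vcan0
    bus = can.interface.Bus(channel="vcan0", bustype="socketcan")

    # Nothing authenticates the sender: any process on the bus can
    # transmit any arbitration ID it likes.
    msg = can.Message(arbitration_id=0x7DF,
                      data=[0x02, 0x01, 0x0D, 0x00, 0x00, 0x00, 0x00, 0x00],
                      is_extended_id=False)
    bus.send(msg)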


The CAN bus is a fundamentally insecure system.

lol. So is the PCI bus inside your PC. It's not fair to sneer at something that doesn't do something it was never intended for.


The Singapore government implemented something similar a year earlier - disconnecting all government office computers from the internet. It's inconvenient, but in certain high risk use cases (power plants, airports, etc.) it might be essential because there will always be vulnerabilities in a system, especially when people are involved, no matter how well the systems are monitored.


> airport

Airports have everything from huge IoT systems managing the climate of the terminals, thousands of monitors for public information, controls for automated baggage handling, traffic/parking management sensors, vast sewer systems with sensors to manage storm/sanitary/glycol recovery/water, security systems (tens of thousands of cameras/doors being monitored), and then hundreds of smaller systems doing everything from managing ground transportation through to emergency dispatch.

I'm not sure it's even feasible to air gap all of that - the loss of productivity and additional cost would be far greater than the perceived security risks. Airports typically don't have large IT departments either, they outsource much of the work to consultants and cloud solutions. Critical systems should be, and are, air gapped but if something took out some of the systems connected to the internet it would be chaos.

One random aside... I work at an airport and did a tour of the ATC tower when I started. One of my first questions was how do they handle a loss of critical systems for landing planes. They proudly whipped out a massive signaling light (https://en.wikipedia.org/wiki/Aviation_light_signals) and explained how they use it. I actually found it quite reassuring that despite all of the technology they clearly had contingency plans in place.


> Airports have everything from huge IoT systems managing the climate of the terminals, thousands of monitors for public information, controls for automated baggage handling, traffic/parking management sensors, vast sewer systems with sensors to manage storm/sanitary/glycol recovery/water, security systems (tens of thousands of cameras/doors being monitored), and then hundreds of smaller systems doing everything from managing ground transportation through to emergency dispatch. … I'm not sure it's even feasible to air gap all of that - the loss of productivity and additional cost would be far greater than the perceived security risks.

Which of those systems would suffer from decreased productivity were they disconnected from the Internet? Indeed, I imagine that they'd experience increased productivity: there's no need for an air conditioning system to get its updates over the Internet rather than, say, by a human being with a thumb drive. Ditto monitors, ditto baggage handling, &c.


Many of these systems have a management UI available through mobile devices because of the need for staff to manage them while in the field or out of the office. As I said, some of the most critical systems are air gapped but there are critical systems that require an internet connection for practical use.

Emergency dispatch for example - the airport acts as a PSAP (https://en.wikipedia.org/wiki/Public-safety_answering_point) and requires integration into regional systems. They have radio backups but my understanding is that they pull a lot of data via the internet. The first responders also have cloud apps that help them route to a location or see the position of other nearby resources.

There is also considerable coordination with regional/national infrastructure owned by the airlines for managing when aircraft will depart/arrive. That would be much harder without an internet connection.

The airport will continue to operate safely if they lose internet connected infrastructure but the efficiency will drop quickly and the national airspace is like a busy road network - congestion in one area can rapidly cascade and cause chaos.


Put the car's firmware in ROM. Power cycle the electronics, and the malware is gone.


Now it takes a dealership visit and a supply chain to get a firmware update. Good luck getting that to fly in the era of day-one patches and 1.0 betas.


Why the fk does a car need a firmware update regularly enough that going to a dealership is a problem? I honestly find myself wanting dumber everything these days.


Tesla pushing out car features OTA seems to be quite popular.


Until it's exploited, then we will be wondering how we could be so dumb.


Not necessarily. A physical write-enable switch can work, too. (Not a software switch!)


Spring-loaded.

See the SR-71 fuel aft transfer switch. Maintaining a forward CG keeps the bird in the air. Control is intentionally manual.

https://www.sr-71.org/blackbird/manual/1/1-62.php


So, if it is still connected to the Internet and its vulnerabilities haven't been patched, how long will it take for the malware to reinfect it?


So don't connect it to the internet. That's the only way you're going to be able to secure a car. (Well, and don't run Bluetooth, and and and...) Let the car be a car. You want to be connected via your car? Your car is now insecure.


So basically infiltrate the supply chain which doesn't seem like a big deal lately


Yeah, I think vehicle software is going to be a nightmare. No way are they going to keep patching software after 5 years or maybe 10. I can imagine that a 20 year old vehicle will just be recycled into scrap because it has been compromised.


In the US most vehicles are already recycled for scrap before they are 20 years old.

Average age has just ticked up to 11.6 years: http://www.autonews.com/article/20161122/RETAIL05/161129973/...


I personally don't really intend to buy any cars with a wireless network connection. I don't know how much longer that will be possible. But at least requiring physical access will help a lot. (And prevent a true horror: Someone figuring out how to make malware spread from car to car.)

Regarding "IoT": There's no reason your light switches should talk to the Internet, even for home automation purposes.


I personally don't really intend to buy any cars with a wireless network connection. I don't know how much longer that will be possible.

Not very. In many areas, it is either already a regulatory requirement or about to become one that any new vehicle implements an automatic system that will notify emergency services in the event of an accident where no-one on board is able to call for help, sending information about the location of the vehicle and the nature of the accident. That inevitably requires both remote communications capability and integration with some of the other safety-critical systems in the vehicle. While this particular application may be a worthy goal that will genuinely save lives, the architecture it implies will inevitably also be more at risk of security vulnerabilities than an entirely disconnected vehicle.


Do you have more details on where this is mandatory currently?


From 31 March 2018 in the EU.


I'm having a new A/C and furnace installed. About half the proposals we received had wifi-connected thermostats, and I said no thank you.


I love having a connected thermostat, mind you. But I'm using INSTEON, which is controlled locally by my computer. Rather than trusting the security of a random IoT device, I ensure nothing can get to my devices except through my computer.


The situation is so scary, I don't even joke about it.

I don't want to discuss it in depth for the same reasons, but joking around with friends has led to feeling physically ill and bouts of drinking because of the extremely dangerous possibilities that exist.

It's a fundamental mistake to network all of society to make it efficient: inefficiency is security. (In a broad, theoretical sense -- from capitalism to government to IT.)


We're simply not prepared for a well-coordinated attack.

Aren't we? We handle natural disasters, large-scale power grid failures, etc. If WannaCry bricked everything it touched, the results would have been much worse and more tragic but would they be worse than an earthquake or hurricane in a major metropolitan area?


Yes and no. Natural disasters are contained to a region. Imagine a completely undefendable attack that bricks all Windows, Linux and Mac OS computers worldwide at the same time. Or all routers. Or all chips made in, say, China.

So many people would starve, because food couldn't be delivered with the efficiency it takes to feed the population. You've seen grocery stores wiped out during hurricanes, and those have several days' lead time to prepare. We are much more dependent on technology than a lot of people realize.


I honestly don't think anyone in a 1st world country would starve, if that happened. It sets you back 20 years, the processes and technology to deal with that are still in place. And it's a lot less than 'entire eastern seaboard has no electricity'.


I dunno. We're a seriously automated country. Food processing and refrigeration plants are designed around robots and those robots are controlled by (hypothetically) bricked computers. Even if we had the volunteers, the floor just isn't designed to hold them, it's designed for 2 ton hunks of metal that don't move anymore. How do you feed 320 million people? 20 years ago, we had around 50 million fewer. How do you deliver the food to them when automated fuel pumps no longer work and security systems no longer open doors and gates? What happens when the water purification plants stop? I mean our society is built around Just-In-Time delivery; what happens when almost every node in that system fails?


We're not trying to fight it at a legislative level. For example, ISPs could mitigate DDoS attacks through various means [1]. Sadly, they just don't want to do the few days' worth of labor (maybe more if they haven't properly automated their infrastructure over the last few years). Now look at Congress: they haven't required that. They haven't used the power of the Constitution to regulate interstate commerce (definitionally what happens when you buy things online) to require that ISPs do reasonable things to limit the effect of such attacks.

1 - https://tools.ietf.org/html/rfc2827
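The filtering rule itself is almost embarrassingly small. A toy model of RFC 2827 / BCP 38 at a provider edge (prefixes invented, stdlib only): the edge knows which prefix it delegated to each customer port and drops packets whose source address lies outside it.

    import ipaddress

    # Prefix delegated to each customer-facing port
    customer_prefix = {
        "port1": ipaddress.ip_network("203.0.113.0/24"),
        "port2": ipaddress.ip_network("198.51.100.0/25"),
    }

    def accept(port, src_ip):
        # Drop anything whose source address can't legitimately
        # originate behind this port.
        return ipaddress.ip_address(src_ip) in customer_prefix[port]

    print(accept("port1", "203.0.113.7"))  # True: legitimate source
    print(accept("port1", "8.8.8.8"))      # False: spoofed, dropped at the edge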


I don't think the attitude that it's "a few days worth of work or oh I guess maybe a few more if they don't have everything automated yet..." is very helpful. It diminishes the problem and is incredibly unrealistic. While I don't have experience running an ISP, it's huge infrastructure, which is inevitably incredibly complex and it is almost certainly expensive, time consuming, and extremely risky to roll out major changes. But I don't disagree with you! This is even more of a reason that we need help from regulators on this front. Things that are prohibitively costly for private entities to do but must be done for society to function the way we want it to are the sweet spot for regulation.

I keep hearing that the government is taking cyber security seriously, but I see no evidence. Where is the DHS funded formal verification tool or subsidized penetration testing for critical infrastructure? I'm not saying these are the right ideas, but I don't see anything at all. Perhaps I just don't know of the programs that already exist though - looking forward to being educated here!


Ingress filtering really is not a big deal, and has been supported by all routing edge hardware/software for 15 or 20 years now.

The fact it isn't implemented is down to apathy, ignorance and people making excuses.

I don't think throwing your hands up and saying "oh this is probably hard" is very constructive on this front. Stop making excuses for people whose lack of action puts everyone else in danger.

http://www.bcp38.info/index.php/Main_Page


I think you're misunderstanding me. It's hard because doing anything is hard at scale. But I'm not saying it shouldn't be done because it's hard. Big companies do this sort of hard stuff all the time. But resources are limited and lots of efforts are competing to be the priority. It is natural that the things that will make or save money for the organization will take priority over things that are just general nice-to-haves. So the question is: how would this affect the bottom line of an ISP? If it wouldn't move the needle much, or would move it in the wrong direction, it's unlikely to get done. This is why I'm saying we need regulatory support - to shift the incentives.

A possibility I recently heard about and thought sounded interesting would be requiring certain kinds of companies to hold security insurance, and allowing damages for things like DDoS attacks. Then, if the insurance is functioning properly, doing things like this would decrease premiums. Mostly, though, I'm just bummed that I don't seem to hear much about any ideas for how to actually attack the problem after things like WannaCry or big data leaks happen. This is clearly a systemic problem, and nobody really seems to be attacking it at that level.


What would legislation look like for DDoS on an ISP level?

You make DDoS mitigation sound easy, but most of what I would call successful attacks are from traffic that looks real at the ISP level and is relatively low bandwidth. Attackers achieve success by locking up some aspect of their victim's architecture.

Most websites are not prepared for large fluctuations relative to their normal traffic, which look like a drop in the bucket when you are at an ISP level. I don't blame websites for this because mitigation at this level can be expensive.

I think legislation for something like this would be a mess, because it's not simple and it has a technical solution.


The vast majority of DDoS these days is due to IP spoofing, which is only possible if you don't ingress filter. Ingress filtering is a simple and 100% effective technique against IP spoofing; the problem is everyone on the internet needs to do it...

And something like 90%+ of IP endpoints have; that last remaining percentage is a real bitch though.

http://www.bcp38.info/index.php/Main_Page


>The vast majority of DDoS these days is due to IP spoofing

Um, no. Amplification attacks, and direct attacks from compromised hosts, such as IoT, are the majority of the traffic. BCP38 at the edge doesn't protect against devices spoofing their internal network address from someone else in their same subnet/routing block.


How often do those kinds of attacks occur? I'm actually curious. From my research it's more common that DDoS attacks are inter-subnet rather than intra. If this is the case, why not make Comcast prevent spoofing within its own network? They're in a good place to know legit IPs.

If I'm correct, we're going to need this sooner rather than later. As IoT gets rolled out (I had to remove my new AC from the WiFi, since it was just easier to let the installer hook it up than to explain why it's a bad idea), we need to have someone be accountable for security. IoT manufacturers should be on the hook, but so too should the network admins.


Nearly all amplification attacks require IP spoofing.

Attacks from compromised hosts is another thing entirely. But even on those, IP spoofing makes it impossible to block the attack at the ISP level.


It would take a global effort. ISP-level DDoS mitigation doesn't help me access my servers in the UK from Australia while they are being attacked from a worldwide botnet.

It also raises concerns about traffic control. What if my entirely legitimate but small service in Australia is blocked by US DDoS mitigation? They have no reason to care and I may have no legal or monetarily reasonable recourse.


What then? Well, users wouldn't have wasted their time trying to recover unrecoverable systems. They'd just have rebuilt them.

Properly, permanently bricking a machine is much harder, and hardware-specific.


So it's like building weapons of mass destruction without being noticed? Sounds really bad.


I bet it would actually end up stimulating the economy like a war does. Not to say I think this scenario would be good.


> like some kind of "cyberterrorism".

Or even worse, a state actor with nuclear weapons to back it up.


As a security guy, my long bet is transportation infrastructure. Industrial systems (like cars) typically have very rudimentary security with no defense in depth.

Someone will get root somewhere and shut down some small percent of light vehicles (i.e. not medium and heavy trucks) on the road at an inconvenient time causing a massive economic panic and screwing up the used car market in a way that makes cash for clunkers seem well thought out.

Even a complicated attack vector that results in a narrow target selection (e.g. android malware -> infotainment system on a particular few model years of one brand) would have a massive psychological impact.

Edit: a similar occurrence could happen by accident (edge cases, poor testing and so on).

Edit: meant to reply to a comment. Oh well.


There's something about Windows exploits that just feels like a canned hunt by now.

With stuxnet, there developed the sense that Windows received conspicuous attention from a special class of mysterious operators.

At this point, given the tiny cottage industry that feeds a handful of starving security analysts, I feel it's reasonable to presume that Windows is built to be as secure as possible, and that what's possible is mostly intentional and understood as a known quantity by special populations.


> given the tiny cottage industry that feeds a handful of starving security analysts

you may not have enough information.


I'd almost rather not know.


There will always be flaws in software systems, it's inevitable.

You're basically saying Microsoft have perfected security to the point they can't overlook something or make a mistake with something????


I am not saying the thing you added four question marks onto.


After nuclear weapons were first invented, the big powers spent a lot of effort in non-proliferation, to stop smaller countries building them.

Cyber weapons are not as dangerous as nukes, but they are much easier to copy, and it's much harder to know who attacked you.

The NSA/CIA has been very lax in allowing their weapons to be copied.


Cyber weapons mean that during a crisis, nuclear militaries have to consider whether their chain of command has been compromised.

Which means they are, in fact, as dangerous as nukes.


Only if cyber weapons are built to target the military. I'd wager they're built to wage economic warfare rather than any military role, the corporate/commercial world is a much softer target. Imagine doing the economic/industrial damage of carpet bombing cities but without firing a shot.


>Cyber weapons mean that during a crisis, nuclear militaries have to consider whether their chain of command has been compromised.

Good luck compromising nuclear arms C&C. There's a reason a lot of it runs on systems from the 70s.


Maybe in the US and Russia. I wonder if it could be pirated Windows XP in North Korea, OTOH ;)


Well, the UK has what is pejoratively called "Windows for Warships": https://en.m.wikipedia.org/wiki/Submarine_Command_System#SMC...


Reminds me of this classic: https://en.wikipedia.org/wiki/USS_Yorktown_%28CG-48%29#Smart...

On 21 September 1997, while on maneuvers off the coast of Cape Charles, Virginia, a crew member entered a zero into a database field causing an attempted division by zero in the ship's Remote Data Base Manager, resulting in a buffer overflow which brought down all the machines on the network, causing the ship's propulsion system to fail.


Launching a pile of nukes can kill tens to hundreds of millions of people. People have been launching cyberweapons since they were invented, with a death toll low enough that folks still reference Therac-25 in discussions of software deaths. Don't play: the two are barely comparable in how many they kill.


Past performance is no guarantee of future returns.

Global systemic risk is increasing. Massive infrastructure and response disruption can be devastating.

What was the worst powerplant disaster in history? What was the primary kill mechanism?

Related: http://www.feasta.org/2012/06/17/trade-off-financial-system-...


I totally agree. I think the root of the disagreement is the subjective word "scariest." Apparently, a nuclear disaster happening in a random place is the scariest thing for the rest of you. For me, and possibly the majority of Americans, the scariest thing is what might happen to us. Especially if the odds are high, with us seeing reminders in the media.

A rogue nuclear weapon doesn't scare me. Increasingly automated cars, unpatched cellphones, legal IDs that are hard to fix when stolen, or compromised bank/medical/government databases all worry me more, since they can impact me and happen a lot. SCADA, too, if non-nuclear, given it might be my power plant or utility.


Hint: it wasn't nuclear.

https://en.m.wikipedia.org/wiki/Banqiao_Dam

Though I also make the argument that the book hasn't been closed on our nuclear disasters, and won't be. For tens of thousands of years.

Banqiao, however, has been fully resolved. (It occurred in 1975.)


I think the point was that cyber weapons are effectively as dangerous as nukes because they could cause nukes to be launched.


That's a better point but nukes run on a combo of ancient hardware using dedicated lines and physical presence. They're still reaching if that's the argument.


Read up if you can afford to lose sleep tonight:

https://theamericanscholar.org/our-nuclear-future/#

Given that "cyber attacks," that is, remotely tampering with telecom equipment, are as old as the Arab-Israeli war of 1948, I'm not ready to be reassured by the antiquated hardware the US and Russia use for C&C.


How about you tell me all the deaths and property damage that have resulted from nuclear weapons detonated by hackers. I'll cross-reference it against all the financial, property, or other damage from hackers without nukes. We'll see which has been higher, to determine which worries people should invest security money in.


I believe the parent meant that cyberweapons can be used to launch nuclear weapons without authorization. Which I'm not sure is actually feasible...


tl;dr: someone got hold of an NSA pen-testing tool and is slowly setting it up for a big global attack of sorts. Plus a whole long life story of some other dude..


The attack surface could be greatly reduced by putting a lot of code in ROMs (Read-Only Memory), so that any malicious modification to it won't survive a reboot.


Putting it in ROM also makes it unpatchable, meaning it will immediately be infected again when it boots up.


Current setup: I click on bad link, malware installed on my computer that infects my disk firmware. I remove the malware from my computer. My disk remains compromised.

Suggested setup: I click on bad link, malware installed on my computer that infects my disk firmware. I remove the malware from my computer and reboot. My disk is no longer compromised.

Also, I plug a USB stick into my computer. It gets compromised. I unplug it. It gets uncompromised. No more USB spread malware.
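
As a toy model of the difference (my own illustration, not any real firmware API): with writable flash the infection persists across reboots, while a ROM-only design reloads the factory image on every boot.

    class Disk:
        def __init__(self, writable_flash: bool):
            self.rom = "vendor"                   # immutable factory image
            self.flash = "vendor" if writable_flash else None
            self.running = "vendor"               # firmware executing now

        def infect(self):
            self.running = "malicious"            # hijack the running code
            if self.flash is not None:
                self.flash = "malicious"          # persists only on flash

        def reboot(self):
            # Flash designs boot whatever is stored; ROM designs
            # reload the factory image.
            self.running = self.flash if self.flash is not None else self.rom

    flash_disk = Disk(writable_flash=True)
    rom_disk = Disk(writable_flash=False)
    for d in (flash_disk, rom_disk):
        d.infect()
        d.reboot()
    print(flash_disk.running, rom_disk.running)  # -> malicious vendor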


Your suggested setup would allow a worm to repeatedly reinfect everything. Some forms of malware, such as one used in an attack against Kaspersky Lab, have been designed to do so instead of persisting directly onto the targeted devices.


The exponential growth of the infection will be either blunted or stopped if a significant percentage of the infected machines keep getting uninfected. It's "herd immunity".


Not necessarily. A physical write-enable switch can work, too. (Not a software switch!)


That's an interesting idea. Has this been tried before, for this kind of network security purpose?


Not that I know of. I suggest it every time the subject comes up of hardening internet connected devices. Nobody has ever replied with an instance of it.

Back in the 80's, most devices used ROMs, PROMs or EPROMs. ROMs were burned at the factory. PROMs could be user-programmed once. EPROMs were erasable and reprogrammable, if you put the chip under a UV lamp. As far as I can tell, this is a forgotten technology.

Later on came EEPROMS, electrically erasable programmable read only memory. Then someone had the bright (!) idea of connecting the write-enable line to the internet so anybody in the world could update anybody else's firmware, and welcome to the hell we have today.


Here's my 30-second analysis:

Option A) What we have now

Option B) Code in PROM with physical interlock (e.g. push button to enable write)

Option C) Code in ROM

Downsides of A are obvious, we see them now.

Downside of B is that automatic updates are impossible, so the majority of network-connected devices will remain vulnerable. At the peak of Code Red, it took less than 5 minutes for a fresh Windows install to be compromised.

You also will have people who will forget to disable the network before pushing the button, or who may be tricked into pushing the button which will allow malicious code to persist anyways.

Downside of C is that, absent a hardware recall, people will just keep using devices that are re-infected within 5 minutes of bootup.


> At the peak of Code Red

This implies a lot of infected machines. As I mentioned earlier, machines getting uninfected at every reboot will reduce the number of infected machines at any point in time. It may reduce it far enough to provide "herd immunity", which is why we only need 90% of the population immunized against measles to prevent it from propagating.

Furthermore, if people are aware that reboots will de-infect malware, machines can be set up to regularly reboot. For example, I could set up my router to reboot once an hour. This would be only a minor inconvenience, as it reboots pretty quickly.
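
As a toy illustration of that threshold effect (my own sketch, a simple SIS-style model with made-up parameters, not data from any real worm):

    def simulate(n_hosts=10_000, infect_rate=0.3, reboot_every=24, hours=1_000):
        """Each hour, every infected host infects `infect_rate` others
        (scaled by the fraction still susceptible), while 1/reboot_every
        of the infected hosts reboot back to a clean ROM image."""
        infected = 10.0
        for _ in range(hours):
            new_infections = infect_rate * infected * (1 - infected / n_hosts)
            cleaned = infected / reboot_every
            infected = max(0.0, min(n_hosts, infected + new_infections - cleaned))
        return infected

    # The infection dies out whenever infect_rate < 1/reboot_every (~0.042/hr):
    print(round(simulate(infect_rate=0.03)))  # ~0    - reboots win
    print(round(simulate(infect_rate=0.30)))  # ~8600 - still endemic

Rebooting more often (a smaller reboot_every) raises the threshold an infection has to beat, which is the "herd immunity" effect in action.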

Regular reboot can be done by adding one of those simple hardware store lamp timers, or the device itself can contain a hardware circuit (outside of software control) to regularly reboot it.

By the way, do you really want automatic updates to your disk drive firmware? Your USB stick firmware? I don't. They don't happen anyway, yet I've seen articles about how those get infected with malware.


Early PCs had their entire OS in ROM; Apple machines (the Mac, etc.), for instance. It was abandoned because it made the OS difficult to patch, and patches were needed to fix unpleasant "features" of the software. The downside is that patchability also allows installing backdoors or snooping extensions.

A better approach would be to create simpler systems that are easier to verify and monitor, and harder to corrupt. Another approach is to use a microkernel architecture and a container-oriented system.

Finally, the NSA's backdoor strategy has now been proven harmful. With all their tools, they should at least know who is behind this.


Qubes OS boots VMs from templates; only changes to /home survive.


This is certainly a serious issue, but a few aspects of this article are very strange.

> Worse, the assault, which has never been reported before, was not spotted by some of the nation’s leading cybersecurity products, the top security engineers at its biggest tech companies, government intelligence analysts or the F.B.I., which remains consumed with the WannaCry attack.

> “The world is burning about WannaCry, but this is a nuclear bomb compared to WannaCry,” Mr. Ben-Oni said. “This is different. It’s a lot worse. It steals credentials. You can’t catch it, and it’s happening right under our noses.”

This attack and WannaCry use the same exploitation vector (EternalBlue). It seems that his company was targeted with a custom payload, which is definitely unfortunate, but that is not related to the exploit itself, it is just another form of custom code being used to perform further actions (Instead of simply encrypting files as WannaCry was doing). This is probably even easier for an attacker since there is now even a Metasploit module for MS17-010.

> The attack on IDT went a step further with another stolen N.S.A. cyberweapon, called DoublePulsar. The N.S.A. used DoublePulsar to penetrate computer systems without tripping security alarms. It allowed N.S.A. spies to inject their tools into the nerve center of a target’s computer system, called the kernel, which manages communications between a computer’s hardware and its software.

This is not a "step further" though. DoublePulsar is the implant injected by EternalBlue and was certainly used in WannaCry. I am not sure why they did not even take the time to verify this; even the WannaCry Wikipedia page states it (https://en.wikipedia.org/wiki/WannaCry_ransomware_attack). Again, this is the same exploitation vector and same implant, but with a modified payload to specifically target IDT, it seems.

> For his part, Mr. Ben-Oni said he had rolled out Microsoft’s patches as soon as they became available, but attackers still managed to get in through the IDT contractor’s home modem.

This tells me: 1. Even though machines internally were patched, a contractor was allowed to connect to the network with an unpatched machine. 2. If machines were internally patched, how would an infected contractor be able to do damage? I am not clear on this. They might be saying the network itself was not attacked, but rather the attacker was able to login with the legitimate employee's credentials and cause damage that way (In which case, something is very wrong internally if this was possible).

I know it is not nice to victim-shame regarding security issues, and I am trying not to do so, but it seems like the story here is phrased in a slightly disingenuous manner. It is essentially this: An IDT contractor with an unpatched machine and privileged network access was targeted using EternalBlue to steal their credentials with a custom payload. It worked. After this, it is unclear if the stated network intrusion occurred because EternalBlue spread (Would not make sense if patched) or the contractor credentials were used "legitimately" (Indicates poor access control and monitoring).


I suspect this “cyberweapon” nomenclature will have serious consequences, if it becomes popular. This is like the reverse of the usual PC language softening, e.g. to murder is to “neutralize,” etc.

Why is NYT using this term? It’s been invented by NSA to redirect defense funding towards their mass surveillance activity. Shouldn’t journalists point out things like that?


NYT? Journalists? hah!


His attack came on April 29th, yet there were Snort signatures for DoublePulsar on April 21st (42329, 42332, 42340), and before that, on March 14th (41978), for one of the vulnerabilities.

Despite his much-lauded protection, I find it odd that he's apparently not running Snort, and the lack of actual specifics makes me believe this is really a piece for the mentioned Israeli security company with a "blackbox" IDS.


The root cause is that more convenience means more danger.

Cyberattacks can largely be avoided using the steps below:

1) DON'T physically connect your computer to the internet

2) if you want to use the internet, use another PC which holds no credential data

3) for banking/shopping online, etc., use a dedicated device which can't be reprogrammed

Sacrifice a little convenience for safety. That is it.


What purpose would an air-gapped computer even serve for 99% of people?


So from the article, it sounds like DoublePulsar is already patched, but it still got in because computers lacked the patch. Have I got that right? Also, what is the attack vector here? Because if, like WannaCry, it was an email, then surely the defence is the same as before: be careful what you download?


I am quite surprised no one has used an exploit for RomPager yet and used it to take down most of the internet.


“I don’t pursue every attacker, just the ones that piss me off,”

Why paint a target on your back saying things like this?


> By Mr. Ben-Oni’s estimate, IDT experiences hundreds of attacks a day on its businesses, but perhaps only four each year that give him pause.

There’s not enough time to pursue hundreds of new attackers every day, most of whom are not competent enough to be a serious threat. Presumably a big part of Ben-Oni’s job is to figure out which attackers can safely be ignored and which ones he needs to worry about tracking down.


> Metasploit, an automated hacking tool, now allows anyone to carry out these NSA attacks with the click of a button.

What a time to be alive


Actually, that's good. Now that anyone can easily exploit unmaintained systems, people will need to start caring about security.


Critical infrastructure, economic stability, nation states preparing/probing for war/advantage?


> In the pecking order of a computer system, the kernel is at the very top, allowing anyone with secret access to it to take full control of a machine. It is also a dangerous blind spot for most security software, allowing attackers to do what they want and go unnoticed.

Why do journalists even try to explain things like this? Do they ever get it right? Does it ever not just go over people's heads?


Aka 'Live Free or Die Hard'. Saw it. Bruce Willis runs out of bullets and shoots a helicopter down with a car.


Why can't I just upvote this a million times?


Got a non-paywall link?


Outline.com or archive.is


So does everyone here just subscribe to the NYTimes, or am I missing something when it comes to all these paywalled articles (other than, you know, missing the article itself?)


If you don't see the article it means that you probably already read 10 other NYTimes articles this month at which point they require a subscription.


Yeah, I do realise that. However, so many NYT links appear on ycomb every day that it doesn't take much to hit that limit, especially since I open articles through that 'Panda' Chrome extension, so you don't really get much chance to screen the links other than the small link preview in the bottom left.

I was wondering if people go through an alternative site/app to serve the articles, bypassing the paywall, or if everyone's just in the same boat as me? Some other forums I'm on tend to make note of a paywalled link.


If you're consuming that much of their content, maybe it's something you should pay for?


You can always delete the nytimes.com cookies or use another browser.


Now that they've lost control of their napalm bombs, which are now burning innocent targets, where is the NSA?

A little bit of taking responsibility please? They could at least lead the charge to get this stuff dealt with now.


Most companies hiring and empowering decent security people would have avoided this bullshit, thanks to all the Twitter signaling.

Noobs: follow @hackerfantastic


What's the real math? A full-scope attack might do what, cripple 30% of the modernized world at any given time? Maybe for a week? Shit, y'all need to catch up on nature a little bit. Sometimes a forest fire burning things to the ground inspires new, stronger life and recovery. Also, if you think I'm being a dick, I'm paraphrasing Tim Berners-Lee's views on the modern Internet.


That would be nice, but the sad reality is that most likely a lot of people would die during this 'catching up on nature' time...


Messing up the global food supply chain would be no joke.


Give the NSA a break, they handle more secrets than I can imagine. And at least they managed to hold onto the Russian Golden Shower video!

Suggesting that the USA get rid of the NSA is like saying "Crap, terrorists got a hold of a nuclear weapon, let's unilaterally get rid of all of our weapons and hope for the best!"


But rather than "NSA" or "no NSA", there's another alternative: a defense-only NSA (or at least a defense-heavy NSA, where the vulnerabilities equities process has been altered to heavily favor disclosure). Instead of unilateral disarmament, that's disarming ourselves while also forcibly disarming others.


While I like that idea, since it's the "right" thing to do, I can see why they don't want to give up every exploit as soon as it's found.


More like, "Crap, the terrorists just got a working copy of our nuclear arsenal!" Digitally speaking, that's a much closer comparison. The fact is, the NSA could have been responsibly disclosing these details (maybe a 15-30 day hold before a private disclosure, then public disclosure after another 30 days).

That gives them 30 days to use 0-day exploits while still being effective contributors to greater overall security.


Good points. But it's a fine line in how to deal with leaked 0-day exploits. We can't cripple the NSA; otherwise we're bringing a wet noodle to a knife fight.


I just mentioned they should be able to hold onto 0-day discoveries for 15-30 days before confidential disclosure.

Terrorists WILL do a lot of damage (as demonstrated) with these exploits... the NSA might... The world, and in particular US interests, are far better served with secure systems all around.


> he would not stop until the attacks had been shut down and those responsible were behind bars.

That's cute. I wonder if he means Microsoft, people who use Microsoft products in safety-critical systems, or maybe some nuke-capable nation state hiding behind Tor, a VPN, a custom IoT botnet, another layer of Tor, and another VPN?



