The internet as existential threat (raphkoster.com)
152 points by zephyrfalcon on June 28, 2017 | 121 comments



I'm getting worried. I've been harping on the Maersk downtime lately. The ports of the largest shipping company in the world have been shut down for two days by a cyber attack. Trucks can't unload or load at many of the world's ports, including the main container ports of LA, NYC, and Rotterdam. They're going to be down tomorrow, too. Maybe partial operation by Friday.[1] Maersk even lost phone and email systems. One of the few good sources of info has been somebody at the Port Authority of New York and New Jersey who sends out alerts to truckers.

It's a distributed outage. The usual forms of disaster preparation involve geographic dispersal. That doesn't work against this threat. Few companies are as physically dispersed as Maersk, which has major facilities in 61 countries. It didn't help.

What happens when someone figures out how to take over Windows Update or the Intel Management Engine or Ubuntu Update?

[1] http://btt.paalerts.com/recentmessages.aspx


Finally, after two days, Maersk is starting to come back online. They've managed to get a status page up.[1] Some of their biggest and most automated ports are still completely down. Maasvlakte II (Rotterdam) is completely down; their main Rotterdam port is down except for ship loading.

Some US ports are up with manual operation; Port Elizabeth (NJ) is loading and unloading ships and trains, but not trucks. Los Angeles is totally down.

The Asian ports are mostly up.

The shipping industry is very worried about this.[2]

[1] http://www.maersk.com/~/media/20170629-operational-update/20... [2] http://splash247.com/back-future-maersk-wake-petya-attack/


Thanks, I'd been looking for recovery info. This is scary, in a way that I hope might clarify to governments and intelligence agencies just what kind of damage these issues do.


And thanks for that. Your earlier HN comments were the first notice I had of that, and I agree, it's absolutely massive. First thought through my head was David Korowicz's "Global Supply Chain Cross-Contagion", though I think it might take a somewhat harder kick.

http://www.feasta.org/2012/06/17/trade-off-financial-system-...

Have you posted/blogged on this anywhere, because I'd like to see your general thoughts on this.

Another thought is that these attacks are far more epidemiological than military. That is, they spread amongst susceptible hosts via chains of transmission, and can be controlled or limited through what are fundamentally epidemiological mechanisms: identifying reservoirs, patient zeroes, and points of defence penetration; treatment by isolation, inoculation, monitoring, host separation, and host diversity.

Similar arguments apply to many of the memetic and human-based communications attacks which have been escalating of late.

A friend has pointed out that any communications network -- physical or logical -- ultimately becomes attractive as a mode of attack or subversion. The 1970s and 1980s vision of the Vast Interconnected World, perpetuated into the 1990s through the PR and advertising of companies such as IBM, AOL, AT&T (1-800-YOU-WILL), Microsoft, Google, Facebook, ..., was that this would be a better and peacefully interconnected world.

Clay Shirky has pointed out that when you connect more people ... there are more people to argue with. And who will try to scam, or subvert, or attack, or profit by, or gain political power through, communications, information, financial, or media networks. The bigger the system, the bigger the draw.

And ... yet we still don't prepare for this.

I'd very much like to see the EU, US, India, and China call for a public post mortem from Maersk. And to step up the game on computer security. Because this is, as you've said, deadly serious.


The first step would be to stop using C (and any other memory-unsafe language). The vast majority of these problems come about by memory corruption tricking the computer into executing attacker-delivered code (or ROP gadgets).

Yes there will be other security flaws like missing access checks, etc. But none are so pervasive nor so devastating as out-of-bounds read/write. If C arrays were bounds-checked by default then the majority (literally) of exploits would be rendered void immediately. The Rust borrow checker, Swift ARC, or GCs would eliminate use-after-free. Combined that would drastically reduce all available attack surface.

The second step is to assign a separate identity to every piece of code, rather than running everything as the current user. Unless it has been initiated by a user action there is no reason for malware to even have write access to your files (or system files for that matter). The idea that any ol' random bit of code that manages to execute should be just as trusted as the user is crazytown. IMHO, macOS SIP is a good step in that direction, enforcing read-only access to /System and most of /usr.

The third step is to have pervasive copy-on-write, including in all filesystems by default. If your system does get pwned it should be a trivial matter of booting to a read-only recovery mode and rolling back to a known-good state.

The fourth step is to have more interlocks enforced by the system that cannot be overridden. For example there might be a legitimate reason to truncate shadow copies but why should anyone with admin rights be allowed to do so from a normal login session? That's just stupid. (For those unaware, malware now routinely disables Windows' volume shadow copy and nukes the existing copies to stop you from rolling your files back). Stuff like that should require rebooting into single-user mode or recovery mode.

There's always a lot of resistance to the idea of dropping C. Before you go replying with a counter-argument I urge you to stop and think seriously. We've been pushing secure coding standards as an industry for many years now. Everyone is well aware of the risks. We have tools like Address Sanitizer and Undefined Behavior Sanitizer. We have lots of static analysis tools. We have tools like valgrind. And yet... the exploits keep on coming. Despite the thousands upon thousands of years of effort poured into fixing it, we still routinely have new out-of-bounds and use-after-free exploits in critical C code. Even experienced developers get bitten by signed integer overflow and other forms of undefined behavior... and most C code is not being written by experienced careful developers. Embedded code is even worse. If you read the source behind your car's ECU you'd probably never drive again.
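
To make those two bug classes concrete, here is a deliberately broken, minimal C sketch (purely illustrative, not drawn from any real codebase): an out-of-bounds write and a use-after-free, both of which typically compile without a word of complaint at default compiler settings.

    /* Illustrative only: two classic memory-safety bugs that compile
       cleanly under default settings on most C compilers. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        char name[8];
        /* Out-of-bounds write: strcpy performs no bounds check, so
           this copy silently overruns the 8-byte buffer. */
        strcpy(name, "attacker-controlled!");

        /* Use-after-free: the pointer still looks valid after free(),
           but reading through it is undefined behavior. */
        char *msg = malloc(16);
        strcpy(msg, "hello");
        free(msg);
        printf("%s %s\n", msg, name);
        return 0;
    }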

It is time to admit the truth: programming languages should be memory-safe by default. Period. Whatever the cost it is worth paying.


> The first step would be to stop using C (and any other memory-unsafe language). The vast majority of these problems come about by memory corruption tricking the computer into executing attacker-delivered code (or ROP gadgets).

Seriously. People have been saying this for decades now, and warning of future disasters in using memory unsafe languages. The warning signs were all there in early worms and viruses, but still we haven't changed the languages we use or the access control systems that leave gaping wide holes in our computer systems. The chickens are coming home to roost, and everyone will feel the economic impact of this short sightedness!

Ada has been around long enough, but it was a little painful when I was using it a few years back despite all the great safety benefits. Thank god for Rust. I sincerely hope it, or something like Idris really takes off.


Well, it's always a trade-off, isn't it? Manual memory management gives you greater freedom and performance, with added risks. That being said, performance is what you want when it comes to low-level layers that other layers have to rely on - e.g. you don't want your net stack to be dragged down by a slow TCP/IP implementation.


Now you have Rust. No excuses.


When all the Rust libraries are in my package manager and Rust is as simple to write as C, let me know; until then it's laughable that you think it's a replacement.

To replace C, rustc needs to be a drop-in replacement for gcc, not a whole platform on its own as it is now with cargo.


> when Rust is as simple to write as C, let me know

Good C is not simple to write, that's the whole point.

> To replace C, rustc needs to be a drop-in replacement for gcc, not a whole platform on its own as it is now with cargo.

The argument here is about dropping C. You can't replace C without paying some kind of a price.


> Good C is not simple to write, that's the whole point.

Seriously. It's like we're all riding unicycles on tightropes to get around, and someone invents sidewalks and bridges, and everyone is like, "well if you just learned to ride your unicycle properly and worked tirelessly on your balance, you wouldn't have this problem of 'falling to your death' or 'recklessly endangering other people'".

The reality distortion field around systems programming is strong.


But man, those sidewalks and bridges are expensive! And all my drivers know the shortcuts on the ropes and will have to learn the whole map again if we use the roads. Plus our unicycles have wheels shaped for ropes, not sidewalks; it would take us a lot of time and resources to change that. The whole industry is stuck in that situation, and the market has a huge inertia, you know.

You just can't use sidewalks and bridges right now instead of ropes. It's just not pragmatic.


> The argument here is about dropping C. You can't replace C without paying some kind of a price.

The thing is that Rust cannot replace C right now. The only stable ABI that Rust has is the C one, which removes most of the benefits of using Rust.


If Rust were memory safe or concurrency safe, it would be a valid option. Unfortunately it is not. But fast and safe options do exist.

Here the problem is the OS and its lack of security, not the language.


> That being said, performance is what you want when it comes to low-level layers that other layers have to rely on - e.g. you don't want your net stack to be dragged down by a slow TCP/IP implementation.

What you don't want, is a whole economy disrupted by a poor technical decision. Abstract technical goals like "performance" are not valuable goals in themselves.


> Before you go replying with a counter-argument I urge you to stop and think seriously.

I have thought about these things for a long time, and arrived at the exact opposite conclusion: had such levels of security been there since the beginning of computing, there would be no jailbreaks, rooting, homebrew, or any real freedom. Everything would be so locked down and unbreakable that there would be no "possibility to disobey", and that is a far more terrifying alternative.

A world in which there is no crime is one where everyone has already been strongly coerced into submission and every aspect of their lives closely monitored and controlled. Ultimately, humans are imperfect and that's where all of these exploits and vulnerabilities come from. If we attempt to eradicate them, by essentially removing imperfection, we will literally be taking the humanity out of life. Is that really the (cyber-)world you want to live in, or move towards?

> Whatever the cost it is worth paying.

I see this continuing insecurity as consolation that we still have some freedom left, but positions like yours --- increasingly popular, it seems --- are strongly reminiscent of the "war on terrorism" and its associated repugnance, because it is really the "war on cyberterrorism".

As the classic saying goes, "Those who give up freedom for security deserve neither."


I agree. The real danger is software monoculture. We need many diverse systems with very different attack surfaces. Less standardization. That's the way nature survives. We just need to learn the lesson.


Nature also has things like immune systems, but you get into very sketchy legal territory there. Our legal system has not caught up with this new interconnected reality.

Oh, and nature very frequently suffers all kinds of systemic collapse. Nature isn't immune to this.


Still kicking after 3 billion years.


Your comment is very bizarre. Securing software is in no way comparable to giving up freedom.


This. We've known how to secure computers from this kind of nonsense for decades and there have been hundreds of massively expensive attacks over the years. Every time a new "biggest attack in history" happened, I thought "maybe now the cost of repair has finally exceeded the cost of killing C".

Nope. I've been wrong every time. The psychological cost of abandoning backward compatibility is still considered unthinkably expensive, regardless of the economic cost of repair after an attack. The bad guys will continue to win until we change this mindset.


I'm a C programmer and I agree with this - C is fundamentally flawed from a security viewpoint. However, security isn't everything about a language. The reason we don't switch to "C alternatives" is that they all have flaws and don't replace the flexibility and versatility of C/C++, e.g. draconian type systems (Ada) or pedantic enforcement of memory safety (Rust). When programming in your language is painful, no one will switch to it, and C/C++ will continue to dominate and be the basis of most software stacks/operating systems/runtimes. An analogy is: why are you using a car which is unsafe, expensive, polluting and prone to road accidents, instead of a safe, reliable, cheaper and environmentally friendly cargo tricycle?


If I could wave a magic wand I'd have the C standards committee:

1. Add safe arrays to the next version of the standard.

These arrays would work much like C arrays do, but taking a reference or passing to another function would pass an array_ref type rather than decaying to a pointer. Under the covers an array_ref would be a (size_t, address). Taking a reference to an element in the middle of an array would get the correct size of the reference.

The default mode would be to bounds-check accesses to array_ref, aborting on failure. Since the array_ref knows its size it would be easy to check the index and return an error code if you prefer not to abort ("if (i >= sizeof some_array_param)").

2. Make signed integer math trap on overflow and add wrapping variants (similar to Swift's &-prefixed versions of operators). For some projects this would be too disruptive but compilers would undoubtedly offer options to disable it. But the fact that this existed at all would immediately start pushing people to adopt it for the same reason people like to adopt -Weverything or -Wall.

2b. Add generic functions to check for potential overflow so we can have an officially blessed way to say "if x + y > INT_MAX" with no undefined behavior. This is so easy to accidentally get wrong without any obvious indication that you've introduced undefined behavior.

3. Some blessed reference counting library and/or types. If C had something to say about memory management other than "you're on your own" it would be a big boon to eliminating dangling pointers and use-after-free. I recognize there are many contexts where this isn't appropriate (e.g. embedded that statically allocates all memory at startup) but a lot of C code could adopt a stdlib-provided reference counting system if it existed. ARC has proven you can get everything except cycle detection without too much overhead.

I'm sure people could quibble with the exact solutions I've proposed and there may be other things I've missed, but I'd really like to see the standard admit that C has a really nasty legacy and do something about it.
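
For concreteness, here is a rough sketch of what points 1 and 2b above might look like; the names array_ref and checked_add and their exact semantics are my own illustration, not part of any existing or proposed C standard.

    /* Sketch only: a bounds-carrying array reference and a checked
       addition helper, as imagined in points 1 and 2b above. */
    #include <limits.h>
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct {
        size_t len;   /* element count, carried with the reference */
        int   *data;  /* address of the first element */
    } array_ref;

    /* Bounds-checked read: aborts on an out-of-range index, as the
       proposed default mode would. */
    static int array_get(array_ref a, size_t i) {
        if (i >= a.len) {
            fprintf(stderr, "index %zu out of bounds (len %zu)\n", i, a.len);
            abort();
        }
        return a.data[i];
    }

    /* Checked addition: reports would-be overflow instead of relying
       on a naive `if (x + y > INT_MAX)`, which is itself undefined
       behavior when x + y overflows. */
    static bool checked_add(int x, int y, int *out) {
        if ((y > 0 && x > INT_MAX - y) || (y < 0 && x < INT_MIN - y))
            return false;
        *out = x + y;
        return true;
    }

    int main(void) {
        int storage[4] = {1, 2, 3, 4};
        array_ref a = { sizeof storage / sizeof storage[0], storage };

        printf("%d\n", array_get(a, 3));   /* fine */

        int sum;
        if (!checked_add(INT_MAX, 1, &sum))
            puts("addition would overflow");

        return array_get(a, 4);            /* aborts: out of bounds */
    }

In the spirit of the proposal, array_get aborts on an out-of-range index by default, while checked_add reports a would-be overflow to the caller instead of silently invoking undefined behavior.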


> An analogy is: why are you using a car which is unsafe, expensive, polluting and prone to road accidents, instead of a safe, reliable, cheaper and environmentally friendly cargo tricycle?

Or a motorcycle, which is even more unsafe --- but "hella fun", in the words of a friend. I'd rather drive, being in control, and take the risk than be forced to ride in a safe, boring, highly-regulated self-driving car.


You aren't just putting yourself at risk by driving, you're putting other people at risk too.


This may apply to a motorcycle you ride for fun (hobby project?), but when I program for a living my top priority is to solve the problem at hand as well and as quickly as possible, not to have fun.


That would be assembler and the demoscene.


Google Chrome OS!!!

To add to your comment about C: it's also a major problem with Windows. Windows systems are tragically easy to attack, despite endless patching and the constant discovery of new vulnerabilities. The vast majority of companies could get by if their clients (the people who click on these stupid attachments with the infected pictures or Word docs or web links) were given Chrome OS laptops, which would hugely reduce vulnerability problems for the majority of end users.


You seem to have given this a lot of thought. So serious question - why don't you create a secure operating system for governments and big corporates? You would easily get the funding and likely an insane amount of revenue.


> The second step is to assign a separate identity to every piece of code

Cool story, but now you will need a time machine to go back 50-odd years and tell them this. And to build that time machine you will need C.


> The Internet ... started out by only connecting computer networks. But today it connects networks of vastly different sorts: computers, yes, but also financial networks, distribution networks, road networks, water networks, power networks, communication networks, social networks.

Those are all computer networks. It just happens that some of them involve computers running financial software, or computers running electrical control software, or computers running social software. Or computers running phone software, or light-bulb software, or toaster software.

Perhaps we don't need networked toasters, but I don't want to go back to a world where financial transactions are conducted based on hand-scratched notes and shouting on trading floors.

Computers are just too powerful to ignore. The problem is not that they are computers, but that some are locked down against user control of networking, yet simultaneously designed insecurely as, say, "smart web cams" without the necessary incentives to keep them secure.


> I don't want to go back to a world where financial transactions are conducted based on hand-scratched notes and shouting on trading floors.

Seemed to work pretty well for a long time.


Our expectations have since increased. I wouldn't want to go back to the GDP I thought was fine ten years ago.


One subtle question: how much of that GDP change is growth, and how much is risk redistribution?

Obviously communication efficiency has real advantages. Some of what has been gained is lasting improvement. But humans have a nasty tendency to enforce order on chaotic systems and mistake legibility for growth. Everywhere from forest fires to housing prices, we create systems that delay, but concentrate, risk. Annual brushfires are extinguished, leading to decadal infernos. Credit default swaps enable high default lending at low risk, right up until they don't.

I'm darkly suspicious that many of the gains from communications technology, particularly in stock trading, fall into this category. They've allowed us to concentrate and defer everyday risks, getting us growth over most time windows when the reality is that we've redistributed risk.


The question is not whether things are good, it is whether it depends on systems that could suddenly go very, very bad.


Adjusted for inflation, has it changed that much in 10 years? Ditto for median wages.


Absolutely yes for both. The past decade was one of the most successful ones, if not the most successful one, in global wage increases and poverty reduction. Literally billions of people have been raised to standards of living they didn't dare to dream of a decade ago.

Of course, these increases have been concentrated among the poorest people in the world, so you don't see them in the US.


Almost all of that in two countries: China and India. And much of that in China. Which had a hell of a lot more to do with raising the floor than lifting the ceiling.

Some of our international globalisation tools assisted in that -- shipping (Maersk included), finance, and realtime shelf-to-factory inventory control. But a lot of it didn't.

As Gibson's noted, the future's already here, it's just not evenly distributed. You might also want to ask, from time to time, just which trends truly represent "the future".


I'm glad to see you're still here dredmorbius.

The facts and statistics in history and economics appear to be 'politically incorrect' for every type of political belief system. Liberal, Left, Libertarian, Conservative, doesn't matter, there's some factor out there that hasn't been incorporated. I think the Google people say something like "unless you're God, bring data".


Thanks.

Yes, epistemology and ideology seem rather opposed.


OP said GDP, so I figured he was talking about the US (or another Western country).


Depends where you were in the world.

In 2007, GWP was ~50Trn USD.

In 2017, GWP is ~80Trn USD.

https://en.wikipedia.org/wiki/Gross_world_product


GDP, if you are an American, has not been related to wages for a very long time. The median American has not received a real wage increase in over four decades.

If you're a libertarian, which some are here, this should worry you.


Tricky POV shift. Your expectations have increased. We are not reducible to a gross domestic product.


Hunting and gathering seemed to work pretty well too.


Similarly, if it worked so well, why did we all make the switch?


The rational actor school says it's because we're all rational actors and the alternative outcome was clearly preferable.

There are other, and frequently far more convincing, models, which suggest that humans frequently make poor decisions out of near-term exigencies, coercion, underappreciated risk (pretty much just what we're talking about here), or any number of other reasons.

See: Fallacy of composition, Logic of Collective Action, Tragedy of the Commons, Prisoners' Dilemma, Red Queen's Race, Complexity Trap, among others.


Well, sure it did. It was the best we had, after all, and folks were trained to do it. But compared to today's transactions, it is really a step backwards.


Definition of "well" is where people will disagree. And good luck resolving that disagreement...


People got by pretty well with sunlight and lamps to light their homes, too.


And they still can with heliostats and fibre optic cables :-)


People still do.


They do, and candle and lamp-caused fires kill tens of thousands of people every year.


doesn't scale up...


> some are locked down against user control of networking, yet simultaneously designed insecurely as, say, "smart web cams" without the necessary incentives to keep them secure.

The Internet of Scary Things.


The S in "IoT" stands for "security".


I wish you were less right. I'm working on this "whole world connected" system and the things I see developed in IoT devices are just too scary (like sending your wifi SSID and password unencrypted to some distant server about every minute). The only way to get some damage mitigation will probably be some kind of IPS/firewall hub so that all your "smart" devices are isolated.

When someone connects 1000 different devices into one system, he now has 1000*x potential holes into this system. Requiring that every connected device is perfectly secure is just not realistic. Even those with the biggest budgets can't be totally secure, so what do we expect from some small manufacturer with two programmers and tight deadlines? We need to have good firewalls for IoT devices.


I've speculated before about the idea of having ISP service packages include CPE that by default supports UPnP only on a VPN interface that's automatically configured and matched with an ISP-resolvable domain name for the customer's use.

To your point here, it seems reasonable to have that VPN interface also apply an outbound traffic firewall that defaults closed - that'd put a hard stop to the kind of trivial yet horrific exposure you describe, and making automatic firmware updates require extra effort to work seems like a relatively small price to pay. (After all, it's not as though device providers who fail to scruple at that kind of atrocity are going to be putting out security updates on a regular basis...)


I think the problem is that we have one network doing many things that need to have built-in redundancy, both in terms of messaging and data storage. So putting a bank's entire database into Azure or AWS isn't wise in my opinion. Rather, I think the network either needs to be fragmented to serve different sets of customers or the end points need to be improved to the point that onsite hosting becomes the norm again. Centralization is much like specialization in that it can be adopted to the point of diseconomy.


>Rather, I think the network either needs to be fragmented to serve different sets of customers or the end points need to be improved to the point that onsite hosting becomes the norm again.

Hosting in the cloud is a bet that Azure/AWS can protect your data from physical damage/loss as well. It isn't just a cost reduction from hiring fewer sysadmins and desk support. They have ginormous amounts of cash to build quality data centers in stable-Earth locations that still have excellent internet connections.

Really, the answer is more data centers and cloud hosting companies with replication and cold-storage agreements between them (where sane).

But for a lot of companies out there, this is overkill. One cloud provider is more security/safety than they'd likely ever get building their own data center.


The analogy with electricity is pretty good. When the electricity goes out, most people are pretty screwed: it powers our lighting, air conditioning, household appliances, refrigeration, gasoline pumps, Internet access, point-of-sale systems, and in many places, cooking and running water too.

However, when the electricity goes out, it usually comes back on within a day or so, because a power outage is viewed as a top priority to fix, simply because so many aspects of modern life depend on it. And really critical businesses (supermarkets, hospitals, etc.) invest in generators so that they can run independently of the power grid.

The Internet will likely follow that path. It'll become indispensable to modern life, which means that when there's a threat against it, a lot of experts will be mobilized to put it back in service. And really critical businesses will invest in making sure their systems work even when offline.


The scary case isn't the internet going down. The scary thing is the internet becoming too hostile a place.

When I can't reach the internet, all it takes is a new cable. When anything I connect to the internet is going to be infected and DDoSed to oblivion, the fix is a lot harder.

The other scare is that some critical infrastructure is exposed but not widely known. This allows the infrastructure to develop resilience to unintended failures, but not malicious failures. When this finally gets found, carnage may happen. An interesting example of this is BGP.


The same concerns were expressed about electricity [1] when it was first invented. The usage of the electric chair for executions was sponsored by Edison Electric [2] (now GE) as a way to link rival alternating current (and specifically Westinghouse; Thomas Edison made sure that the first executions were conducted with Westinghouse machines) with death in the public's mind.

The solution wasn't to give up on electricity, it was to adopt basic safety precautions like step-down transformers to household voltage, elevated power lines, and insulated couplings.

[1] https://en.wikipedia.org/wiki/War_of_Currents#Safety_concern...

[2] https://en.wikipedia.org/wiki/Electric_chair#The_Medico-Lega...


The three-letter agencies haven't changed their minds. They still think that every computer being insecure makes us all more secure, and they will and they have acted to create this situation. That's the difference between now and a century ago. The govt back then didn't feel it was in their interest for electricity to be fundamentally unsafe for consumers. Until the agencies genuinely reverse course, inventions to promote safety can't move forward. We'd have basic safety now, as you describe, had room been made for it. But instead, they played past the edge - way past the edge, and have no profound regrets about that that I've heard.


I've never heard about this part of history before. I wonder if a dentist inventing the electric chair has something to do with why people don't like them :-)


It's not like a power outage. It's more like a power surge that's somehow crafted to exploit every engineering flaw in every surge protector that well-funded state actors have piled up over the years. We would not recover from that in days.


Not even Google, in all their might and splendour, can reliably deliver software to all the versions of a platform they themselves created.

I find it hard to believe that even the most competent villain will work out a way to hit 10% of the devices out there, let alone everything.


Google doesn't use their clients' bandwidth and connectivity to help them.


Is bandwidth really the limiting factor for Google?


Kinda. Electricity is pretty darn dumb. You can get cheap-ish 2-stroke generators with a typical outlet or two, enough to drive anything short of a refrigerator or freezer.

Similarly, analog phone lines are pretty darn dumb. Vibrations registered by one membrane are replicated in another by inducing changes in an electric current connecting the two. The complexity is in the switchboards that construct the circuit over distances. A proper POTS setup can actually handle loss of long distance lines without losing local connectivity, as long as the local switchboards are powered. And for that we are back at generators.

Computers on the other hand are complex. Your typical Windows install has a bunch of processes going in the background, talking to various pieces of hardware, and that is just a clean install sitting idle.

Frankly the old microcomputers were more reliable. At worst a crypto attack would take out whatever floppy was in the drive at the time, and would be gone the second a power switch was flipped.


Aren't we already at that point? I haven't experienced any "internet outages" that lasted more than a couple of hours. Critical services are highly redundant.


Kinda. Thing is that with PCs and smartphones it is really the first time that the smarts are at the edges.

With electricity it is in the transformers and the power stations. The kit at the individual home and office is downright dumb in comparison (and can be operated via local generators in a pinch).

Similarly the analog phone network has its smarts in the switches. Not in the phones in homes and offices.


"Necessity is the daughter of invention."

That is: the things we invent, whether out of necessity or otherwise, often become necessities of themselves. Part of our collective dependency tree.


> I’ve often wanted to sit down with Mark Zuckerberg and argue with him about Facebook. It is premised on the notion that "connecting everyone" is an unmitigated good.

This isn't the author's major point. But worth noting Zuckerberg changed FB's mission recently to "Give people the power to build community and bring the world closer together."

Said Mark: "Connecting friends and family has been pretty positive, but I think there is just this collective feeling that we have a responsibility to do more than that and also help build communities and help people get exposed to new perspectives and meet new people -- not just give people a voice, but also help build common ground so people can actually move forward together."


Wow, that's really interesting. Hadn't noticed the change in the mission statement. For reference, it used to be: "Give people the power to share and make the world more open and connected."

They changed from making the world more "open" to making it "closer", which are antonyms in a certain way. They changed from the idea of having Facebook be this sort of universal community of people getting together to searching for more intimacy, getting people closer, etc. but they keep talking about "the world."

I've always thought there was room for an "anti-Facebook" that would be the opposite of their original vision statement. Its goal is not to "share things", i.e. to make things public, but to create more private communities that are tighter; not to make the world "more connected" (everyone having 5000 friends) but better connected (i.e. the value of relationships is higher).

It seems like FB may be trying to go in that direction after all, but who knows what they really meant.


It seems to be Facebook acknowledging that close-ties networks are of significantly more long-term transactional value than the loose-ties network whose value is almost entirely advertorial.


Yeah. But what does that mean for their business model?


Not much. If they can still be the place where all the small communities gather then they still have all the data. It's clearly in Facebook's best interest to provide maximum value to their users; whether that is small or large groups doesn't matter to them.


They are in a great position with WhatsApp and Messenger to take advantage of the close-ties transactional behavior of the future.

In short -- they are extremely well positioned for the next 10 years.


> who knows what they really meant.

They meant that several EU countries are pushing them to "do something" about terrorism and nationalism spreading on social networks so they are preemptively increasing moderation under slogans of unity, community and understanding before politicians come up with some byzantine hate speech enforcement which would bring them serious costs or kick them out of this market altogether.


We are building the anti-Facebook: Lyra.

www.hellolyra.com


That doesn't sound different, just rephrased PRBS.


I think most techno-optimists wouldn't say it is an unmitigated good, but a net good.


I think most techno-optimists would probably find in Koster's article much more to which to object than this characterization of Facebook.


"But today it connects networks of vastly different sorts: computers, yes, but also financial networks, distribution networks, road networks, water networks, power networks, communication networks, social networks."

This is, I hope, incorrect.

Power, water, road, etc. should not be Internet connected and in many cases should not be networked at all.

Homeowners doing cute things with arduino and their lawn sprinklers (and learning good lessons about simplicity and fragility) are one thing. It's quite another to bear the responsibility for critical infrastructure.

I hope the adults in the room have the wisdom and experience to eschew these kind of "improvements".


Sensor networks are great boons to any infrastructure. They allow for much quicker reactions and much better tuning. However, this network doesn't need to be public.

I can see how you'd pick the internet as a base layer for this network, which would be a bad idea. The alternative is building an entirely separate network, which seems almost undoable.


For some industries it makes sense. Rail and power networks use their own physical infrastructure as they have specific latency and reliability requirements for protection and signalling systems and it's convenient to just build their own comms networks alongside their other infrastructure.


Rail and power also already have physical infrastructure and property along which they can setup networks.


I don't think this is a revelation to anyone in tech circles. For this reason, I usually choose products that aren't internet enabled.


The article's point is more than about consumer devices being internet enabled, it's about how many things are internet dependent. Everything from critical infrastructure to production and distribution networks are now Internet dependent in terms of actual delivery.


In large part because of company and government cost savings via off-the-shelf parts and BYOD.


Do you also choose doctors, grocers, utilities, planes, and cities that are not Internet enabled?


I do look for a RJ-45 port on my doctor (I avoid those), but trying to lure any of the remainder into a faraday cage has been difficult :/


And when the self-driving semi-truck that delivers food to your local grocers is internet connected?


I'm starting to believe we need some kind of Geneva convention among intelligence services.

It's one thing for them all to be spying on dissidents and terrorists or however they want to bother their internal populations, because they'll be limited in how much damage they want to do to their own economies.

Once you have intelligence services engaged in all out warfare with each other, there's really no limit to how much damage they can do. Up to and including deaths.

We need to get to some kind of gentleman's agreement between the CIA and the FSB really quickly before the world economy collapses.


It is kind of ironic, since the internet was designed as exactly the opposite: a damage-tolerant coast-to-coast communications system, i.e. one without a single point of failure.


Double-plus-so. The Internet continues to be damage tolerant. The problem is that we have collectively expended the last 20 years building single points of failure, and hooking them up all through the Internet.


Site got ycombinator'd, so here's a google cache: https://webcache.googleusercontent.com/search?q=cache:AyxA4F...


scnr: Good we have a little redundancy there =)


Unfortunately, the Google Cache is still trying to load data from raphkoster.com

Do you have a pastebin dump?



Thanks!


When that happens you can press "text-only" on the header, and you'll get an embed-less version. Sometimes you may have to press esc first.


Tainter postulates that societies collapse when they reach their limit of complexity.

https://en.wikipedia.org/wiki/Joseph_Tainter#Social_complexi...


When Raph speaks my ears always perk up.


Site not loading for me. Internet archive mirror: http://web.archive.org/web/20170629000253/https://www.raphko... (Also I realize I share a surname with the author but am of no relation that I know of.)


Thanks for reminding me to back up my gmail and gdocs.

I think she makes a good point about not getting one's paycheck - that could definitely have a devastating ripple effect on the economy. But is it really possible that the entire internet could go down?


That's the thing, he seems to be ignorant of just how redundant and reliable the system is. He uses the example of a worldwide economic collapse if Google went down for a month. Which is probably true. Thing is, that's about as likely as a worldwide economic collapse from the US being nuked off the face of the planet and everyone else left untouched.

The beauty of software is that it can be replaced/repaired quickly. If a power plant gets utterly destroyed, it could take a minimum of months to build a new one. If a piece of software gets corrupted, you load from backups or buy a new copy. Downtime is incredibly minimized for even the worst designed system.

That's not to say disasters can't happen, but they will be limited in time and scope so long as there's enough money to throw at the problem, and if the amount of money-throwing ever becomes too much to swallow then we might finally see some widespread solid security practices.

Honestly, simply keeping systems updated would mitigate most of the potential "devastating" attacks; the reason the recent ransomware attacks got as far as they did is lack of funding/will, because it's largely cheaper for organizations to let themselves get pwned than it is for them to protect themselves.


Part of the issue is not redundancy, but heterogeneity. It is like monoculture crops - doesn't matter if you're growing 4x what you need if something comes along that kills all of them.

> If a piece of software gets corrupted, you load from backups or buy a new copy

Software is cheap, as you say. Data isn't. The piece's author indicated that the influenza-of-the-month was launched via tax software, which generally isn't interoperable. If there is no reasonable replacement, what do you do?

Heterogeneous systems are more expensive to operate, require more expertise, and cause more compatibility problems than monocultures. But they also don't all die at once due to the same bug.


The same thing you do if physical tax/medical/financial records get lost in a fire, or any other data for that matter. It's a major loss, maybe even an economic crisis for some, but not an unrecoverable one. Worst case scenario you move forward with less data and insurance does what it can. Create countermeasures so it doesn't happen again. Said data should also be backed up for this very reason.

Also, just because devices are on the same network does not mean they're homogenous. A Windows vulnerability won't take out a Linux server or an IoT webcam. For everything to truly "die at once due to the same bug" you'd need a network layer attack. Compromising the internet protocol itself, even if possible would cut off the attacker as well, so that leaves us with denial of service attacks, which are common. But services like Cloudflare already effectively defend against such attacks, and even compromising an entire major cloud platform (as in the author's AWS hypothetical) will simply result in the cloud provider as well as other interested parties pouring all possible effort into fixing the problem as quickly as possible.

Imagine that your monoculture crop could develop immunity to a new disease within hours, days or weeks, with immunity developing faster for more serious diseases. Then all you need is a big enough field to absorb any potential losses.


> Worst case scenario you move forward with less data and insurance does what it can

Well, yes. Just like, after a house fire, you rebuild and try to carry on. It is still catastrophic.

> Also, just because devices are on the same network does not mean they're homogenous.

Of course not. The point is that there are many pressures towards running homogenous systems - easier to hire for, easier to manage, fewer support systems to run, fewer interop problems, bigger vendor discounts, etc. etc.

These pressures are hard to resist, but we have to do better at running heterogenous systems, because without them, your entire farm can burn down before you notice.

Nothing you say is wrong, but just because "it's just software" doesn't mean these aren't catastrophes.


Not saying they're not, but the author specifically mentions "existential threat", "paralyzed economies", and worlds where people without technical skills are instantly hacked upon connecting to the internet. He uses the example of water supply so completely poisoned that no one can drink from it.

The author seems to be making an assumption that we could enter a world where insecure technology could be taken offline all at once by some random hacker(s) and remain in such a state long enough to completely destroy economies and institutions. Frankly the system just isn't that brittle, if it was it would have failed long ago.

It's like looking at Hurricane Katrina, which while catastrophic was never an existential threat to the US, and saying, "now imagine if Hurricane Katrina happened everywhere, every day!" without considering if such a thing were possible or likely in the first place.

As for heterogeneous systems, what do you think the cloud is? Sure it may be homogenous for users, but that's all abstraction for what is a VERY heterogeneous system under the hood. Sure having independent/redundant subsystems is ideal for reliability, but at the end of the day I don't see the need for all the frightful abstractions. The farm isn't going to burn down without anyone noticing, there are simply too many powerful interested parties and too much built-in resiliency.

I'm all for building in more resiliency/redundancy to prevent catastrophes as you mention, but the author takes a couple of major security incidents and spins them into apocalyptic techno-panic.


> The same thing you do if physical tax/medical/financial records get lost in a fire, or any other data for that matter.

The thing you fail to grasp is the resilience of physical records. Imagine Germany trying to take down England's records during World War I, one hundred years ago. The destruction of merely 3-5% of all relevant documents would have required the coordinated strikes of hundreds if not thousands of arsonists, all of whom would need to be part of an even bigger spy network. That's for England proper; you might be able to scale it up to Scotland and Wales, but it would be completely unimaginable to pull this off across the whole British Empire.

Today, it might be harder to take down one hospital, but once you have gone that far, scaling it up to the national level should be pretty feasible.

> Imagine that your monoculture crop could develop immunity to a new disease within hours, days or weeks

You never have opened a priority bug in one of the big software companies, have you?


If properly backed up, digital records are even more resilient, and it's not like this is a new concept. Banks are incredibly resilient when it comes to backups, for example. As are most of the major tech companies. Sure, other institutions like hospitals and some underfunded government agencies might need to catch up, but once the will is there the technology is just waiting to be implemented.

Not to mention the insecure institutions are insecure in different ways. Whatever method is used to take down one hospital is unlikely to work on the next, although standardized national healthcare systems like the NHS might be more vulnerable to such things.

Opening a priority bug is an entirely different animal than responding to an active attack. If something was able to shut down AWS, Google, a bunch of hospitals, or any other critical service, it gets immediate attention and reaction. Humans become incredibly productive once shit hits the fan.


> If properly backed up, digital records are even more resilient, and it's not like this is a new concept.

If people were willing and able to make proper digital records, the Cloud would not exist. Actually, most of the technology stacks in use today do not make any sense until you consider the fact that a very large segment of the market wants to have all the goodies the IT fairy godmother can provide, but is too damned stingy to pay for even 10% of the cost.

Your characterization of banks is correct, but irrelevant. In many ways they are the perfect IT customer: deep pockets, an inner culture that values detail orientation and rational risk assessment, appreciation of external expertise, etc. Most organizations are very much not like this.

Healthcare IT, in particular, is the stuff of nightmares. A culture of bikeshedding - excessive regulation of what systems ought to do, plus borderline criminal negligence of the implementation details - reliance on obsolete OSes that cannot be updated anymore, needlessly large attack surfaces... do I need to say more?


Reminds me that one of the first things that would happen during uprisings in the past was the burning of the local bank's loan ledgers...


Ah, suddenly I feel all my 1337 cross-platform skilz could remain useful. I do like standards, but I absolutely do not like monoculture implementations, particularly when they have often been crappy (e.g. 90% of Microsoft API calls NOT behaving as the little-if-any docs suggest, as used to be my experience).


> He uses the example of a worldwide economic collapse if Google went down for a month. Which is probably true.

Except it isn't, because all these services are fluff. If Facebook went down for a month, when it came back everyone would have forgotten about it, and might even have forgotten social networking as a concept. If Uber was offline for a day, everyone would shrug and download Lyft, and one ride later would have forgotten Uber ever existed.


Why would you assume that a catastrophic software collapse would only affect software, though? Picture ships unable to navigate at sea, goods not being transported, power plants going offline, and the knock-on economic effects, which would be competing with the software problems as existential priorities. I suggest the system is a bit more fragile than you imagine, and that a couple of well-targeted mass panic events or attacks on critical infrastructure could be devastating.

Of course that's a Y2K-type disaster scenario, but it's a sad fact that there are actors seeking the capability to militarize hacking on a grand scale, and others who seriously and desperately want to subvert the dominant paradigm and revert to older ones (Aleksandr Dugin in Russia being a good example, though fortunately not a very influential one). In a way, the US has yet to fully recover from 9/11, for example; the country has been in a defensive military posture ever since, suffered the economic equivalent of a heart attack ~7 years later, and has now undertaken a course of international isolationism and domestic policy that seems perverse by historical standards.

I don't disagree with you on the general resilience of software and networked information structures, indeed I'd say they're the best hope for our society and (contrary to the arguments of this writer) that maybe we should be hurrying to make our political and economic structures work more like the internet does, so that when they're damaged society can route around the damage rather than being paralyzed by it. What if, for example, we were able to do away with legislatures and the corruption they engender, and find a way to manage our legal codes like Wikipedia or Github projects at their best?

But while we grope towards more responsive political and social structures, we have to deal with the reality of high informational interconnectedness coupled with extremely rigid and asymmetry-maximizing power and control structures that are mostly hierarchical in nature. The vast majority of our organizations, government, institutional, and corporate, are hierarchical/pyramidal in nature with a very small number of executive actors exerting operational control which then propagates downward through the organization. Even if a firm relies on an internal culture or decision-model that's more cellular or distributed in nature, those legal structures of control still matter. It's an open question how well society can function if there's an attack on those critical structural elements.

With the advent of machine learning, is it so hard to conceive of a program that can seek out the critical actors within a corporation based not on things like their PII or job descriptions, but simply the volume, frequency, and centrality of their information traffic patterns? It might not go after the CEO or upper management at all; it might go after the paralegal of the smartest lawyer in the legal department, the person in the logistics department who hasn't called in sick in 10 years, and the manager of the company canteen, whose steadiness and reliability are critical to organizational health precisely because they've become taken for granted by everyone who interacts with them. Right now cyberattacks appear (to my uneducated eye) to be launched and evaluated in terms of scale and intensity, but it's only a matter of time before they evolve a preference for criticality and simultaneity.



If anything, the Internet was created with resilience in mind. In fact there's no "the Internet" but rather a network of networks that is by design very hard to shut down.

You could argue that there's DNS and that's more centralized. Sure, point granted. But in theory you can use any DNS server you want... and there are projects which make it less centralized.

Now, a different topic is how devices can be taken over... and that's something where we are largely to blame. The EFF, and the old-school Open source folks like Stallman have been warning for years that we are giving away our freedoms by trusting in closed source. And they were right. Now we are in the endgame where every single computer has one of these "management engines" that cannot be turned off and have total control over a computer, and where your source of entropy is RDRAND.



