Bruce Schneier: 'The Internet Era of Fun and Games Is Over' (dailydot.com)
325 points by mpweiher on Nov 23, 2016 | 198 comments



> When it didn’t matter—when it was Facebook, when it was Twitter, when it was email—it was OK to let programmers, to give them the special right to code the world as they saw fit. We were able to do that. But now that it’s the world of dangerous things—and it’s cars and planes and medical devices and everything else—maybe we can’t do that anymore.

Mark this mindset as the beginning of the end of the open, inclusive programming world as we know it.

Schneier visited RIT (my alma mater) last spring, and his presentation revolved around the threat presented by IoT and the growing need for national legislation to encumber it. I asked him a pointed question about how this scaled to the _international_ level, which he decided mostly not to answer (focus on domestic policy first, and such). Because the answer is simple: _it doesn't_. Without global collaboration, this philosophy is the beginning of national internet fiefdoms - more so than what exists today - and the beginning of the end of the global collaboration we freely enjoy today. I value this freedom a lot.

I respect Mr. Schneier for his pointed responses to popular security issues and his ability to be a public face for computer security, but I strongly disagree with the future he's lobbying for. Maybe I just can't accept the hard reality that "security isn't easy" and that government regulation is the only way to force security on people.


It's not like regulating devices is unprecedented. New devices already have to be approved by the FCC before being sold in the U.S. What if the FCC also checked that Internet-capable devices are safe to connect to the Internet? This would have global impact because most companies want to sell their devices in the U.S. (And even more so if other countries with big markets cooperated with similar standards.)

Another possible model would be something like having Underwriters Laboratories and other independent organizations check the devices.

This is never going to be perfect, but it doesn't need to be. The goal is to make sure that devices people buy at the store are reasonably secure. In previous eras, the goal of new regulation was to make sure that you can still listen to the radio and watch TV, and that people don't often get electrocuted by their appliances. By and large it seems to have worked.

For more: https://en.wikipedia.org/wiki/Nationally_Recognized_Testing_...


A third possibility, one that I find more likely and that Schneier has advocated before, is that government regulates the device manufacturers rather than the devices. For example, if a company sold more than X Internet-capable devices, the threat to society would have to be addressed in the form of liability and insurance. Insurance companies would then enforce standards and best practices, and economics would ensure that security teams get funded. All the regulation needs to do is limit the scope of limited-liability disclaimers.


Letting insurance companies enforce best practices seems pretty inefficient, considering how that strategy has played out in the medical markets.


Medical markets are a bit different. No matter who you are, you're likely to participate in the medical market.

If iPhones cost $7000 instead of $700, there would be far fewer iPhones. The same goes for IoT lightbulbs (though those are pretty expensive already).


Are you suggesting that some people should be priced out of using the internet?

This is exactly what I was talking about: insurance companies will use the fact that you need access to something to erect a giant money gate in front of it, justifying it with advanced tech that helps in some ways but is almost always used out of context, based on the control policies the insurance companies force practitioners to follow.


Not out of using the internet full stop, but out of putting internet in every random device, perhaps. If the increased liability of putting internet in your refrigerator gets reflected in the purchase price, so that an IoT fridge has, say, an extra $200 in liability tacked on vs. a non-IoT fridge, many people might choose to buy the non-IoT fridge, which is probably not really a huge loss to society overall.


The insurance market is also relatively free. It's not like there won't be competition there either. Whoever has the best risk modeling to price things appropriately should win.


The phenomenon you describe, demand elasticity, exists just as well in the medical market. There's a lot of rationing going on, explicit or implicit. It's just that the elasticity coefficient may be lower compared to the smartphone market.


The FCC model worked because the devices it regulates are mostly local in range, but this stuff is global. Even if you do get some regulation of First World internet devices, there are still going to be millions of legacy devices, millions more unregulated ones in the Third World, plus who knows how many illegal ones and even more with latent flaws. All of them are going to be targets when the next DDoS botnet wakes up. So what do you do there, put up national firewalls? How?

There are around 9,000 CVEs so far this year. Should devices be checked against all of them? How about next week, does the vendor have to go back and check your fridge? Do they have to patch it? For how long? Who pays for all that? A $20 webcam suddenly needs $500/year just in ongoing maintenance and updates.

I don't have any answers, only questions.


> A $20 webcam suddenly needs $500/year just in ongoing maintenance and updates.

Yes, but what is the negative externality the market is not currently capturing when these devices are conscripted into botnets and used against others? Simple economics states that the $20 device is $20 because somebody else is paying for the security mistakes of the shitty device's developers.


This point is key and a lot of people seem to be missing it. We're collectively paying for a lot of this already, it's just hard to measure.
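A hedged back-of-envelope illustration of that point (every number below is invented): the security work the vendor skips is tiny next to the damage externalized onto third parties, which is exactly why the $20 price tag is misleading.

  # Back-of-envelope sketch; every figure here is invented for illustration.
  devices_sold = 1_000_000
  hardening_cost_per_unit = 0.50                 # vendor's skipped security work
  botnet_damage_to_third_parties = 10_000_000    # one large DDoS campaign

  vendor_saves = devices_sold * hardening_cost_per_unit   # $500,000
  print(f"vendor saved ${vendor_saves:,.0f}; "
        f"everyone else paid ${botnet_damage_to_third_parties:,}")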


And, not to be prudish, but the lack of regulation is part of what has allowed the industry to explode. I'm less likely to succeed at a Kickstarter for a neat computer dongle if it needs an extra few thousand dollars for regulatory approval before I can ship it. While I have no personal experience with the FDA approval process, my father worked for a company in the healthcare industry - I'd hear stories of multi-month-long FDA audits of their hardware after a 'statistically significant' number of failures in the field. That kind of pressure is not amenable to one-man operations, which are frequent in our field - nor should a dev be on the hook for lifetime tech support for a silly lightbulb or other trinket. Yeah, the infrastructure as it currently stands has issues, but while regulatory pressure will whip the big players into line, it could also easily choke out smaller players and startups.

On the subject of answers, rather than questions... I have a funny story. The Xbox One has this neat feature where you can control your console via an app on your local area network PC or phone. The default setting was that any device on the network the Xbox was connected to could control it. Imagine this, in a college dorm. I saw a lot of Xboxes available to control. So, after testing with a friend (yep, I could easily interfere with whatever), I developed a key combo that I could rapidly input from any console state which would open the settings menu and disable the remote control feature, locking out my own access (and I'd know it had worked because I'd be disconnected). That's right, I effectively developed a virus which patched the vulnerability. If attackers have the advantage in this field, then maybe we should put more effort into thinking about friendly counter-attackers. If the silly IoT device can be pwned, then it can be pwned for good, as it were. Does anyone know of any groups working in this area, or any research done towards it? Pen-testing and other white hat hacking activities I know about, but does anyone officially do this kind of guerrilla patching?


I'm having difficulty finding any authoritative or historical resources on this, but I recall that "good" viruses were at one point proposed that would do just that: run around, see if they could infect via the vulnerability, then patch it and self-destruct.

Ultimately the idea was abandoned because of difficulties getting it to work as expected, fear that the fix would introduce more issues, liability concerns, and so on, and probably some ethical debates about computer intrusion even for the purpose of securing the device.

I'm not really sure what stance to take on such an issue, since the idea behind it is good intentions, but I feel like it can lead to unintended consequences that ultimately would leave no one liable. For my personal machines I have fairly vanilla setups, but many of my friends and colleagues have rather intentionally complex setups and most definitely would object to someone accessing their setup and making changes without their permission.


I remember a story¹ about one that was tried. It just took down the university network where it was released.

1 - I'm sure it is in one of my undergrad textbooks. In other words, no way I'll find it again.



> "Increased ICMP traffic"

Wow. If a virus propagated like this on today's networks, would such traffic even make a noticeable dent in the available bandwidth?

Anyways, I hadn't heard of this virus - it's super neat. Patching its own infection vector and even explicitly removing an existing virus from the target machine... The article loathes it for how overtly it affects machines (forced restart to apply an update) and networks (congestion), but the work it attempted to do was decidedly good. Sounds to me like it worked well, but had poor execution in accounting for the network effects it would have. (I doubt it was rigorously tested in a prod environment ;) ) If anything, I'd see this as a case study that this kind of offense-as-defense strategy has the potential to work... It's just that nobody wants to take responsibility for doing so.


  my father worked for a company in the healthcare industry - I'd hear stories of multi-month long FDA audits of their hardware after a 'statistically significant' number of failures in the field.

Healthcare industry? Statistically significant (which I note you put in scare quotes) failures?

I damn well hope that such incidents are taken very seriously by the FDA.


Yeah, I think his point was that it's not sustainable for a company selling $20 webcams to get that sort of scrutiny.

The counterpoint is that, if the webcam is used in failure-critical situations, then it absolutely should be under that level of scrutiny. The problem is finding how you can define that operational scenario in law.


In a world where random webcams can mount DDoS attacks on basically any internet service, is there any non-failure-critical situation for an internet-connected device? (Honest question.)


One alternative is that you get the standards, plus quite a few components (e.g., FOSS) that help meet them. You aren't evaluated unless you're sued over harms from your product. The potential fines or damages go up with the amount of negligence found. This way, it only costs money when harm happens.

Meanwhile, people wanting to use higher security as a differentiator can get evaluated ahead of time as some do now.


That's exactly how it works now, and it doesn't work anymore - the scale is different.


No it's not. I can't sue Microsoft for preventable buffer overflows in Windows. The evaluations they target, which government accepts, don't even look at the source. There's no software liability or source-based evaluation requirement for mass-market software at the moment.

Matter of fact, the NSA's new scheme only requires a 90-day evaluation at EAL1 (certified insecure).


It's not unreasonable to expect that a device connected to the Internet should be patchable over the same connection.

As for the influence on other countries -- blocking their traffic is an effective way to convince them of the need to take action. For sure this still means temporary disruptions and maintenance cost for every operator, but it's part of the "cost of living" on the Net as long as the others don't catch up. Take it or leave it.

(Yes, I am aware of the security risks of hijacking the updates, but it's still a better control than no control at all.)
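A minimal sketch of what "patchable over the same connection" could look like, assuming the Python `cryptography` package and an ed25519 vendor key baked into ROM; the URL and key below are placeholders, and a real device would also need rollback protection and an A/B partition scheme:

  # Sketch only: verify a detached signature before applying firmware.
  import urllib.request
  from cryptography.exceptions import InvalidSignature
  from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

  UPDATE_URL = "https://updates.example.com/firmware.bin"   # hypothetical
  VENDOR_PUBKEY = b"\x00" * 32    # placeholder; real key burned into ROM

  def fetch(url):
      with urllib.request.urlopen(url) as resp:
          return resp.read()

  def verified_update():
      firmware = fetch(UPDATE_URL)
      signature = fetch(UPDATE_URL + ".sig")
      try:
          Ed25519PublicKey.from_public_bytes(VENDOR_PUBKEY).verify(signature, firmware)
      except InvalidSignature:
          return None        # refuse unsigned or tampered images
      return firmware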


Don't limit that to the Third World. Everywhere, it's possible to open a company, sell some products, close the company, and walk away with the money. And I'm not saying it must be malicious.

It's not hard to find bugs and problems hidden in unexpected places or triggered by weird combinations of inputs. Simple mistakes (or well-thought-out backdoors) like goto fail and Heartbleed can explode long after they were created. And then we will sue, get money, send people to jail, but the damage will already be done.


A lot of horrible things are precedented. That something similar already exists is an awful argument for doing it.


"I asked him a pointed question about how this scaled to the _international_ level, which he decided mostly not to answer (focus on domestic policy first, and such)"

He did answer that on his blog however:

"It's true that this is a domestic solution to an international problem and that there's no U.S. regulation that will affect, say, an Asian-made product sold in South America, even though that product could still be used to take down U.S. websites. But the main costs in making software come from development. If the United States and perhaps a few other major markets implement strong Internet-security regulations on IoT devices, manufacturers will be forced to upgrade their security if they want to sell to those markets. And any improvements they make in their software will be available in their products wherever they are sold, simply because it makes no sense to maintain two different versions of the software. This is truly an area where the actions of a few countries can drive worldwide change."[0]

And I mostly agree with him. If major markets start requiring certification of security (maybe something like FCC and CE marks), the rest of the world will follow, as they want to trade in these premium markets. Sure, it will not solve the problem overnight, but at least it could be made less severe.

0. https://www.schneier.com/blog/archives/2016/11/regulation_of...


If you want to see a simple example of this in action, consider the impact that the California Air Resources Board has had on vehicle emissions globally. If you want to sell a car in CA it has to meet CARB standards. This effectively means that if you want to sell a car in the US it has to meet CARB standards. As a consequence, one of the largest car markets requires a specific minimum level in terms of emissions, and almost all cars built for an export market can meet these standards or be easily modified to do so.

One state moved the needle here just by virtue of being the largest component of what was the largest market. There is no reason the same could not be done for internet-connected devices.


CA regulation did not stop Volkswagen from doing what they did. And they did it for many years.


It didn't stop them from cheating, but they had to make the effort (a quite complicated effort at that) rather than just ignoring the regulation.


Nice to see he's thought about it since then. But anyways:

> "simply because it makes no sense to maintain two different versions of the software"

Except this isn't true. European law already requires Microsoft to separate certain features from its core product for European distribution - which is why there's Windows N for Europe and normal Windows for the US. Other pressures (such as government-mandated backdoors, or regional media licensing) could easily create a situation where a company would find it beneficial to shard its software by market. (Which is, again, a load only larger companies can easily take on or optimize for!)


Microsoft already had their market segmented, so it was not hard to implement. I really doubt that some no-name Chinese router maker would bother to keep a few versions of firmware versus just disabling the default admin password (yes, even that would be a huge step forward).


I think we're probably transitioning into the stage where the global computing infrastructure finally resembles biological systems. Malicious agents will be everywhere. Attacks will be carried out all the time, at various scales. Anything that wants to survive in this environment will need to have some sort of immune system. Likely there will be a hierarchy of defenses at various levels, as a matter of business as usual.


Biological systems are different from our infrastructure in that it seems no part of an organism tries to fuck over another to get more resources for itself. I assume this is a result of a local minimum (an organism that fights within itself would kill itself quickly), but maybe there are some mechanisms ensuring cooperation within the organism too? E.g. something that prevents a developing brain from growing to capture more oxygen from the bloodstream than it should, starving out other organs? Maybe someone with a biology background can chime in on this?


Cancer is a central example of one part of an organism seizing resources in a way that threatens the organism itself.


Well, you're looking at the Internet as a single organism. I don't think that's appropriate. There are many, many selfish entities that populate it, that would happily wreak massive destruction.

Case in point - the recent IoT Dyn attacks.


If we had other civilizations to compete with for resources we could evolve into the cogs of a single superorganism!


DISCLAIMER: I don't necessarily think the types of regulations proposed are the answer, but at the very least there should be some responsibility on people using high-bandwidth connections. It would probably have to be legally mandated.

Even if his proposal doesn't fix the rest of the world it would set an example for other countries.

It would also give other countries more that they can do themselves. Like with the DNS DDoS, the traffic originated in the US -- even if the perpetrator was in another country.

Getting the US locked down means that those countries can start taking responsibility (and I mean that in a positive way) for their own piece of the internet. Right now there isn't even a place for them to start.


I generally agree with this, but with respect to IoT, let's be honest: these are things, and they are connecting to the Internet from some country.

Would it be so bad if the law required "any connected device sold in the US must permit firmware updates, at least for security patches"?

If the devices are on a US network, it seems reasonable to require them to meet certain standards. Other countries can set their own standards, just like we have FCC standards for wifi/spectrum usage in the US, and in the EU the CE sets the standards. This has not balkanized wifi; if properly done, I don't think it would balkanize IoT.


The problem with "permitting firmware updates, at least for security updates" is that there are no vendors that only do security updates. Our industry is guilty of using security updates as the proverbial carrot. :(


Indeed; ever since I first had a cable connection, I remember the software industry using automated updates as a continuous-delivery platform. That's why I always hated updates and often disabled them in the past (back when I could feel the impact on my Internet connection and processing power).


He addresses this in his testimony when he talked about emissions standards in California affecting emissions standards across the US. I believe his words were, "We don't need two versions."

Oftentimes regulations in one jurisdiction will impact behavior and equipment sold in other jurisdictions. This is especially true when the regulations are not particularly onerous, the jurisdiction with the regulations represents a large market relative to the total market, and the number of suppliers forced to comply with the regulations is limited.

I worked for a long time in the embedded networking equipment hardware business and was involved in the rollout of RoHS. https://en.wikipedia.org/wiki/Restriction_of_Hazardous_Subst...

RoHS is an EU directive, but it had the larger effect of limiting hazardous substances in devices across the world. Manufacturers wanting to sell equipment into the EU had to get rid of things like lead solder and replace them with safer alternatives. It was simply too expensive for most hardware manufacturers to have two different manufacturing processes, one for the EU and another for everyone else. So now in the USA you'll have a difficult time finding lead solder in new electronic devices, and it's because of the EU and RoHS.

Hardware device manufacturing these days is incredibly concentrated. These companies will abide by whatever regulations the USA or the EU forces them to abide by because they want to sell into these markets. And it's likely not cost effective for them to create parallel assembly for smaller markets.


Security isn't easy. It remains to be seen whether regulation will actually be helpful in getting it right.

Personally, I don't believe that it will, but that's just my opinion.

As with most things in this field, it's one thing to talk about it, another to do it. Schneier has been talking about software liability for a long time.


I'm not sure regulation in this country is capable of making it easier to provide better security.


It was done successfully before. It resulted in the most secure products ever to exist. A few still use such methods, with excellent results during pentests. It also preempted major vulnerabilities in popular products. Bell, of the Bell-LaPadula security model, describes it here:

http://lukemuehlhauser.com/wp-content/uploads/Bell-Looking-B...


Well, the current status quo is just unacceptable. A major data breach every week, and that's just what's being discovered. I suspect most data breaches are never publicized or even spotted. We are discussing how best to store passwords for when they get stolen, taking it pretty much for granted that they will be. And the big players are being hit as much as small fly-by-night shops.

I just don't see what technological change is coming that will change that. Not only that, but because of the big-data buzz, even privacy-conscious companies feel the urge to collect and store ever more personal data. How is that going to end well?

I mean the only reason lawmakers and regulators are not all over this issue is because they don't realise how bad things are.


They realize it. Impenetrable systems are also impenetrable to the FBI and NSA, which advise against the good stuff being mandated. The bribes they take from COTS vendors also brought in a preference for insecure solutions from those same vendors. Everyone paying them wants to maximize profit, too. Costly rewrites would cut into that.

So, they're willingly covering their ears while sitting on their asses. At least those on major committees.


One positive thing that might result is an FCC (or other agency) requirement to open the source of the critical parts of the system.

An analogy: if you produce a soda, you can keep the recipe secret. If you produce a potent regulated medicine, you publish detailed formulas and include them in every box of the drug.


I like your analogy between drug formulas and FOSS. I might use that talking to decision-makers in the future.


The FCC is openly hostile to Open Source, so I doubt they would require anything to be Open Source.


Is it? I never knew that. Could you provide some examples of that (besides the whole OpenWRT / patching your router debacle)? I'm curious.


> Without global collaboration, this philosophy is the beginning of national internet fiefdoms - more so than what exists today - and the beginning of the end of the global collaboration we freely enjoy today.

You mean like a national network? I believe something like that existed in several countries. And the Internet of old grew over them, as a way to interconnect nodes within disparate networks (thus the name Internet).

Give it enough time, and the national networks will merge into a single global network again.


I guess I would think that if several major markets have reasonably interoperable - and non-contradictory - security requirements, then the strongest will probably be implemented by default.

I'm not sure it's impossible to have reasonably interoperable security requirements encompassing the U.S. and EU, although recent events sure do make me more pessimistic.


This might mean a change in the programming industry. A move to a more regulated industry, where programmers must have some sort of certification to practice. Much like other professions, such as accountants, lawyers and doctors.

This will be a hurdle but it might mean better pay and less having to clean up after cowboy programmers pumping out rubbish.


And no more "check out my new Javascript framework".


I dream of a world where JavaScript frameworks mature and last.


Something similar was proposed in Kenya a few months ago. Some kind of certification for "ICT Practitioners".

There was a huge backlash against it.


Yeah it won't be a pretty transition if it does happen.


In the '80s K. Eric Drexler warned of the danger of nanotechnology in his book "Engines of Creation". The terrors of the "gray goo" haven't materialized, perhaps in part because of his warnings. I believe that the IoT does need to be more secure, but I was also part of a start-up in the '90s that had an Internet connection for years without a corporate firewall. I think we'll continue to learn and adopt improved security as we go.

As a systems architect that spends a lot of time thinking about scaling, redundancy and resiliency, it's also my opinion that we need to do some work on DNS ... it's probably the most vulnerable part of the "Internet stack".


I read Drexler's book and I was under the impression he was way too optimistic - he spent a lot of space trying to convince readers that there can be a balance between attacker and defender. I honestly don't see it. Whether on the Internet or with nanotech, attackers always have an advantage - because they choose when to attack, and they can strike when they're sure current defences won't hold them. Patching that would IMO require defenders to have exclusive access to a smarter-than-human AI.


The sad thing is that you're probably in a lawyer's mentality and hence deliver the mantra "if it doesn't work at the country level, let's push it up." Only that international law is not law as we know it. It is countries in interaction. Everything there is different - scale, speed, error rate, response rate, everything. And the "international law" part aside, things started to change in the early '00s, around 2005-2008. The next 10 or 20 or even 30 years will be a mess at the international level, if not worse. So expecting global collaboration on security is... well, naive. And hence Mr. Schneier is right to skip the global level before going national. He is also right in that it is far easier to claim leadership with existing and working legislation than to jump to the table at the global level and try to pull it towards imaginary rules. So... Mr. Schneier was right to skip it. As for his lobbying, without seeing his material it would be premature to jump to conclusions.


What better solution to the problem do you have?


Don't connect cars to the Internet? I can't believe companies like GM do it. They recently had huge problems with ignition keys, but I guess they are confident that putting cars online will be OK without hiccups.


Hahaha, right?

You know what inspires confidence? Built-in 4G connections during a 5-million-vehicle safety recall.

/s.


So when did he get bought? Legislation? He's getting old or something. The Bruce Schneier I remember was a lot more rebellious.


We just voted Donald Trump into the White House. Sanity IS rebellious.


The problem can only be made worse by involving government. It's sad that Bruce Schneier does not understand that reality.


I'm not saying that you're right or wrong, but why do you think that?


History.

What you will end up with is likely something that is LESS secure but now mandated for anything made/sold legally in the US. The rest of the world will be free to do better things.

You will also end up with mandated backdoors, weakened encryption, and a variety of other NSA/FBI wish-list items that will be included in any "Cyber Security" bill.

I have no interest in the US congress regulating IoT devices, that will not be good for liberty, or security


History under the last mandate was actually a pile of products and research projects more secure than anything today. That included the Boeing SNS server, BAE XTS-400, Aesec GEMSOS, esp. KeyKOS, a secure VMM, an embedded OS in Ada, GUIs immune to keylogging/spoofing, databases immune to external leaks, and so on.

All that disappeared in favor of Windows NT and UNIX without security the second they eliminated the regulations. There's just enough of a niche market for a few suppliers to be left, especially in defense contracts. Most are gone, though, because private markets don't produce strong security when the incentive is to turn avoided costs into profit.


So then I guess you also want a personal computer or home security system to cost $1,000,000, have no Open Source technology, and generally be inaccessible to the average person, let alone low-income persons.

If Boeing will be the only company allowed to make IoT products, then you might as well kill the IoT industry, as each product will cost 100,000x more than it should.

I classify that as BAD... sad that you do not.

Security is one thing, but if it comes at the expense of Open Source and accessible systems, then I choose insecurity.

I will choose Freedom over Government every time.


First you miss history. Now economics. High-assurance development of the TCB, the critical part of the system needing it, costs around 35-50% extra in development. Volume sales spread that out. Windows would cost $100-200 with key pieces done that way. The same, basically. The only negative effect is that rigorous development slows the release and upgrade cycle. Many firms maximize profit by shipping often and fixing problems later. The market currently rewards that. It's why Lipner, who led a high-assurance VMM, favored quick shipping over security in the SDL he built at Microsoft.

Now, with regulation, you'd still have the same software being developed. The components would be simpler (a JSON subset vs. XML). Costs spread out with volume. People would get used to new, huge features taking "two or three quarters" (Lipner) instead of a few weeks. Non-paid or non-critical usage could be used to test out proposals without building the whole thing.

As far as IoT, solutions already exist that are either inexpensive at the OEM level or cheap per unit. They're just getting ignored by most of the market since there are no regs or liability. Hell, I'm typing this on a device running one underneath the OS that cost the same as a device without one. ;)


So you're saying that things like the Clean Air Act, the EPA, and pollution regulations don't work?

Because I look around and it's been working remarkably well.


The problem is that software engineering is hard.

Immensely so.

On a scale of engineering "hardness" (meaning, whether we can predict all side effects of an action), software engineering is closer to medicine than to, say, civil engineering.

We know stresses, materials, and how they interact. We can predict what will happen, and how to avoid edge cases.

Software? Is there any commonly used secure software? Forget about Windows and Linux. What about OpenBSD?

Did it ever have a security hole?

And that's just the OS. What about software?

There are just too many variables.

So what will happen?

"Best practices" will become enshrined in law. Most will be security theater. Most will remove our rights, and many will actually make things less safe.

Right now, the number one problem of IoT security is fragmentation. Samsung puts out an S6, three years later stops updating it, a hole is found, too bad. Game over.

The problem is that "locking firmware" is common "security theater", which, if there'll ever be a legal security requirement on IoT, it'll require locked bootloader and firmware.

And you can't make a requirement to "keep code secure", because then the question becomes: for how long? Five years? Ten?


> On a scale of engineering "hardness" (meaning, we can predict all side affects of action), software engineering is closer to medicine than to, say, civil engineering.

This level of hubris is pretty revolting. Software engineering is easy. Writing secure software is easy. The difference between civil engineering or medicine and software engineering is that practitioners of the former are held responsible for their work, and software engineers are not and never have been.

Nothing will improve until there are consequences for failure. It's that simple.


It's not hubris. Software really is hard - that's why it looks more like voodoo than respectable engineering discipline. It has too many degrees of freedom; most programmers are only aware of a tiny subspace of states their program can be in.

I agree lack of consequences is a big part of the problem. But this only hints at a solution strategy; it doesn't describe the problem itself. The problem is that software is so internally complex that it's beyond the comprehension of a human mind. To ultimately solve it and turn programming into a profession[0], we'd need to rein in the complexity - and that would involve actually developing detailed "industry best practices"[1] and sticking to them. This would require seriously dumbing down the whole discipline.

--

[0] - which I'm not sure I want; I like that I can do whatever the fuck I want with my general-purpose computer, and I would hate it if my children couldn't play with a Turing-complete language before they graduate with an engineering degree.

[1] - which we basically don't have now.


Software really is hard - that's why it looks more like voodoo than respectable engineering discipline. It has too many degrees of freedom;

No, sorry, software does not inherently have more degrees of freedom than e.g. building a bridge does. The reason other engineering fields are perceived as "limiting" is exactly because they have standards: they have models about what works and what does not, and liability for failing to adhere to those standards.

I would argue that the lack of standards is exactly what makes software engineering look like voodoo -- but it is because of immaturity of the field, it's not an inherent property. Part of the reason software is so complex is exactly because engineers afford themselves too many degrees of freedom.

And I disagree that establishing standards constitutes a dumbing down of the discipline; in fact, the opposite: software engineering isn't a real engineering discipline exactly because every nitwit can write their own shoddy software and sell it, mostly without repercussions. That lack of accountability is part of what keeps software immature and dumbs down the profession. As an example, compare Microsoft's API documentation with Intel's x86 Reference Manual: one of the two is concise, complete, and has published errata. The other isn't of professional quality.


I push engineering methods for software. It really is hard for systems of significant complexity. Exhaustively testing even a 32-bit adder takes billions of tests before you know it will always work. The kind of formal methods that can show heap safety took a few decades to develop. They only managed an OS kernel and a basic app a few years ago. Each project took significant resources for developing the methods and then applying them. Many failed, where the new methods could handle some things but not others. Hardware is a cautionary tale: it has fewer states plus automatable logic, and they still ship errata in CPUs despite tons of verification.

So, it's definitely not easy. The people that pull it off are usually quite bright, well paid, have at least one specialist, and are given time to complete the task. The introduction of regulations might make this a baseline with lots of reusable solutions. We'd lose a lot of functionality that's too complex for full verification, with slower development and equipment, though. The market would fight that.


Agreed, I never meant to imply that it was easy. I just meant that a "professional" software engineering discipline is neither a pipe dream, nor undesirable.


> Nothing will improve until there are consequences for failure. It's that simple.

Of course it's not that simple. Clearly you've never written much, if any, real software.

You want to make an SSL connection to another web site in your backend. You use a library. If that library is found to contain a vulnerability that allows your site to be used in a DDoS, where do the "consequences for failure" lie? You used a library.

Do you think people will write free libraries if the "consequences" fall back on them? If not, have you even the slightest understanding of how much less secure, less interoperable and more expensive things will be if every developer needs to implement every line themselves to cover their backs? Say goodbye to anyone except MegaCorps being able to write any software.

Where does this end? Would we need to each write our own OSes to cover ourselves against these "consequences", our own languages?


The same could be said for any industry.

Anyone can practise carpentry, but if someone is going to do so professionally and build structures that can cause injury or damage if they fail, then they should be accountable for the consequences. This is why indemnity insurance exists.

In software, a lack of rigour is fine for toy applications, but when livelihoods and safety become involved, we need to be mindful of the consequences and prepared to take responsibility, just like everyone else in society is expected to do.


The problem is identifying potential risks. It's obvious if I build a building it might fall down. It's not obvious if you sell web cams they might be used to take part in massive DDoS attacks.


Well now it is obvious, and honestly it has been so for a while. The reason we have shitty security is not because the risks are unknown.


Here's some risks:

1. Your system might be hacked if connected to a hostile network. Avoid that by default.

2. If connected, use a VPN and/or deterministic protocols for the connections. Include the ability to update these. No insecure protocols listening by default. Sane configuration.

3. Certain languages or tools allow easy code injection. Avoid them where possible.

4. Hackers like to rootkit the firmware, OS, or application to maintain persistence. Use an architecture that prevents that, or just boot from ROM with signed firmware if you can't.

5. DDOS detection, rate-limiting, and/or shutdown at ISP level. Penalties for customers that let it happen too often like how insurance does with wrecks.

That's not a big list even though it covers quite a lot of hacks. I'm with the other commenter in thinking the unknowns may not be what's causing our current problems. (A sketch of what items 1 and 2 might look like in practice follows below.)
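A minimal sketch of that first-boot gate, with hypothetical file paths: ship with nothing listening, and refuse to bring up any network service until the factory credential has been replaced.

  # Hypothetical first-boot gate; the state file path is made up.
  import hashlib, json, secrets
  from pathlib import Path

  STATE = Path("/var/lib/device/credential.json")   # hypothetical path

  def credential_rotated():
      # Factory images ship without this file; setup must create it.
      return STATE.exists()

  def set_password(password):
      salt = secrets.token_bytes(16)
      digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
      STATE.write_text(json.dumps({"salt": salt.hex(), "hash": digest.hex()}))

  def maybe_start_services():
      if not credential_rotated():
          print("refusing to listen: factory credential still in place")
          return
      ...   # only now start an authenticated, updatable management channel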


You use a library.

On what basis did you choose that library? Did robustness of the software come into your evaluation? Did you request a sample from the supplier, and perform stress testing on it? Did you check for certifications/audits of the code you were including in your project?

If that library is found to contain a vulnerability that allows your site to be used in a DDoS, where do the "consequences for failure" lie?

With you, unless you have a contract with your supplier stating otherwise.


  On what basis did you choose that library? Did robustness of the software come into your evaluation? Did you request a sample from the supplier, and perform stress testing on it? Did you check for certifications/audits of the code you were including in your project?

Even if you did everything on this list, you could still get a library that has a potential bug, because software is just that complex. Microsoft puts millions of dollars into security and it still has regular vulnerabilities discovered.

And even if you implement a rigorous audit of the code, that means you can't update, because you have to go through the same audit rigamarole each time a bug is found. By the time you've audited your software, a new vulnerability will probably have been discovered.

Not to mention this essentially makes open source software nonviable.


There's a finite number of error classes that lead to the code injection that causes our biggest problems. Some languages prevent them by default, some tools prove their absence, some methods react when they happen, and some OS strategies contain the damage. There are also CPU designs for each of these. Under regulations, companies could just use stuff like that to vastly simplify their production and maintenance of software, with stronger security.


I disagree that there is a finite number of error classes that lead to attackers disrupting your software/hardware. Code injection is just one of many possible ways to gain control of a computer.


If you have no interpreters and sane defaults in config, then there aren't many ways to take over your computer. Attacks have basically always exploited a vulnerability in an application that let them run code. That application either was the privileged one they wanted to be in or was a step toward one. Blocking code injection in apps would knock out the vast majority of severe CVEs I've seen that relate to apps.

As far as the finite amount, the vulnerabilities coming in fall into similar enough patterns that people are making taxonomies of them.

https://cwe.mitre.org/documents/sources/SevenPerniciousKingd...
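To make the "no interpreters" point concrete, a small sketch: the classic fix is to keep untrusted input as data rather than program text, so there is nothing for an interpreter to execute.

  # Untrusted input stays data, never program text.
  import subprocess

  untrusted = "notes.txt; rm -rf /"   # attacker-controlled string

  # Bad: a shell would interpret the string, running the payload:
  #   subprocess.run("wc -l " + untrusted, shell=True)

  # Good: argv form, no shell; the payload is just an odd filename.
  subprocess.run(["wc", "-l", untrusted], check=False)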


Writing secure software is far from easy. It's super, super hard. The fact that you are saying this makes me wonder if you have ever attempted to write secure software.


Do you write code?


Regarding secure software, there are at least some efforts to make writing formally verified software more approachable.

The seL4 project has produced a formally verified microkernel, open sourced along with end-to-end proofs of correctness [0].

On the web front, Project Everest [1] is attempting to produce a full, verified HTTPS stack. The miTLS sub-project has made good headway in providing development and reference implementations of 'safe' TLS [2].

These are only a few projects, but imo they're a huge step in the right direction for producing software solutions that have a higher level of engineering rigor.

[0] https://wiki.sel4.systems/FrequentlyAskedQuestions

[1] https://project-everest.github.io

[2] n.b. I'm not crypto-savvy, so I can't comment on what is or isn't 'safe' as any more than an interested layperson.


I don't really think the main problem is that software engineering in general is hard. I think the problem we're facing right now is that writing secure software using the tools we have available now isn't realistically feasible.

We need to ruthlessly eradicate undefined behavior at all levels of our software stacks. That means we need new operating systems. We need new programming languages. We need well-thought-out programming models for concurrency that don't allow the programmer to introduce race conditions accidentally. We need carefully designed APIs that are hard or impossible to mis-use.

Rust is promising. It's not the final word when it comes to safety, but it's a good start.

An interesting thought experiment is what would we have left if we threw out all the C and C++ code and tried to build a usable system without those languages? For me, it's hard to imagine. It eliminates most of the tools I use every day. Maybe those aren't all security critical and don't all need to be re-written, but many of them do if we want our systems to be trustworthy and secure. That's a huge undertaking, and there's not a lot of money in that kind of work so I don't know how it's going to get done.
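As one illustration of a concurrency model that makes accidental races hard (the direction Rust's ownership rules and channels push you in), here is a small share-nothing, message-passing sketch:

  # Share-nothing workers: all mutable state is local or inside queues,
  # so there is no shared variable on which to race.
  import queue, threading

  tasks, results = queue.Queue(), queue.Queue()

  def worker():
      while True:
          item = tasks.get()
          if item is None:              # sentinel: shut down
              return
          results.put(item * item)      # touches no shared mutable state

  threads = [threading.Thread(target=worker) for _ in range(4)]
  for t in threads: t.start()
  for n in range(10): tasks.put(n)
  for _ in threads: tasks.put(None)
  for t in threads: t.join()
  print(sorted(results.get() for _ in range(10)))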


Can we remove undefined behavior? We can get rid of the GCC optimizations that rely on the premise of undefined behavior to break code to win a speed prize or something, but undefined behavior exists for a reason:

It depends on the CPU.

The problem is that C was designed to be as close as possible to hardware, and in some places (an RTOS? a kernel?) speed is critical.


We can abstract the CPU away. However, undefined behavior is just the tip of the iceberg. You can fix it all you want, but we'll still be stuck with logic bugs, side-channel attacks, info leaks, bad permissions and misconfigured servers, poor passwords, outdated and broken crypto schemes, poor access-control schemes and policies, human error or negligence, etcetera.

There is a huge number of ways security can go haywire even with perfectly defined behavior. Make no mistake, I love watching undefined behavior slowly getting fixed, but I think language nerds are too fixated on UB to see that it's not the big deal and won't get rid of our problems.

Another problem language nerds miss is that we can adapt existing code and tools (in "unsafe" languages) to weed out problems with undefined behavior. It's just that people aren't interested enough for it to be mainstream practice. Yet the bar is much lower than asking everybody to rewrite everything in a whole new programming language. So why do they keep proposing that a new programming language is going to be the solution? And if people just don't care about security, well, we would have all the "defined behavior" security flaws in the new code written in the new shiny programming language.


I don't think that better languages will fix all the security problems. (One can, after all, create a CPU simulator to execute compiled C programs in any reasonably powerful "safe" language.) I just think that C and C++ are specifically unsuitable for building secure systems, and we won't make much meaningful progress as long as we're dependent on enormously complex software written in languages that don't at least have some degree of memory safety as a basic feature.


This is only partially right. Software engineering is hard. But trust is harder. Much, much harder. And most things you have to trust people with just don't matter.

However, in a future where software can do everything, there is no such thing as "limited trust." If you trust someone to operate on your car, you are trusting them with everything the car interacts with. Which... quickly explodes to everything.


Software itself isn't intractable; it's that the field is young, and we are stuck with choices made when nothing was understood, and it's gonna take a while to turn the ship. But I think we have a pretty good idea of where we are trying to go w.r.t. writing secure software.


> it's that the field is young

The opposite. When the field was in its infancy, one was able to keep whole stacks in one's head.

How complicated were CPUs in the 1960s?

How many lines of assembler were in the LM?

How many lines is Linux or FreeBSD kernel? Now add libc.

Now you have a 1970s C compiler.

Now take into account all the optimizations any modern C compiler does. Now make sure there's no bugs _there_.

Now add a Python stack.

Now you can have decent, "safe" code. Most hacks don't target this part. The low hanging fruit is lower.

You need a math library. OK, import that. You need some other library. OK, import that.

Oops, there's a bug in one module. Or the admin setup wasn't done right. Or something blew.

Bam. You have the keys to the kingdom.

And this is all deterministic. Someone _could_ verify that there are no bugs here.

But what about neural networks? The whole point of training is that the programmers _can't_ write a deterministic algorithm to self-drive, and have to have a huge NN do the heavy lifting.

And that's not verifiable.

_This_ is what's going to be running your self-driving car.

That's why I compared software engineering to biology, where we "test" a lot, hope for the best, and have it blow up in our face a generation later.


The need to hold whole stacks in the head is the problem. That's not abstraction. That's not how math works. The mouse doesn't escape the wheel by running faster.


I'd say the main problem is developer carelessness and incompetence.

New SQL injection vulnerabilities are being introduced every day. Passwords stored as bare MD5. Array boundaries sourced from client data. There are perhaps 5 to 10 coding errors generating most of the vulnerabilities.

That's not the only problem. We also need to trust the users, who can be careless or malicious. But I'd like at the very least to be able to trust our systems.
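Two of those errors and their textbook fixes, as a short sketch: placeholders keep hostile input out of the SQL text, and a salted, slow KDF replaces a bare MD5 digest.

  import hashlib, secrets, sqlite3

  conn = sqlite3.connect(":memory:")
  conn.execute("CREATE TABLE users (name TEXT, pw_hash BLOB, salt BLOB)")

  name = "alice'; DROP TABLE users; --"   # hostile input

  # SQL injection: never splice input into the query string.
  #   conn.execute("SELECT * FROM users WHERE name = '%s'" % name)  # bad
  conn.execute("SELECT * FROM users WHERE name = ?", (name,))       # good

  # Password storage: salted PBKDF2 instead of a bare MD5 digest.
  salt = secrets.token_bytes(16)
  pw_hash = hashlib.pbkdf2_hmac("sha256", b"hunter2", salt, 200_000)
  conn.execute("INSERT INTO users VALUES (?, ?, ?)", (name, pw_hash, salt))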


I can propose a quite straightforward solution for this mess: do not connect things to the Internet.

Your thermostat maybe wants to talk with your alarm clock. I can get that. But it does not have to happen over the Internet. Let them talk locally.


That's actually what Bruce wants - and he figures the only way to enforce it is through regulation. He's given up on the sensibilities of individual companies and programmers, and would instead like to rely on regulations to make some basic guarantees.

You and I know the basics of computer security. We can take a crack at designing a secure system and maybe do okay. But the argument goes like this: yes, there are security-conscious programmers, but there are also many who are not. And those people work on products that make it to market. How do we stop those products from making it to market? Government intervention. He's given up on education and relying on the informed developer, and would rather rely on public policy. I find it a bit sad.


The free market crowd would argue that the surviving companies will bake in the right security if consumers demand it. If companies don't take it seriously, either their customers aren't demanding it, or they will be replaced by companies that do a good job at it.

The government has a proven conflict of interest and a disincentive to harden the infrastructure. Vulnerabilities are valuable for espionage. And there are already regulations like HIPAA, PCI, etc., yet there are still breaches. Regulation will add complexity to business and will protect market share for the entrenched players who can afford to follow it. That will lead to further consolidation and reduced competition, while I feel the opposite is needed.

The fiefdoms described in another post wouldn't be such a bad thing. At the nation state level, competition will also make for better security. Isolationism doesn't work out well in world history. Movement of goods and ideas does a better job at bringing countries together. I feel there's a pendulum swinging back towards isolationism but it goes back and forth over time.


> The free market crowd would argue that the surviving companies will bake in the right security if consumers demand it. If companies don't take it seriously, either their customers aren't demanding it, or they will be replaced by companies that do a good job at it.

This was addressed in the hearing. Schneier says it's a negative externality, like invisible pollution. The problem is that the consumers don't care because they aren't the ones getting attacked by their devices. Instead their devices are quietly using their residential internet connection to help DDoS websites. Would you pay $20 more to buy a different DVR that is less likely to annoy a random person you've never met over the internet? Most people don't care, and don't have the knowledge and experience to care.

Because consumers won't pay for it, the manufacturers don't bother to invest in security engineering. (These are low margin products after all). As a result we're all worse off.


What the free-market crowd can never account for - here, or anywhere - is how the market is supposed to resolve issues that are not of direct, immediate interest to self-serving buyers or sellers, but which are of massive and justifiable interest to third parties outside the transaction.

Case in point: If a seller in one country can lower the costs of a good shipped to another country by scrapping responsible waste management in favor of polluting the commons (i.e., places where individual property rights claims are difficult to press), then "the magic of the market" is likely to increase - not decrease - the amount of pollution generated by the trade.

What libertarians don't like to admit is that they see free markets as more than ideal mechanisms for preserving efficient economies. They also see them as efficient sources of good and just governance. Like all belief systems, faith in the quality of governance supplied by free markets, while rational and well-supported up to a point, can be taken to counterproductive extremes, where its maintenance stops operating like empiricism and starts to function in ways indistinguishable from fundamentalist religion.

Not sure how many sensible knowledgeable people want this to be the dominant force in guiding global network security.


As a nuance, though: one could agree with the notion of negative externalities naturally occurring in a free-market economy (perhaps in a total laissez-faire environment the affected third parties could retaliate in kind, though) but disagree with the form of policy to address it. In principle the market should be able to select appropriate fixes if the costs are internalized, but governments have a tendency to focus less on the internalizing-costs part and more on selecting fixes. Pragmatically it might be the best way forward, but in principle it goes against free-market resource optimization and instead becomes a gamble that whatever fix is made policy actually has an impact on the problem and doesn't make things worse.


Pragmatically it might be the best way forward, but in principle it goes against free market resource optimizations

This is a pretty good argument against the absolute value of principle, tbh.

For what it's worth, I tend to see principles like maps: useful - even indispensable - in many situations, but nevertheless abstractions and therefore imperfect guides to actual reality. Use maps, yes, but avoid mistaking them - or any system of symbols - for the things they represent (i.e., the map is not the territory).

Indeed, principle is a form of proxy wisdom for the young and inexperienced. It's better than nothing, but probably not enough to save you from at least a few episodes of hard reckoning. Assuming these don't get you killed, the places where principle doesn't serve are the ones where mature judgement develops.

Granted, there's a point where this doesn't serve either, but that's okay too since mortality always wins in the end.


>The free market crowd would argue

The free market crowd lost all credibility after Enron, WorldCom, and the housing collapse. They had countless reasons why the above scenarios wouldn't happen. Reality proved their theories to be the complete and utter BS any rational human being could see from the start. In a utopian society a lot of ideas are great; unfortunately we've got reality to deal with, not utopia.


None of those things happened in a free market.


You never have free markets, because even "free" markets exist in relation to other markets that tend towards natural monopolies or oligopolies, or are bound by states' strategic interests or other institutions that, for multiple completely logical reasons, do not or cannot have free markets.

So that friction will always exist.


"The free market crowd would argue that the surviving companies will bake in the right security if consumers demand it. If companies don't take it seriously, either their customers aren't demanding it, or they will be replaced by companies that do a good job at it."

I mostly argue that myself where the problem is demand-side where customers don't put money into it. There's a few things on supply-side to factor in, though:

1. Companies lie to customers about how necessary these vulnerabilities are. They condition them to expect it. They also charge them for fixes. It takes almost no effort to knock out the common ones, with only a 30-50% premium for high assurance of specific components. Even premium producers often don't do either, and those that do are so rare most consumers or businesses might never have heard of them.

2. Years of lock-in via legacy code, APIs, formats, patents, etc. mean consumers often don't have a choice, or only have a few if they want the modern experience. Many times specific choices are even mandated by groups like colleges. The market created the problem that now lets it milk a captive audience; it won't solve that problem no matter what customers want.

These two, especially No. 2 given patents and first-mover advantage, are huge reasons the market alone isn't likely to fix things. Some regulations could deal with them. The market can still fix things where these two don't apply, and it can also be combined with regulation, as in the DO-178B market, which regularly outputs high-quality software as far as I can tell.


> The free market crowd would argue that the surviving companies will bake in the right security if consumers demand it.

No, the free market crowd would recognize the cost of poor security for what it is: an externality. That fits into basic economic theory as something the market won't naturally correct. No one will demand security if they're not the ones suffering from the lack of it.

Governments need to figure out some way to price in the externality. A good example would be to allow companies that don't take reasonable security measures to be held accountable for all damages caused, not just the portion attributable to their negligence. If Dyn had grounds to sue any device maker that, say, ships a default password that isn't required to be changed during initialization before the device connects to the internet, it would start to change things.

After enough of the fly-by-night device makers are hit with large judgments, it will start to become common practice to put every new device through security audits before introducing them into the market. Those reviews and the followup development will take time and cost money which will add to the final cost of the device, but make them safer.
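To make that default-password rule concrete, here's a minimal sketch (Python; the names and the default credential are purely illustrative) of the kind of first-boot gate it would demand: the device refuses to touch the WAN until the factory default password is gone.

    import hashlib

    # Hash of the factory-default credential (illustrative value).
    DEFAULT_HASH = hashlib.sha256(b"admin").hexdigest()

    def wan_allowed(stored_password_hash: str, setup_complete: bool) -> bool:
        """Bring up the internet-facing interface only after initial setup
        has run and the default password has been replaced."""
        return setup_complete and stored_password_hash != DEFAULT_HASH

A check like this wouldn't fix everything, but it would at least close off the default-credential path that Mirai-style botnets rely on.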


A libertarian here. Governments were not needed before, and they are not needed now. The Internet is a mutual, voluntary agreement between different subnets. The solution is the threat of throwing the misbehaving party out of the group. That's it.

The market opportunity is not for non-hackable thermostats but for more advanced internet routers that can do more advanced packet inspection at low latency.


A government created the internet. I'm not sure we'd be having this discussion if a government hadn't laid the foundation.


Not like government created Obamacare. There is a huge difference: one is extremely political, while the other is almost independent, other than being funded (an insignificant part of the whole budget) by government. I would argue the involvement of government, while appreciated, was not essential in creating the Internet, considering how independently it was invented around the world.


Dude, read your internet history. We don't have a bunch of incompatible protocols tied to different vendors (that was how things were developing) because the DoD of the US said that anything sold to them HAD to use this TCP/IP thingy.

That's not the same as "enforced regulation", but it certainly walks and quacks like it. No direct government intervention -> no freely interconnected internet.


Assuming that did happen, you are overestimating its importance.

And what about consumer offerings? There were no such restrictions there. The simple reason an ISP did not create its own internet is that by connecting to the bigger net it increased network size and the value of its offering. That's how the whole world, not just the US, settled on the Internet.


Is the author underestimating the importance of the open, standard, non-profit, publicly-funded Internet vs all the for-profit private nets that got nowhere close to its impact? I think not.

The private networks are still making walled gardens: no innovation in the fiber space, just innovative walled gardens in the Internet space. Same old same old, doing nothing of significance out of pure self-interest unless building on what government created and partly subsidizes. The latter groups also usually plateau into stagnation, sucking profit, while the open, less selfish models grow in new ways.


The Internet is the ability of everyone to send any packet to any part of the world. That's what matters. Whatever someone builds on top of it, and however open or closed that is, it does not negate that characteristic of the Internet.

Inviting governments to regulate the Internet is an unnecessary risk.


"Internet is ability of everyone to send any packet to any part of the world. Thats what matters. What someone builds on top of it and/or how open/close, It still does not negate that characterstics of Internet"

The Internet is a set of protocols run on top of huge pipes that interconnect across many companies, nationalities, etc., all speaking a common language. You're trying to oversimplify it to prop up your false argument. What I just described was only achieved once... by governments, and companies making money off government projects. No private industry has duplicated it.

Closest thing was the cell phone industry, where they limited the type of traffic, kept the bandwidth minimal for high profit, charged per amount of data, and so on. They eventually started looking more like the Internet by internally using Internet technologies funded by DARPA, NSF, etc. Originally, though, their model couldn't have created something like we see with the Web or Internet-run commerce. Just like Ma Bell before them with their schemes.

Private sector wouldn't have built the Internet on their own since it's too risky and costly with 3rd parties getting most of the benefit. Government did it better.

"What someone builds on top of it and/or how open/close, It still does not negate that characterstics of Internet."

It does within what they build. Much online activity has transitioned from purely Internet technologies to Web technologies. Companies like Facebook and Slack are where content and activity are going, instead of HTML web sites and IRC. The result is that people are locked in to vendors just to get whatever experiences their walled gardens allow. With most Internet tech, I could just move everything I had to a different client or server if what I was using wasn't good enough; standard protocols existed to help. The private sector prefers the opposite, as lock-in equals more money.

So, they fail twice: preventing something like the Internet from occurring until government did it, then trying to turn it back into the walled gardens of the past, albeit with web browsers and more graphics.


> What I just described was only achieved once... by governments & companies making money off government projects.

I am not refuting this. What I am claiming is that it was not necessary, despite being helpful. After the market made absolute [1] long-distance communication and processing of data cheap and at large scale, it was only a matter of time. Even if we had to go through IP-level walled gardens/subnets, the world would have settled on a non-discriminating Internet.

Think of it this way: what government created/helped create initially was a local network, and only after thousands of ISPs came together (not because of incentives from government, but because of demand) do we have the Internet as we know it now.

> Companies like Facebook and Slack are where content and activity is going instead of HTML web sites and IRC.

Facebook, Slack, HTML, IRC != Internet. Question: is someone being restricted from sending/receiving any packet to/from any IP in the world? If no, then it's not a walled garden from the Internet's point of view, and the Internet is not being harmed in any way. However, bringing government into this will most likely make the answer yes.

> So, they fail twice: preventing something like the Internet from occurring until government did it; trying to turn it back into wall gardens of the past albeit with web browsers and more graphics.

I am not aware of any IP-level walled garden. Facebook/Slack/Myspace/etc. are/were in the app/website business, not the Internet business.

[1] Not the telephone etc., which transform the data nondeterministically.


"Even we had/were to go through IP level walled-garden/subnets, the world would have set on non-discriminating Internet."

It still hasn't to this day in private services. They almost all wall off whatever they build. Those that build connections charge out the ass for them with all kinds of restrictions and schemes. Many get acquired and then crippled.

You need to justify your assumption with evidence from the IT market. The vast majority of it works against your expectation. Further, something like the Internet would require a vast majority working for that expectation.

"Think of this way, what Govt. created/helped-created intially was a local network and only after thousands of ISPs coming together, not because of incentives from Govt, but because of demand we have the Internet as we know now."

It was actually a combo of the military needing survivable, distributed comms and universities needing to collaborate, among groups that were basically selfless and highly cooperative at the time. There were private parties trying to do their own thing in their self-interest even then: OSI and circuit-switched lines. One failed entirely; the other isn't what the Internet was built on, and itself diminished over time in favor of faster, packet-switched lines. Even in an ideal environment the incentives of businesses killed their opportunity, while the incentives of groups not motivated by profit led to the Internet.

"Facebook, Slack, HTML, IRC != Internet. Question"

They make up the vast majority of Internet traffic, along with Netflix and Google. That makes them the Internet experience for most people. A lot of the rest is walled-garden apps on mobile. Sites and services purely built on Internet technology, like IRC networks or FTP servers, are barely used, because private parties rarely invest in them: it's simply too easy to escape lock-in that way. We can't throw out how 99% of people and products use the Internet when discussing Internet regulations or issues.

"I am not aware of any IP level walled-garden. "

You should look up ISPs' policies, like Comcast's, on web servers or SMTP ports. Stuff exists even at that level to serve the monetary interests of the private market. Most of the walled gardens are built on top of the Internet protocols, with the ecosystem effect meaning you have to work within them to reach the users they captured with first-mover advantage in new markets.


It also won't work. The only solution to ignorance is education. Having "smart people" make all the decisions will only get you so far.


He's nuts. Elected officials don't get technology. Hillary Clinton had no idea what she was doing with a simple e-mail server. They. Don't. Get. It.


So what's the solution? When other people keep making and buying products that communicate over the Internet, because it's easier, what do you do about it?


If you don't like the "regulations to prevent bad stuff" approach, then use the "we'll let you do whatever you want, but if you screw up you're going to pay dearly to fix it" approach.

Then companies can balance the cost of adding a secure enclave and a grsec kernel (just an example) to their smart coffee maker against having to recall all of their infected products from the market when a botnet takes them over.

Besides that stick, I would throw a couple of carrots in there, too, like the companies being able to brag that their products are A+ security rated, etc, in their promotional materials and on their packages.


But the ease of creating companies and abandoning them or otherwise obfuscating their liabilities says that they won't be the ones paying dearly.

In short, I think regulations will prevent more damage than post-hoc legal retribution.


"In short, I think regulations will prevent more damage than post-hoc legal retribution."

Time has proven that wrong so far. We got a lot of highly-secure products after the DOD's Computer Security Initiative gave clear guidance plus financial incentive. DO-178B and other safety-critical markets are cranking out lots of them on the safety side. So is the segment of the smartcard industry focused on high security.

Regulation works so long as it has effective standards that are clear, evaluated against the product, and must be followed to sell the product. As in the TCSEC era and with DO-178B, reusable components for common cases show up to reduce evaluation cost and risk. Open-source security would likely get a boost, too, as sponsoring companies would fund certifiable versions with the higher QA.


I read what you wrote as affirming my thought.

The constraints of the DoD CSI, DO-178B, and the smartcard industry focus are all embodied in regulations, which precede legal retribution.

If a company busts the regs, it can be sued. But the first line of defense is that companies are required to do it right - by the regulations.

I agree regulation needs clear definition and followup to be effective. I continue to regard regulation as a better mitigator of damage than post-hoc penalties.

Regulation is about preventing a mess. Litigation is about cleaning it up. I'd rather not have the mess to begin with.


"Regulation is about preventing a mess. Litigation is about cleaning it up. I'd rather not have the mess to begin with."

That's exactly it. Although, I did propose the possibility elsewhere in this thread of defining regulations that aren't immediately applied but apply in court after harm is alleged. The reason is that evaluation costs and time can be a big problem, especially for startups. This lets them simply follow guidelines, with evidence produced during development, and they only pay the cost if they screw up. The cost goes up with the level of deviation and the harm it caused.


You're mentioning an industry funded by about 1/6th[0] of the federal budget, related to war, with real consequences if things go wrong (losing, death, geopolitical problems).

Just because it worked for the DOD doesn't mean it'll work everywhere else.

[0]: https://www.cbo.gov/taxonomy/term/17/featured


The LOCK project reported that high assurance cost them 30-40% over a regular development. Altran/Praxis does it regularly for a 50% premium. You don't need the DOD's budget to get high-assurance software. High-quality stuff usually just costs a bit more upfront, followed by savings in maintenance.


I'd rather put the burden on the owner of the device. If someone puts a device on the internet, he is responsible for its actions. Don't want to be liable? Better buy a safe device. Can't buy a safe device? Don't put it on the internet.


I'm not sure I'm comfortable with the idea of my grandmother being held responsible for the botnet that her coffee maker is a node of.


Which is insecure because the creator of the coffee maker didn't consider security to be a necessary feature. The true liability lies with the person that made that decision and should not be transferable.


If we transfer the liability to the consumer, he _will_ consider this a necessary feature. And ask for it. It takes one serious conviction of a consumer and the company selling him crap will go out of business.


What good is a smart coffee maker? I don't need one. All this IoT stuff is over-hyped marketing nonsense. I've been playing with some Arduino and Raspberry Pi based devices to control things in my house, but they're toys. It's just more junk to buy.


So some hardware maker should propose and publish an open intranetworking guide, ideally.

There could be an interesting market for Apple clocks and Apple thermostats, but to increase the likelihood of something like that becoming popular (vs. just using the easy Wi-Fi route), wouldn't it take an unlikely push like that?

It will be a long time before manufacturers decide whether or not to put everything online, I think.


But why make things compatible with open standards (regardless of who writes them) when you can build an ecosystem?!

I say that sarcastically, but it seems to be where the world goes.


Are there technologies that aim at physical scoping of networks? Sometimes I joke to myself that IR remotes will be the next privacy grail. Can't hack my RCA TV unless you've been granted access to my couch.


Are there any devices that do this? I'd very much be a fan of a local IoT I control, perhaps with an encrypted data file on Dropbox or something it can access. I have no need for cloud storage from the vendor... I can bring my own.


Isn't this what ZigBee and Z-Wave are for? Local-only wireless network for your things, just bridge out to the internet for services as needed? Instead of making dropbox the nervous system of everything?


The whole EIB/KNX standard works like this; it has been around for more than 20 years and there are hundreds of devices to choose from.

It is designed around a local, low-bandwidth (9600 bps) network where you can connect KNX devices (thermostats, actuators, sensors, etc.) or gateways/routers to other networks.

The only problem it has is that the communication is not encrypted, but being local and isolated, that is not a very big concern.

Every other home/building automation system worked like this until recently, when some marketing geniuses came up with the IoT campaign.


> Let them talk locally.

Although I agree, what if one of the devices they talk to locally is connected to the Internet? Then it just becomes another level of indirection...


> do not connect things into the Internet... Let them talk locally.

Too simplistic. If devices talk, then there will be a way to listen, and to scale that talking/listening.


It feels like they are using the Dyn DDoS incident as a 9/11 of the internet. So much fear mongering and push for government involvement; disgusting. Next thing you know, you'll need a license to write software for appliances and be mandated to put a surveillance API into everything.


Schneier's point of view is that government involvement will happen, and it is better to shape this involvement now than after the US government is forced to intervene following a large-scale disaster.

> Nothing motivates the U.S. government like fear. Remember 2001? A small-government Republican president created the Department of Homeland Security in the wake of the 9/11 terrorist attacks: a rushed and ill-thought-out decision that we've been trying to fix for more than a decade.

> A fatal IoT disaster will similarly spur our government into action, and it's unlikely to be well-considered and thoughtful action.

> Our choice isn't between government involvement and no government involvement. Our choice is between smarter government involvement and stupider government involvement. We have to start thinking about this now. Regulations are necessary, important and complex — and they're coming. We can't afford to ignore these issues until it's too late.

https://www.schneier.com/blog/archives/2016/11/regulation_of...


Yeah it feels very much like that to me too. Taking that a step further, it wouldn't surprise me if it were orchestrated by the people that will be at the helm on this sort of shit if it's implemented.

When we have TLAs subverting security, megacorps sucking up information and leaving it open to attack, manufacturers baking in backdoors, etc... I would run an absolute mile from being a developer in this new world.


I'd love to see what a detailed version of security policy and infrastructure looks like in a world of backdoor-less strong encryption, from Schneier, the EFF, the Hopkins crew, etc. Something that can be used to persuade, or at least influence, policymakers by showing them that another way is possible: one that allows the security services to do their jobs in a way that doesn't feel futile, while simultaneously respecting privacy rights. I think strong encryption and no backdoors (which, as Schneier himself has explained in the past, are always a double-edged sword) are very important, and I support them. But those on that side who also have in-depth knowledge of the finer details never deign to articulate just what the policy looks like, without resorting to a list of what we shouldn't do and vague allusions to "just go old-school" or "utilize human assets more."

A coherently articulated, normative counterfactual security platform would be a better place to argue from.

It's a cousin to the negative liberty arguments: they only list what not to do in order to avoid hurting people, rather than what we can do to help them (positive liberty). Maybe we could frame the question as: "If we let the EFF and Bruce Schneier redesign the United States security apparatus from scratch, what would it look like?"

We already have excellent critiques, and are good at articulating "what's bad," but far too little on "what would a good system look like that strikes the 'right' balance?"


A combo of per-customer authentication at the packet level, DDoS monitoring, and rate limiting (or termination) of specific connections upon DDoS or malicious activity. That by itself would stop a lot of these right at the Tier 3 ISP level. Trickle those suckers down to dialup speeds with a notice telling them their computer is being used in a crime, with a link to helpful ways of dealing with it (or a support number).

As far as design, they could put a cheap knockoff of an INFOSEC guard in their modems, with CPUs resistant to code injection. Include accelerators for networking functions and/or some DDoS detection (especially low-layer flooding) right at that device.

https://en.wikipedia.org/wiki/Guard_(information_security)
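For the rate-limiting piece, a minimal token-bucket sketch (Python; the rates are illustrative, and a real implementation would live in the modem or the ISP's edge gear, not a script):

    import time

    class TokenBucket:
        """Throttle a flagged subscriber line to a trickle."""

        def __init__(self, rate_bytes_per_s: float, burst_bytes: float):
            self.rate = rate_bytes_per_s
            self.capacity = burst_bytes
            self.tokens = burst_bytes
            self.last = time.monotonic()

        def allow(self, packet_len: int) -> bool:
            now = time.monotonic()
            # Refill tokens for the elapsed time, capped at the burst size.
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= packet_len:
                self.tokens -= packet_len
                return True
            return False  # drop: the line is over its penalty rate

    # Dialup-ish penalty: ~7 KB/s with a small burst allowance.
    penalty = TokenBucket(rate_bytes_per_s=7000, burst_bytes=14000)

Drop everything `allow` rejects and the subscriber still has a working, if slow, connection while they clean up.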


I think I live in a bubble...

But...

Who buys these products? Why does a toaster need to be connected to the internet and synced with your "smart"phone? What exactly can you achieve having this feature?


Not sure about a toaster, but there are benefits to having some household appliances connected to the internet. For example, a washing machine may be able to determine what hour it should operate for maximum energy savings, and ping you when finished. I guess a toaster could be similar: pre-loaded with bread so you could ask it to start toasting before getting up from the sofa.

> Consumers may find it totally cool to design images for toast using a smartphone. Meantime, the resulting data would help food companies understand how people approach breakfast, design new products and market to consumers more effectively.

http://blog.rackspace.com/internet-of-things-why-connected-t...
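The energy-savings part is easy to sketch. Assuming the utility exposes an hourly price forecast (the data below is made up), the appliance just picks the cheapest window:

    # Pick the cheapest contiguous window for a wash cycle, given a
    # 24-entry hourly price forecast in cents/kWh (values illustrative).
    def cheapest_start_hour(prices: list[float], cycle_hours: int = 2) -> int:
        costs = [sum(prices[h:h + cycle_hours])
                 for h in range(len(prices) - cycle_hours + 1)]
        return min(range(len(costs)), key=costs.__getitem__)

    forecast = [30, 28, 12, 10, 11, 15, 22, 35] + [30] * 16
    print(cheapest_start_hour(forecast))  # -> 3 (the 10+11 window)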


> [...] pre-loaded with bread so you could ask it to start toasting before getting up from the sofa.

But how did the bread get in the toaster?!


Not really. The benefits that are marketed are mostly just enablers for laziness, not actually solving anything. It caters to the self-repeating lie of "saving precious time" but actually harms responsibility and security at scale.


Regular toasters are just a funny example (reductio ad absurdum), although I did find a toaster that can print pictures on your toast [1].

An internet connected fridge could be really useful if it integrates with grocery lists, recipes, expiry dates, automatic deliveries, etc. These fridges are still in their infancy, but we're moving towards a fully automated kitchen. I've seen a lot of fridges that run Android, and people like to make fun of them because "Why would you need Twitter or YouTube on a fridge?". Those are just apps that you can install on anything, and it didn't take any additional effort. It's weird when it becomes part of their marketing, though.

Anyway, this is the future we are heading towards: http://www.moley.com/ I believe the technology is already here, and it's just a matter of lowering the cost and improving the software.

[1] https://www.kickstarter.com/projects/258723592/toasteroid-fi...


The vast majority of IoT devices are purchased by manufacturers, utility companies, government agencies, health care facilities, airports, etc. etc.

If you think that any significant fraction of IoT devices are located in homes and controlled by phones, you do indeed live in a bubble. (Don't feel bad, so do lots of HN posters).


>Thanks to the Internet of Things, the ability to collect massive amounts of data and use it in new and profoundly different ways also enters the picture.

http://blog.rackspace.com/internet-of-things-why-connected-t...

No thank you very much.


Can you buy a TV that isn't internet-connected nowadays? AFAICT they are very few and far between :(


You can buy a TV, and then not connect it to the internet.


Could we not solve the problem of DDoS through collaboration between hosts and ISPs?

1) A DDoS attack is detected.
2) Attacking IP addresses are sent automated DDoS abuse notifications.
3) The ISP, like your credit card company when its machine-learning algos detect fraud, asks for human back-channel verification, e.g. via SMS.
4) The ISP notifies the user of a suspected bot on one of their devices. The onus is put on the user to run a secure network and remove or fix offending devices (see the sketch below).

This system could work well for residential connections at least.

It could be implemented similarly to the way spam is handled on the internet: bad neighborhoods and networks that don't self-police are treated as second-class citizens.
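A sketch of steps 2-4 on the ISP side (Python; everything here, from the data shapes to the message text, is hypothetical):

    def handle_abuse_report(src_ip, ip_to_subscriber, notify):
        """Map a reported attacking IP to a subscriber and warn them.
        `notify` is any callable taking (phone, message), e.g. an SMS gateway."""
        sub = ip_to_subscriber.get(src_ip)
        if sub is None:
            return False  # not our customer, or the lease already expired
        notify(sub["phone"],
               "A device on your connection appears to be part of a DDoS "
               "attack. Please secure or disconnect it.")
        sub["throttled"] = True  # ISP-side escalation while the user cleans up
        return True

The hard parts in practice are the ones the sketch elides: matching an IP to a customer at the right timestamp, and avoiding false positives.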


> It could be implemented similarly to the way spam is handled on the internet: bad neighborhoods and networks that don't self-police are treated as second-class citizens.

Please no. The way spam is handled on the internet means that anyone who isn't already a massive internet company is usually treated as a second-class citizen.


On second thought, agreed. I hope that is not part of the solution.

I think that our routers may be the key, or are at least completely negligent at the moment.

A normal consumer router is essentially a black box, but it should be a watchdog. The router should alert users when suspect outbound traffic originates from its network.

Of course the router itself could be compromised, and router patch cycles are generally atrocious, but this method could notify in the case of hacked IP cameras and thermostats.
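As a sketch of what a watchdog router could do, here's a minimal outbound-flood detector using scapy (Python; needs root on the router, and the subnet and threshold are assumptions):

    from collections import Counter
    from scapy.all import IP, sniff

    LAN_PREFIX = "192.168.1."   # assumed home subnet
    THRESHOLD = 5000            # packets per destination per window

    counts = Counter()

    def watch(pkt):
        # Count outbound packets per (source, destination) pair.
        if IP in pkt and pkt[IP].src.startswith(LAN_PREFIX):
            counts[(pkt[IP].src, pkt[IP].dst)] += 1

    sniff(prn=watch, store=False, timeout=60)  # one 60-second window
    for (src, dst), n in counts.items():
        if n > THRESHOLD:
            print(f"possible flood from {src} to {dst}: {n} pkts/min")

Real detection would need smarter baselines than a fixed threshold, but even this would catch an IP camera blasting a single target.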


ISPs do this for BitTorrent; I imagine similar things could be set up for DDoS attacks. A little message from the ISP like "hey, you were sending out some DDoS traffic, please check your devices."


This. But with ISP procedures mandated by governments, with international synchronization of rules.

This already works well for medicine, aviation, telephone systems, transit signaling, and a huge number of other things. There's no reason to believe this wouldn't work for internet security.

We could start by mandating the network best practices RFC.


In my opinion, this is yet another example of a social problem that technology simply amplifies.

Humans make mistakes. Computer systems are fragile. As humans keep developing computer systems, new attack surfaces, new vulnerabilities will be introduced. It's pointless to try and keep playing catch-up either with or without "regulations".

Instead, we should consider _why_ such attacks happen in the first place. Who are the targets? States? Corporations? Maybe they're not open enough. Maybe they're too powerful. Or are we afraid that our governments, corporations, or other entities/institutions of our society will invade our privacy to manipulate us? Or are we scared for our wealth or status?

None of these are technological issues at their core. Our society needs to adapt and grow up to this powerful technology. Until then, the only thing we can do for our safety is refuse to use it.


"Any sufficiently advanced technology controlled by a miscreant is indistinguishable from a possessed object in a Stephen King Novel."

http://thefutureisastephenkingnovel.com/assets/player/Keynot...


Yep. Keep IoT on the LAN, or give it the same respect you would any other networked computer.

However, the day the Internet Era of Fun and Games is over is the day that the internet keels over dead. The Internet was (almost literally) built on fun and games.


But the problem has been people haven't done that. So here we are.

Maybe some stick (vs. carrot) for unwittingly contributing to a DDoS will make people care? Not sure. I'm not really liking any of these solutions, but the problem is only going to get worse.


I don't know, do they have to run on the same network as everything else?

Maybe a first step is just to have an address range for these demonic things which isn't publicly routable (unless you deliberately NAT it). The "home network" range (steal back some of that loopback space!).
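Even without a new reserved range, you can approximate this today on a Linux-based router by parking the IoT gear on its own subnet and refusing to forward it anywhere public (the addresses and layout below are assumptions):

    # IoT devices live on 192.168.7.0/24; let them reach the rest of the
    # LAN but never the internet.
    iptables -A FORWARD -s 192.168.7.0/24 -d 192.168.0.0/16 -j ACCEPT
    iptables -A FORWARD -s 192.168.7.0/24 -j DROP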


>>> We get security [for phones] because I get a new one every 18 months. Your DVR lasts for five years, your car for 10, your refrigerator for 25. I’m going to replace my thermostat approximately never. So the market really can’t fix this.

Yes it can. It is fixing the problem as we speak. I, a security-aware person, would never buy a connected thermostat. I'll buy the $5 model that does the job I want perfectly. And my connected DVR sits behind some decent protections on my local network. Should it start participating in a DoS attack or talking to those it shouldn't, I'll notice and replace it with a better system. The same is true of laptops, phones, and cars. Once people are burnt a couple of times, they will opt for the safer models. Government could perhaps accelerate this process by increasing manufacturer liability (lol, not for the next 4/8 years) or by mandating that unmaintained products self-brick (again, lol). But the market will react nevertheless.


> It is fixing the problem as we speak. I, a security aware person...

The market of devices bought by security aware people is not the market that Schneier is concerned about. It's this other much larger market of normal people buying the least expensive devices with the most convenient features.

Those consumers won't notice or care if their devices participate in DoS attacks, but the targets of the attacks care a lot. It's a classic example of a negative externality.


In the video he makes an excellent point that I think is valuable to highlight and repeat here:

Device insecurity, in aggregate, is very similar to environmental pollution.

It is a latent danger that in some cases can stick around for many decades; it is caused by many people individually adding a tiny bit to the problem; and it exists because right now accepting insecurity makes the product cheaper without directly hurting either the manufacturer or the buyer.


I remember the last time we called on a government security solution for a problem that was perceived as too difficult and too critical for the private sector that had been, up to that time, managing the issue.

That's roughly how the Department of Homeland Security was born, and the TSA was given the critical task of managing airport security, where the failures happen. I wonder how Mr. Schneier thinks that worked out?

I cannot take seriously any open-ended demand for regulation in the name of safety that doesn't spell out detailed proposals that would work in practice: what such regulations should address and what is off limits. The last time we did this, we sacrificed rights and gained little (if any) additional security; I think Schneier calls it "security theater". I want to know why he thinks his call will result in ANYTHING better than that fiasco before I'm even vaguely convinced that this is the right answer.

Please don't read this as my not believing that there is a real problem and threat. I'm simply dubious of Schneier's answer at this point. "Any action" here is not necessarily the right action.


> the last time we called on a government security solution for a problem that was perceived as too difficult and too critical for the private sector that had been, up to that time, managing the issue. / That's roughly how the Department of Homeland Security was born

I'm pretty sure the U.S. government has taken on other security issues since the DHS was founded in 2002.


And what's your point, caller? Your statement sounds like blind trust and assumption rather than any actual knowledge. Sorry, I can't do that.

So let me run the exercise a bit... DHS... Patriot Act?... All that Snowden revealed?... Invasion of Iraq?... Libya?... Syria?... the U.S. border?...

OK... a bit glib, but the point is there: where are the successes where the government protected citizens without a disproportionate reduction of those citizens' rights? Off the top of my head, I can cite about as many as you did. I imagine there are some out there... but the track record isn't good.


Add a physical write-enable switch or jumper, so that malware cannot be installed into the firmware and hacks cannot survive a reboot.
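A sketch of the firmware side of that idea, in MicroPython style (the pin number and wiring are assumptions):

    from machine import Pin

    # A jumper ties pin 14 to 3.3V only when the owner physically sets it.
    WRITE_ENABLE = Pin(14, Pin.IN, Pin.PULL_DOWN)

    def apply_firmware_update(image: bytes) -> bool:
        if WRITE_ENABLE.value() != 1:
            print("update rejected: write-enable jumper not set")
            return False
        # verify signature, erase, and write flash here (omitted)
        return True

Remote attackers could still own the running system, but without the jumper they couldn't persist across a power cycle.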


There have been dozens of discussions on DDoS and IoT over the last few weeks here, and it's curious that the technical consensus is to solve a 'fragile network' problem with politics and control, and a political problem like surveillance with technology.

The first can't be solved with politics without locking down the internet and fine-grained control of billions of devices connecting globally; in other words, an unachievable strategy. And the second cannot be solved with technology, unless anyone truly believes they can fight an organization with the law on its side, tens of thousands of programmers working on surveillance 24/7, and near-endless resources. This way none of the problems get solved.


I don't understand how this regulation will work.

Will all open source software have to be submitted to some government-run or government-approved certification? OpenWRT? Linux? A random app that accesses the net? A Raspberry Pi Python script?

Sure, it matters that 1 million IoT cameras are secure and can be updated, but what about a million servers running some npm library? Do those need to be regulated too?

I'm very concerned about the security issues but I'm not looking forward to no longer being able to write software because I need it certified every time I add a line.

Are there other solutions that don't require regulation?


It'll be a continuous sequence of market and government failure until something breaks. And in the meantime it'll involve a lot of finger pointing, not least of which will be at the user; e.g., the user is stupid for clicking on that obvious link and deserves to be defrauded, that'll teach them, and other mind-numbing nonsense. We'll see a return to the walled gardens of private networks, very restrictively interconnected, similar to the days of AOL, CompuServe, and Prodigy.


Governments want to get involved; the simple fact is I don't think they can. Software can be much more complex than humans. There are systems out there already so complex that no one human can reason about the entire system.

I don't think law could even keep up with the space it wants to govern. Governments would be better off regulating the application space, as they already do for vehicles, nuclear, etc.


Did Bruce Schneier call for regulations that would prevent DDoS attacks? What kind of regulation/law does he propose?

In other articles Schneier called for solving other problems by regulation, for example limiting data retention by internet companies. Are laws the answer? Is there no way to solve security-related problems with better technology?

Will a microkernel OS solve embedded device security, or will it be regulation?


Correct me if I'm wrong, but I believe he's calling for a specific organ of government to take on regulation of the internet and connected devices in a general sense, not calling for some specific regulation or law.

I suspect what he's getting at is having some set of "safety standards" to be put in place, especially for IoT tech. The whole quote of

>Our computers are secure for a bunch of reasons. The engineers at Google, Apple, Microsoft spent a lot of time on this. But that doesn’t happen for these cheaper devices. … These devices are a lower price margin, they’re offshore, there’s no teams. And a lot of them cannot be patched.

sums up that point pretty well.

I honestly don't know how you would really enforce something like this, but the proposal seems to be less about the specifics of regulation and more about the need for new regulation. The FCC really isn't equipped to handle this kind of thing.


> I honestly don't know how you would really enforce something like this

Probably like with automotive code: they don't have code audits, but they do have mandated test scenarios. The problem is that you need large organizations to work these out.

I guess Schneier has a point: if consumers do not demand secure systems, then it can only be done through regulation.

That would be a very German approach: they have strong consumer advocacy organisations like Stiftung Warentest and the ADAC, and these push for more consumer-protecting regulation. [1]

Once upon a time such requirements were also used as protectionist barriers. In our day that would mean: you did not bother to update your toolchain and have no firmware updates for that smart light bulb of yours? Gone is your import license.

[1] http://americastradepolicy.com/german-customer-protection-or...


Seems like the answer to this is a federal standard for home firewalls. You buy a random piece-of-crap camera, and the FCC or similar tracks and publishes a reasonable firewall rule for it. That way hordes of IoT devices couldn't participate in a DDoS, and the FCC wouldn't have to publish bogus internet-wide rules.
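Such a published rule could be as simple as a machine-readable allow-list that home routers fetch and enforce. The format and every name in this example are purely hypothetical:

    {
      "device": "AcmeCam 2000",
      "firmware_min": "1.3.2",
      "allow_outbound": [
        "cloud.acme.example:443/tcp",
        "pool.ntp.org:123/udp"
      ],
      "default": "deny"
    }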


There are few powerful technologies invented in the last 100 years for which the government did not eventually require a license to make or even operate them. I can imagine a future where owning a computer that can truly "create" will require a license to own and operate.


We do not need more laws and regulations; we need fewer. Remember what happened if you had a security vulnerability 15 years ago? You got hacked by some kid, then you patched your system, restored from backup, and gave the kid a cookie. What will happen now? And why?


Government is useful to solve the tragedy of the commons. Insecure devices are a negative externality. The government should tax insecurity (or fine device manufacturers that allow an attack) and use the revenue to subsidize software security efforts.


And I think it'll be interesting when this kind of thing backfires. Like, say, the USA government wanting some computer made by Digital to not give the correct precision in all calculations done in USSR nuclear research.


The fun and games are just getting started, it's just the stakes have been raised.


Could ISPs be held responsible for attacks coming from their infected users? Shutting down someone's internet for 'abusive use' would be a good incentive for getting offending devices out of the market.


bro imma let you finish but

"the government should be part of he solution?" Hahah wtf

Bro

the gov't regularly "hacks" all the things (including the internet of things) legally, such that I literally couldn't give a shit if some random fucktard script kiddie decides to hack my router.

the government literally does it on the regular legally.

that's the actual problem. get a mitt bro.


> "Schneier then laid out his argument for why the government should be a part of the solution"

Now we have two problems.


This site's SSL is broken.


Or you're using Chrome.


I too get an SSL error with Firefox 50.


We're going to need to integrate cryptocurrencies with APIs. Encrypted pay to play is the only way.


Another approach to security: give everyone a nuke, and only prevent untraceable strikes. If everyone can denial-of-service the guy who started it, he won't start it?

Dear god, now I'm pro-gun. Also, then identity fraud will be the weapon of tomorrow... I miss the old internet... maybe if the speed is so slow that strikes become uninteresting...

Could the servers transferring TCP/IP be adapted in such a way that a mass-traffic-causing incident (control server) leads to a slowing down of the connection at the origin of the causality chain?

No, crypto prevents that. What a nice little maze.

