So, given that he might have prevented more damage than he caused, if even part of the recent DDoS reduction can be attributed to his activity, what are we to make of it? It would be more of a judgement on society than on him if this person ends up in jail, while IoT makers keep making hefty profits, rewarded for their negligence, and ISPs keep making profits, rewarded for lying to their customers when shit happens, causing more damage overall than he ever could.
It all seems to start and end with the technical ignorance of the typical customer. There are popular shows that help people choose better food by exposing the shenanigans food makers engage in. Perhaps there could be something similar for tech in the future: something where a hacker comes on TV and makes a total ridicule of some IoT crap device, and the show uses it to constantly repeat the basics, like changing default credentials, etc. The same formula that works for food shows: expose someone as an example and add some generally useful advice. Perhaps this might become feasible once IoT is more popular.
Masked comic book vigilantes aren't ever going to be a reality on the mean non-virtual streets of cities but on the Internet it seems we're confronted with something pretty much equivalent.
So here we can consult all the old issues of Marvel and DC for a compendium of moral dilemmas and results.
I believe the "super" in "super hero" refers to super powers, which Jones lacks. Also, vigilantes typically break the law, whereas Jones does not. In the videos I've seen, Jones is usually taking advantage of Washington's "mutual combat" law and challenging wrongdoers to a fight. Then, he takes advantage of the fact that he is/was a professional MMA fighter and his opponent is a drunk jerk.
> Masked comic book vigilantes aren't ever going to be a reality on the mean non-virtual streets of cities but on the Internet it seems we're confronted with something pretty much equivalent.
Or maybe a bunch of actual researchers and government agencies working without praise while someone else writes a sensational story to take all of the credit. And the people believe it on the merits of it being the story they want to be true.
Wouldn't an intended side-effect of this action be the end-user getting educated about "why the device I bought doesn't work anymore?" and directly impacting the reputation of the company that manufactures that now-bricked device?
I can only see it as a win-win. The exception is insecure internet-connected medical devices, which, if hit by such purging bots, would literally impact lives. I hope vigilante script kiddies don't get involved with good intent and cause havoc.
Not sure that TV food shows actually teach people anything. All I see is a lot of people going from fad to fad, not really understanding the core principles.
Victor Gevers is cited by Bleeping Computer (https://www.bleepingcomputer.com/news/security/brickerbot-au...) as saying there are better ways of solving the problem than forcibly disabling/bricking devices. I'm curious what those are, though, since IoT manufacturers (speaking broadly) don't seem motivated right now to take even basic security precautions with their devices. I know some groups use sinkholes to redirect compromised devices, but while that can be useful for research, it doesn't seem to have the same motivational impact for change.
There's no incentive for manufacturers to do anything. This actually just increases sales, as dead devices are replaced (probably by the same device/firmware). Insecurity converts the equipment to a consumable that generates an annuity.
If the new device stopped working soon enough after purchase, customers would be more likely to return it to the store.
I see the normal manufacturer's/seller's responsibility as the only good answer to the problem. I don't like solutions involving fines or large liabilities, because these can create the wrong incentives for people/companies.
Doing security properly for these devices is not very expensive compared to the scale at which they are being sold. Therefore I think simple things like increased return/warranty-repair rates should provide enough incentive to focus on security.
I don't like solutions involving fines or large liabilities, because these can create the wrong incentives for people/companies.
Making manufacturers liable for incidental damage their devices do sends the message that they should avoid such damages as far as possible. That seems like a good message to me.
If a $20 device X does $20,000 of damage to the hospital down the road, the fact that the consumer can return the device seems woefully insufficient.
> a $20 device X does $20,000 of damage to the hospital down the road
If it were able to do that, then presumably that $20 device had, e.g., a good amount of explosive in it, and therefore would be in no condition to be returned to the store.
Unless you're alluding to that $20 device being used as a communications proxy, under the mistaken idea that the Internet has some concept of node trust. In reality, the hospital's $20k "damage" would be due to its own developers' negligence, and demands for compensation should be placed squarely at the door of its suppliers and integrators.
So what you're saying here, is that you're happy to let me drop a device of my choosing on to any part of your network, and that you'll take full responsibility for any damage caused because it was your (or your own developer's) negligence.
I don't see how you're inferring this, as a consumer's home network doesn't have trusted access to the hospital down the street.
What I did imply is that if I develop a device and put it on my network, then I'm essentially responsible for whatever damage it causes. E.g. wiring an RPi to a heating element that will start a fire if left on continuously is a poor idea, regardless of whether the proximate cause is a cosmic-ray bit flip or malevolent Internet noise.
So, you're saying that if a random consumer buys a $25 IP-aware camera and puts it on their WiFi (and hence the Internet) so they can look at their cat from work, it is that consumer who the DoS-hit hospital down the road should look to when that hospital is hit by a massive botnet-driven ransomware attack.
'Cause certainly the random average consumer should know how dangerous adding crap to their network can be ... for others ... and certainly he/she is capable of making provisions for this.
Fortunately, modern legal theory actually does consider "who are we talking about here, and what can be expected of them" in cases like this. Hospitals could theoretically sue the IoT manufacturer over this, as far as my IANAL knowledge goes; it's more the fact that the manufacturers are distant cheap factories in China that prevents this.
> it is that consumer who the DoS-hit hospital down the road should look to, when that hospital is hit by a massive botnet-driven ransomware attack.
Erm, no - the exact opposite. The consumer should look to the camera's manufacturer for their own connection being swamped, incurring overage charges, etc. In your scenario, if the hospital's only problem is that its Internet uplink is swamped, then it should be looking to its link provider for robust upstream shaping, etc. In the case of a simple traffic overload, nothing critical at the hospital should be affected, because critical traffic should be segmented, or at least prioritized, over traffic from arbitrary endpoints. If there is more of an effect, then that is due to a further vulnerability that belongs to the hospital!
I referenced the hospital's developers/suppliers for these further vulnerabilities - in those cases they should be looking to their network admins, or to the creators of the failing (defective) equipment. The crux of the End-to-End principle (i.e. the Internet) is that edge nodes have the intelligence, and thus the requirement/responsibility, to discern "good" traffic from "bad". And (as I said) coming at it from the other direction, general robust-engineering principles dictate that physical devices "fail safe" no matter what noise is presented at their network ports.
Making manufacturers liable for the damage their products do allows the insurance industry to set minimum standards (i.e. you would have to get some kind of certification, similar to UL/CE, to get insurance).
"Doing security properly for these devices is not very expensive compared to the scale at which they are being sold."
Not sure that's true. I did an 18-month stint with a tiny (6-person) hardware startup - most of my time was spent ensuring our devices could auto-update their embedded Linux securely without bricking or losing any of the custom hardware-specific capabilities. It's not an easy trick to pull off (big thanks to the Arch Linux team for all their work that I built on), and there's ongoing cost involved in ensuring newly discovered vulnerabilities have a process to be evaluated, patched, tested, and deployed to the fleet of devices.
The high expense is caused in part by the current high volume of vulnerabilities.
Vulnerabilities and updates are inevitable, but the current volume of them is not, by orders of magnitude.
The volume of serious exploits could be reduced dramatically if we started to more seriously apply the principle of least authority (and an important technique and way of thinking about that is capability security -- in fact an important technique to make 'brickless updates' easier to achieve too, I suspect).
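A toy sketch of the capability idea in Python (all names here are mine, not from any particular framework): instead of code running with ambient authority over the whole filesystem, it is handed an object representing exactly the access it needs, and nothing more.

```python
import tempfile

class ReadCap:
    """A capability object granting read-only access to exactly one file."""
    def __init__(self, path):
        self._f = open(path, "r")

    def read(self):
        return self._f.read()

def process(cap):
    # This function holds no path and never calls open(); the only
    # authority it has is the single handle it was explicitly given.
    return cap.read().upper()

# Demo: create a file, then grant a capability to just that file.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("least authority")
    name = f.name

print(process(ReadCap(name)))  # → LEAST AUTHORITY
```

A compromised `process` can leak or mangle that one file, but it cannot wander off and touch anything else, which is the damage-limiting property being argued for above.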
I think how to kick-start the industry to actually do that is a difficult problem.
There aren't any good frameworks for doing so in the embedded space. Traditional Linux package managers aren't really great for this (Nix is the closest to getting it right, but isn't really ready on embedded yet).
edit- The main objective, of course, being allowing unattended atomic updates with as low a risk of bricking the device as possible. Usually these devices will be updating the whole root filesystem at a time, which is otherwise read-only. Ideally this is also done with as little downtime as possible. There is some work in this space in standardizing a solution, but it isn't really there yet, so it tends to get home-grown for each product.
I got super lucky. Between the design/BOM stage and the actual first manufacturing run, the price of 4Gig micro SD cards dropped below the price of 2Gig ones - so I ended up (at the last minute...) with enough space to do a complete system install on a separate partition. Reduced my fears of remote bricking everybody's devices enormously...
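The dual-partition update scheme described above can be sketched roughly like this (all file names here are stand-ins for illustration; a real device would write to a spare block device and flip a bootloader variable rather than a marker file):

```shell
#!/bin/sh
set -e
# Simulate the two rootfs slots with plain files so this runs anywhere;
# on real hardware these would be partitions like /dev/mmcblk0p2 and p3.
INACTIVE=inactive_slot.img
IMAGE=new_rootfs.img

printf 'new-rootfs-v2' > "$IMAGE"
EXPECTED=$(sha256sum "$IMAGE" | cut -d' ' -f1)

# 1. Write the new image only to the slot we are NOT running from,
#    so the currently booted system stays untouched throughout.
cp "$IMAGE" "$INACTIVE"

# 2. Verify the written copy before touching any boot state.
ACTUAL=$(sha256sum "$INACTIVE" | cut -d' ' -f1)
if [ "$ACTUAL" = "$EXPECTED" ]; then
    # 3. Flip the boot selector atomically; mv within one filesystem is
    #    atomic, so a power cut leaves either the old or the new value,
    #    never a half-written one. Either way the device still boots.
    echo "$INACTIVE" > boot_slot.tmp && mv boot_slot.tmp boot_slot
    echo "update staged: next boot uses $INACTIVE"
else
    echo "checksum mismatch: keeping current slot" >&2
    exit 1
fi
```

The point of the second partition is exactly the fear mentioned above: if the new image is bad, the bootloader can fall back to the old slot instead of leaving a fleet of bricks.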
There's a couple of other problems there too - the "manufacturer" is often some unknown and unfindable usually Chinese contract electronics assembly firm, who had little to do with the design or software. The company who contracted them to do it often no longer exists a few years down the track. The place it gets sold can easily be somewhere like Ebay, AliExpress, or Kickstarter - where the vendor really isn't going to care once your payment has cleared.
(Source: I was the platform security guy for an IoT startup in 2013/14, for a company that no longer exists and whose only product run is still out there somewhere, no longer getting security updates...)
So the DoD wanted to brick devices instead of just declaring companies making these devices a threat to national security, banning their products from sale, and setting a good precedent?
:|
Would the intel community really need a botnet like this to operate?
I don’t think the DoD getting to unilaterally decide what devices can be sold in the US is a good precedent at all. I expect devices with troublesome features like support for end-to-end encryption would be the first to go, and insecure devices would probably stay as long as the vulnerabilities were known to them.
Presumably a state-level actor could not only figure out how to disable these devices, but actually create patches for the vulnerabilities and force-distribute those patches. They would have access to test hardware and could even demand confidential documentation from the manufacturers to facilitate this.
The better way is to transition to identity networks. The sybil attack is a function of the low cost of identities. In the current stack, IP addresses play a dual role as locators and identifiers. This is bad engineering and leads to the current situation. The Host Identity Protocol (HIP) has been in use by Boeing and the US Army for years now. The revised version (HIPv2) hasn't seen an open-source implementation yet. If someone with enough time and skill wants to change the world, a good implementation of HIPv2 would have a very positive impact.
"Paras Jha, Dalton Norman and Josiah were also a part of this normal Minecraft server entrepreneur game until they decided to force players from other servers on to theirs by clogging their networks. Therefore, Mirai came into existence and started performing for them very well. However, Mirai started outperforming the creator’s expectations, affecting the Internet outside Minecraft badly."
"The creators of Mirai found potential in the botnet and therefore went on to fine-tune it to improve its abilities. They even started leasing Mirai to other cybercriminals, who used them around the world for their own vested interests."
Malware can reasonably be described using evolutionary mechanisms (similar to memetics): AVs and best practices create an evolutionary pressure in which only the fittest and most aggressive malware can win out. A lot of things in computer science and/or on the internet behave in ways not too dissimilar to evolution.
From an evolutionary perspective it's not that different from any start-up that pivoted from its original goal after finding out its real effectiveness. The only difference is the criminality of the activity.
My main feeling about all this is that "let it burn" is probably the only way to deal with the mess. It must have been pretty obvious from quite early on that trying to fight blackhats on a level field as a single individual is not sustainable, and the time that he has been buying is in no way enough to bring about the major cultural shift needed to improve the landscape. Attempting to suppress the attacks seems just to make people complacent, so it might be better to just let the attacks do their damage and hope that those spur some improvement.
I was using this exact same analogy to explain this to someone earlier, although I have to admit that I agree with the person above. Going out and starting forest fires in national forests without official government sanction, even if your intention is to create controlled burns and prevent a bigger fire, will still land you with a verdict of arson.
If there weren't any forest rangers/firefighters organization to back-burn, then the lone person trying to do some good may be the only way forward. But for forest fires, such an organization does exist, and no-one should be burning by themselves.
There is no such organization for the internet. So we are left with lone white-hats who risk personal safety to do some good. I wish there were a better way.
> Going out and starting forest fires in national forests without official government sanction, even if your intention is to create controlled burns and prevent a bigger fire will still land you with a verdict of arson.
I'm probably starting to stretch the analogy a bit thin, but: in Australia we have similar issues, and we've been doing back-burning/fuel-reduction burns for quite a few years now to deal with these problems (the indigenous population practiced controlled burns for a variety of purposes well before whitefellas arrived). However, some evidence is now starting to emerge that modern controlled burns are affecting the long-term ecology of the forests, favouring certain species over others and preventing the build-up of larger, less-flammable trees.
So, to tie it back to IoT: would a 'controlled burn' (i.e. a bot which bricks vulnerable devices) only destroy the low-hanging fruit (i.e. easily exploitable devices) and leave the harder vulnerabilities in place, building up for the bigger fire in a year's time?
Correct. The OP was very noble but quite misguided.
0) "Let it burn" was how CA operated until we started fighting fires and building in unsafe spaces, and it worked pretty well. The Thomas Fire is a good example of this problem. You can reduce the damage, but there will still be damage.
1) Legal issues if you are detected and someone malicious wants to "make an example" of you for doing this; even if you are found "not guilty", the consequences are non-negligible.
2) In any capitalist culture, prevention only becomes "cost effective" when multiple headline making disasters convince people in suits they will look like idiots if they don't pay at least lip service to prevention.
3) If a criminal ever figures out who you are and that you are invalidating their large $$ investment in a criminal attack, they may be willing to engage in criminal acts to stop you specifically and you don't have the legal protection and training a LE organization would have.
> It’s been a remarkable week for cyber justice. On Thursday, a Ukrainian man who hatched a plan in 2013 to send heroin to my home and then call the cops when the drugs arrived was sentenced to 41 months in prison for unrelated cybercrime charges.
If you aren't equipped to handle events like this, tangling with blackhats means a single opsec mistake is going to result in shit like that.
"let it burn", as in a self-correcting feedback present in large dynamical systems (such as forest fires).
major internet disruption potentially larger than anything we've seen before, as in you and I and every single aspect of modern society that relies on the Internet (aka pretty much everything).
The time to let it burn is now, gradually, while we're not that interconnected yet. It may inconvenience a lot of people, but continuing the status quo will lead to a lot of people being ruined or killed in the (not so distant) future.
This is probably the only way to enforce the correct incentive for manufacturers. Build them right or they will fail.
IoT being so hyped right now is especially troublesome, since all they will care about is time to market, and once the market is filled with insecure devices, what will there be left to do?
If it's true then this is fantastic work. Should really be the responsibility of the government. If a device is compromised it should be disabled.
It could also set up a pretty nice incentive structure. If a device is bricked due to a vulnerability, then the manufacturer should have to either repair or refund the device. Corporations will not respond properly until it impacts their bottom line.
>Imagine the public backlash there would be if people knew that the government was scanning and hacking devices for the purpose of national security...
This guy is a hero. He bricked vulnerable devices before they could be used for DDoS, or worse. And he did it at great risk to himself. I say well done.
The one point I disagree with is blaming the consumer. The simple truth is that security protocols have terrible UX, and until that improves nothing will change. Personally, I think it's time IoT was regulated and, in particular, that we require a secure protocol at the time of deployment: IoT devices must be provably wiped, and then put into physical contact with an "owning" device before deployment. This, in turn, requires that people start using what I call a "Home Brain", a device whose primary purpose is to coordinate and secure all the other devices that you physically own. I imagine simple versions being about as sophisticated as a router, and hackers might want to put together their own, something like a little home theater box. I suppose in a pinch your smartphone could work, too.
Do not execute the random, obfuscated python code linked to from this paste. This could be bait, and could PWN you. This must be audited first before anyone runs it.
Obfuscated python, asking everyone to run it? Hah. Good one. Perhaps if you run it in VMWare which runs in Virtualbox which runs in Qemu... and even then. Never run such stuff bare on a machine you intend to still use afterwards.
The irony, if running code from a post from the Brickerbot author would brick your laptop...
My thoughts exactly. The "any programmer worth his salt" statement, followed by this link that you have to download to prove as a programmer you are worthy.
Before anybody else sends me the “Internet Chemotherapy” link or retweets it - it appears to be a wonderful piece of fan fiction. The Deutsche Telekom suspect was arrested earlier this year, key details are wrong, the TR-069 modem stuff was bad implementations crashing devices, etc.
There are technical solutions for these problems, but nobody will adopt them.
There are political solutions for these problems, but nobody will vote for them.
There are social solutions for these problems, but nobody wants to take responsibility.
Personally I'm waiting for a power plant to explode after someone runs a fuzzer on the internet. And for the inevitable law making security research illegal. And propaganda claiming foreign nations or terrorist organizations instigated the attacks (instead of bored 14 year olds, the more likely cause) used to support new pointless wars. And the intellectually flaccid xenophobic public, clamoring for more jobs and Internet-dependent TVs, fervently supporting it.
I do regular scans myself and the number of these vulnerable DVR devices, printers, and routers you find exposed out there is mind-boggling!
Run just a single instance of something like this [1] (which isn't even well optimized) and within minutes you'll find vulnerable devices. It shouldn't be that easy to build a botnet.
Your readme doesn't say much. Looking at the scanner.py code, it appears you just send a GET HTTP/1.0 request and see what comes back. How do you determine vulnerable devices, response headers with versions?
The response headers are kept in a SQLite db that you can perform regex queries on. It's surprisingly easy to find particular devices based on the headers.
And, yes, it's just making basic HTTP requests on port 80 so it's overlooking tons of devices. But I run this from my own devices and doing blast scans of ports other than 80 can look a bit suspicious.
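The approach described in this subthread - send a bare HTTP/1.0 GET, store the response headers in SQLite, then search them with regex queries - can be sketched in Python like this (function names and schema are mine, not the linked scanner's; the demo uses a canned banner so no actual scanning happens):

```python
import re
import socket
import sqlite3

def grab_banner(host, port=80, timeout=3):
    """Send a bare HTTP/1.0 GET and return the raw response header block."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(b"GET / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
        data = b""
        while len(data) < 4096:
            chunk = s.recv(1024)
            if not chunk:
                break
            data += chunk
    # Keep only the headers, before the blank line separating the body.
    return data.split(b"\r\n\r\n", 1)[0].decode("latin-1", "replace")

def open_db(path=":memory:"):
    db = sqlite3.connect(path)
    # SQLite ships no REGEXP implementation by default; registering one
    # from Python is what makes "headers REGEXP ?" queries work.
    db.create_function(
        "REGEXP", 2,
        lambda pat, s: s is not None and re.search(pat, s) is not None,
    )
    db.execute("CREATE TABLE IF NOT EXISTS banners (host TEXT, headers TEXT)")
    return db

def store(db, host, headers):
    db.execute("INSERT INTO banners VALUES (?, ?)", (host, headers))
    db.commit()

# Demo with a canned banner (no network needed):
db = open_db()
store(db, "192.0.2.1", "HTTP/1.0 200 OK\r\nServer: DVRWebServer/1.0")
hits = db.execute(
    "SELECT host FROM banners WHERE headers REGEXP ?", ("(?i)server: .*dvr",)
).fetchall()
print(hits)  # → [('192.0.2.1',)]
```

As noted above, a `Server:` header alone is a weak fingerprint, but it is often enough to pick out whole families of DVRs and routers from a pile of banners.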
If what this person says is true, this kind of stuff is really dangerous and can land you in jail, even if your intent was just to show potential exploits. I wonder if this person's real identity is on anyone's radar currently.
This is the situation we put ourselves in by constantly complaining that we can’t move fast enough and hiring “talent” straight from coding bootcamps. Brace yourselves, the time to eat your own dogfood is coming! :^)
The devices in this story - routers, cheap IoT devices, etc - aren't products of bootcamp-hiring buzzword-spouting Silicon Valley companies, though. They come from a long tail of decades-old mid-sized electronics businesses with, AIUI, low margins and cultures that don't value software highly. It's not about JavaScript hipsters (for once!); this is much more related to the situation with terrible automotive software.
I'm sorry to leave you in these circumstances, but the threat to my own
safety is becoming too great to continue. I have made many enemies. If
you want to help look at the list of action items further up. Good luck.
This sounds scary - like it came straight out of a movie.
You're saying this editor is the hacker, or that he fabricated this text? Would he really know enough to sound that credible? If not, someone on here should be able to see right through it.
Pretty much the only important part of this (tremendous!) message is the bit at the end:
> "The real point is that if somebody like me with no previous hacking background was able to do what I did, then somebody better than me could've done far worse things to the Internet in 2017."
Then again, he might not have a "hacking background" but might have a strong embedded/networking background so he knows where to look, considering how he found exploits that blackhats missed.
So either he's that good or blackhats are two-bit[0] criminals.
I think the analogy is to the entire Internet being a single body, and chemotherapy being a nasty thing done to it that damages it, but destroys even more damaging things in the process.
This is a question of credentials only. Which I don't mean to sound like I'm dismissing it at all - It's actually the whole reason this guy's effort got stranded.
Chemotherapy is essentially controlled killing, to mitigate or prevent uncontrolled killing. Just like (as someone said elsewhere in this thread) controlled burning is done to mitigate or prevent uncontrolled burning. And here we have controlled hacking to mitigate uncontrolled hacking.
Society doesn't support anybody killing, burning, or hacking unless there's a way to know and recognize that the person knows what they're doing. Which (the knowing and the recognizing) is a credentialing problem. It's the only difference between (doctors, police, firefighters, cybersecurity experts) and (quacks, vigilantes, arsonists, and some unknown hacker), respectively. A guy I don't recognize as a surgeon is just a masked man coming at me with a knife, even if he actually is a surgeon.
If your intentions are good, you should take the trouble to get some kind of credentials, which can take many forms. (If your intentions are not-so-good then it makes a lot more sense to bypass that, but also for the public to put you in the "untrusted" basket.) This guy bypassed credentialing, and that places him in the realm of those who are not trusted no matter what kind of good they do. Now he can't continue. Instead of that, imagine there was a public debate about the relevant issues and this or a similar effort had actual public consensus and resources behind it.
Now of course I'll grant there are numerous problems with the political process and consensus-building and all the rest of it. It takes a long time to get anything done, and learning how to hack is actually the easy part. Good news: This is actually not the emergency he claims it is. What's the worst-case scenario, the whole internet goes down tomorrow? Well, it's only the internet. There are ways of getting food, water and shelter without using a network at all. Nobody has forgotten how to use pen & paper. Even blankets will still work without an internet. (Not if these IoT clowns have their way of course.) Whatever, don't listen to me, I'm older and I lived almost half my life quite happily in a world where nobody had the internet, yet still everything got done.
Anyway, the other thing is, a "state of emergency" is how all sorts of atrocities and shitty decisions are justified by governments, so it's not something to emulate. The attendant issues deserve to be publicly recognized, debated, decided and then tackled. Maybe it's wishful thinking to demand that much from people today.
And credentials are merely a way to build consensus around what is due process and who can determine it. The net effect is the same, but the process for achieving it is without consensus. This is another example of no proper channels existing, and individuals taking matters into their own hands.
Holy obfuscated Python, that's quite the jumbled mess. This is going to be a fun one to pick apart.
I don't think the string is .. worrying on its own given that, you know, this thing is meant to kill IOT/etc devices that are insecure. I'd argue that just given the source (the internet) is enough reason to be wary.
We as humans are pretty bad at responding to threats proactively. It takes some damage before we'll take action. In this case, IoT devices are going to have to take down some major infrastructure before we start regulating them.
This guy knows he's in trouble for what he's done and decides to rewrite history to make himself out to be a "good criminal". Re-framed bragging about his criminal exploits and hilarious (in his mind) escapes from detection ensue.
Frame it as ISP "conditioning" or as "criminal attack", fact of the matter is he committed crimes and now he's feeling the heat so he's putting out re-framed confessions to get the jump on his accusers and try to save his skin.
Then again, people can be more complicated than I give them credit for, and he might actually be a good hacker. It has happened before. I don't know. Let the investigators find out the truth. We have no reason to trust him.
Clean exploit code could be used maliciously (or accidentally) by people that might not understand the true nature of the program. There is a tradition when releasing exploit code to obfuscate it or introduce obvious errors that make the code safe{,er}. This prevents script kiddies from immediately using it, while anybody who should be reading the code should be able to see through the obfuscation/errors.
Most of the value is in the data (common login credentials). The bulk of obfuscated code is probably stuff doing the network communication and perhaps some other mundane stuff, that you can do manually (over telnet, or whatever) if you like and don't care about creating a botnet.
So apparently I am the only one who thinks that device manufacturers shipping insecure devices is not an excuse for an unrelated third party to damage them?
Just as the fact that I did not lock my door does not allow anyone to rob my home, the fact that I did not replace my fire-detector batteries is no excuse for anyone to set my house alight, etc.
To me, good security starts with catching this guy, and if he has done the damage he claims, then making him pay for it.
Only when one has been held accountable for what he has done should we go after companies for negligence, no?
"I discovered this security hole then proceeded to exploit it and damage some random system/workflow for the sake of demonstration" should not be acceptable.
> Just as the fact that I did not lock my door does not allow anyone to rob my home, the fact that I did not replace my fire-detector batteries is no excuse for anyone to set my house alight, etc.
First, let's fix your analogy:
If you don't lock your door, this is generally not an issue for the general public, i.e. it doesn't impact anyone else negatively, just you in case you get robbed.
A better analogy for this case here would be that you leave your door unlocked every day on purpose AND have a huge arsenal of guns, rifles, etc. right in your entrance room, all loaded up ready to be used.
If I'd encounter that, the least I'd do is call the cops; the only problem here is that if I call the police telling them my neighbor Bob is running an insecure IoT device that can be used for DDoS attacks against anyone on this planet, they won't do shit. So if the police told me they don't care about your publicly accessible arsenal of weaponry then yes, I think it would be a good idea to try and prevent anyone from accessing it.
> Only when one had been hold accountable for what he has done should we go after companies for negligence, no?
That would mean we shouldn't punish anyone for putting individuals or the general public in danger unless something bad actually happens. That's not how it works in other areas either: there are a lot of things you can get fined/arrested for that don't actually hurt anyone but just have the potential to.
> "I discovered this security hole then proceeded to exploit it and damage some random system/workflow for the sake of demonstration" should not be acceptable.
My take on this: "I discovered this security hole that others were exploiting to DDoS other parties on the Internet, so I proceeded to exploit it too and damage some random system/workflow to prevent this abuse from continuing"
OK, my analogy was not great, but yours is also a bit exaggerated, I think, for two reasons:
First, an IoT device is not obviously a weapon. Second, not only compromised devices were targeted, which is arguably no easy task anyway, since once a device is compromised it is likely to be "locked".
Still, I want to reiterate that the society I want to live in is not one where we are protected by perfect bulletproof fences, but one where vandalism is not a source of fame.
I don't have a final opinion of who's right in this debate, but I can come up with a counter-argument.
Your "lock my door" analogy is not complete. Imagine you not only leave your door unlocked but also abandon your home, so criminals squat there and end up terrorizing the whole neighborhood. Wouldn't it be logical for the police to at least barricade the place somehow? Maybe even fill the doors and windows with cement, or take other measures to prevent squatting.
It would still make it unusable for you, and you'd have to renovate if you ever return - but many would argue that it's a reasonable way of preventing the damage to the whole community around you.
I want to add that I am not sure yet what my position on this is. But I think your analogy is not correct - and please feel free to correct me if I am wrong!
Imagine the following scenario:
Manufacturers sell weapons cabinets to people. Only these weapons cabinets are not hidden in your home; instead they are put out on the street. And now comes the dangerous part: these weapons cabinets do not lock, because the manufacturer decided that locks are not necessary.
If someone comes along, should he just ignore this or should he make the weapons unusable?
Because that is what these devices will be used as: weapons.
I think a big problem here is that people are debating whether manufacturers should be blamed or fined, or whether the owners of the devices should be responsible, etc., but no one is addressing what seems to be the real issue: most owners of these devices don't understand them well enough to know these issues could even exist.
In your analogy, I would say it's not that people are leaving these weapons cabinets on the street unlocked, but that they don't know it's possible to lock or hide them.
Another good example is cars: if you went to a showroom and someone showed you two cars, one with no locks on the doors that was easy to start without the keys, and one with proper security, the choice would be easy and you could understand what you were seeing.
If you go to a shop and someone shows you two IoT devices that do the same thing and look identical, you can't really see anything that helps you learn about their security features. And as an average person, if you're told one of them has certain "tech speak" security features you don't understand, are you really going to spend more money to protect against something you don't fully understand and never see any effect from?
Yes, manufacturers should be made to release secure products and respond to security issues in a timely manner, but we also need to educate people that computers aren't some magical box beyond their understanding. Blaming owners in these cases would, I think, make the issue worse.
If you consider that the majority of people with internet access have their ISP set up their router, and have only a basic understanding of what that device does, how can we expect them to understand the potential issues of open ports or vulnerable firmware? And could these people even be educated to the level we would need for them to be responsible for securing their own devices?
Forgetting to lock the house, or letting the batteries in a detector run out, is an accident. Not providing secure and up-to-date firmware for devices which are known to be easy targets is a company policy, which comes out of incentives (the cost of security is huge compared to the cost of a breach to IoT manufacturers). While we cannot and should not legislate against accidents, we sure can and must legislate against harmful policies. Shifting blame from companies to hackers simply is not as productive as making companies responsible for the quality of their products.
Your unlocked door affects only you; buggy IoT devices affected thousands of devices and networks.
It's more like discovering a remote-controlled car can be hacked and allow terrorists to drive it into crowds of people, and you have an option to brick such cars. I'd brick them all.
> Deutsche Telekom Mirai disruption in late November 2016. My hastily assembled initial TR069/64 payload only performed a 'route del default' [...]
AFAIK this is not what happened. Deutsche Telekom had one major incident, on November 27th 2016, but that incident was caused by a denial-of-service attack [1]. Given that the routers in question don't even run Linux, 'route del default' is not really an option anyway, which makes the first claim of the story likely fabricated.
This seems to be a good piece of fiction, but nothing more.
The Host Identity Protocol (HIP) is the answer to this, and to many more problems. We would not need to disable dangerous devices if we had an identity-based network that allowed us to whitelist the nodes permitted to interact with our devices. HIP would also allow mobility to work properly, and would render the network too opaque to spy upon efficiently. The Internet is incomplete: it is missing an identity layer ... HIP!
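To make the whitelisting idea concrete, here is a toy sketch of identity-based admission control in the spirit of HIP (RFC 7401): peers are identified by a tag derived from their public key, and a device only accepts traffic from tags on its whitelist. This is not the real HIP wire protocol or HIT derivation; the key material and class names are placeholders for illustration only.

```python
import hashlib

def host_identity_tag(public_key: bytes) -> str:
    """Derive a short identity tag from a peer's public key.

    Real HIP derives a 128-bit Host Identity Tag (HIT) from the
    Host Identity's public key; here we simply truncate a SHA-256
    digest to keep the sketch self-contained.
    """
    return hashlib.sha256(public_key).hexdigest()[:32]

class IdentityFirewall:
    """Admit peers by identity, not by IP address (hypothetical helper)."""

    def __init__(self) -> None:
        self.whitelist: set[str] = set()

    def allow(self, public_key: bytes) -> None:
        # Whitelist the identity, so the peer stays admitted even if
        # its IP address changes (this is what enables mobility).
        self.whitelist.add(host_identity_tag(public_key))

    def admits(self, public_key: bytes) -> bool:
        return host_identity_tag(public_key) in self.whitelist

fw = IdentityFirewall()
fw.allow(b"owner-laptop-public-key")       # placeholder key bytes
print(fw.admits(b"owner-laptop-public-key"))    # prints True
print(fw.admits(b"random-scanner-public-key"))  # prints False
```

Under this model, a random internet scanner never gets past the identity check, so a device with weak default credentials would not be reachable by strangers in the first place.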