I'm actually working on something based on this idea - I've created a SIM card for the Internet of Things - basically a smartcard (microSD form factor) that offers a persistent crypto-secure identity for Internet-connected objects.
It works by holding the Tor hidden service private key inside a tamper-resistant smartcard and generating/signing hidden service descriptors with it (plus an entire mechanism to prevent you from pre-signing future descriptors - meaning that the identity is closely tied to the physical SIM card, just like in the GSM world - whoever has the SIM has the identity; once you've removed the SIM, the identity goes with it).
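To make the interface concrete, here's a toy sketch of the idea (all names invented, and HMAC-SHA256 stands in for the card's real asymmetric signing primitive purely for illustration - the actual device would use the hidden service's keypair):

```python
import hashlib
import hmac

class SmartcardSigner:
    """Hypothetical host-side view of the card: the key lives only inside
    this object, and the host can only ask for signatures over the current
    descriptor it presents. There is no API to export the key or to
    pre-sign future descriptors, so removing the card removes the ability
    to keep publishing the identity."""

    def __init__(self, key: bytes):
        self._key = key  # in the real device, never leaves tamper-resistant hardware

    def sign_descriptor(self, descriptor: bytes) -> bytes:
        # Stand-in signing primitive; a real card would produce an
        # asymmetric signature verifiable with the service's public key.
        return hmac.new(self._key, descriptor, hashlib.sha256).digest()

card = SmartcardSigner(b"secret-key-inside-the-card")
descriptor = b"hidden-service-descriptor-v1"
sig = card.sign_descriptor(descriptor)
```

The point of the sketch is the shape of the API, not the crypto: the host holds a signing oracle for "now", never the key itself.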
I work for a large IoT PaaS provider. Would this be valuable for customers on our platform if we supported something like this? I'm trying to think through all the problems with this and while I think the redesign of our platform will actually make this harder, there's no reason it wouldn't be possible.
I also don't really understand how Tor works. Does this break anything like persistent connections, or screw up anything when using UDP over longer time spans? (My other pet project is CoAP.)
I don't actually work on the products team, but I bet I could fight for something like this. (Plus figuring this out would actually be fun rather than the tedium that my real job has become.)
Supporting Tor is a niche thing, but... it would be very very helpful if everything in the IoT could be persuaded to go through a customer-controlled local gateway, which could proxy, rate-limit, redirect, and firewall connections according to the owner's policy.
Every device that reaches back to the mothership with a proprietary protocol is another device which gets discarded when the mothership loses interest in supporting it, and probably has major holes in it as well.
What do you propose manufacturers do? (This is a serious question.) Customers aren't withholding money from products, so there isn't much in the way of economic incentive. Having and supporting open protocols is more work than internal-only ones. Oftentimes the mothership is using other products to provide the service which are proprietary and paid for, so nothing is free (as in beer) or free (as in speech). "Do more work for no additional revenue" is a very hard sell!
I desperately want LAN SSL for a product I work on. It isn't simple, but there is a known approach (Plex do it) and it would be great for letsencrypt to solve. You can see my last plea from just over a month ago in HN comments: https://news.ycombinator.com/item?id=11956088
I think part of the problem here is that everything is so new, relatively speaking, that almost no company can actually trade on their reputation. If today a company was able to advertise on having sold internet connected devices for a decade and their earlier devices still worked well in some manner, then people might actually use that in their purchase criteria. Who can say that as of right now?
Compare this to other industries. For cars, you have companies with varying levels of reputation (and that ebbs and flows over time). A big part of this is that car companies (and dealerships) aren't continually forcing you into using their related services. Oh, they'll encourage you and try to set up situations where it's strongly encouraged, but they know that many people would refuse to buy if they had no choice but to service their vehicle at the dealership they bought it from. What do you think would happen if Honda decided that for the next model year all fasteners were going to be non-standard sizes with proprietary heads (think locking wheel nuts for everything)? After the first few horror stories, Honda would deservedly start getting a reputation for being much harder to deal with, and for lots of hidden costs.
Right now we are moving towards a situation where all the IoT companies work like this, but they are all so new, and changing so often, that there's no benefit to a good reputation because there's no associated history to go with it.
What we need are standards. Google and other major video platforms could help with part of this. Adopt a GitHub-type model for YouTube or some service like it, where it works as it does now, but if you want a private device-driven stream of video to be accepted, there's a standard for that, and you can pay a small monthly fee (or it's free; the standard is the important part). Similarly, a standard for how to change settings on devices would be useful. HTML obviously works, but being so general, it may not be the best solution for what amounts to simple needs. A REST endpoint that supports a few well-defined routes for discovery of routes and descriptions of them, as well as a route to return the entire configuration for backup, might work better, especially as it could easily be embedded in a larger dashboard for other devices (such as a "home" dashboard).
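To make the discovery idea concrete, here's a minimal sketch (the route names and config fields are invented, not any real standard): a device exposes a route table with human-readable descriptions, plus one route that dumps the whole configuration for backup.

```python
import json

# Invented route table: a dashboard can enumerate these without any
# prior knowledge of what kind of device it is talking to.
ROUTES = {
    "/routes": "List all supported routes with descriptions",
    "/config": "Return the entire device configuration as JSON, for backup",
    "/status": "Current device status summary",
}

# Invented example configuration for a hypothetical smart lamp.
CONFIG = {"name": "living-room-lamp", "brightness": 80, "schedule": "sunset-23:00"}

def handle(path: str) -> str:
    """Dispatch a GET-style request against the route table."""
    if path == "/routes":
        return json.dumps(ROUTES)
    if path == "/config":
        return json.dumps(CONFIG)
    if path == "/status":
        return json.dumps({"ok": True})
    return json.dumps({"error": "unknown route"})

# Discovery first, then backup - no device-specific client code needed:
available = json.loads(handle("/routes"))
backup = json.loads(handle("/config"))
```

The same handler could sit behind any small embedded HTTP server; the interesting part is only that discovery and backup are uniform across devices.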
> A REST endpoint that supports a few well defined routes for discovery of routes and descriptions of them, as well as a route to return the entire configuration for backup, might work better
We called that SNMP. Unfortunately, it was embraced and extended to death. It wasn't a bad idea though.
The trick is, how do you get everyone to play fair, when there's incentive not to?
By making the ecosystem useful enough that the benefits of doing it your own way are often outweighed by the benefits of conforming to a standard. If I have a nice dashboard that ties together all the items in my house, giving me easy access to them all, nice status info at a glance, and summaries of usage and/or problems, any device I want to buy that doesn't tie into that nicely is going to really have to justify its existence through some really nice features.
As for who would buy into a system like this initially? Lots of Asian manufacturers that aren't interested in being software companies, and would rather give you some standard firmware that you can tie into some other management console or replace with what you want.
I agree it often does. That's not to say that concerted effort can't make a change though. The internet itself works this way, partially because there wasn't a lot of concerted commercial interest initially, and partially because there existed standards, and that's cheaper.
It's hard for any one company to compete with Google or Apple when they decide to make their own standard and it's semi (or totally) proprietary, but then there's hardware and manufacturing it's harder to be totally dominant. If there's hundreds of companies that all do it a standard way, there's a lot less use in being the one company that's not compliant. There just needs to be something for all those other companies to target.
That's not to say it would succeed, but if there's a chance, I think it's worth it to try.
Cars are also relatively expensive. IoT products seem more in the disposable and non-essential space. You can always just throw one away without much inconvenience, so why put a lot of effort into the purchase process? Heck, people don't even seem to care much about cell phones and keeping them up to date (iPhone and Nexus aside). And cell phones are far more important than lights etc.
As for the last paragraph, perhaps certification instead of standards? The equivalent of UL (Underwriters Laboratory) would work, but it would need to suit the nature, prices and timelines of these products. At least consumers would then know to look for the logo.
While there could be conventions for REST endpoints, I'm sceptical they would be of any use. The devices by their nature are special purpose (rather than general) so there would be a lot more domain knowledge needed to make sense of them. Additionally making them robust against general clients versus only in-house ones is a lot more work. The general ones would attempt to do nonsensical or potentially even harmful settings, and so would require more engineering spend to protect against.
> Heck people don't even seem to care much about cell phones and keeping them up to date (iPhone and Nexus aside).
I think that's inherently a feature of them being too closely linked with the companies that provide them. Phones are the extreme case of what IoT devices could become. Once the phone OS is no longer supported by the provider, keeping it is both a liability and increasingly problematic as things stop working. Compare to old Nokia dumb brick phones. People still use those.
While the close control over phones has yielded some advantages (a boost to their markets and security, for one), a phone is generally much more complex than the average IoT device, and is closer to a general purpose device.
> As for the last paragraph, perhaps certification instead of standards?
Certification doesn't prevent a service from going away a year later. We need a way to hook the device up to another service. If it's a standard, there's nothing to prevent an open source product that you can install on your own server, local or remote.
> While there could be conventions for REST endpoints, I'm sceptical they would be of any use. The devices by their nature are special purpose (rather than general) so there would be a lot more domain knowledge needed to make sense of them.
I would be interested in exploring some specific examples. What, besides aggregation of audio, video, or stats, and configuration options is there that would be hard to support? Also, I'm not saying we can't or shouldn't have specialized services where it makes sense, just that we should have open standards that companies can work to implement if they choose.
> Additionally making them robust against general clients versus only in-house ones is a lot more work.
They always need to be robust against general clients. Allowing companies to think they can get away with ignoring this will be very problematic in the future. Having a well defined standard and likely off-the-shelf software to help implement it will probably do quite a bit towards making things more secure, not less. See home routers and the use of openWRT, Tomato, etc. I think this would more than likely raise the lowest common denominator when it comes to security of IoT devices.
Note that certification could also include verifying there is a plan in place for whatever period of support is reasonable. (We could dig deeper on this.)
Configuration options are hard to support. Say for example a "power level" setting is exposed (how much power is consumed to provide the device functionality). But it is also the case that more power shortens the lifespan of the device. A general client can end up setting that to 100% but won't know about the consequent lifespan issues, or too low a level results in half the functionality being switched off. By robustness I don't mean malformed inputs etc, but rather what is done with valid values. A general client will be oblivious, but an in-house client could present only part of the power range (the rest behind an "advanced" section). Settings will also interact with each other in device-specific ways (eg power would affect maximum volume for an audio device). Pretty soon you'll need a complicated, highly featured description language for the settings, interactions, cautions, effects etc. Also, would you want to take support calls if external tools have been modifying the configuration, given that a single support call costs the same or more than the profit on the device?
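A toy illustration of that point (the schema and field names are invented, not a proposal): valid values can still be bad values, and settings interact, so a generic client needs machine-readable metadata that an in-house client gets for free.

```python
# Invented per-setting metadata a generic client would need to avoid
# doing harm with perfectly "valid" values.
SCHEMA = {
    "power": {
        "range": (0, 100),
        "safe_range": (10, 80),  # outside this: lifespan or functionality caveats
        "caution": "Above 80% shortens device lifespan; below 10% disables features",
    },
    "volume": {
        "range": (0, 100),
        "depends_on": "power",   # interaction: max volume is bounded by power
    },
}

def validate(settings: dict) -> list:
    """Return the warnings a generic client would otherwise never see."""
    warnings = []
    power = settings.get("power", 50)
    lo, hi = SCHEMA["power"]["safe_range"]
    if not (lo <= power <= hi):
        warnings.append(SCHEMA["power"]["caution"])
    # Cross-setting interaction check: volume bounded by the power budget.
    if settings.get("volume", 0) > power:
        warnings.append("volume limited by current power setting")
    return warnings
```

Even this two-setting toy needs safe ranges, caution text, and an interaction rule; multiply by dozens of settings and you're designing a description language.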
That all said, I see nothing wrong with someone publishing and maintaining a set of best practises. IMHO it is the route most likely to get traction and improvement in devices.
I think any certification that requires a plan be in place for future support is going to be either too weak (if the company goes under, support is gone) or too burdensome (requiring some trust be in place for future support should the company go under is very inefficient). Just allow more control by the user. This lowers barriers to entry for the market, and allows for interesting and unique future uses.
As for non-standard methods of configuration and support, I see no problem with requiring your own software if you want it to be supported, and requiring it be set to default levels (or a verification that no values are outside norms) before a support incident is allowed. You want to permanently log whether any settings have been set that void your warranty? Go ahead. This is a solved problem. Try to get Apple to fix your phone if the little water sensor inside has been tripped.
I see no problem certifying hardware, I'm just not sure how something like UL for software works in practice.
We likely have different ideas of what constitutes future support. You seem to want the A+ package. I think it is sufficient that consumers get a pro-rated amount of their money back, depending on time since purchase. That can be done similar to insurance or escrow. (Although that is still a form of trust for future support.) If you buy something for $100 and 24 months later they pull service, should you get back $10 or $100? Only the smaller end seems viable to me.
> Just allow more control by the user
Given infinite people, time and money that is easy. I can't see any way of it being practical though. It is considerably more engineering effort, considerably more support effort (all that stuff has to be documented, tested and revised), will increase time to market etc. The only winners are those folks still using the device when service is pulled!
I'm sceptical that generic third party configuration could be reasonably done well. Is there any example of something similar? SNMP was mentioned as a failure by another commenter.
Certifying the software could be done like some other certification is done:
* Publish the list of best practises (eg authentication, update mechanisms, how they participate in the network, long term service)
* Have auditors (yeah they would need to be certified too) that will come in and examine a product/code base/service and give their blessing for relevant best practises. I'd imagine some open source libraries would end up certified too (eg an updater, or authentication) so a quicker path to product certification would be use already certified components.
* Award the certification based on auditor report
Yes it could be gamed, there would be wiggle room, bad actors etc. But it would be a lot better than we have now, and a feedback loop would help improve.
BTW I am working on a product somewhat in the space. I'd love to make it completely open, available for ever, plugin your own servers etc. But it isn't remotely viable for people, time and money reasons.
I am not by any means an engineer, so you can most definitely take what I say with a grain of salt. But my understanding is that all that really needs to happen for IoT devices to be usable by a third party is for the manufacturer to simply publish the API and provide a mechanism for the end user to control what interfaces with the API. It doesn't actually require you to standardise the API, as you have never guaranteed compatibility with any third party; it simply requires documentation, which would have to already exist internally for development purposes, and on some level some loss of control. But surely, this way you then get to maintain control and development for as long as you want to support it, and those people who want to support it themselves after you have abandoned it can, as the API is a known entity. I am probably fundamentally misunderstanding an aspect of the development of this somewhere, but this does not seem that burdensome to me. I don't even see the necessity of a standard that needs maintaining here, just documentation of the API and a mechanism to change where the data is sent. Hell, if you want to maintain control for the life of the device, fine, but what is so hard about releasing that information when you no longer want anything to do with it? What possible commercial value does that information hold then?
> ... for the manufacturer to simply publish the API ...
That is more work than not publishing the API. How much more? I'd pick stats from The Mythical Man-Month, where they claimed that doing something for external consumption is nine times the effort of doing it for internal use only. Admittedly it does predate AWS, containers etc, but then again those aren't that useful for IoT.
Ok, that makes some sense, but why not just provide the API and a mechanism to change where data is sent after you have decided not to support the device anymore? Surely that is not too burdensome, and it would give someone out there the option to continue using the device (and probably save some goodwill over the dropping of support)?
No, I'm fully on board with zero support, given that's communicated up front. Let the market sort out whether people think that's a good idea. I care less about certifying devices to some standard level of quality, mainly because we're so far away from knowing what that is, than I am in opening up the ecosystem to fight planned (and negligent) obsolescence.
> I'm sceptical that generic third party configuration could be reasonably done well. ... SNMP was mentioned as a failure by another commenter.
SNMP is still used quite heavily in network devices. It's painful because it's complicated, but it's still one of the standards you can rely on any enterprise network device to support (as I understand it; I've dealt with that less in the last five years or so), at least for reading information. In that respect, it's a success story. That said, I imagine the average network device in that category might be an order of magnitude harder to deal with configuring (physical connector options, link-level options, protocol options, route control and propagation, access lists, etc).
> Is there any example of something similar?
IP, TCP, UDP, POP3, IMAP, HTTP. The core protocols started as free standards; for mail, compare to Exchange. Exchange persists due to its advanced capabilities and tight integration with other Microsoft products, but it's not what ended up being used for integration in the current generation of mail programs. They all support IMAP and/or POP3, because why write your own tools to migrate mail or provide third-party access when a suitable protocol and suite of tools already exist?
> BTW I am working on a product somewhat in the space. I'd love to make it completely open, available for ever, plugin your own servers etc. But it isn't remotely viable for people, time and money reasons.
Yeah, and I'm not arguing that you should do this. But if a standard existed, and there were free or cheap libraries that did most of the heavy lifting for you, it's possible that would make it the more viable choice for those same reasons. E.g. a lightweight HTTP server with a route engine to hook into your own libraries, or even to just write to a config file and HUP your daemon.
How about positives too, like "uses best practises for updates", "uses appropriate authentication and authorisation for its function", "uses a reasonable amount of power" etc
> Similarly, a standard for how to change settings for devices would be useful.
I wonder if OPC UA could serve this role. It's something I only discovered recently due to a work project, but it seems pretty relevant to IoT and also is used in the industry (as in, industrial industry, not the shiny startup cloud party).
I am imagining a world where a local gateway isn't even needed thanks to Tor's decentralized (and secure) ability to do NAT traversal. You could make your phone the hub.
If IoT accessories connected directly to Tor, you could go through a pairing process once on your phone (via Bluetooth or whatnot), and access the device from anywhere in the world – no hub and no mothership required.
I'm not sure what you mean by "persuaded", since unless the IoT device has its own cellular connection, it can't prevent being passed through a consumer-controlled device. Do you have such a device right now that is being bypassed? Because if the answer is no, that's your real problem.
The problem with IoT is that the 'I' expands to "Internet" instead of "Intranet". A device should not be rendered useless because it can't connect to manufacturer's servers. The cloud is a good value-add option, but should not be considered a primary and required element.
The other thing is that most of those devices are gimmicks, toys - a good device intended to be useful should embrace interoperability - devices working alone have only a fraction of the potential of devices working together[0].
As for bypassing your device, at some point a "clever" entrepreneur will discover SSL and certificate pinning, and then you'll be SOL.
[0] - that's why e.g. I recently shelled out and got myself Hues. It's not the cheapest option, but it's reliable, works perfectly well over LAN, has a decent API exposing pretty much all possible functionality and then some over said LAN, no cloud registration or other bullshit. Also I kind of trust Philips not to burn my house down with crappy manufacturing.
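For anyone unfamiliar with the pinning point above, here's a toy sketch of what certificate pinning amounts to (the certificate bytes are fake placeholders; a real device would pin the SHA-256 fingerprint of the mothership's actual DER-encoded certificate):

```python
import hashlib

def fingerprint(cert_der: bytes) -> str:
    """SHA-256 fingerprint of a (here: fake) DER-encoded certificate."""
    return hashlib.sha256(cert_der).hexdigest()

# Baked into the device firmware at manufacture time.
PINNED = fingerprint(b"fake-DER-certificate-bytes")

def connection_allowed(presented_cert_der: bytes) -> bool:
    """The device refuses any TLS peer whose cert doesn't match the pin,
    so a local gateway can't transparently MITM it, even with a CA cert
    the gateway's owner installed."""
    return fingerprint(presented_cert_der) == PINNED

legit = connection_allowed(b"fake-DER-certificate-bytes")
mitm = connection_allowed(b"gateway-substituted-certificate")
```

That's the whole trick: once the expected fingerprint ships in firmware, the owner's gateway is locked out along with actual attackers.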
Sure, but with any device that's doing security correctly right now, the only thing you get to control is whether you let it connect to the service it wants to use or not. It's not like you can re-target its connection and choose which types of calls make it through and which don't. And at that point you might as well just choose not to plug it in to the network (or give it your wifi credentials or whatever).
> but... it would be very very helpful if everything in the IoT could be persuaded to go through a customer-controlled local gateway, which could proxy, rate-limit, redirect, and firewall connections according to the owner's policy.
They all already do? What IoT devices supply their own internet connection?
> "The following subscription-only content has been made available to you by an LWN subscriber..."
You're probably not the only one who missed this. This relates to another active story on HN now regarding LWN. Maybe this should be a clear in-your-face-but-please-not-modal-clickthrough header similar to Wikipedia's pledge drive style? LWN, take note: some of your guest readers don't know they're guests and might be willing to subscribe or contribute if it were more obvious.
Yes! This is a subscriber link, which can be used to share articles with non-subscribers.
My understanding is that as long as it's not systematically abused to avoid subscribing, sharing is encouraged.
Yes, LWN has a feature that lets subscribers share a subscriber-only article with non-subscribers. I think there should be a message on the page along the lines of "A subscriber has made this available to you, would you like to subscribe?"
There is a message to that effect when non-subscribers read a sublinked article like this. Sometimes with a trial offer. The occasional (occasional!) posting of subscriber links is, I think, one of the best marketing tools we have.
Jon, I've been looking into a lot of elements of information goods, and there are a few bits I'm coming to realise, slowly (it's only been 30 years I've been studying this).
1. Information is a public good: nonrivalrous, and only excludable with great difficulty. Strong positive externalities. Hal Varian's got a good piece on this. Doesn't mean you cannot sell it, but it means that doing so exclusively has serious negative effects.
2. Free-sample giveaways are almost always an excellent idea. John Dvorak's recent "Whatever Happened to Wordstar" had an excellent illustration of this:
Worse, in 1985, the company produced Wordstar2000, a copy protected program that was nothing like the older lovable Wordstar and which contained annoying copy-protection features that scared most users away. While many pundits including Esther Dyson predicted great things for Wordstar2000, users rejected it. The product was big and slow and expensive. And despite complaints by the company and others, people wanted software they could copy and use on more than one machine. During this era piracy sold software and created market share. People would use a bootleg copy of Wordstar and eventually buy a copy. Wordstar may have been the most pirated software in the world, which in many ways accounted for its success. (Software companies don’t like to admit to this as a possibility.) Books for Wordstar sold like hot cakes and the authors knew they were selling documentation for pirated copies of Wordstar. The company itself should have just sold the documentation alone to increase sales.
3. Corporate and foundation grants and sponsorships or support may be an option. I've been following the Rockefeller Foundation's "100 and Change" program with some interest -- the application process is structured to also seek projects benefiting from other forms of support.
> It works by holding the Tor hidden service private key inside a tamper-resistant smartcard and generating/signing hidden service descriptors with it (plus an entire mechanism to prevent you from pre-signing future descriptors - meaning that the identity is closely tied to the physical SIM card, just like in the GSM world - whoever has the SIM has the identity; once you've removed the SIM, the identity goes with it).
Discussion at https://lists.torproject.org/pipermail/tor-dev/2016-June/011... - I've made progress since that post, it's now fully operational and tested, just need to figure out a way to turn it into a product and get some funding for it.