Hacker News
The Internet of Incompatible Things (mjg59.dreamwidth.org)
156 points by simoncion on Sept 22, 2015 | 94 comments



I have the same issue with chat protocols. Some of my friends are on iMessage, some on Hangouts, some on Whatsapp. None of these talk to each other and I'd really just rather use TextSecure.

We had a standard for chat, it's called XMPP, and before that we had IRC. I know they're all non-optimal for mobile use but some goddamn standards (tm) would make my life easier.

I can't believe we were better off in the heyday of Trillian - at least all my contact lists were in one program!

Maybe I should go back to SMS.

Now, on home automation. I came to some of the same conclusions as the author - we're screwed when it comes to being at the behest of big companies and proprietary protocols to control devices we should own.

I just dread having to write yet another home automation framework when I do finally automate my doors, windows and lights. Now where did I put my keys again?


Ugh, this is the bane of my life. We use Basecamp, HipChat, Skype, Skype for Business, email and Google Hangouts all at once. My kids use Hangouts, WhatsApp, and Facebook Messenger.

None of them works properly. None of them has everyone on it. None of them adds any value to communication. All of them throw ten minutes to the wind just finding everyone to start with. I've missed 20 minutes of many meetings dealing with audio problems, network problems and firewall problems, when we could have used a simple dial-in conference system of the kind that has been around for over 10 years.

And I'm treated like a Luddite because you have to call me, SMS me or email me?!?

I don't want this any more. After a rant yesterday about WiFi and how it sucks, I've switched back to wired ethernet. I have to suggest that there was a technological singularity around 2005 which was 'peak utility', before the proliferation of sharecropped communications, technology with false utility, high cost and pay-per-everything.

To add other things to this problem set is going to cost me more money and time of which both are finite and at least one runs out uncontrollably.


One of the things I was hoping would take off was sip: URIs - in theory you could "dial" my email address and the call would be routed to wherever I was at that moment. Texting is built in too. Goddamn standards strike again.


Yeah, that was a great idea. Until NAT, firewalls, and your average contracted-out MSP sysadmin at an SME whose idea of admin is to lock everything down and charge £300 to open a port (I have to deal with this at least once a week).


If normal Skype supported SIP it could just be bounced to Skype in that case.

Also, IPv6 FTW!


Yes that's true although I've lost count of the hours I've thrown out on Skype.

Yes IPv6 definitely FTW. We just migrated our office and two data centres to it. Life is good now. V6 firewalls are just packet filters; all our nets are in the public address space.


May I ask how you deal with multiple redundant Internet connections at your office?

It seems that you either end up with devices having two addresses, and have to deal with all the broken software that assumes a machine only ever has one address, or you use a ULA and NAT which is no better than IPv4 in many ways.


I really don't know how that works - we contracted it out.

There is a router that has an ethernet port on it. That's our internet in. Anything in front of that is ours. Anything behind it we don't care about, but there is a 100Mbit local shared ethernet and a backup FTTP line. I assume they're switched onto the same network at the peer end, as if either goes down we still have only the one address space. We have a couple of front-facing v4 addresses that handle incoming mail.

This all sits in a cabinet we don't touch owned by the provider.

The stupid thing is the LSE and fibre cables leave the building in the same pipe. Wonder how long it'll be before someone digs through it.


Thanks!


Stupid question:

Just in case you want some nets to not be globally routable: you guys are aware of the Unique Local Addressing block?


You can make a non-routable subnet if you need one by configuring a firewall/access list. Pretty much any router can do that. Plus you get way better granularity in controlling access - office-local subnet, DC-local, branch-local, country-local, whatever-local access - unlike venerable RFC1918.

But never use non-globally-unique addresses, as other commentators suggest, with wonky probabilistic allocations. There are absolutely zero benefits to that, and it will inevitably end up being a huge PITA down the line. IPv6 used to have 6-to-6 NAT and site-local addresses, but sanity won, and both were dropped from the current slew of RFCs.


Sure, you can go to all that hassle.

Or, you can use RFC4193 (AKA ULA) space, which allocates a /48 and is designed to be non-routable, unless an admin chooses to make it routable.

RFC4193 specifies a prefix generation method and lays out the probability of collision. If you join two networks together, you've a (1.81x10^-10)% chance of prefix collision. If you join ten-thousand networks together, you have a 0.0045% chance of collision. [0] If my math is right [1], this means that you'd need to join 2,500,000 networks [2] together to have a 1% chance of a prefix collision. (There's a quick sanity-check sketch after the footnotes.)

I am squarely in the "If there's a chance that it can happen, it will happen." camp. However, I'm rather okay with those odds.

[0] Either my figures are right here, or you need to take those percentages and divide them by 1000 to arrive at the correct figure.

[1] And it may not be right!

[2] Networks -each with its own /48- as opposed to hosts.
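
Since footnotes [0] and [1] invite a check, here's a minimal sketch of the RFC4193 birthday approximation in Python, assuming (per the RFC) 40 uniformly random Global ID bits:

  import math

  def ula_collision_probability(n_networks: int) -> float:
      # RFC 4193 draws 40 random bits, so there are 2**40 possible Global
      # IDs; this is the usual birthday-problem approximation over them.
      return 1 - math.exp(-n_networks**2 / 2**41)

  for n in (2, 10_000, 150_000, 2_500_000):
      print(f"{n:>9} networks -> {ula_collision_probability(n) * 100:.3e}% collision chance")

This reproduces the two-network and ten-thousand-network percentages above, but puts the 1% mark nearer 150,000 networks than 2,500,000 (collision probability grows with the square of the network count, not linearly).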


Collision probability is indeed low if you assume perfectly generated prefixes, and even in the unlikely case of a /48 collision, the /64s within it would be pretty sparse; so, again assuming a perfectly random distribution, the chance that, say, fifty /64s on one side collide with the same number from the other network is still pretty low.

But the bigger point here is that often you will need external connectivity for those ULAs, similar to what everybody used to do in the IPv4 world (i.e. NAPT, a.k.a. "NAT overload", a.k.a. "IP masquerade"), and that is not really available. Instead, IPv6 offers prefix translation (a.k.a. NPTv6, see RFC6296). So effectively that means that:

- for each LAN segment (i.e. /64) you have to have a "private /64" and a "public /64" (or the same at coarser granularity, up to /49 or even more). Twice the number of networks to configure and keep track of;

- "NAT as a security measure" argument flies out of the window, as NPTv6 is stateless.

In other words, you get all the hassle of IPv4 NAPT with none of its (meagre) benefits.


> But the bigger point here is that often you will need external connectivity for those ULAs...

The whole point of using ULA space is for giving IP addresses to machines that you don't expect to need to contact machines in the globally-routable IP ranges. [0]

From the RFC:

   This document defines an IPv6 unicast address format that is globally
   unique and is intended for local communications, usually inside of a
   site.  These addresses are not expected to be routable on the global
   Internet.
If it turns out that some of those machines actually need access to globally-routable space, then advertise a globally routable prefix to those machines and either keep advertising the ULA prefix, or -yanno-, don't. :)

Fixing this particular network planning error shouldn't be much harder than unsetting some firewall and/or routing rules that break Internet access for machines that have been assigned a globally-routable address. :)

[0] On my LAN, I advertise a ULA prefix, as well as a globally-routable prefix. Because my globally-routable prefix is not static, [1] I use the ULA prefix as an ersatz poor-man's Provider-Independent allocation. This setup would probably be somewhere between unreasonable and silly at a site that does have a static prefix assignment.

[1] Well, it's effectively static, but I don't have an SLA, so... ;)


I wonder how I do this as a home user.


I assume that you're asking "How do I advertise a ULA prefix on my LAN, as a home user?". If you're not, then the following probably won't answer your question. If you're already familiar with ULA prefix generation and IPv6 prefix advertisement, and are just trying to find out how to advertise on your home router, almost all of the following will be redundant information. (I'm happy to read the manual for your make and model of router and try to see if you can easily advertise prefixes on it. :) )

I'm not sure how much experience you have with screwing around with router settings, your level of familiarity with IP networking concepts, or even what software you're using on your router, so I'll give some background information on prefix advertisement, and describe prefix generation and how you might actually advertise a ULA prefix on a router running OpenWRT. Because the underlying software that does the work should be the same regardless of router model, reading about how it is done on OpenWRT might give you a starting point to discover how it's done -if it is at all- on your router model. I'm more than happy to clarify anything that's confusing, give additional information, or track down and read documentation for a particular router!

Background information:

Many routers use a piece of software called radvd to advertise IPv6 routes. Radvd -effectively- takes a list of network interfaces, each with a list of router advertisement options, and transmits advertisements on those interfaces. radvd has a config file whose syntax is documented here [0]. A minimal radvd config file would look like

  interface eth0
  {
    AdvSendAdvert on;
    prefix ::/64
    {
      AdvOnLink on;
      AdvAutonomous on;
    };
  };
This would instruct radvd to advertise whatever prefix had been assigned to the interface named eth0 to machines attached to eth0. Note that an interface section can have multiple prefix sections. If you have a particular prefix that you want to advertise, replace "::/64" with that prefix. The overwhelming majority of options in the radvd config file are both entirely optional, and entirely unneeded in most situations.

Now, I suspect that on most routers that support prefix advertisement, you'll never need to touch an radvd config file. Like I said, this is just background information. So, let's get to generating your ULA prefix:

Generating your ULA prefix:

Get the MAC address of a computer that you control.

Go to https://www.sixxs.net/tools/grh/ula/ and plug that MAC address into the MAC address field on the page. You'll get back a value of the form fdaa:aaaa:aaaa::/48. This is your ULA prefix.

Advertising a /48 is kind of awkward for downstream devices. You need to carve out a /64. So, take that ULA prefix, add a colon and four zeroes to it, and replace the /48 with a /64, like so: fdaa:aaaa:aaaa:0000::/64. You can now advertise this on your LAN. [1]
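
If you'd rather not paste a MAC address into a web page, here's a minimal sketch of what that page computes - the RFC4193 section 3.2.2 algorithm (SHA-1 over an NTP-format timestamp concatenated with an EUI-64, keeping the low 40 bits) - in Python. The MAC below is made up:

  import hashlib
  import struct
  import time

  def ula_prefix(mac: str) -> str:
      octets = bytearray(int(part, 16) for part in mac.split(":"))
      octets[0] ^= 0x02  # flip the universal/local bit (modified EUI-64)
      eui64 = bytes(octets[:3]) + b"\xff\xfe" + bytes(octets[3:])
      # 64-bit NTP-format timestamp: seconds since 1900, plus a fraction.
      now = time.time() + 2208988800
      stamp = struct.pack(">II", int(now) & 0xFFFFFFFF, int((now % 1) * 2**32))
      # SHA-1 the concatenation; the least-significant 40 bits are the
      # Global ID, which slots in after the fd00::/8 prefix.
      global_id = hashlib.sha1(stamp + eui64).digest()[-5:]
      return "fd{:02x}:{:02x}{:02x}:{:02x}{:02x}::/48".format(*global_id)

  print(ula_prefix("00:11:22:33:44:55"))  # something like fd3f:9c25:81ab::/48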

Advertising the prefix:

I actually have no idea what the prefix advertisement option is called on most consumer routers, or if it even exists on your router. A quick look at Belkin's and Linksys's support sites didn't turn up anything promising. I would dig around in the IPv6 settings tabs until I found something that looked promising, or had exhausted all options. [2]

If you're using a recent version of OpenWRT (That is, OpenWRT Barrier Breaker or later), /etc/config/network has a ula_prefix option in the globals section that's used for exactly this. [3][4]

If you're using a version of OpenWRT earlier than Barrier Breaker, you can tell radvd to advertise your ULA on your LAN interfaces. The prefix section of /etc/config/radvd is used for this. /etc/config/radvd is documented here [6]. You can have multiple prefix sections for each LAN interface. [7]

Now, reload the network config (Call /etc/init.d/network with either the reload or restart option, I can't remember which.), wait for a minute or two, and machines on your LAN should have a new IPv6 address in the /64 that we carved out of the ULA.

If everything's working, -optionally- register your newly-generated prefix with SixXS at the page where you generated your ULA prefix.

Feel free to ask any questions that you have. :D

[0] http://linux.die.net/man/5/radvd.conf

[1] Note that you can carve 16 bits' worth of networks (65,536 /64s) out of your /48, not just one.

[2] If you can tell me the make and model of your router, I'd be happy to try to find and read the documentation for it. :)

[3] http://wiki.openwrt.org/doc/uci/network

[4] Though, to be frank, I can't remember if you plug in the /48 and it automatically creates /64s for each LAN interface that you have configured [5], or if you can only plug a single /64 into it. I would try the /48 first.

[5] OpenWRT lets you set up multiple LAN interfaces. I found this useful when I was using VLANs to make isolated WiFi guest networks.

[6] http://wiki.openwrt.org/doc/uci/radvd

[7] So, if you have IPv6 service from your ISP, you can advertise the ISP-provided prefix and your ULA prefix. Have one prefix section that contains no prefix option, and a second prefix section that contains a prefix option whose value is the /64 that you created earlier.
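
To make footnote [7] concrete: in raw radvd.conf terms (the UCI file documented at [6] mirrors this structure), that two-prefix setup would look something like the following. The interface name and the ULA are placeholders:

  interface eth0
  {
    AdvSendAdvert on;
    # ::/64 means "advertise whatever prefix is assigned to the
    # interface" - i.e. the ISP-provided one.
    prefix ::/64
    {
      AdvOnLink on;
      AdvAutonomous on;
    };
    # The /64 we carved out of the ULA earlier.
    prefix fdaa:aaaa:aaaa:0000::/64
    {
      AdvOnLink on;
      AdvAutonomous on;
    };
  };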


Yes, aware of that. There are some gotchas with it, particularly address-space collisions between two sites with a tunnel between them. I've never seen one in v6, but one of our clients had a 10.0.0.0/8 and so did we, and I don't want to get into that state of affairs again. I'd only use that for totally airgapped networks.


There's a mechanism for randomly generating a /48 with a low probability of collision. [0] SixXS maintains a list of "allocated" prefixes, [1][2] but who knows how complete it is?

Edit: Also, you might be interested in reading about the NETMAP iptables/ip6tables target. See iptables-extensions(8) for more info.

[0] https://tools.ietf.org/html/rfc4193#section-3.2

[1] https://www.sixxs.net/tools/grh/ula/

[2] https://www.sixxs.net/tools/grh/ula/list/


You seem to know a lot about this so I thought I'd ask: is there a good book getting into this level of detail about IPv6?


I'm sure that such a book exists, but I have no idea what it is. I've picked up what I know by reading RFCs, Wikipedia pages, and the like and experimenting on my LAN.

It's been quite some time, and the details are hazy, but I remember the process of getting a tunnelbroker.net tunnel configured, and the subsequent (totally optional) series of knowledge tests required to unblock IRC over the tunnel taught me a fair bit about IPv6 and some about (reverse DNS? forward DNS?) glue records.

Figuring out the meaning of all of the options available in an radvd config file was also rather educational. (20->40% of the educational value was in reading about things that were tangentially related to whatever the radvd configuration item was.)


UPnP/NAT-PMP pretty much solves the NAT problem, for the home user. :)

I'm having difficulty parsing your second sentence. So, I'll say this: If you're a tech business and can't cost-effectively manage the settings on your border equipment, you're doing something really wrong. :)


Sorry, my bad. You got the point though. You'd be surprised how many messes there are out there. I tripped over a whole network of unpatched Windows XP SP1 machines the other day.


I'm not sure I got the point. Were you trying to say that before NAT, businesses contracted out to vaguely-competent third-parties who charged an arm and a leg for changes to firewall configuration?


No, I'm saying that they still do that today and always have done. UPnP is always off. And that's £300 and a service case opened to turn it on. Then a 48-hour turnaround.

MSP=Managed Service Provider. Basically rip off merchants who charge you for everything because you are trying to run a business with the tech rather than run an operations team.


It seems like you're saying that Technology has passed the "Second Watershed".

http://www.preservenet.com/theory/Illich/IllichTools.html

"Watershed (figurative definition, as used by Illich): A point in the historical process when a particular “tool” creates a quantum shift in society, like the sudden application of gravity feeding water into a terrain basin (literal watershed)."[0]

[0] http://www.democraticunderground.com/discuss/duboard.php?az=...


WiFi is a great win, I think. The problem is just that we have allowed ourselves to think it is a complete solution - that everything should be wireless, that wired has no advantages. In reality WiFi and ethernet complement each other, each with their own strengths.

In particular, I've found a 10Mb LAN outperforms a 1000Mb WiFi for networked filesystem access.


Might your issue with WiFi be more specific to your setup than a problem with WiFi itself? I replied to your comment yesterday, but just how noisy can WiFi get anyway? There's a finite number of people who can live within 50 metres of our homes, so there's a limit to how noisy it can be. I'm saying this because I have no issues with WiFi in a cluttered inner-city suburb full of WiFi. I just got myself a good WiFi modem router, and it's smooth with no drop-outs for multiple users in the house. Nobody expects super-reliable HD video to stream through without ever glitching, but it actually does a good job of that anyway with Netflix, YouTube and Chromecast, and other methods like iPads.

I would suggest getting a better router, and making sure other factors are set correctly to maximise signal strength, rather than blaming WiFi. Location of the router, whether you use a range extender, and so on.


I've had inSSIDer running, channel-hopped, and tried three different routers (Netgear, RouterBoard, Asus) to no avail. I've tried several client devices too, from high-end Centrino, Apple and Motorola kit.

I don't stream any media, just do file copies. It sucks whatever I try.


> Maybe I should go back to SMS.

In my experience, SMS and Hangouts are equally unreliable messaging protocols. Messages sent through either transport can be delayed for arbitrary lengths of time or lost entirely.

Everything that's happened with Hangouts that's unrelated to video calling just makes me so sad.


I tried Hangouts v1 and more or less instantly backed out. I was having none of that.

Not for IM nor for SMS. After that Google Talk (and thus XMPP) was dead to me.

For SMS I could at least revert to the stock AOSP messaging app.


Google Talk is still alive, well, and performing bi-directional message delivery between XMPP clients with a Google Talk account and Hangouts clients. (Or maybe it's alive only because a Google VP hasn't noticed that it's still up and running. shrug)

Any client that speaks XMPP should work just fine to reach into Google's IM silo for direct messaging. I make frequent, daily use of Pidgin for this purpose. Sadly, the only way to access group chats is through the official client.

Also: If you haven't already, check out TextSecure by Open Whisper Systems. It has the best group-SMS handling of any MMS client I've tried. It's also made and maintained by a group of folks who are really concerned with making secure software that also happens to be as easy to use as is possible.


This is the reason I am pushing myself towards simpler solutions, even if I end up losing some functionality/connections out of it. The added cognitive load of dealing with managing all of these services is becoming really painful.

Yes, I may not have push via the cloud to my latest iGizmo, but at least I don't have to worry that I forgot an important calendar, todo, file, or chat message on one of 5 services that mix and match between what they offer.

Org-mode can run calendar, todo, notes, habits, journal, etc.

Chat is where I still haven't found a solution. Switching to XMPP/SMS semi-exclusively is probably the next step.


I would like that, but the thing with calendar, todos, etc, is that syncing with my iGizmo is what makes them most useful for me. That iGizmo is on my person at all times. At the doctor's office and need to make an appointment or talking to a contractor about getting away during the day to meet them at home? Calendar is right there so I won't schedule over a meeting or other engagement. Grocery shopping? My shopping list is there, no way I'd forget it at home. The syncing is precisely what I need it to do to be useful.

Before my iGizmo I carried around a notebook, but that was larger, more cumbersome and had less flexibility (I can add a grocery list item from my PC in the kitchen or even input it by voice if I realize I need something).

That means that for some of us, a better solution rather than a simpler one is what we desperately hope for.


Hey, at least we've finally gotten to the point where I can reliably guess that probably 80+% of the android phones I encounter will have a microUSB plug!!


You can thank EU regulations for that.


Why is your fallback SMS and not e-mail?


Most of my non-programmer friends don't check their email addresses, if they maintain one at all. They use Facebook instead.


Email is the same as Voicemail: ignored, neglected, filled with spam more often than real messages and on the way out.


SMS is also filled with spam these days. To be honest what medium is not filled with spam/ads?


SMS is filled with spam? That's honestly news to me - I think I've gotten one spam SMS in the last two years? I get a lot more spam calls, though Google Voice filters them for me now.


If your SMS is filled with spam, then you need to talk to your carrier and get a new number.


It's really too bad that the mythical man-month is really mythical, otherwise everybody here on HN could pitch in 5 minutes and make a successor to XMPP that is optimal for mobile...


I'm rolling out a test IoT network at the university. Instead of going with some XYZ expensive setup, I'm rolling my own using Arduino (clone) Nanos, sensors, and nRF24L01+ radios with the MySensors library. On the acquisition end, I'm using Node-Red with the MySensors en/decoder.

I have Node-Red generating dynamic HTML and dumping it in /var/www/html, with Apache serving it up. All I'm waiting for now is 30 more setups for testing purposes.

(The nRF24L01+ can form a self-healing mesh network, which, given the weird shape of our building, makes it ideal. And a small change turns a sensor node into a sensor/gateway.)


This is what happens when everyone focuses on services, rather than protocols.


You can make money on a service, but not on a protocol. You're never going to build a "unicorn" company around an open protocol.

(Unless it's a patented protocol, but then people avoid it like the plague)


Not quite. This is what happens when no one gives a damn about interoperability. Whether they build new protocols (which no one else uses, because no one cares about interop) or unique services doesn't really matter.

There are many examples of non-interoperable protocols that do the same thing in a mutually incompatible manner. The most famous one, I suppose, is the GNU/Linux-surrounding ecosystem with its GUI toolkit mess and audio subsystem jungle.


There are three major ways to do audio on Linux:

* ALSA: Focusses on providing a unified API for audio driver and audio application authors to code to.

* JACK: Focusses on very low-latency audio.

* PulseAudio: Sits on top of ALSA and makes it trivial to move audio streams on-the-fly between audio output devices.

There are three major GUI toolkits:

* QT: Used if you don't hate C++.

* GTK: Used if you do hate C++.

* WxWidgets: Used if you want to always use whatever the "native" widget style is on your target platform.

If you want your audio and widgets to work on Linux, Windows and Mac, and you only want to use one toolkit, use SDL. If you want a higher-level widget toolkit than what SDL provides, use QT or GTK, or -I guess- WxWidgets.

There's neither a mess, nor a jungle here. I suspect that you've been paying way too much attention to what Poettering has to say about the state of Linux. :)


You're right - as a programmer, it's simple - there's plenty of choice. But I'm looking at this from a desktop machine user's perspective.

And I really do have a desktop with eight different, somewhat differently-looking and -behaving widget toolkits (although they try to co-operate, and it's possible to bring enough consistency to not be too bothered).

I'm not exaggerating - it's really 8 different toolkits here. Ok, I got lazy and run KDE these days, so it's primarily KDElibs+Plasma and plain Qt (which has its own differences, file dialogs being a notorious one), which are mostly consistent with each other. Then I have a few apps that use GTK2 and GTK3, a few Wine apps, too. There are also KDE3libs for kTechLab, WxWidgets used by pgAdmin3 and Swing for IntelliJ-based IDEs. There are also some old Tk apps, but I run those once in a blue moon, so they don't count. But on the other hand, LibreOffice and Firefox have their own widget sets...

I've long given up on OSS4 (which, in my personal opinion, is^W was the only Linux sound system with a KISS API) and surrendered to the ALSA+PA stack many years ago. And I don't have apps that need JACK anymore (I used to run some streaming-related software). And ESD & aRts are dead. So, no objections here, although I had quite unpleasant experiences making the ESD+JACK+ALSA combo work (that was at least a decade ago). That was before Poettering's rants - honest - although I won't deny that the "jungle" part is an obvious reference.


> But I'm looking at this from a desktop machine user's perspective.

From a desktop machine user perspective, there is no difference between the various audio libraries. So, I'm not sure why you brought them up if that is the perspective from which you were speaking.

> Then I have a few apps that use GTK2 and GTK3, a few Wine apps, too. There are also KDE3libs for kTechLab, WxWidgets used by pgAdmin3 and Swing for IntelliJ-based IDEs.

1) It's the same story on Windows. GTK, QT, Swing, WxWidgets are all cross-platform UI toolkits.

2) The look and feel of a given piece of native Windows software (and its stock dialogs) often differs drastically depending on when the software was written and whether or not someone has bothered to update its look & feel recently. I can't speak to OS X, but given how iTunes and QuickTime player's UI were so dramatically different from what I understood the Mac look & feel to be, I would be surprised if this "problem" was completely absent.

3a) There's enough variance in the way you interact with different programs that I have a hard time being sympathetic to the complaint that moving from QT's file picker to GTK's file picker to the WIN32 file picker to a modern Windows file picker is tremendously hard on the computer operator.

3b) Most of us have been required to hop into a one-ton missile whose control layout and handling characteristics are radically different from all other such missiles we've operated in the past. Despite these challenges, we almost universally have great success in operating these unfamiliar missiles at highway speed with only the most fleeting of self-generated and self-guided training programs.

Unrelated to all that: I'm finding that I'm substantially more happy with JACK than PulseAudio. I started using PA maybe around 0.9.2 or so. I was -for the longest time- SUPER excited about both transparent network audio transmission and about the ability to move streams between audio sinks on the fly. As time wore on, I discovered a few things:

* Apart from the week or so of demos to myself and friends, I never made use of network audio streaming. [0]

* I almost never had more than one audio sink attached to my computer.

* PA -on my hardware- has widely variable audio latency. (I'm 100% okay with rather large audio latency. I cannot tolerate unpredictable audio latency.)

Recently, I decided to take another look at JACK. I'd been shying away from it because it had a reputation as being really hard to configure and work with. Turns out that that reputation is undeserved! Additionally, JACK has -AFAICT- rock solid audio latency, and manages to use less CPU in the process.

[0] I make constant use of SSH's X11 forwarding, even between my desktop machine and my laptop. This is something that I would conceivably use, and thought that I would use all the time.


1&2) Yes, you're right. All popular operating systems out there are more or less plagued with the same issues. I'm not bashing GNU/Linux (or any other OS) here - I use it and it's a good system. My only point is that designing protocols (as opposed to designing apps/services) doesn't really help to get rid of inconsistencies and bring compatibility. Sometimes, yes, I bet having a well-defined protocol saved many from inventing their own. But - in my personal perception (and I surely could be wrong) - not even remotely enough.

3) They're not hard, just inconvenient to some degree. Given enough time and patience, everything can be worked around (and there are compatibility wrappers/hacks). But - honestly - it's still a mess, both on the surface and under the hood. I really don't want to even think about whether this particular file picker is unaware of the "recent documents" list of another framework. I don't want to remember that PgAdmin3 has its own peculiarities interacting with the clipboard and selection. And, back to the original article's topic, I surely don't want to ask myself whether a smart bulb would be compatible with some phone app, especially one that I may not have even encountered yet.

@Unrelated: Thanks for the advice. I don't use those features either. Although I have no complaints with PA, next time I have time and feel like messing with my system, I guess I'll give JACK a try.


> All popular operating systems out there are more or less plagued with the same issues. I'm not bashing GNU/Linux (or any other OS) here...

Except that you did. From the comment that started this sub-thread:

> There are many examples of non-interoperable protocols that do the same thing in a mutually incompatible manner. The most famous one, I suppose, is the GNU/Linux-surrounding ecosystem with its GUI toolkit mess and audio subsystem jungle. [0]

> 3) They're not hard, just inconvenient to some degree.

More inconvenient and more of a mess than operating a new and unfamiliar car? :)

[0] https://news.ycombinator.com/item?id=10257834


IoT is one of the least well thought out ideas in quite some time, right up there with the "semantic web".

Hopefully it dies a quick death and is resurrected in a more sensible and mature form years down the road. There are worthwhile ideas there, but running headlong into a world where everything has the same quality and security as a typical software project doesn't seem like a notch in the "advance of civilization" column to me.


What, specifically is poorly thought out about IoT?

Is your beef with the typically abysmal quality of software written by hardware companies, or is it something larger than that?


Is there any part of the "Internet of Things" that is well thought out? The idea of our lives becoming surrounded by hojillions of things that are constantly connected to the internet running software written by whomever with no standards, no testing, no regulation or certification, that doesn't seem so great to me. Even the best software written by the most capable engineers who continue to be dedicated to the projects they support is still a horribly bug-ridden mess with occasional enormous security holes. Even if all IoT software was written by, say, a division of Google, it would still be a nightmare. But it will be far, far worse because IoT software will be, and is being, written by anyone and everyone.

When you buy a networked device today, how can you tell what level of security it has? What level of testing it's been subjected to? Who has signed off on the design and the code? No such thing of course, it's all at the same level as the kids down the street building stuff out of their garage for kicks. And while there's some merit to that when it comes to innovation, for production, in the modern era, it's not just naive, it's either insane or idiotic.

We need more rigor, more standards, and less vulnerability surface-area when it comes to internet-connected devices. IoT is precisely the opposite of that. It's already bad enough that there are millions of zombie PCs in botnets around the globe; we don't need to add zombie toasters, garage doors, light bulbs, and what-have-you to that mess. Anyone who thinks that we can get security and quality in the IoT for free without making a concerted effort is, again, either monstrously naive, insane, or idiotic. It's 2015; that naivete is no longer defensible.


Before anything else, we need our main networked devices to survive being placed on the same network as some malign device. The current situation is a joke, and that's before one even thinks about IoT.

Having a compromised device at home should mean at most that it's tracking you with its sensors. Not that it can compromise anything you have around, and get any data you have, or that it can use your network to get anybody else.


My personal laptop, my servers, and my Linux-running home networking equipment are fairly resistant to attack. ;)

It's true that the consumer electronics security situation is a sad joke, but PCs have generally been becoming harder targets for quite some time now.

> ...or that it can use your network to get anybody else.

I assume that by this you mean "Can't use your Internet link to launch attacks against other networks."? If you do, then it is my somewhat-informed opinion that maintaining a default-deny policy for outbound communication on a home LAN is far more trouble than it's worth. I suspect that -on most home LANs- you could probably get away with denying all outbound TCP other than to ports 80 and 443, and allowing all outbound UDP. [0]

However, if your attacker is somewhat determined and even a little bit clever, he will use the Universal Firewall Bypass Protocol [1] to talk to his C&C system(s) or other malicious devices or whatever. In this situation, you will have spent more than 0 minutes on managing your firewall rules and gained nothing from the work.

[0] So that most multiplayer video games, and Bittorrent-over-UDP work just fine.

[1] That is, HTTP(S).


> Even the best software written by the most capable engineers who continue to be dedicated to the projects they support is still a horribly bug-ridden mess...

This is far from universally true. While I agree that the actual security of a system is currently nearly impossible for most purchasers to evaluate, and I agree that in far, far, far, too many cases proper system design and implementation appear to be the least important priority of commercial projects, there do exist well designed, secure, complex systems.

> Is there any part of the "Internet of Things" that is well thought out?

The part where we use this largely ubiquitous, globally accessible network to give folks the power to control and/or monitor the devices that they own from any place of their choosing?

I 110% agree with you that -for an enormous slice of the consumer electronics market- security is -at best- an afterthought. However, just for a moment(!), assume that all IoT devices are secure and only exist to serve your interests, rather than the interests of a national intelligence agency or corporate overlord. In this hypothetical scenario (that's really far from what we have today) doesn't the statement in my previous paragraph highlight a thing that -at worst- provides no benefit to society and -at best- provides a large benefit?


> > Is there any part of the "Internet of Things" that is well thought out?

> The part where we use this largely ubiquitous, globally accessible network to give folks the power to control and/or monitor the devices that they own from any place of their choosing?

I think IoT conflates a few different ideas, some of which are better than others:

- Remote control. Moving household controls like heating to a virtual interface would allow me to control everything from the same place, eg. my laptop (which I'm normally sat in front of anyway). It also allows meta-level controllers to be written, eg. using cron jobs, or defining a bunch of common settings like "cold night", "frugal", etc.

- Home automation. This makes control automatic, and can involve homeostatic properties, eg. thermostats keeping a constant temperature; reactive systems like motion sensors switching on lights; to constraint/rule-based scheduling of tasks and appliances (eg. the washing must be done by Friday, but use the cheapest electricity; ensure heating and cooling are never on at the same time; etc.)

- Putting it all online. I think this is the biggest problem at the moment. It's being pushed by vendors, presumably because they can gather usage data, push automatic updates, and mobile access is easier to set up with one centralised server rather than using WiFi. There are benefits to doing this, but I think they're massively outweighed by the security considerations; it would take a big technological leap from the current models to change that (eg. the widespread use of practical verification; or robust damage limitation mechanisms, like capability models; etc.)


If we had such a great pony as described in your final paragraph, then yes it would be great. But that's not what we're anywhere near getting at the moment.

And it's always worth questioning what value the "things" are providing. Home automation has been around for decades and remains a niche. Even business computerisation (other than communications and ecommerce) seems to result in surprisingly marginal productivity improvements.


> Even business computerisation (other than communications and ecommerce) seems to result in surprisingly marginal productivity improvements.

You must be joking. :) The spreadsheet is an enormous force multiplier. Email -when used for useful communication- often substantially reduces the time to close a decision making loop. You can point to telephones as a replacement for email, but telephone switches -themselves- have been computerized for many, many, many decades.

> If we had such a great pony as described in your final paragraph, then yes it would be great. But that's not what we're anywhere near getting at the moment.

You appear to be talking as if I don't understand that the state of system security in the consumer electronics space is dire. I... kinda covered that in the comment to which you replied. There was no need to qualify your "Yes.". :)

(And yes, I do notice that you're not my original conversation partner.)


I was referring to the Solow productivity paradox: computerised technology certainly feels much more efficient, but in terms of the economic variable "productivity per person-hour" the difference is not as striking as expected.

The qualified yes was really a no: I believe that the market structure can't and won't deliver open, secure, interoperable devices in the forseeable future (say 10 years).


> The qualified yes was really a no: I believe that the market structure can't and won't deliver...

My hypothetical to which you replied was: "Assume that all IoT devices are secure and only exist to serve your interests, rather than the interests of a national intelligence agency or corporate overlord. Doesn't the ability to remotely monitor and control the devices that you own from any place of your choosing bring benefit to you directly, and -directly or indirectly- to society as a whole?".

I even took pains in my comment to mention that I recognised the wide gulf between the situation in my hypothetical and the abysmal state of consumer electronics security.

Now that you've been reminded of what my hypothetical question was, is your answer an unqualified yes, or an unqualified no? Remember that market forces don't apply here; these devices are correctly designed, implemented, secure, and only serve your interests.

> I was referring to the Solow productivity paradox: computerised technology certainly feels much more efficient, but in terms of the economic variable "productivity per person-hour"..

Ah. shrug

Good software to solve complex problems is often not easy. There are clearly areas where the proper application of computerized systems either saves substantial amounts of time and effort, or lets personnel more easily collect and digest the information required to do their jobs.


But, but, I'm on a desktop running an open, secure, interoperable device right now. Just migrate this software into a device, right? The capability of devices right now is comparable to desktop computers in the 90s. And way beyond what it took to put a man on the moon. So it'll fit, soon if not now.


It's the capability of markets and corporations that is in question, not that of devices.


You know, it's been a few months since the last article about bullshit jobs here. But it is still funny how people can now discuss the productivity paradox and nobody mentions it.


There's an interesting open-source project, OpenHAB (http://www.openhab.org/), that aims to connect all these devices. There are quite a few plugins available (e.g. Philips Hue, Tesla, Sonos etc).


This was mentioned by a commenter on the OP.

The author had this to say in reply:

"[OpenHAB] provides a consistent interface, but unfortunately it doesn't provide the right consistent interface - the Echo only speaks Wemo or Hue, so I'd still need something to bridge from there to OpenHAB"


OpenHAB is great, if it's a house.

When you start to move into the area of building automation tech, we have to deal with Lutron. And that means an MSSQL database, and "pain and suffering". And there's no way to dynamically extract that data, so it's a little island all of its own.

We've been working with Citrix for quite some time. And they purchased the IoT automation company Skynet, and rebranded it Octoblu. Seems good, really good. Well, until I decided to build my own sensor net using cheap tech.

All I should really need for a sensor is an 8-bit micro, a sensor, and an nRF24L01+ board. Turns out, from China, those little boards are a whopping $.67 each. Add that to the $1.87 Arduino clone and a sensor (starting with PIR, $.82), and along with a dinky project box it comes to $5-$6 per node. The MySensors library for Arduino/nRF24L01+ works superbly. Simple API, mesh healing just works.

Well, after building a few nodes, I wanted to connect them together in a sort of dashboard that makes programming them easy. We're already working with Citrix/Octoblu, so what about that? Well, there is an MQTT gateway for MySensors, using an Arduino Ethernet module. No. I didn't want these sensors hanging directly off the internet, as our IPs are fully routable. Same with the ESP8266 gateway. I wanted something more secure than "put it on the internet".

So, I made a MySensors serial gateway. It plugs in via USB and provides a serial port carrying all the data on the network (see the sketch below). Simple and effective. From there, I have it going into a Linux machine running Node-Red. Turns out, this is what Octoblu uses. They just provided some pretty dockerizable instances and whatnot. But I have data coming in via the serial port, through a MySensors decoder block, and then the rest of the logic laid out nicely.

It's easy to see how the data flows through the system. I can control how it's logged, as well as where data is terminated. Given potential fears of data leakage, I can provably show that the flowchart doesn't leak data. (Well, aside from mathematically proving said code: we're not doing that.)
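
If anyone wants to eyeball that serial stream without Node-Red: the MySensors serial protocol is one semicolon-delimited line per message (node-id;child-sensor-id;command;ack;type;payload). Here's a minimal reader sketched in Python with pyserial - the port name is an assumption, and 115200 baud is the MySensors default:

  import serial  # pyserial

  FIELDS = ("node_id", "child_id", "command", "ack", "type", "payload")

  with serial.Serial("/dev/ttyUSB0", 115200, timeout=10) as gateway:
      while True:
          raw = gateway.readline().decode("ascii", errors="replace").strip()
          if not raw:
              continue  # read timed out; nothing to parse
          # e.g. "42;1;1;0;16;1" -> node 42, child 1, set, no ack, V_TRIPPED, tripped
          print(dict(zip(FIELDS, raw.split(";", 5))))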


It's true that OpenHAB doesn't have plugins for all possible hardware, but it does have both Wemo and Hue bindings. The Wemo/Hue 'interface' to the Echo might be a bit cumbersome to set up or use, but the idea behind OpenHAB is the way forward: one central vendor-agnostic hub that drives all connected devices. This way, the only thing that needs to be done to add support for a protocol/device is to write a driver for the 'hub'. Otherwise each device needs to interface with every protocol out there, which is obviously impossible.

I've written my own 'hub' software for just this purpose.

I know that ZWave doesn't get much love from the OS crowd because 'proprietary' and 'Zigbee is an open standard', but ZWave works quite well wrt interoperability - because the ZWave consortium does some active interop checking, and because everybody needs to use the same (debugged) chips and software, you don't have the situation of everybody writing their own slightly different implementation, which is usually the source of 'interop in theory but not in practice' problems. Which is what is happening to Zigbee now, for that handful of Zigbee devices you can actually buy.


Does the OpenHAB Hue plugin also pretend to be a Hue bridge, or does it only talk to Hue bridges? If you've used it, and know that it can pretend to be a bridge, you might want to reply to the author's request for more info. :)

> ...the idea behind OpenHAB is the way forward...

Tragically, the folks who make the hardware disagree. :(


Yeah, sorry, my post reads like 'I know it's possible'; it was more meant as 'it's theoretically possible, and if it's not possible right now it's because nobody has done it yet'. The OP sounds like it's a fundamental problem; my point was that it's just an implementation problem.

"Tragically, the folks who make the hardware disagree. :("

Not all of them do, actually most don't; but even for those who do, the good thing is that we don't need them (mostly). Most of these protocols aren't rocket science, so even those where manufacturers tried to lock it down, people have reverse engineered it (e.g. Somfy). And the good thing is that because it's hardware, vendors can't just roll out a 'patch' or break the interface in the next cycle because that'd leave their existing customers in the cold.

Look, I understand the point, and it would be great if everybody went kumbayaa around The One Perfect Standard, but the situation is not as fundamentally flawed as the OP laments.


> And the good thing is that because it's hardware, vendors can't just roll out a 'patch' or break the interface in the next cycle because that'd leave their existing customers in the cold.

Are you sure about that? I thought the whole point of IoT devices was to be attached to an IP network and to -often- be field upgradable.

> ...the situation is not as fundamentally flawed as the OP laments.

I believe you when you say that these protocols often aren't rocket science. I don't know for sure -because I've not looked at any specs-, but I would suspect that -most of the time- reason #1 for building a new "smart home" control protocol is to create a new walled garden to call your own.

The situation is rather bad. Consider the difference between the current state of email and IM:

With email, you use any IMAP or POP3 client of your choosing, input your username, password, and email server hostname, and you're done. Anyone who uses email can send an email to you using any software of their choosing at any time.

With IM, you have one scenario for reachability, and two for client choice.

Reachability:

In order for someone to contact you on any IM network that's not a federated XMPP network [0], they must first create an account on that network. Users of one network -with rare exception- cannot contact users of another network.

So, to speak with a friend, you must know their username, the network that they use, and you must have an account on that network.

Client choice:

There are two scenarios here:

1) You and your friend use an IM network built before ~2008. You get to use the official clients, or Trillian, or a libpurple-powered client. Hooray for you! All your messaging can be handled in a single piece of software!

2) You use a contemporary IM network. To speak with a friend, you have to dig up the official client for the network that your friend uses. If you have friends in multiple networks, you have to use other, entirely separate pieces of software to speak with each group of friends. Hope your device of choice makes switching between applications quick and easy. :)

With email, you get your choice of clients and can contact any other person hooked to the Internet who uses email. You know that if you send a message to a valid email address, it will eventually get there. With IM, the situation is far more complicated and uncertain.

So, to tie it back to lightbulb control protocols:

A standard, comprehensive, vendor-independent control protocol -let's call it SmartBulb- assures you a few things:

* A non-technical consumer goes out and buys any SmartBulb equipment from any store. He knows that it'll work without hassle.

* When your current lightbulb vendor goes belly up, you can purchase bulbs and controllers from any other players in the industry.

* You can switch to more capable or better designed bulbs and controller models without rendering your existing devices useless to you.

With a fragmented market, there are a few hazards:

* Folks who would reverse-engineer proprietary protocols and sell devices to interoperate with multiple vendor's hardware are opening themselves up to great uncertainty in the form of frivolous legal action. [1]

* Non-technical users must purchase equipment from a single vendor or cabal of vendors. If those folks go belly up, these users will likely have no place to go to replace their hardware when it eventually wears out.

* Somewhat-technical folks who use gratis [2] controllers that are the product of reverse-engineering still have to live with the uncertainty that future revs of products that worked well with their controller no longer do, because of changes to the protocol, or use of parts of the protocol that the reverse engineers were unaware of.

If all that is not compelling, then consider this: How would your life change if you couldn't be certain that your electronics worked in any given building? That is to say, what would you do differently if -to reliably use your electronics on the go- you had to carry around a fat sack of adaptors, and -even then- sometimes you didn't have the right adaptor on you?

There's a lot of value in having a single, comprehensive standard.

[0] These days, effectively no IM networks are federated XMPP networks. No one is sadder about this than I am.

[1] Just because it's frivolous doesn't mean that it's not expensive and time-consuming.

[2] Free-as-in-beer


From the Conditions of Use for the Hue Developer Program[1]: "We want all your apps to work with our API to form a rich ecosystem of interoperable applications, so it is a condition of access to our API documentation that you do not use it to develop or distribute any bridges or devices which interpret the hue API in order to control physical devices. Emulators are allowed provided they only control virtual bulbs"

Umm… what are they trying to say? They want interoperability, but only if it's done their way? Seems kinda elitist.

[1]: http://www.developers.meethue.com/documentation/conditions-u...


I think they're trying to say, "you can create an app that uses the hue api to control virtual lightbulbs, but you can't build software/devices that accept hue api messages that will also control your window blinds."

It makes sense: the hue api is for lightbulbs, not for window blinds or door locks. You're free to create a layer on top of it that accepts hue api messages for lightbulbs, and other messages for other things.

I think they don't want people to get your hypothetical hue-api-enabled deadbolt and mistakenly unlock their deadbolt by using the hue app to turn their lights off.

On the other end of things, you could make an app that will unlock your doors and turn the lights on, as long as the message that unlocks the doors does not conform to the hue api.
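
For reference, the API in question is small: each bulb's state is a JSON resource on the bridge, and a PUT updates it. A minimal sketch in Python; the bridge address and whitelisted username are made up:

  import requests

  BRIDGE = "http://192.168.1.2"  # hypothetical bridge address
  USERNAME = "hypothetical-whitelisted-user"

  def set_light(light_id: int, on: bool, brightness: int = 254) -> None:
      # Each light's state lives at /api/<username>/lights/<id>/state.
      url = f"{BRIDGE}/api/{USERNAME}/lights/{light_id}/state"
      requests.put(url, json={"on": on, "bri": brightness}).raise_for_status()

  set_light(1, on=True)

The conditions above are about what's allowed to sit on the receiving end of that PUT, not about the shape of the call itself.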


It's more that they're saying "You can't make a bridge that would allow a Hue app to control a non-Hue lighting system"


Succinct, but I think it's more restrictive than what they're saying - more like, "You can't make a bridge/device that would allow a Hue app to control something that isn't a lightbulb."


They say:

"...it is a condition of access to our API documentation that you do not use it to develop or distribute any bridges or devices which interpret the hue API in order to control physical devices."

A lightbulb is a physical device. It seems clear to me that they distribute this API and documentation with a license that ensures that you can only write software to work with their hardware.


The real money for IoT is not in domestic appliances, but in companies that already have networked devices. For example: logistics, city automation, large medical devices, factories, transportation and other such capital goods. These sectors can afford to keep their different vertical silos for the time being.

The domestic sector will remain screwed, and you're better off rolling your own open-source solution. Another solution would be to use a blockchain or one single database where the other appliances can see what is happening and roll their own interpretation of it.

Or we remain screwed, put an automatic light in the toilet, and are done with IoT.


This is what HomeKit[1] (and Brillo[2] from Google) is trying to fix. We're early on in IoT - things are very immature right now, but that doesn't mean the entire concept is flawed.

[1] https://developer.apple.com/homekit/

[2] https://developers.google.com/brillo/


The very nature of there being two things is a problem already.


No, it's not, it's actually great that there are literally hundreds of them. They are just the 'hubs'. Whatever 'hub' implements most 'protocol bindings' and provides the best added value on top of it, 'wins'. And if one gets behind, you can 'easily' (depending on circumstances...) switch to another without having to replace the devices.


Also, it's third-party over the internet, but IFTTT would connect all of the items mentioned in the post. LIFX, Alexa, Hue, and WeMo. It's not ideal to be dependent on a third party web application that's still in its early stages, but it is a partial solution.


IFTTT is a third-party cloud service where you give a random startup access to all of the devices in your house. Far from a good idea, and more likely an eventual security nightmare.


Sorry, "Brillo" was Google's "wait a minute, we have to own the home automation standard, we can't be compatible with others!"

Neither Google nor Apple behave themselves well enough to provide the standard in home automation.


Maybe the European government will do something similar to what they did for cellphone charging ports (requiring all manufacturers to use micro-USB). And as with the cellphone change, it would hopefully be applied mostly across the board by manufacturers (in the USA too).


That raises the question of whether or not the law will select the right standard, or be able to change it when practical.

For instance, with USB Type-C coming out, will EU law be permissive of this transition? Or will the EU requirement adapt to allow both Micro-USB and Type-C for a while, and then eventually mandate Type-C?


This is a common misconception.

The way the EU's "common external power supply" standard was implemented allowed the manufacturers to pick the standard themselves.

Essentially the EU put a gun to manufacturers' heads, and said "pick a common standard together, or we will!" And they did. So the EU never picked micro-USB, the manufacturers did.

Additionally the original memorandum of understanding has since expired (2012) but it has been deemed a "success" as all major mobile phone manufacturers continue to utilise micro-USB.

So there's no specific reason why USB-C cannot become the new normal. And it seems manufacturers have actually become used to the status quo of having a common standard (except Apple of course).

I don't think you'd see the EU get involved if all of them together moved to USB-C (from micro-USB); they would only get involved if they started to develop proprietary ports again, or all had different incompatible ports/chargers.

Now they just need to do laptops...


That's a very helpful explanation, thank you.


Our company has been working on an open-source IoT integration platform to connect services together with one single protocol that can be used to integrate devices and services. Notice that it's an integration platform, not a platform purely for building devices on. It can be used for that, but its primary purpose is integration.

http://iot-dsa.org/


So you've created this: https://xkcd.com/927/


Yep. Node-red remixed with some MQTT in a non-supported way.

yay.

As for me, I'll roll my own and stick with the Node-red + MySensors Arduino/nRF24L01+ stack. It just works, and it works now. And compatibility is easy to do: connecting to Octoblu (or any MQTT broker) is one node away, and just works.


For now: unless it uses CoAP or a REST web service, I won't consider it. Easy-to-understand, human-readable messaging keeps the integration open and is the second most important concern for IoT (security being #1), but almost everyone wants to create a platform and lock in a user/customer base. No thanks.


[flagged]


If you're going to spam on HN, at least use a real URL.


There was a time when HN was all about self-promotion. I'm surprised the comment was flagged to death, even with the broken URL.



