No U PNP (computer.rip)
129 points by zdw on Nov 27, 2021 | 36 comments



> I am actually somewhat skeptical of the security advantages of disabling UPnP for this purpose. The concern is usually that malware on a machine in the local network will use UPnP to map inbound ports...

For me the concern is not malware but crappy/insecure software proudly exposing itself on the internet.

Many IP cameras will use UPnP to make their web interface publicly accessible. Given that I had never deliberately opened any router ports, seeing a random open port on my public IP was quite surprising. If you haven't recently, check your own public IP on Shodan.io.

The cameras I've used run crappy web interfaces that almost certainly do not get security updates. If an attack on the camera succeeded, the attacker would then have access to my internal network.


They go on to say: “The point is that I would agree that it's a good idea to disable UPnP, but not because of what UPnP does, and not just on your router. Instead, it's a good idea to be very skeptical of UPnP because of defective implementations in many embedded devices, especially routers, but also all of your IoT nonsense.”


They do, but that is referring to the potential for reflected DDoS attacks, which is discussed in the previous paragraph. The referenced 'defective implementation' is that of UPnP itself. In my case, I'm also concerned with the service that is exposed.


Joke's on them, I have CGNAT!


One of the interesting points he makes is that forwarding ports with IGDP is a small part of UPnP's functionality.

By equating the two and discarding all of UPnP's other local service discovery features, for fear of IGDP specifically, perhaps we are throwing the baby out with the bathwater.


Disabling UPnP on your router does not prevent your devices from using it amongst themselves.

The only service advertised by the router is the part you don’t want anyway.


That's not entirely true; a router with a full UPnP stack would listen for and participate in SSDP discovery queries, enabling more discoverable services per host. Only one service can listen on port 1900 for DISCOVER queries, but an SSDP-capable router can cache NOTIFY messages from services that can't listen, and reply to queries for them itself with the same record.
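
For anyone who hasn't seen the wire format: an SSDP DISCOVER query is just HTTP over multicast UDP. Here's a minimal sketch in Python (stdlib only; the search target and timeouts are arbitrary illustration choices, not anything from the thread):

    import socket

    # SSDP multicast group and port, as defined by UPnP
    SSDP_ADDR = ("239.255.255.250", 1900)

    # M-SEARCH is the DISCOVER query; ST selects what to search for
    # ("ssdp:all" asks every device and service to respond)
    msg = "\r\n".join([
        "M-SEARCH * HTTP/1.1",
        "HOST: 239.255.255.250:1900",
        'MAN: "ssdp:discover"',
        "MX: 2",  # max seconds a responder may delay its reply
        "ST: ssdp:all",
        "", "",
    ]).encode()

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(3)
    sock.sendto(msg, SSDP_ADDR)

    # Each responding service unicasts an HTTP-style reply whose
    # LOCATION header points at its device description XML.
    try:
        while True:
            data, addr = sock.recvfrom(65507)
            print(addr, data.split(b"\r\n")[0])
    except socket.timeout:
        pass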

It's admittedly a weak point to make in the context of consumer gear, though. Last I checked, even router OSes with sprawling functionality like OpenWrt and pfSense don't offer SSDP relaying as a supported feature.

If you spent less than $200 on your router, there is probably no difference between disabling all of its UPnP capability and disabling IGDP specifically.


> For me the concern is not malware but crappy/insecure software proudly exposing itself on the internet.

My thoughts exactly.


> Shodan.io

How do I use this site? Do I have to register?


It's a search engine; you type your IP into the search bar at the top of the page.


The site already knows my public IP, so why do I have to enter it? This is sad.


Well, checking your own IP isn't exactly its primary use case.


Does Google pre-fill your search queries?


The Google doodle provides information depending on your location and the date when you visit the Google homepage.

They are showing that they have information about you and that they have information for you. And yes, they do it proactively. Doesn't that count as search results for a query you didn't submit?


The issue with UPNP is not that it's misunderstood. The issue is that somebody implementing it has to pay hundreds of bucks to read the ISO standard.

Nobody cares about ISO standards in the web world; anything that's not an RFC will never see widespread adoption.

On the other hand, printers work on your smartphone but not on your Linux desktop. Ever wondered why that is? The reason is an outdated name-resolution concept that needs lots of (error-prone) manual configuration just to get multicast DNS-SD resolution going.

It's such a hardcore fail considering that the goal of zeroconf and DNS-SD was to ease things up, and on Linux it made things worse because of config fatigue.
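
To be concrete about what "resolution" means here: once mDNS is wired into the system resolver, a .local lookup should be as boring as this sketch (the hostname is hypothetical; on Linux this only works after nss-mdns is configured in nsswitch.conf):

    import socket

    # Resolve an mDNS name through the ordinary system resolver.
    # 631 is the IPP printing port; "myprinter.local" is made up.
    for family, _, _, _, sockaddr in socket.getaddrinfo("myprinter.local", 631):
        print(family, sockaddr)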


I addressed this in another comment, but as a short thought on the ISO issue: DLNA itself was never adopted by ISO, but UPnP A/V, on which it's based, was. ISO really wasn't an issue here, though, as the actual DLNA specifications (and the UPnP specifications they incorporated) were maintained and distributed by DLNA Inc. itself. This is very common for these types of standards (general standards bodies tend not to even be interested in them until there is a decent degree of industry momentum, which you have to get by putting together an implementers group), and I mentioned USB as another prominent example: it's standardized by USB-IF, not IEC, even though each major revision is submitted to IEC after USB-IF adopts it as basically a rubber-stamp exercise.

All of that said, DLNA Inc. provided the specs only to members. I do not know what membership cost, but I suspect it was well more than ISO charges for standards copies. I have no doubt this was a hindrance to DLNA acceptance in software, as it would have blocked out a lot of small and open-source efforts. For hardware vendors, having to pay hefty membership dues to an implementer's group to get access to standards is par for the course and how about half of the hardware interconnect standards work, and I don't think it caused much hesitation there. This might help explain why, for a period of 5+ years, embedded DLNA clients were pretty standard in HD-DVD and Blu-Ray players but there were surprisingly few software options. It's no doubt also a factor that the major promoters of both HD-DVD and Blu-Ray themselves were also members of DLNA and may have incentivized their licensees to implement it.


> It's such a hardcore fail when considering that the goal for zerconf and DNS-SD was to ease things up, and on Linux it made things worse because of config fatigue

What distribution are you using? On the desktop-oriented ones, it works out of the box.


Zeroconf on Linux works so well I get to see my printer three times: advertised in Windows format and Mac format, and by IP and mDNS hostname.


You probably have several discovery protocols enabled on the printer, and your computer dutifully shows each one.

I see two scanners for the same reason - one via WSD, one through AirScan.


Yeah, I'm not getting the parent's point either; on Debian you just install the avahi package and it's done. Maybe other distros are more complicated.


> The issue with UPNP is not that it's misunderstood. The issue is that somebody implementing it has to pay hundreds of bucks to read the ISO.

I think you can get the specs here: https://openconnectivity.org/developer/specifications/upnp-r...

> On the other hand printers are working on your smartphones but not on your Linux desktop.

I don't think this has anything to do with UPnP? Most printers nowadays also support IPP (thanks to Apple's CUPS push?), so ..


> On the other hand printers are working on your smartphones but not on your Linux desktop. Ever thought why that is?

That's not the case, though. The only issues I've had were: 1. missing drivers (a universal issue), 2. a changed IP after initial config (because the configurator saves the IP and not the hostname; I've seen this happen on Windows too), 3. incomplete drivers (because the printer doesn't fully adhere to the standards and vendor drivers aren't complete/available).

So what do printers have to do with any of this?


> So what do printers have to do with any of this?

The original post was about zeroconf, and therefore the later multicast DNS-SD based discovery protocol [1]. Namely the AirPlay, AirPort, AirPrint, and AirScan family of protocols, which advertise their functionality via multicast DNS to the 224.0.0.251 or [ff02::fb] addresses on port 5353.
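
As a sketch of what that discovery looks like from the client side, here's a browse for IPP printers using the third-party python-zeroconf package (the service type is the IANA-registered one for IPP; the listener method names follow python-zeroconf's API, which may differ between versions):

    import time
    from zeroconf import Zeroconf, ServiceBrowser  # pip install zeroconf

    class PrinterListener:
        # Called for each NOTIFY/response seen on 224.0.0.251:5353
        def add_service(self, zc, type_, name):
            info = zc.get_service_info(type_, name)
            if info:
                print(f"found {name} at {info.server}:{info.port}")

        def remove_service(self, zc, type_, name):
            print(f"gone: {name}")

        def update_service(self, zc, type_, name):
            pass

    zc = Zeroconf()
    # "_ipp._tcp.local." is the registered DNS-SD type for IPP printing,
    # which is what AirPrint rides on
    browser = ServiceBrowser(zc, "_ipp._tcp.local.", PrinterListener())
    time.sleep(5)  # let responses trickle in
    zc.close()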

Printers these days primarily support AirPrint + AirScan, so they work out of the box on macOS with all the GUIs that the OS has to offer. Not so much for WSD or the other protocols that Linux/Windows still need. CUPS or PostScript support usually isn't complete for anything other than TIFF, because nobody seems to give a damn about implementing a PDF rasterizer on-device, let alone Gutenprint or PostScript/Ghostscript support for their scanner devices.

Reading through the sibling comments, you have to recognize that my interpretation of "what is working" is different from a developer's. If someone without programming/Linux configuration knowledge cannot print or scan via WiFi, your tool is pretty much useless and hasn't replaced the 100 fragmented alternatives that already existed.

My complaints were mostly about avahi's integrated discovery tools like avahi-browse and avahi-discover, which can discover printers but are useless for printing or scanning on their own (and they're not transport-level libraries for IANA-registered DNS-SD protocols either).

Literally the only scanning tool that works in the Linux world is "simple-scan" [2], which requires a preinstalled "sane-airscan" and an avahi-daemon integrated into resolv.conf/nsswitch.conf. Don't ask about parallel IPv4 + IPv6 support, because that's totally unsupported and will crash the daemon in an endless loop in multiple-NAT scenarios, which is why every Linux wiki recommends adding "mdns4_minimal" to /etc/nsswitch.conf instead of "mdns_minimal". [3]
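
For reference, the hosts line that nss-mdns's own documentation suggests (distro defaults vary; this is the IPv4-only variant the wikis recommend) looks like:

    # /etc/nsswitch.conf
    hosts: files mdns4_minimal [NOTFOUND=return] dns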

Coming back to my point: I mean, every developer can send a PostScript file via bash to an IPP port, but I wouldn't call that a working UX or UI. Comparing the aforementioned shitshow with how nicely everything works on macOS, Linux is basically a bad joke when it comes to mDNS support. The technology is almost 15 years old now, and we still can't have nice printer support on Linux. Using Apple's CUPS as an excuse to say we support printers is a 20-year-old excuse. And a deprecated one, btw, because PPD support has been deprecated for years now ... so good luck getting a new printer that works with that.

Nonetheless, spending days fixing my printer on Linux and debugging what all this stuff does inspired me to build a web browser that uses mDNS for local peer discovery, so at least something good came out of it.

[1] http://www.dns-sd.org/

[2] https://gitlab.gnome.org/GNOME/simple-scan

[3] e.g. https://wiki.archlinux.org/title/avahi


I'm under the impression that mDNS / DNS-SD works out of the box on Ubuntu at least, for printers (I'm not sure, maybe installing avahi manually is needed if it's not there by default, but I wouldn't call that "a lot of manual configuration").


Avahi often seems like a mixed bag. Sometimes it Just Works, but other times it doesn't. One of the big pain points of zero-configuration systems has always been opacity and difficult troubleshooting. For example, it's not at all unusual for well-meaning consumer network appliances to monitor and interfere with IGMP ("IGMP snooping"), ostensibly for performance reasons, but then unintentionally break mDNS. It can be very hard to figure out that that's what's happening when your printer just isn't showing up. In general, distributed systems tend to be a pain to troubleshoot, but especially so the zero-configuration efforts, since they often try excessively hard to be completely invisible.


If I had a network appliance configured to do any sort of nebulous network optimization, I'd start to suspect it any time anything doesn't work.

Coffee too cold? Teapot protocol should have used QUIC.


I wonder how I can finally make it work on a Linux desktop


The issue I always had with DLNA is that I found its implementations, specifically controllers and renderers, rather wonky:

- Open-source controllers are unstable, with terrible UX even when they manage to discover the renderers and servers. Closed-source controllers all seemed suspicious to me, esp. on Android.

- Renderers all support different media formats; some support subtitles, others not at all...

Don't get me wrong, I marvel at the engineering thought that went into designing the spec; however, adoption was never up to par IMO. I'd love to be shown that this is no longer the case, but the defeatist tone of TFA makes me doubt it...

On the issue with NAT-PMP, PCP, and IGD, I am somewhat jaded. I use OpenWrt and XMPP. For my home use this setup works hassle-free, and file transfers and calls just work. (IoT is in a different VLAN, cut off from the internet.) However, setting this up is not really something the average consumer would do. Maybe only if pirating, as torrent clients, too, have very good support for the protocol.


Absolutely, I'm not sure that I sufficiently covered in the post on DLNA specifically that... a lot of DLNA implementations sucked. Windows Media Center had a pretty high level of polish, but it was sort of surprisingly limited in terms of features and the original, XP-era implementation was such that it felt awkwardly disconnected from the normal operating system (I think launching WMC caused the session host to start a whole new shell session?). Basically every other DLNA implementation was a mess in some way or another. I've owned around three devices with DLNA browsers/renderers over the years and all three have had serious stability or UX problems.

And this is all sort of beside the fact that Microsoft's incomplete implementation led to Twonky and Plex as competitors, both of which had their own problems and sort of muddied the waters on the whole thing. The HP Home Servers shipped with WMC, Plex, and Twonky all running by default! You can imagine how confusing that could be to deal with.

It's hard to say why exactly this was, other than that I think Microsoft sort of heavily pushed DLNA support in embedded devices like HD-DVD and Blu-Ray players but there was no real quality enforcement, so a lot of vendors put in a token effort. The fact that the DLNA spec was relatively complex is presumably another reason, and led to a lot of rework of the same ideas over the following years.


> And this is all sort of beside the fact that Microsoft's incomplete implementation led to Twonky and Plex as competitors, both of which had their own problems and sort of muddied the waters on the whole thing. The HP Home Servers shipped with WMC, Plex, and Twonky all running by default! You can imagine how confusing that could be to deal with.

I haven't looked into Twonky and Plex, but are they even trying to solve the same problem that DLNA aimed to? My understanding is that neither has submitted anything to standardizing bodies like IETF or ISO so far, but maybe there is no need, as the content is being streamed directly from the internet.


My general understanding is that Twonky Media was originally developed specifically to be a DLNA-compliant solution. Plex was not, but had DLNA support added as a bolt-on feature to the server. Plex had better UX than Twonky, but I don't think any of the Plex clients ever functioned as controllers, so Plex wasn't quite "real" DLNA in the sense that Twonky was; it was more like a compatibility solution to allow embedded DLNA clients to access your media in Plex (I believe this still works but... don't have any DLNA clients around, so who knows). Twonky was "first-class" DLNA but never felt very high quality and didn't survive to the modern era.

While UPnP was accepted as an ISO standard, and by extension DLNA was very ISO-standards based, I think the ISO angle tends to be a distraction... the meaningful authority on DLNA, including the specifications and certification, was always DLNA Inc. itself (a non-profit partnership of its promoters). Many DLNA implementations predated ISO acceptance of UPnP A/V, on which it's based, and the way to get the specifications was not by paying a hefty sum to ISO but by paying an even heftier sum (membership dues) to DLNA Inc. So DLNA was an "open standard" compared to e.g. Plex, which does not intend to be a multi-vendor ecosystem, but it was not as "open" as you would hope, which was no doubt a factor in its limited adoption... some device vendors like Logitech and Sony bet on it pretty heavily in hardware products, but it probably would have seen more of a software ecosystem if the specs were easier to obtain.

This is of course not dissimilar to many other "open standards" like USB (controlled far more by USB-IF than by any other standards body). Of course USB versions are typically submitted to general standards bodies like IEC, but when there's already a domain-specific organization promulgating standards usually the only reason it gets submitted to general standards bodies is for compliance purposes, e.g. it's not unusual for government customers to require that things be standardized through ISO (which is basically the origin story of POSIX). In the case of USB I think IEC acceptance is mostly driven by safety regulations in some countries that require electrical interconnects to be approved by a recognized standards body. I'm far from an expert in this field, this is just what I've observed about the interaction between implementation groups and general standards bodies.


This was interesting and I was definitely confused about the difference between UPnP and port forwarding. I gathered that UPnP was something "more" than just port forwarding and it had something to do with DLNA, but this cleared that up for me.

On my network zeroconf/service discovery is still a thing. I won't be using cloud services for anything that can be done locally. As someone who grew up in a world of TCP/IP everything (or at least I thought so; that's certainly all I learnt), it's interesting to know it wasn't always the case. Even though we learn about the various layers of network architecture I think too many people still consider IP to be the only possible solution at that layer. With IPv6 there is a certain elegance to it, I suppose (address any device from anywhere in the world), but it's completely unnecessary for local traffic when you think about it.

On the "DLNA is dead" note. As someone who has only recently got into the whole network attached media player thing, what are "current" solutions to the problem of "multiple players, multiple controllers, multiple sources"? Basically I want to have multiple speaker systems and use my phone to play stuff from my local music collection.


What UPnP/PCP implementations lack is a one-time authorization workflow. There should be a list of pending requests in the router's web interface, with a unique token for each request. The next time a device wants to open the same mapping, it'll have to pass the same token again or require reauthorization.
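
As a sketch of what that might look like router-side (everything here is invented for illustration; no shipping UPnP/PCP implementation works this way):

    import secrets

    # Hypothetical router state: pending requests wait for an admin
    # to click "approve" in the web UI; approved mappings remember
    # the token the device must replay on renewal.
    pending = {}   # (host, ext_port, proto) -> token
    approved = {}  # (host, ext_port, proto) -> token

    def request_mapping(host, ext_port, proto, token=None):
        key = (host, ext_port, proto)
        if token is not None and approved.get(key) == token:
            return "granted"                   # correct token replayed
        if key in approved:
            return "reauthorization required"  # wrong or missing token
        pending.setdefault(key, secrets.token_urlsafe(16))
        return "pending"                       # now visible in the web UI

    def admin_approve(key):
        # Admin approves in the web interface; the token is handed
        # to the device once and must accompany future requests.
        approved[key] = pending.pop(key)
        return approved[key]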


> the short life and quick death of DLNA.

A surprising number of recently updated repos match the search term "DLNA": https://github.com/search?o=desc&q=dlna&s=updated&type=Repos...


Yeah, it's news to me that DLNA is dead considering I use it regularly.


The DLNA organization dissolved in 2017. It's still possible to get products DLNA certified via an authorized third-party lab, and DLNA products still exist and work, but it's pretty clear that it doesn't have an ongoing future at this point. For the most part, the consumer need it filled has gone away, and so have the DLNA hardware devices that were expected to be its main adoption driver (mostly HD-DVD and Blu-Ray players, which are no longer common). The Xbox One shipped with only partial DLNA support, a good indication that its main champion Microsoft has lost interest. Other principal backers Intel and Sony haven't really pushed it since even before 2017.



