MBONE: The Multicast Backbone (1994) (ucsb.edu)
44 points by Lammy on April 7, 2020 | 31 comments



IP-level multicast routing turned out to be too difficult at large scale. There are several somewhat subtle reasons and misaligned incentives, perhaps the most pressing being scheduling and time/bandwidth synchronization between receivers. TCP does not work there...

The problem was solved by application-level "multicast", or content delivery networks. You can keep your TCP and all your regular clients.


TCP doesn't work, but the multicast version is a real classic of IETF engineering, and I am really just looking for an excuse to post it here:

http://ccr.sigcomm.org/archive/1995/conf/floyd.pdf


What's interesting is that the IETF unconsciously tried to replicate classical TV with multicast. Apart from the complications of adding a lot of highly dynamic state to routers, it turns out that people actually prefer on-demand streaming. Live TV is a niche.


Ooh I remember mbone! At the time I was a kid and didn't have an internet connection at home, but I remember Wired talking about it back when they covered tech in detail.

Does anyone on HN know what happened to it?


I spent a decade working on multicast, helped write some of the second generation multicast conferencing tools, and co-authored a number of the standards. Generally, the issue was that multicast requires per-group or per-source state in routers, and router memory was a scarce resource. This made it hard to scale globally. It also made forwarding problems hard to diagnose, taking up operators' time. These costs, plus an unclear connection between who benefitted from multicast and who paid for it, made it difficult for core ISPs to justify. And that meant that commercial content or application providers needed to implement a plan B in case multicast wasn't available, and if that worked well enough, they didn't bother with also doing multicast.

Still, RTP and SIP came directly out of the MBone, and they bootstrapped the whole Internet multimedia phenomenon we're all enjoying today while working from home.


I would just add that router memory is still pretty scarce and assigning more of it to multicast state is not going to win you any more sales. A bit of a chicken and egg problem there with regards to trying to use it at scale.


> A bit of a chicken and egg problem there

A best-effort multicast approach would help here. Reach as many endpoints as you can using otherwise-unused router memory for the group memberships, then bridge the gaps created by saturated routers via unicast relays.
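
A rough sketch of what the receiver side could look like, assuming plain sockets (the group, port, relay address, and "SUBSCRIBE" message below are all invented):

  import socket, struct

  GROUP, PORT = "239.1.2.3", 5004          # invented multicast group
  RELAY = ("relay.example.net", 5004)      # hypothetical unicast relay

  sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
  sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
  sock.bind(("", PORT))
  # IGMP join: ask the network to start forwarding this group to us
  mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
  sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
  sock.settimeout(3.0)

  try:
      data, _ = sock.recvfrom(2048)        # native multicast reaches us
  except socket.timeout:
      # nothing arrived (no multicast state upstream), so fall back
      # to asking the relay for a unicast copy of the same stream
      sock.sendto(b"SUBSCRIBE", RELAY)     # made-up relay protocol
      data, _ = sock.recvfrom(2048)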


> otherwise-unused router memory

This makes it sound like a dynamic pool of memory which can be allocated for different purposes. I was talking about an ASIC, where the amount of memory available for these state tables might be fixed and not available for sharing with other parts of the system. You get whatever the designers back in 2015 decided was needed, and the point is there's not much reason for them to spend die area implementing a bigger memory when the reality today is that it's hardly used. So I get what you are saying, but it's not necessarily possible.


It is a bit of a hypothetical scenario. The assumption is that the router is already capable of multicast routing in the first place (e.g. whatever ISPs are using to forward their IPTV) and has some spare capacity for future growth in those multicast routing tables that could be used to enable best-effort forwarding.


The downside is that CDNs are far less democratic than hypothetical end-to-end multicast support. Imagine being able to stream to thousands of users from a single machine with a single 100Mbit NIC.
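
To put rough numbers on that (illustrative only): a 1 Mbit/s stream to 10,000 unicast viewers needs ~10 Gbit/s of egress, while with multicast the sender emits a single 1 Mbit/s copy and the routers do the fan-out, so the 100Mbit NIC would never be the bottleneck.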


Wouldn't torrents sort of help with that? (I guess only once a couple of peers are present...)


Torrents don't work well with mobile devices; multicast could be implemented more efficiently at the physical layer of a shared medium. Torrents also have higher latency, and you're at the mercy of the aggregate upload bandwidth of the swarm. On a good day they can be good enough for VoD-style streaming, but that's far from assured unless the distributor at least keeps some backup bandwidth ready in case demand exceeds supply (which sounds a bit like a CDN). For realtime streaming, IP multicast would still be the ideal solution.


Well, https://www.bittorrent.org/beps/bep_0014.html isn't even that new, and yeah, it's only for service discovery right now, but I'm pretty sure you'd get multicast support in the popular clients, at least for Merkle-Tree (BEP-0030) torrents with e.g. 2^10 piece size.
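
For reference, an LSD announce is just a small HTTP-ish datagram sent to a fixed multicast group. A minimal sketch (the infohash and listen port are placeholders):

  import socket

  LSD_GROUP, LSD_PORT = "239.192.152.143", 6771  # IPv4 group from BEP 14
  msg = ("BT-SEARCH * HTTP/1.1\r\n"
         "Host: 239.192.152.143:6771\r\n"
         "Port: 6881\r\n"                   # placeholder listen port
         "Infohash: " + "ab" * 20 + "\r\n"  # placeholder 20-byte infohash
         "\r\n\r\n")

  s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
  s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)  # LAN scope
  s.sendto(msg.encode("ascii"), (LSD_GROUP, LSD_PORT))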


BEP 14 is totally irrelevant to the discussion here. It's only for finding peers on a local LAN, not for distributing payload via multicast. The latter is, and always has been, via unicast.


I wouldn't exactly say I 'enjoy' using SIP.


Sorry about that! It did rather morph from our nice simple initial design into a monster once it met the real world.


As everything is wont to do.


Why not? Personally I enjoy SIP itself greatly; it works well and is basically human readable, to the point that I can explain it to non-technical people.
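
For anyone who hasn't seen it, a stripped-down INVITE is just readable text (addresses invented here, SDP body omitted):

  INVITE sip:bob@example.com SIP/2.0
  Via: SIP/2.0/UDP alice-pc.example.org;branch=z9hG4bK776asdhds
  Max-Forwards: 70
  From: Alice <sip:alice@example.org>;tag=1928301774
  To: Bob <sip:bob@example.com>
  Call-ID: a84b4c76e66710@alice-pc.example.org
  CSeq: 314159 INVITE
  Contact: <sip:alice@alice-pc.example.org>
  Content-Length: 0

You can read off who is calling whom, which dialog it belongs to, and which hops it passed through, with no tooling at all.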

My main problems over 15 years of working with SIP have been with "helpful" boxes in the middle tampering with my SIP messaging. NAT used to cause a bunch of problems, but these days I have workarounds for most cases that work great until the aforementioned middleboxes start "helping".

Give me a NAT-free network where the packet that reaches the far end is the same one I sent and my SIP systems will be quite happy.


While it has some wrinkles, I must say that I generally have a much better experience with SIP tools than with the current wave of WebRTC tools.


Well, it sure beat using H.323.


Scalability issues, and no interest from commercial ISPs (I suspect). It was the precursor to modern multicast routing protocols (PIM), but it's still a challenge to run large multicast networks today. Unicast (with CDNs) just won because it's so much simpler. Multicast today is mostly used by ISPs for their own IPTV services.


And in large corporate networks.


Indeed, we run multicast across 4 continents. One problem with multicast is the lack of real understanding among network engineers: because it's so rare (certainly anything that involves PIM), few people really get it, and fewer tools are available to work out what's going on.

In the broadcast industry, though, multicast at a local level is very important, with SDI being replaced by things like SMPTE 2022-6 and 2110. I believe that's mainly at the IGMP/layer 2 level, and there aren't many 2110 implementations over routers.


I remember the MBONE. I was working at a National Lab, and had an SGI Indy workstation with an IndyCam, so I spent some time setting it up.

One day I forgot to close the app when I left for the day, and came back the following day to a NastyGram from Van Jacobson, chastising me for using up bandwidth with an image of my darkened office door for 16 hours.

Good times.


I had a similar circumstance and the same rig with my Indy, and for fun left it pointed at a Triops tank I'd set up. People thought it was just an empty tank of water, until a week later there were epic fights between the banana shrimps and the Triops babies...

Good times, until I got kicked off MBONE and ended up on CuSeeMe instead .. ;)


I was working on streaming video in QuickTime at Apple, made approximately the same mistake, and got a nastygram from Van as well.

Now I wish I’d kept it.


Good memories. In 1994, about 5 years after Steve Deering published RFC 1112, IP multicast was implemented in our amateur packet radio network spanning the Netherlands [1]. There were about 20-30 network nodes acting as local access points for users, interconnected via interlinks. The nodes implemented the AX.25 packet radio protocol at rates of 1200-9600 baud; on top of that you could transport IP datagrams, hence these nodes were part of the 44.0.0.0 AMPR.ORG Internet network. There were nodes that had an Internet gateway through a local organisation or university, but these gateways were mainly used to link the subnets of the 44.0.0.0 network and to connect the AX.25 networks (via AXIP and wormholes).

The nodes here in the Netherlands used multicast to distribute link information and IP autorouting. It was entirely possible for multiple parties to join a multicast group somewhere in the network and then stream UDP frames across the network.
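
For readers who never touched it: sending to a group is just a UDP datagram to a class-D address, and every host that joined the group gets a copy. A modern-day sketch, with an invented group and port:

  import socket

  GROUP, PORT = "239.10.10.10", 5000      # invented group/port
  s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
  s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 16)  # hop limit
  s.sendto(b"one frame of CELP audio", (GROUP, PORT))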

A very fun experiment at the time was to send CELP-compressed (1-2 kbit/s) audio packets through the multicast network, which made it possible to have a conversation with multiple people spanning a distance larger than the radio horizon. The latency and packet loss disrupted smooth operation, but it more or less worked.

[1] https://www.jj1wtk.jp/nos/history.html


I have been disappointed that multicast isn't used by the mass video chat apps (e.g. Zoom), but after talking to some of the folks "in the know" it looks like you can't even count on it being supported by all the devices on the path to your endpoint.

It seems like it would be good for that, and great for "cord cutting" live apps like sports broadcasts. Of course, the majority of "last mile" carriers are also TV providers, so they would prefer to charge for that separately over their existing physical plant.


I wonder if multicast could be reclaimed as a layer in application-specific overlay networks. E.g. if a videoconference tool wanted to join a session, it would create a stateless IP tunnel to an access point, and then just run regular multicast RTP over that.

It would allow existing tools and infrastructure to be used to scale that, and would move forwarding and distribution out of applications into a common layer. But it has the disadvantage that there is no common API for applications to set up ad-hoc transient IP tunnels, as that is usually a privileged operation.


that's kind of what the mbone was, a tunnel overlay. it just took a lot of fussing around for too many years.

the only reason that a tunnel is privileged is that it creates a kernel interface. given that we can't really use tcp anyways, there isn't any reason why the whole stack can't live in user space on a generic UDP port.
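
a rough sketch of that user-space forwarding loop (group, port, and access point are all invented here):

  import socket, struct

  GROUP, PORT = "239.20.20.20", 5004      # assumed local RTP group
  AP = ("overlay-ap.example.net", 4444)   # hypothetical access point

  rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
  rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
  rx.bind(("", PORT))
  mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
  rx.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

  tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
  while True:
      pkt, _ = rx.recvfrom(2048)
      # prefix group/port so the far end knows where to re-emit it
      hdr = socket.inet_aton(GROUP) + struct.pack("!H", PORT)
      tx.sendto(hdr + pkt, AP)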

you would have to have pim or torrent style discovery/rendezvous machinery

aside from efficiency, I wonder what the use case is?


One of the early commercial products that used multicast was InSoft's Communique, along with their streaming InTV product from 1993.



