
We got funded for this idea, by Sony, during the first bubble. Before BitTorrent, when FEC-based multicast file transfer was either FLID/DL (an IETF standard that never went anywhere) or the startup by the guy who invented Tornado codes.

We had a centralized tree-structured directory for discovery, and then (at first) deployable nodes running group messaging and a weighted link-state routing protocol (more or less cribbed from OSPF), and later a small kernel message-forwarding scheme with a programmable control plane so we could build arbitrary routing protocols in Tcl.

Our initial application was chat (we overengineered, like, a little) and we pivoted to streaming video.

We died in part because we hadn't the slightest clue what we were doing, and in part because the VCs replaced our management with a team that decided our best bet was to take our superior technology and go head-to-head with Akamai with our own streaming video CDN.

We'd have been better off just open-sourcing.

Anyhow: I'm obviously biased, but the way I think this is going to happen is, some open source project to build arbitrary overlays is going to catch on (the overlay will be a means-to-an-end for some killer app people actually care about; my bet is, telepresence).




Very interesting. The main difference in our work is that we really latched on to VJ's idea of breadcrumbs and caching blocks. When I alluded to Bittorrent I wasn't really referring to FEC-based distribution. The problem with FEC is, as you said somewhere else, that it's not very efficient for small files, and it currently isn't very location-aware (aside from Ono, my professor's research lab's Bittorrent plugin), so you can end up going halfway around the world for relevant pieces of the file.

Instead, we were more interested in the robust swarm implementations that were already present in some Bittorrent clients, like fairly efficient DHTs. It's also an already-existing protocol with millions of users, so we could piggyback on Bittorrent clients with plugins to measure the real performance all over the world.

The key difference is the caching. Jacobson proposes that the caching would happen in the ISPs, but there's no reason peers couldn't take up the slack, just with different monetary incentives. The protocol here becomes less like OSPF and more like distributed k-minimum spanning tree. If you can find the "supernodes" then you can implement caching at them and prevent unnecessary communication.
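To make the supernode idea concrete, here's a minimal sketch (hypothetical names, not from any real implementation): it approximates the k-minimum-spanning-tree clustering by simply picking the k best-connected peers as supernodes, then caches blocks at them so later requests skip the origin.

```python
from collections import defaultdict

def pick_supernodes(edges, k):
    # Cheap heuristic stand-in for the k-MST clustering described above:
    # treat the k highest-degree peers as supernodes.
    degree = defaultdict(int)
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    return sorted(degree, key=degree.get, reverse=True)[:k]

class SupernodeCache:
    """Block cache held at a supernode, so peers near it avoid
    re-fetching blocks from halfway around the world."""
    def __init__(self):
        self.blocks = {}

    def fetch(self, block_id, origin_fetch):
        # On a miss, go to the origin (or the wider swarm) once,
        # then serve every subsequent local request from the cache.
        if block_id not in self.blocks:
            self.blocks[block_id] = origin_fetch(block_id)
        return self.blocks[block_id]
```

In a real protocol the supernode choice would weigh topology and capacity, not just degree, but the caching logic is the same: one expensive fetch amortized over every nearby peer.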

Of course, if you think anything like me, your first thought is, "do we actually know that data requests are reliably concentrated in a specific area, such that local caching will actually provide much of a benefit?" My partners thought the same thing, which is why they ended up doing a Bittorrent traffic analysis to attempt to plot an AS distribution curve for data requests[1]. They found huge redundancy in intra-AS requests, so there's at least potential here.
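The redundancy measurement boils down to a simple question: what fraction of requests repeat an (AS, block) pair already seen, i.e., could have been served by a cache inside that AS? A sketch of that metric (my own illustrative formulation, not the paper's exact methodology):

```python
def intra_as_redundancy(requests):
    """requests: list of (asn, block_id) pairs, in arrival order.
    Returns the fraction of requests that a perfect per-AS cache
    with unlimited capacity could have absorbed locally."""
    seen = set()
    cache_hits = 0
    for asn, block_id in requests:
        if (asn, block_id) in seen:
            cache_hits += 1  # same block requested again within this AS
        seen.add((asn, block_id))
    return cache_hits / len(requests) if requests else 0.0
```

Run against a trace of swarm requests, a high value means most traffic is re-fetching blocks a neighbor in the same AS already has, which is exactly the case where supernode caching pays off.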

Of course there are still a number of problems to be worked out -- if the supernodes don't have an easy incentive to provide caching to other people in their AS, how do we develop such an incentive? But the overall protocol seems very interesting. Most notably it would not work like FEC and small files would probably benefit even more from the caching (which is why you'll also note that files are split into blocks or datagrams or whatever you wish to call them in the VJ protocol).

[1] http://acm.eecs.northwestern.edu/site_media/cccp.pdf

I really don't deserve to be on this paper -- Zach and John really ran this show.



