Hacker News

First, OpenConnect is just a CDN run by Netflix; they could call it the bunny protocol, it's just a name. But they don't have unlimited boxes at every ISP, and in almost every case you can get P2P connections between specific users with lower network overhead than those users connecting to one of the ISP's data centers.

Anyway, the CDN knows which connections are on the network because they're the ones connecting to it. Segregating based on large-scale network architecture is a solved problem; if you're confused, read up on how CDNs work. What happens inside each ISP can then be managed either via automation based on ping times etc., or via ISP-specific rules.
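As a minimal sketch of that "automation based on ping times" idea: the CDN node already sees every authenticated client, so it can hand a new viewer the nearest peers within the same ISP. All names, fields, and thresholds here are illustrative assumptions, not anything Netflix actually ships.

```python
def pick_peers(active_users, requester_isp, want=4):
    """Hypothetical peer matchmaking at the CDN node.

    active_users: list of (user_id, isp, rtt_ms) tuples the node
    already knows, since every client authenticates against it.
    Returns up to `want` peer ids in the same ISP, closest first.
    """
    same_isp = [u for u in active_users if u[1] == requester_isp]
    # Prefer the closest peers by measured round-trip time.
    same_isp.sort(key=lambda u: u[2])
    return [user_id for user_id, _isp, _rtt in same_isp[:want]]

users = [("a", "comcast", 12), ("b", "att", 8),
         ("c", "comcast", 5), ("d", "comcast", 30)]
print(pick_peers(users, "comcast", want=2))  # ['c', 'a']
```

A real system would refresh RTT measurements and fall back to the cache box when no good peers exist, but the matchmaking itself is this simple.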

In terms of P2P, it's trivial to include 99% of the data for a movie but not enough data to actually play the movie. It's codec specific, but that's not a problem when you're designing the service. Ensuring the correct users are part of the network is just the basic authentication at the CDN node, which is what keeps the list of active users.
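A toy version of that partitioning, assuming a BitTorrent-style fixed piece size: the swarm carries ~99% of the pieces, while a sparse subset stays only on the authenticated CDN node, so peers alone can never assemble a playable file. The 1-in-100 ratio and piece size are arbitrary stand-ins.

```python
def split_pieces(data, piece_len):
    # Chop the encoded file into fixed-size pieces, BitTorrent-style.
    return [data[i:i + piece_len] for i in range(0, len(data), piece_len)]

def partition(pieces, withhold_every=100):
    # Roughly 1 in `withhold_every` pieces stays CDN-only; the swarm
    # holds the other ~99% but is useless without the withheld set.
    p2p = {i: p for i, p in enumerate(pieces) if i % withhold_every}
    cdn_only = {i: p for i, p in enumerate(pieces) if not i % withhold_every}
    return p2p, cdn_only

pieces = split_pieces(bytes(1000), piece_len=10)  # 100 pieces
p2p, cdn_only = partition(pieces)
```

In practice you would withhold codec-critical pieces (headers, keyframes) rather than every Nth one, which is why the comment calls it codec specific.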

As to data validation, the basic BitTorrent protocol handles most of what you're concerned about. Clients have long been able to stream movies with minimal buffering simply by prioritizing traffic. Improving on that baseline is possible because you're running the service, not just accepting random connections, and you want to be able to switch resolutions on the fly, but that's really not a big deal.
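The "prioritizing traffic" trick boils down to a piece picker that favors a window just ahead of the playhead, then falls back to fetching the rest. A minimal sketch, assuming in-order fallback (real clients often use rarest-first for the tail):

```python
def next_piece(have, playback_pos, total, window=8):
    """Pick the next piece to request for streaming playback.

    have: set of piece indices already downloaded.
    Strictly prioritize the window just ahead of the playhead so
    playback never stalls; otherwise grab any missing piece.
    """
    for i in range(playback_pos, min(playback_pos + window, total)):
        if i not in have:
            return i
    for i in range(total):
        if i not in have:
            return i
    return None  # file complete

print(next_piece({0, 1, 2, 4}, playback_pos=2, total=10))  # 3
```

Switching resolutions on the fly just means running this picker over the piece map of a different encode of the same title.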

PS: And yes, some Netflix content deals would create issues. But that's irrelevant to their own content, and it's just another item when negotiating licensing, much like allowing content on a CDN in the first place.




> First, OpenConnect is just a CDN run by Netflix; they could call it the bunny protocol, it's just a name. But they don't have unlimited boxes at every ISP, and in almost every case you can get P2P connections between specific users with lower network overhead than those users connecting to one of the ISP's data centers.

They have boxes in a lot of ISPs.

If “P2P” requires you to transit your last mile to your ISP's POP and then back down to another user, and Netflix requires you to transit to the ISP POP and back out again... has P2P gained you much? In most cases downstream throughput is much higher than upstream as well, making the in-ISP cache box far better for most users.
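To make that comparison concrete, here's the back-of-the-envelope arithmetic with purely illustrative numbers (assumptions, not measurements): both paths cross the POP, but P2P also crosses the peer's last mile and is capped by the peer's uplink.

```python
# Illustrative assumptions for a typical asymmetric residential line.
last_mile_ms = 10          # one-way latency, user <-> ISP POP
peer_upstream_mbps = 20    # the serving peer's uplink
downstream_mbps = 500      # the receiving user's downlink

# P2P: up your last mile to the POP, then down the peer's last mile.
p2p_latency = 2 * last_mile_ms
# The OpenConnect-style cache box sits at the POP itself.
cache_latency = last_mile_ms

# Throughput is capped by the slowest hop on each path.
p2p_rate = min(peer_upstream_mbps, downstream_mbps)  # peer's uplink wins
cache_rate = downstream_mbps                         # your downlink wins
```

Under these assumptions the cache box halves the path latency and delivers 25x the throughput, which is the parent comment's point; the grandparent's counterpoint is that some ISP topologies have cheap user-to-user capacity that changes this math.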

P2P has its place, but it's hard to argue it's better for video distribution.


Many ISPs have significantly more, and largely unused, bandwidth between users than across the overall network. This is often done for simple redundancy, since you want a minimum of two upload links, if not more. And it's much simpler to run a wire between two tiny grey buildings in a neighborhood than to run a much longer wire to another section of your core network. Ideally that link is just a backup for your backup, but properly configured routers will still use it for local traffic.

Another common case: if you want X bandwidth from A to B, you round up to hardware rated at some number above X. This can result in network topologies that seem very odd on the surface.
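That rounding falls out of the fact that link hardware comes in standard speed grades, so provisioning looks like this sketch (the tier list reflects common Ethernet speeds; the function name is made up):

```python
STANDARD_LINKS_GBPS = [1, 10, 25, 40, 100, 400]

def provision(required_gbps):
    """Round a required A->B capacity up to the next standard link
    speed. The gap between requirement and hardware is the "odd"
    spare capacity the comment describes."""
    for speed in STANDARD_LINKS_GBPS:
        if speed >= required_gbps:
            return speed
    raise ValueError("capacity exceeds a single link; bond multiple links")

print(provision(7))   # 10
print(provision(32))  # 40
```

Needing 7 Gb/s and buying a 10 Gb/s link leaves 3 Gb/s of paid-for capacity sitting idle, which is exactly the headroom a hybrid P2P scheme could soak up.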

PS: I think you're misreading what I'm saying; this is not pure P2P, it's very much a hybrid model. Further, Netflix was seriously considering this approach in 2014, but stuck with a simpler model.



