Maybe if you had fiber directly connecting Ashburn to Pittsburgh, but it's more likely that you connect Ashburn -> Philadelphia -> Pittsburgh, which is more than double the physical distance.

Just looking at distances on a map is insufficient for actually characterizing the network path.

Also: bear in mind that you need to double all these numbers when considering round trip time. I recognize I phrased it in a way that might have been interpreted as one-way latencies, but that wasn't my intent.

Even if you had a direct path from Ashburn to Pittsburgh, light through fiber would take about 3.5 ms to travel 450 miles (there and back). And while you might expect that from just plugging numbers into an equation, I have never seen anything resembling 4ms RTT between DC and New York (which are a comparable distance apart) on Google's production network, even though those are definitely directly connected (6-7ms is more realistic).
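
For reference, here's a quick back-of-the-envelope sketch in Python (the 2/3 c velocity factor and the 450-mile round trip are the numbers above; the function name is just for illustration):

    C_KM_PER_MS = 299.792458        # speed of light in vacuum, km per ms
    FIBER_VELOCITY_FACTOR = 2 / 3   # typical velocity factor for silica fiber
    KM_PER_MILE = 1.609344

    def fiber_rtt_ms(one_way_miles):
        # Theoretical minimum RTT: round-trip distance at ~2/3 c.
        km_round_trip = 2 * one_way_miles * KM_PER_MILE
        return km_round_trip / (C_KM_PER_MS * FIBER_VELOCITY_FACTOR)

    print(f"{fiber_rtt_ms(225):.1f} ms")  # ~225 miles each way -> 3.6 ms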




> Just looking at distances on a map is insufficient for actually characterizing the network path.

I would expect big datacenters to usually have links nearby, and when I searched those two cities in particular, the top result was a news article about a fiber link between them.

> Also: bear in mind that you need to double all these numbers when considering round trip time. I recognize I phrased it in such a way that might have been interpreted as one-way latencies, but that wasn't my intent.

I know. I was accounting for that too.

> I have never seen anything resembling 4ms RTT between DC and New York (which are a comparable distance apart from each other) on Google's production network, even though those are definitely directly connected (6-7ms is more realistic).

How much of that is inside the datacenters? I would expect extra, slower hops to reach servers, compared to data bouncing from one trunk to another.


> I would expect big datacenters to usually have links around, and I searched those two cities in particular and there was a news article at the top about a fiber link between them.

That's a good point; I later looked at Lumen's network map (https://www.lumen.com/en-us/resources/network-maps.html) and saw there was a link between iad and pit. But even if you have a network link, you still need path diversity. I've seen an ISP do maintenance in Chicago and shut down all their peering with us in that metro; all the users we served there then transited peering in DC, their next closest peering point. Unsurprisingly, those users had a bad time.

> How much of that is inside the datacenters? I would expect extra and slower hops for servers as compared to data bouncing from one trunk to another.

We generally attribute less than 1ms to all the links between datacenters within the same metro area. Neither iad nor lga is an exception to this.

I ran a traceroute just now and the backbone hop between lga and iad was ~4.8ms. So, better than 6ms, but still not the 3.6ms you'd expect from 450 miles / (2/3 * c), and definitely not the < 2ms you claim. And we're certainly not transmitting this over copper, which would get you pretty close to full speed of light but at the cost of far lower bandwidth.
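
As a sketch, that gap can be expressed as path "stretch" over the theoretical fiber minimum (the 4.8ms measurement and the 450-mile round trip are the figures above):

    FIBER_KM_PER_MS = 299.792458 * 2 / 3   # ~2/3 c in fiber, km per ms

    def stretch(measured_rtt_ms, round_trip_miles):
        # Ratio of measured RTT to the theoretical fiber minimum.
        theoretical_ms = round_trip_miles * 1.609344 / FIBER_KM_PER_MS
        return measured_rtt_ms / theoretical_ms

    print(f"{stretch(4.8, 450):.2f}x")  # ~1.32x for the lga <-> iad hop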


This is where things stood in 2014. An update a decade later would be nice.

https://arxiv.org/pdf/1505.03449.pdf


Indeed. In practice, it looks like many things haven't actually changed, other than the numbers becoming even more extreme. Some things I noticed in the paper:

> Undercutting a competitor’s latency by as little as 250ms is considered a competitive advantage in the industry.

I'm pretty sure my director would tell you that number today is closer to 10ms.

> While ISPs compete primarily on the basis of peak bandwidth offered, bandwidth is not the issue.

As the submission makes evident (and you are well aware), this is still very much the case today.

> For instance, c-latencies from the Eastern US to Portugal are in the 30ms vicinity, but all transatlantic connectivity hits Northern Europe, from where routes may go through the ocean or land Southward to Portugal, thus incurring significant path ‘stretch’.

Sadly, this still holds today. Almost all transatlantic cables land in the UK or Ireland, although MAREA does land in northern Spain, and a couple of others are in flight.
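
You can roughly reproduce the paper's geodesic bound with a haversine sketch (the coordinates below are illustrative approximations for Washington DC and Lisbon, and I'm taking c-latency as RTT at vacuum speed of light along the great circle; it lands in the high 30s of ms for this pair, in the vicinity the paper describes):

    import math

    def haversine_km(lat1, lon1, lat2, lon2):
        # Great-circle distance between two points on Earth, in km.
        r = 6371.0  # mean Earth radius
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlmb = math.radians(lon2 - lon1)
        a = (math.sin(dphi / 2) ** 2
             + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
        return 2 * r * math.asin(math.sqrt(a))

    # Illustrative endpoints: Washington DC and Lisbon
    d = haversine_km(38.9, -77.0, 38.7, -9.1)
    rtt_ms = 2 * d / 299.792458  # RTT at c in vacuum
    print(f"{d:.0f} km geodesic, c-latency ~{rtt_ms:.0f} ms RTT")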

> Most routes to popular prefixes are unlikely to change at this time-scale in the Internet

Ha. Maybe at the level of ASNs, but we certainly perform traffic engineering on much smaller timescales (see https://www.cs.princeton.edu/courses/archive/fall17/cos561/p...).

Protocol improvements have definitely come a long way in the past decade. QUIC is now an IETF standard, with 0-RTT session resumption as you mention, as well as initial congestion window bootstrapping to reduce the number of round trips in slow start. But we haven't made much progress in many of the places the article points out are in need of improvement.

I think the focus on speed of light in vacuum and the development of a c-ISP is not that useful for discussing the internet backbone, at least until we have viable replacements for fiber that can satisfy the same massive bandwidth requirements. Even ignoring YouTube video serving, we still have many terabits of egress globally, so the 80Gbps capacity target is nowhere close to enough, even for 1% of our traffic in the US. That's barely enough to serve 100k qps of 100kB files. (A full page load of www.google.com with all resources clocked in around 730 kB transferred over the network, according to Chrome devtools. That's probably an argument for making our home page lighter, but more than 90% of that is cached for future requests.)
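
Spelling out that arithmetic, using the numbers above:

    # How many 100 kB responses per second fit in 80 Gbps?
    link_bytes_per_s = 80e9 / 8   # 80 Gbps -> 10 GB/s
    object_bytes = 100e3          # 100 kB per response
    print(f"{link_bytes_per_s / object_bytes:,.0f} qps")  # 100,000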

And a 2x peak-to-average ratio absolutely doesn't account for inorganic demand induced by events like the Super Bowl or a popular video game release (https://www.pcgamer.com/baldurs-gate-3-launch-slams-into-ste...).


2ms was for a single direction, so 4.8ms is pretty close.



