Hacker News

> (I will exclude the cost of generating the crypto)

The cost of generating crypto is very real when you're talking about single-digit ms latencies :( RSA-2048 TLS certs add about 2-3 ms to any connection, just on server-side compute, even on modern CPUs (EPYC Milan). (I believe a coworker benchmarked this after disbelieving how much compute I reported we were spending, and found that RSA-2048 signing is something like 40x slower than ECDSA P-256.)

> To go back to your HN example - HN loads fast because it is ONE IPv6 address (for me) and very lightweight so tcp slow start ramps up pretty darn fast, even going all the way to San Diego.

I used HN as an example not because it's bloated, but because its singly-homed nature illustrates how much content placement matters. Yeah, we could quibble about 80ms vs 65ms RTT from improving peering, but the real win, as I mentioned, was in server placement. Throwing a CDN or some other reverse proxy in front helps not just with cacheability and fanout of your content, but also with TCP termination near the users (which cuts down on those startup round trips). This is why I can even talk about Los Angeles serving www.google.com even though we don't have any core datacenters there that host the web search backends.
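To make the edge-termination win concrete, here's a toy cold-connection model (the 5 ms / 70 ms figures and the function name are hypothetical; it assumes 2 handshake round trips, i.e. TCP plus TLS 1.3, and a pre-warmed proxy-to-origin connection):

```python
def time_to_first_byte_ms(handshake_rtts: int, edge_rtt: int, origin_rtt: int) -> int:
    """Cold-connection TTFB when handshakes terminate at a nearby proxy.

    The TCP and TLS handshakes run over the short user<->edge path; the
    request itself still crosses to the origin, but over an already-warm
    proxy<->origin connection, so it pays only one origin round trip.
    """
    return handshake_rtts * edge_rtt + origin_rtt

# Hypothetical numbers: user is 5 ms from an edge POP, 70 ms from the origin.
print("direct to origin:", time_to_first_byte_ms(2, 70, 70), "ms")  # 210 ms
print("via nearby edge: ", time_to_first_byte_ms(2, 5, 70), "ms")   # 80 ms
```

Even with zero caching, moving just the handshakes close to the user cuts most of the setup cost in this sketch.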

(For what it's worth, I picked Chicago as a second location as "nominally good enough for the rest of the US". Could we do better? Absolutely. As you point out, major CDNs in the US have presence in most if not all of the major peering sites in the country, and either peer directly with most ISPs or meet them via IXes.)




I did slide toward mentioning QUIC there, in that the first-time crypto cost is gone there too (I rather like QUIC). But yes, crypto costs. The web got a lot slower when it went to HTTPS, I noticed.

A highly interactive site like HN is inconvenient to cache with a CDN (presently). Also, auth is a PITA. And HN, even at 70ms, is "close enough" to be quite enjoyable to use. On the other hand, most of my work has shifted into a Zulip chat instance, which feels faster than any web forum ever could.

It would be cool if Hacker News worked more like a modern chat app.


I agree that caching is difficult in this case, but even a local reverse proxy would eliminate most of the connection setup overhead: it cuts the round trips spent on short-lived user connections while keeping longer-lived connections to the origin open for data transfer (thereby also eliminating the cost of slow start -- though this HN thread is only ~1 extra roundtrip anyway, assuming an initial cwnd of 10 packets and ~44 kB on the wire, thanks to text being highly compressible).
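The slow-start arithmetic can be sketched like this (a toy model: it assumes ~1460-byte segments, an initial window of 10 segments per RFC 6928, and cwnd doubling every round trip; real stacks with pacing, delayed ACKs, and loss are messier):

```python
MSS = 1460  # typical TCP payload per segment, bytes
IW = 10     # initial congestion window (RFC 6928), segments

def round_trips_to_send(total_bytes: int, cwnd: int = IW) -> int:
    """Flights (round trips) needed to push total_bytes during slow start."""
    rtts, sent = 0, 0
    while sent < total_bytes:
        sent += cwnd * MSS  # one flight of cwnd full-size segments
        cwnd *= 2           # slow start doubles cwnd each RTT
        rtts += 1
    return rtts

# ~44 kB on the wire: 10 segments (14,600 B) go out in the first flight,
# 20 more (29,200 B) in the second -- two flights total, i.e. one extra
# round trip beyond the first.
print(round_trips_to_send(43_800))  # → 2
```

Which is why "highly compressible text" matters so much here: at 440 kB instead of ~44 kB, the same model needs several additional round trips.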

TLS 1.3 also eliminated one of the round trips associated with HTTPS, so things are getting better even for TCP. And yeah, most of the time we think of the cost of crypto as the extra network round trips, which is why I pointed out that RSA is expensive (not to mention a potential DoS vector!) -- at some point your total TLS overhead starts to be dominated by multiplying numbers together.
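A rough round-trip budget shows what that eliminated handshake trip buys on a fresh connection (a sketch: it counts a full TLS 1.2 handshake at 2 RTTs vs TLS 1.3 at 1 RTT, and ignores resumption, 0-RTT, and TCP Fast Open):

```python
# Full-handshake cost in round trips, by TLS version.
HANDSHAKE_RTTS = {"1.2": 2, "1.3": 1}

def first_byte_rtts(tls_version: str) -> int:
    """Round trips from TCP SYN to first HTTP response byte, cold connection."""
    tcp = 1                              # SYN / SYN-ACK
    tls = HANDSHAKE_RTTS[tls_version]    # TLS handshake flights
    request = 1                          # HTTP request -> response
    return tcp + tls + request

rtt_ms = 70  # e.g. HN at ~70 ms, per the comment above
for v in ("1.2", "1.3"):
    print(f"TLS {v}: {first_byte_rtts(v) * rtt_ms} ms")  # 280 ms vs 210 ms
```

At a 70 ms RTT, that single saved round trip is a quarter of the total connection setup time, before the server multiplies a single number.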

(I like QUIC too! I appreciate that it's been explicitly designed to avoid protocol ossification that's largely stalled TCP's evolution, and so it's one of the places where we can actually get real protocol improvements on the wider internet.)


Geoff's talk was really good. I hope discussion starts over there: https://news.ycombinator.com/item?id=39561073



