60 ms @ 1 Gbit/s (~120 MiB/s) -> 7.2 MiB TCP buffer size required for bandwidth saturation. That’s more than the max buffer size a run-of-the-mill modern OS gives you by default, meaning your download speeds will be capped unless you fiddle with sysctl or equivalent. And that’s within the same continent, assuming minimal jitter and packet loss.
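Back-of-the-envelope, the bandwidth-delay-product arithmetic looks like this (the sysctl value in the comment is just a typical example, not a guaranteed default):

    # Back-of-the-envelope bandwidth-delay product (BDP).
    # The TCP window (and thus the socket buffer) must cover at least one
    # BDP of in-flight data to keep the link saturated.

    def bdp_bytes(bandwidth_bits_per_s: float, rtt_s: float) -> float:
        """Bytes that must be in flight to fill the pipe for one RTT."""
        return bandwidth_bits_per_s / 8 * rtt_s

    rtt = 0.060                # 60 ms round trip
    bandwidth = 1_000_000_000  # 1 Gbit/s

    print(f"BDP: {bdp_bytes(bandwidth, rtt) / 2**20:.1f} MiB")  # ~7.2 MiB

    # Compare with the max receive buffer on a typical Linux box, e.g.:
    #   sysctl net.ipv4.tcp_rmem   -> often something like "4096 131072 6291456"
    # A ~6 MiB max caps a single 60 ms stream below 1 Gbit/s.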
Another big one is optimizing applications for the number of round trips, which most people don’t do, and which can be surprisingly hard to do.
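To make the round-trip point concrete, here’s a toy cost model (illustrative numbers only, ignoring bandwidth and handshakes) showing why chatty protocols that feel fine on a LAN fall apart as RTT grows:

    # Toy cost model: time for an operation that needs N sequential round trips.

    def total_time_ms(rtt_ms: float, round_trips: int) -> float:
        return rtt_ms * round_trips

    for rtt in (1, 20, 150):       # LAN, same continent, intercontinental
        for trips in (1, 5, 30):   # one batched call vs. many dependent calls
            print(f"RTT {rtt:>3} ms x {trips:>2} trips = {total_time_ms(rtt, trips):>5.0f} ms")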
I am a throughput freak, but you’d be surprised at the importance of latency, even (especially?) for throughput, in practice. It’s absolutely not just a “real-time” thing.
If you’re on Linux, you can use ‘netem’ to emulate packet loss, latency, etc. Chrome dev tools have something similar too; it’s an eye-opening experience if you’re used to fast, reliable internet.
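As a rough sketch, something like this applies and clears a netem impairment (assumes the iproute2 ‘tc’ binary, root privileges, and an interface named eth0; the delay and loss numbers are arbitrary):

    # Minimal netem wrapper for emulating a slow, lossy link on Linux.
    # Shapes egress traffic on the chosen interface.
    import subprocess

    IFACE = "eth0"  # hypothetical; check `ip link` for your interface name

    def add_impairment(delay_ms=100, jitter_ms=20, loss_pct=0.5):
        subprocess.run(
            ["tc", "qdisc", "add", "dev", IFACE, "root", "netem",
             "delay", f"{delay_ms}ms", f"{jitter_ms}ms",
             "loss", f"{loss_pct}%"],
            check=True,
        )

    def clear_impairment():
        subprocess.run(["tc", "qdisc", "del", "dev", IFACE, "root"], check=True)

    if __name__ == "__main__":
        add_impairment()
        # ... run your download or speed test here and watch throughput drop ...
        clear_impairment()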
> 60 ms @ 1 Gbit/s (~120 MiB/s) -> 7.2 MiB TCP buffer size required for bandwidth saturation. That’s more than the max buffer size a run-of-the-mill modern OS gives you by default
Windows was quite happy to give me a 40,960 * 512 = 20,971,520 byte TCP window for a single stream speed test from mid US to London, run via Chrome. Linux is the only one I've noticed with stingy max buffers. I never really understood why user-focused distros like to keep the max so limited when there are plenty of resources.
That’s great to hear! Thanks for the data point; I hope this is more representative. I’ve encountered Windows (10) instances in the wild with auto-tuning turned off (64k max).
> I never really understood why user-focused distros like to keep the max so limited when there are plenty of resources.
Yeah, agreed. That is way too conservative, especially for an issue which most people can’t easily triage.
You have around 150 ms of one-way latency before perceptual quality starts to nosedive (see https://www.itu.int/rec/T-REC-G.114-200305-I). This includes the capture, processing, transmission, decoding, and reproduction of signals. Throw in some acoustic propagation delay at either end in room-scale systems and milliseconds count.
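A quick illustrative latency budget (all component numbers below are assumptions, not measurements) shows how fast that ~150 ms disappears:

    # Rough mouth-to-ear latency budget. ITU-T G.114 suggests keeping one-way
    # delay under ~150 ms; the component values here are assumed for illustration.

    SPEED_OF_SOUND_M_S = 343.0

    def acoustic_delay_ms(distance_m: float) -> float:
        return distance_m / SPEED_OF_SOUND_M_S * 1000

    budget_ms = 150.0
    components = {
        "capture + encode":       20.0,                    # assumed
        "network one-way":        60.0,                    # assumed
        "jitter buffer + decode": 40.0,                    # assumed
        "speaker to ear (3 m)":   acoustic_delay_ms(3.0),  # ~8.7 ms
    }

    total = sum(components.values())
    for name, ms in components.items():
        print(f"{name:>24}: {ms:5.1f} ms")
    print(f"{'total':>24}: {total:5.1f} ms of {budget_ms:.0f} ms budget")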
>but outside of pro gaming and HFT
But that only allows for current technologies/consumer habits and not the future.
Gaming explodes into online VR, where latency is incredibly noticeable. I still think virtual shopping will become a thing: everyone buys crap online now, but it's nice to be able to "see" a product before you buy it.
As we start getting autonomous vehicles, delivery drones, etc., they can all make use of the lowest-latency link possible.
Lower latency also usually (but not always) means lower power usage, a.k.a. the "race to sleep".
It would also enable better coordination between swarms of x, whether it's the aforementioned delivery drones, missile defence systems (which launch site is better to launch from, stationary or mobile), etc.
But there's also just human ingenuity and invention: at the end of the day, we should always try to make it faster simply because we can.
I'm sure plenty of money will go towards trying, but I hope that virtual shopping does not become a thing. Especially with respect to clothing, there is simply too much information that cannot be communicated visually: the texture of the fabric, its density, its degree of stretch, how the item fits on your body, how it looks in different lighting, etc. Online shopping also makes it easy to scam people, where some cheap, mass-produced thing is misrepresented as a higher-quality item.
Oh no, I definitely will still always buy clothing IRL. The biggest part is that there's no agreed way to do sizing across companies; even something simple like shoes can nominally be, say, a UK size X, but the interior shape can still change.
But for much else I don't mind at all; people buy tonnes of stuff off Amazon based on 2D pics already.
Seeing the submarine cable maps for Hawaii can be a bit misleading. If I'm not mistaken, a lot of the fibre capacity simply bounces through Hawaii and isn't trivial for the islands to hook into. I don't believe Hawaii itself is any better connected to the internet than, say, New Zealand.
You are historically accurate. For a long time there was a lot of fiber that physically hopped through Hawaii, but didn’t actually talk to any gear locally (switches/routers). It was just an optical landing/regen station.
These days I believe there is less “optical only regen” and more IP-connected links.
Thanks for the clarification. My point was: your latency will probably be more affected by the number of nodes between you and the destination server than by the distance to that server.
New York to LA is ~60msec
New York to Hong Kong is ~250msec
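Assuming those are round-trip times, a quick sanity check against the speed of light in fiber (~200,000 km/s) with rough great-circle distances shows how much of the measured latency comes from routing and hops rather than pure distance:

    # Compare measured RTTs with the physical floor set by light in fiber
    # (~200,000 km/s, roughly 2/3 of c). Great-circle distances below are rough
    # assumptions; real cable paths are longer still.

    FIBER_KM_PER_S = 200_000

    routes = {
        # route: (approx. great-circle distance in km, measured RTT in ms)
        "New York -> LA":        (3_940, 60),
        "New York -> Hong Kong": (12_960, 250),
    }

    for name, (dist_km, measured_ms) in routes.items():
        floor_ms = 2 * dist_km / FIBER_KM_PER_S * 1000  # straight-line round trip
        print(f"{name:<23}  floor ~{floor_ms:3.0f} ms, measured ~{measured_ms} ms")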