
I still cannot fathom why the wifi handshake process takes more than a millisecond. I have no problem with the idea that wifi signals should be unreliable since the frequencies interact with typical everyday objects like walls, but the connection strength should be known on sub-second timescales.

This Quora thread is a good example of the typical non-answers provided to this question.

https://www.quora.com/Why-does-it-take-so-long-to-establish-...



Time-division multiplexing. The router has to wait for everyone else to stop using the band—and then take that opportunity to tell them all to shut up for a moment—before it can say anything. And it has to finish what it was saying before it can even do this, because it has guaranteed its clients it would send packets of certain lengths by certain deadlines. (Plus QoS: as the person owning a router, you almost always want your established wi-fi connections to take service priority, so that your phone coming gradually into range as you get home doesn't kill your family's downloads.)
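To make the contention point concrete, here's a toy model of 802.11-style CSMA/CA: every frame the joining station sends (probe, auth, assoc, etc.) first waits for the medium to go idle, then backs off a random number of slots, with the contention window doubling on each collision. The constants and traffic model are illustrative, not taken from the spec:

```python
import random

SLOT_US = 9            # 802.11a/g/n slot time, microseconds
CW_MIN, CW_MAX = 15, 1023  # contention window bounds

def backoff_delay_us(collisions):
    """Random backoff before one transmission attempt."""
    cw = min(CW_MAX, (CW_MIN + 1) * 2**collisions - 1)
    return random.randint(0, cw) * SLOT_US

def time_to_send_us(busy_fraction=0.9, p_collision=0.5):
    """Crude estimate of time until one frame gets through on a busy band."""
    total, collisions = 0, 0
    while True:
        # wait out other stations' in-flight frames (~1 ms average here)
        total += random.expovariate(1 / 1000) * busy_fraction
        total += backoff_delay_us(collisions)
        if random.random() > p_collision:
            return total
        collisions += 1  # window doubles on the next attempt

random.seed(0)
samples = sorted(time_to_send_us() for _ in range(1000))
print(f"median per-frame delay: {samples[500] / 1000:.2f} ms")
```

Even in this pessimistic toy, each frame costs on the order of milliseconds, and a full handshake is only a handful of frames—so contention alone doesn't obviously account for seconds, which is sort of the parent's point.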

Add more antennas, and you can say more things on more channels at once. "Enterprise-grade" multi-antenna routers (e.g. routers in conference halls) negotiate new connections much more quickly.

Cell towers are essentially a whole bunch of antennas (or a few antennas in highly-MIMO setups, which amounts to the same thing), so there's comparatively next to no latency for sending "unexpected" packets to the tower.

(Establishing connectivity with a new cell is still hard, though, since it takes a moment for a handset to get the tower's attention when the handset doesn't yet know where precisely the tower is for MIMO-beamforming to kick in. This is ameliorated by many cellular ISPs mirroring connection state between neighbouring cells, so that when you move from one cell to another, the new tower is already "expecting" you.)


You have added to the list of differences between wifi and other standards, but you have not actually shown where the timescale comes from starting from first principles. Why 10s to connect? Why not 10 ms or 10 minutes?


I'm not sure why it's 10s; my point was mostly that, once you figure out exactly what part of the connectivity process eats up the majority of the 10 second delay, it'll be much easier to figure out the why.

Since connectivity takes (far) less than 10s on enterprise-grade multi-antenna routers, all that time is being spent in some component that's different between the two classes of hardware. So looking at the difference in BOM between average products of the two classes might be helpful in figuring things out.
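One way to attack this empirically is to timestamp each phase of the client-side connection sequence and see which one dominates. A minimal sketch—the phase names follow the usual 802.11 + DHCP order, but the bodies are hypothetical `time.sleep` stand-ins, since the point is only the instrumentation:

```python
import time

# Stubbed-out phases of a Wi-Fi connection; replace each body with real
# measurements (e.g. from supplicant logs) to find where the time goes.
def scan():               time.sleep(0.01)  # channel sweep
def authenticate():       time.sleep(0.01)  # 802.11 open auth
def associate():          time.sleep(0.01)  # association request/response
def four_way_handshake(): time.sleep(0.01)  # WPA2 key exchange
def dhcp():               time.sleep(0.01)  # address assignment

phases = [scan, authenticate, associate, four_way_handshake, dhcp]

timings = {}
for phase in phases:
    t0 = time.perf_counter()
    phase()
    timings[phase.__name__] = time.perf_counter() - t0

for name, dt in timings.items():
    print(f"{name:20s} {dt * 1000:7.1f} ms")
print("slowest phase:", max(timings, key=timings.get))
```

Running something like this against real hardware (consumer vs. enterprise AP) would at least localize the delay, even if it doesn't explain it.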


Right, but there are multiple accounts of this floating around the web, many contradicting each other, and none that I've found that can explain what sets the actual timescale.


I should have mentioned this earlier, but I totally agree, and I'm not sure why these things happen on human timescales. =/. Looks like no one answered with a detailed-enough response to understand it 100%, but maybe that's an opportunity for a good blog post by an expert. (and hackernews karma from me!)

Otherwise, I hope things have been going well, btw =).



