This is what I did when running the WiFi at PyCon. For 5GHz we generally had enough channels that we could run higher power, but for 2.4GHz I would run it on the lowest power settings.
I'd also set the APs on the tables, rather than trying to elevate them, so that the bodies would limit propagation.
Whenever someone asked, which happened at least a couple times each conference, I explained it like this: Imagine you have 10 groups in the ballroom all trying to have discussions. Turning the power up is like giving each group a bullhorn: the whole room becomes a noisy mess. Lower power is like each group gathering close and talking at normal volume.
It worked well, mostly because the venue-provided wireless setups at the time were all trying the "few, big AP" solution. More details are in a series of blog posts I did, starting with: https://www.tummy.com/articles/pycon2007-network/
I've heard that at tech conferences like CCC, instead of using a few big powerful APs at central and elevated locations, they put lower-power units in between the audience's seats. Humans, being bags of water, are good at damping signals. So you end up with a lot of small network cells with high throughput that don't interfere with each other, compared to a few big central cells where thousands of devices fight for the same frequency slots.
Yep, I think this is a very good idea too. It is a way of reusing existing spectrum space. Besides, small cell networks use low power, and being portable they can be deployed on demand.
In fact, small cell networks might be how networks are deployed in the future. The website below has some interesting research on this topic.
Best thing I did was wire my house with Cat6, so my TV, computer, server, security cam, NAS and printer are all wired. It's so worth the effort. Wifi is now just for a rarely used laptop and phones.
I had everything wired until a lightning strike took out everything connected to the router (tv, avr, PC). Now everything that can be connected by Wi-Fi is. I also added a grounding rod to the coax service entrance, knock on wood...
Yeah I tried that route after losing equipment over my broadband cable connection. Didn’t help. Next major lightning strike and I lost more Ethernet equipment.
I’ve since isolated my equipment from my ISP’s with a pair of fiber adapters. I’m also on AT&T fiber now instead of coax (but the ONT is outside and Ethernet runs from it to the AT&T RG inside).
Ethernet ports seem to be really sensitive (HDMI too) and it’s possible a current is being induced in them from some other path. I’ll find out this summer. (I live in NC which has the second highest number of lightning strikes in the US after FL.)
Edit: just realized you wrote lightning rod. You actually would need a lightning protection system, which is multiple rods properly installed and grounded. It isn't a DIY thing to install, and I've been told it is not cheap. It also probably doesn't help with nearby strikes; it only keeps your house from burning down from a direct strike.
I’ve also installed a couple type 2 surge protectors.
Nope, two-story home w/basement built in 2004. It's at the high point in the neighborhood. I've had the grounding rod checked. I think I've just been really unlucky. I've lost networking gear two or three times (which I believe was actually via the cable modem), and the most recent strike 2 years ago took out circuit boards in an A/C unit, a separate air handler/furnace, one of my garage door openers, and an arc-fault circuit breaker in one of my panels. That last strike was the first time I had more than a few hundred dollars of damage, and enough to claim against my homeowner's policy.
This is since 2004. And we get a lot of lightning storms here in the summer. I grew up in S Florida and I don't recall getting lightning storms as frequently or as violently as here.
If you buy a house before wiring for network was common, you can check the phone outlets. A lot of homes use Cat 5 for the phone cable so those outlets can be repurposed for networking.
Another (albeit terrible) solution is HomePlug[0]. I once had broadband installed in an apartment and was provided with a couple of cheap "ethernet over power" wall warts. I think the tech has improved, but they seemed to get a massive amount of interference (I think they capped out at like 65 Mbps?). I think I lost connections whenever the microwave was turned on.
My house's wiring is pretty new, and although my homeplug units are advertised to run at up to 500 Mbps, I get about 300 Mbps out of them, which isn't terrible.
I have the cat5e throughout the house, but in a ring topology, as it was wired for phones only. Right now I just have two points connected end to end. The next step is to think of a clever way to use the 2 unused pairs to support another 100BASE-T link and create a fake hub-and-spoke topology. The G.hn standard might be a good way to get even more performance out of this setup.
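For reference, this works because 10/100BASE-T only uses two of the four pairs, leaving the other two free (standard T568B numbering, stated from memory, so double-check before punching down):

    Pins 1,2 (orange pair) and 3,6 (green pair): used by 10/100BASE-T
    Pins 4,5 (blue pair)   and 7,8 (brown pair): spare; enough for a second 100 Mbps link

Gigabit needs all four pairs, so each link sharing a cable this way is capped at 100 Mbps.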
This was one of my highest priorities after buying my current house and it was well worth the effort.
The cable line in my livingroom connects to a modem. The modem feeds into a wifi router so that I get a good signal throughout most of the house. One of the ports in the router's on-board switch feeds into a wall outlet, which connects to a master switch in the basement. I could never go back to pure wifi.
What I find strange is that, like the OP said, the original network was set up for phone outlets... but the house was only a couple years old. Are RJ11 phone networks really still a bigger selling point than RJ45 switched networks? With the high adoption of cellphones and the increasing demand for internet-connected devices, it seems like RJ45 should be standard.
I recently bought a house built in the 1960s. It's surprisingly easy to add Cat6 to an older home as the interior walls aren't insulated. I've been adding drops as needed and it takes me about 30 minutes start to finish to run a new drop.
I'm out the cost of materials (maybe $200?) and the price of having an electrician add an outlet to my "MDF"/hall closet ($175).
Don't forget about powerline-to-Ethernet adapters, especially if you only have a small number of devices or don't need full gigabit bandwidth. Experience varies, but if the two endpoints are on the same breaker you can get >100 Mbps, and if it has to cross breakers you can (potentially) still get over 1 Mbps, which is enough for streaming and basic surfing. Much easier than running wire or messing with drywall if you're not handyman-inclined.
If that doesn't work, and you either don't look up often or you don't mind a little ugliness, I've found running cable along the tops of walls near the ceiling works pretty well, and is easy to do. Here's part of my home network cable run and also part of the speaker cable run for the right rear surround sound channel [1].
Where a cable needs to make a turn to follow a corner, I screw in a cup hook. For support along a straight section of wall I nail a wire nail partly in and hang the cable on the protruding part.
If you want to do the work, adding in crown molding after running some cat6 can make the room look much nicer if the style fits. You can then drill a hole in the drywall at the top of the wall, drop the cable down between the studs and install an actual outlet. Crown molding usually has a decent amount of space to run a few cables.
As others have said, crown molding can help that. Also, there are flat cables that can make running cable between baseboard and carpet (and even under carpet) virtually invisible.
Staples leave small holes when removed which are easy to patch. You can also use many of them to get nice straight and tight runs. You can put a bit of white (or whatever color your walls are) paint on them to help blend in.
It's not ideal aesthetically, but nice straight lines and right angles go a long way to making it look better. The next step up would be to run some channel to hide the wires, but I don't find that necessary.
Careful with the staple gun. Some of them are more powerful than you might think, and are potentially capable of pushing the staple through the insulation and providing a nice conductor to connect the wires. It has happened to me.
That’ll depend on how or if they daisy chained the phone jacks. You should be able to get at least one Ethernet connection (two ends) but beyond that you might have to get creative.
If you put your modem at the same jack where the first tee in the line is, you should be able to get at least two connections, but for more than that you may need a switch at each tee.
Don't forget Ethernet over Power... it works great in some houses and you don't need to lay out cables. But if you can, nothing beats wiring the whole house.
I thought about doing either powerline networking or CATV cable. We have CATV to every room and aren't using it. After some research, the speed and reliability seemed quite questionable.
I'd prefer to run Ethernet, but one portion of my house, the part that has most of the devices, is really hard to wire to my standards. I should probably just find a pro to do that run, or bite the bullet and either do some drywall work or put up some crown molding to get that run.
I'm afraid I won't know where to stop though. :-) If I'm gonna do drywall work in the spare bedroom, I might as well pull it ALL down. If I do that, I might as well run some more power circuits, seal the HVAC ducting. I might as well remodel that closet. Which might mean taking space from the closet for the basement bathroom and laundry. etc...
I use MoCA (over CATV) and it's really solid for me. Prices have come down; you can get Motorola devices that support bonded MoCA 2.0, which can provide around 800 Mbps, for $60. Latency is in the single digits with minimal jitter, compared to powerline, which doesn't play nice with the AFCI circuit breakers required by code in most US cities now, and also has pretty bad latency and jitter.
I also made sure to use a proper MoCA cable splitter, as well as installed terminators on the unconnected end points.
Which single digits? Single digit seconds? :-) Milliseconds? Microseconds? Just wondering... Looks like my local gigE at work is 0.3ms machine-to-machine, for comparison. And at my data center, pinging google.com is 0.6ms. :-) At home I have nothing close to that though.
Anyway: This is a fascinating idea. I had looked a year or two ago and the options didn't seem as good. What adapter are you using? I'm seeing mostly an Actiontec for around $90 each. I need to check my head end, but this might just be plug-and-play to reach 2 of my hardest to cable locations. I started thinking about just running some fiber along the CATV runs (along the outside of the house), but that'd be more of a pain. Especially getting the terminated ends through the wall. I'd want to run fiber since it'd be exposed to possible lightning, but I guess the CATV cable is similarly exposed. Hell, maybe I should just run some shielded cat6...
Anyway, I might just try this MoCA 2.0, thanks for that pointer.
Single digit milliseconds. Unfortunately anything ethernet over X will have latency due to the underlying medium.
I don't recommend Actiontec, I had a fairly expensive adapter go bad on me after about a year. I replaced the other ones with Motorola MM1000, which are much more reliable in my experience, and I found for $60.
Thanks, I saw some references to the Motorola, and I've had pretty good luck with their gear in the past (15 years of Surfboards after a cable tech recommended it as having fewer issues than the others). But when I searched pretty much all that came up for my string was the ActionTec.
Nah, you're totally normal. Renovating is like opening a can of worms. No matter what happens, the scope of the project will always balloon once you find out what's inside the walls.
Stuff like this, water damage and insulation make me think we are putting walls together wrong. I have no idea what the solution is but this is the pits.
Well, old buildings follow old building code or none at all. There are still a lot of 100 year old houses here that were built without insulation, and a void is gonna look damned attractive to wasps, bees, and smaller birds.
Interior finish work is expensive; doing it over to put in insulation is even more so. And to add insult to injury, the low R-value of walls is a data point used by siding installers to try to talk you out of ripping the old stuff off before installing new. Each layer has an R-value, dontchaknow...
MoCA adapters are much more expensive compared to typical Ethernet adapters in general but work in a pinch. I've moved frequently and have lived in older houses so I'm now armed with two different powerline Ethernet adapters, a pair of bonded MoCA 2.0 adapters (one is already included if you have a STB from a cable provider technically but the Ethernet port on it may not be a passthrough or bridge connection), and a Ubiquiti based setup. While the best option outside of direct CAT5+ cabling is the MoCA adapter setup, I'm disappointed that in most older homes with pretty awful wiring I can still only get 600 Mbps maximum. I know it's the house's cabling because in my previous house I used the same adapters and got a clear 850 Mbps for a slightly longer cable distance. Sometimes a signal booster [1] helps but tends to boost the wrong frequencies if you buy the cheaper ones that were meant for older, analog broadcast in a home.
Gah, don't do that if you value your RF spectrum. Those types of devices should be banned, as they turn the untwisted electrical lines in your house into a giant set of antennas.
If you have any ham radio neighbors ethernet over power is basically a jammer.
Do they increase the actual power emitted or just create interference? I.e., should one be concerned health-wise if many of those are used? (Assuming one believes RF at these frequencies is harmful.)
One time I tried using these. The internet worked fine but connections to other things on my network was very intermittent. I found out that I was switching between my own router and my neighbor's. The signal was traveling out of my house and into my neighbor's over the power lines.
> The signal was traveling out of my house and into my neighbor's over the power lines. I do not recommend Ethernet over Power any more.
For what it's worth, you could turn on the built-in encryption within most Powerline adapters. (Usually a button labeled "Security" or similar). Then you can leak anything you want, and still be reasonably safe.
I don't understand; what you're saying doesn't really make sense. Were they also using Ethernet over power devices? Even so, they typically have a setup process that negotiates the encryption keys between devices in your own home.
Used this for a couple years since the wireless signal was weak where my room was, and a 500 Mbps device was decent. Though a couple times a week it would randomly drop out for several minutes, which required unplugging/replugging the device. Not sure if it was due to the wiring of the house or what; it seems like it'd be a hard problem to troubleshoot. The Powerline standard has only gotten better with time, though. As little as ~7 years ago there were only 50 Mbps devices; they now have gigabit-level ones.
In my experience this works worse in newer houses with modern (more complex, less transparent) electrical systems, and at best you get sub-WiFi speeds.
You have little to lose by trying, but there’s no guarantee it will work at all, even less work well.
I think you do best to have just two stations in a powerline network situation because the contention and overhead of contention control is much less. (Compare that to gigabit ethernet which is full duplex and switched so there is little or no contention)
You can connect a "distant" powerline NIC to an ethernet switch and plug multiple devices into that if they are close together, that works better than plugging in multiple powerline NICs.
Unfortunately new houses have circuit breakers that detect broadband noise caused by arcing. These are good for catching fires early, but will blow if you use powerline NICs.
I have an older house that has been expanded several times, with some rather amateurish wiring. One outlet works, the one next to it doesn't. Slow speed, interruptions that require resets...
Google wifi pucks work much better, in spite of the cinder block walls.
Like PaulHoule, I have an old house, but with a new electrical system. It was proudly rewired by yours truly, with GFIs and AFCIs all over -- and an added subpanel.
Anyways, I use a set of powerline adapters with a second wifi router at the other end because the walls in my house are very thick (the studs are covered in shiplap on both sides). The throughput is fast enough to stream video on the other end, but I haven't put it to any measured test.
In the summer, I often take the whole setup to one of the outdoor outlets, so that we have solid wifi on the deck. So it can work quite well. Whether it will in any given circumstance is uncertain, I suppose.
The theoretical throughput is lower, but I found the real-world performance for network filesystem access specifically was wildly superior. Never determined exactly why (latency? dropped packets?), but 10-20 Mbps powerline links mopped the floor with 200 Mbps wifi links, even with only a two-foot airgap.
(These weren't label-advertised speeds, these were the negotiated links)
Watch out for latency issues if you use Sonos and perhaps other wireless speaker systems.
I used ethernet over power myself for several years when I rented a house, and it worked well for me. Recently, a friend was having wifi trouble (old house, thick plaster faraday cage walls). I suggested ethernet over power, and it worked great with one exception:
They have Sonos speakers, and Sonos just would not work with it. They apparently are very sensitive to latency. The ping times were roughly 50ms (from memory) between the 2 ethernet over power adapters. I guess Sonos takes synchronization quite seriously.
What sort of PoE adapters was your friend using? PoE adapters should add minimal latency as they just use a center tapped ethernet transformer to separate the DC power from the AC ethernet signal.
Inside my house, on 1/3 of an acre (so not up against neighbors), the strongest 2.4GHz signals I see are from 2-3 HP printers in houses around me. That includes beating out my own 2 Ubiquiti APs that are inside my house.
I even noticed that I have a device screaming on 5GHz: an Nvidia Shield. AFAICT, I cannot turn that off, even if I disable wifi.
These printers are a bane of my wifi existence too. I live in a suburban neighborhood with overall very little 2.4GHz interference, but there are three HP printers in range which seem to absolutely BLAST their signals.
One of my clients has a Wemo light switch that is stuck blasting its wifi on what must be illegal levels of power, as I can pick it up through two brick walls, across the street, and in another home(!).
Also, none of the other devices that share the same room as the dimmer can use anything on the 2.4GHz spectrum. It's almost impressive how effective a jammer it is.
>Also, none of the other devices that share the same room as the dimmer can use anything on the 2.4GHz spectrum. It's almost impressive how effective a jammer it is.
If it's really as bad as you say it is, you need to report it to the FCC (or other local governing body)
I once worked somewhere that purchased an individual printer for every user. Talk about a nightmare in costs and consumables. The idea was people would gather around shared workgroup copiers/printers like a watercooler and waste time.
Anyways, having hundreds of these in an office led to some interesting WiFi scans.
The Shield uses WiFi Direct to connect to controllers, so it'd pretty much kill the point of the device if you were to fully disable WiFi Direct on it.
Can't this be fixed with smarter software in the AP that can:
* Dynamically adjust the power level to match the furthest client
* Boost the power level on an interval to check if there are clients further away waiting to connect
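Roughly the loop I'm imagining, as a Python sketch. (`weakest_client_rssi` and `set_tx_power` stand in for a hypothetical driver API; consumer gear generally doesn't expose this, and the numbers are just plausible placeholders.)

    import random
    import time

    TARGET_RSSI = -67        # keep the weakest client at or above this (dBm)
    MIN_TX, MAX_TX = 2, 20   # the radio's transmit power limits (dBm)
    PROBE_EVERY = 30         # cycles between brief full-power "discovery" boosts

    def weakest_client_rssi():
        """Hypothetical driver hook: RSSI of the weakest associated client."""
        return random.uniform(-80, -50)  # stubbed with fake data for illustration

    def set_tx_power(dbm):
        """Hypothetical driver hook: apply the new transmit power."""
        print(f"tx power -> {dbm} dBm")

    tx = MAX_TX
    for cycle in range(1, 301):
        rssi = weakest_client_rssi()
        if rssi < TARGET_RSSI:
            tx = min(MAX_TX, tx + 1)   # furthest client struggling: step up
        elif rssi > TARGET_RSSI + 5:
            tx = max(MIN_TX, tx - 1)   # plenty of margin: step down
        if cycle % PROBE_EVERY == 0:
            tx = MAX_TX                # periodic boost so distant clients can find us
        set_tx_power(tx)
        time.sleep(0.01)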
Additionally, what do mesh Wi-Fi networks do when clients are holding onto a connection? Are they smart enough to know that another node has a stronger signal to the client, and trigger a disconnect from the client's current node so that it can associate with the stronger one?
We can do better than that - beamforming. Better 802.11ac routers use a phased antenna array to change their radiation pattern, steering the RF energy in the desired direction.
With MU-MIMO, it technically changes from a hub to a switch - MU-MIMO prevents spatial collisions, allowing the router to transmit to multiple stations simultaneously.
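The steering math itself is simple: delay each element so the wavefronts add up in the chosen direction. A toy calculation for a 4-element linear array (half-wavelength spacing is a common choice; channel 36's center frequency is used as an example):

    import math

    c = 3.0e8                 # speed of light, m/s
    freq = 5.180e9            # 5 GHz channel 36 center frequency, Hz
    lam = c / freq            # wavelength, ~5.8 cm
    d = lam / 2               # element spacing
    theta = math.radians(30)  # desired beam direction off boresight

    # Phase offset per element so emissions add coherently toward theta
    for n in range(4):
        phase = 2 * math.pi * n * d * math.sin(theta) / lam
        print(f"element {n}: {math.degrees(phase) % 360:6.1f} deg")

With half-wavelength spacing and a 30-degree target, that works out to a clean 90 degrees of extra phase per element.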
As whatshisface alludes to, a wireless router acts as both an access point and a router.
To be fair a lot of people only have a cable box (one ethernet port) and plug everything into their wifi router, which is also the DHCP server, firewall, etc.
I'd guess this is the most common setup for residences; it requires next to no technical skill other than plugging in the cables. Should definitely count as a router here, no?
Not that I lack the technical skills to do otherwise, but my ISP's router works remarkably well (dual-band 802.11ac, IPv6, no packet loss, 4 ethernet ports, ...), so it's all I need.
And if I have a weird issue, that's a single device that I need to reboot (and I can even do it remotely).
Same situation for me too. I've done a bunch of networking stuff before with pfsense and plan to again, but I just moved and wanted something working. And it works great!
To be honest, it's all most residences need, even if you're a technical person. Half of common home devices (Xbox, printers, laptops, etc.) have wifi, so for most people 4 ethernet ports is all you need (if any). It's almost zero setup other than changing the passwords, and it works.
The only reason why it wouldn't work for you is if you have many wired devices, or want to get into the networking stuff.
I'm going to go out on a limb and say that most people don't need any ethernet ports. Most people just have a phone and maybe a laptop, tablet, printer, or smart television, all of which function perfectly adequately over wireless.
Even I don't bother with ethernet. There's no point; with <10 Mbps Internet speeds, too slow is too slow. The only thing plugged into the router is a second router for the detached garage/shop. Occasionally, there's need to transfer large files between two computers (games which can take multiple days to download), but temporarily stringing an ethernet cable directly between them does the trick.
Technically that power strategy works, but in the real world clients don't want to wait a long interval to connect, and boosting the power on a short interval creates more RF chaos than it solves.
There are many standards to assist in roaming, such as 802.11k, v, and r (and 802.11s particularly for mesh nodes). In the end, the strategy used in Wi-Fi is that the end station is the one that decides when to roam and why. Again, it comes down to the client knowing more about what it could connect to, and when it makes sense to roam, than the AP possibly could. The AP can just send hints that it'd like the client to move off for the benefit of other clients.
Many enterprise systems come with the ability to tune power automatically, the same way they can automatically tune channels. Usually they tune against other APs rather than clients, since AP positions and emissions are much more static and are controlled by the same system (normally).
The Ubiquiti access points and I assume some other brands use the following method to implement roaming:
All of the access points broadcast a certain SSID. When a client tries to connect, they coordinate with each other to choose which one will reply to that particular client. That is, the client thinks it is connected to one AP and doesn't know anything special is happening.
If the system wants to move your client to another AP it just disconnects you from the first AP and when you try to reconnect the second AP will reply to you.
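A toy model of that coordination, as I understand it (definitely not Ubiquiti's actual code; the AP names and RSSI values are invented):

    # All APs share one SSID; a controller picks which one answers a client.
    ap_rssi = {                  # RSSI (dBm) at which each AP hears the client
        "ap-livingroom": -48,
        "ap-upstairs": -71,
        "ap-garage": -83,
    }

    def choose_responder(readings):
        """Only the AP that hears the client best replies to its probes."""
        return max(readings, key=readings.get)

    best = choose_responder(ap_rssi)
    for ap in sorted(ap_rssi):
        action = "reply to client" if ap == best else "stay silent"
        print(f"{ap}: {action}")
    # To "move" a client: kick it, and let a different AP answer the reconnect.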
In a case like that, assignment is driven as much by "having a clear channel" as by "having a better connection on the channel". If you had a choice between two channels, one of which was shared and slightly "better" and another you could have to yourself, you are better off having one to yourself. (That way you aren't having to wait for other clients to stop sending or receiving, dealing with interference, etc.)
A corollary to that is that if you have both 5GHz and 2.4GHz support on an access point, you do best distributing clients between both bands, even if people think 5GHz is better or that 2.4GHz performs better in real life.
I am amazed that, with all the silly gimmicks APs have been marketed with, nobody has come out with one that has a lot of radios working on different channels and just behaves like a large number of APs. Practically, I think this would work far better than channel aggregation.
Xirrus sells multi-radio access points to do as you described. The only installs I know of, though, are replacing them with more APs spread around because the Xirrus units just cost too much to cover real world buildings, comparatively.
It's a description of Zero-Handoff in Ubiquiti's 1st-gen APs. Fast Roaming is 802.11r.
Getting good roaming is theoretically not that difficult. Lower transmit power so that the device's RSSI, in the locations where it's expected to transition to another AP (or off WiFi), is below the device's roam-scanning threshold (-70 for iOS). Set the minRSSI on APs to something sensible and enable strict mode so that more troublesome devices aren't able to cling to an AP at -85.
In practice, tweaking those knobs and figuring out placement and channel planning start to get tough beyond 3 APs.
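A quick way to sanity-check a plan like that: take RSSI readings at the intended transition spots and compare them against the two thresholds. The -70 figure is the iOS one mentioned above; the minRSSI value and the readings below are invented for illustration.

    ROAM_SCAN_THRESHOLD = -70  # iOS starts scanning for a better AP below this (dBm)
    MIN_RSSI = -75             # example strict-mode kick threshold (dBm)

    # Measured RSSI at the spots where devices should hand off -- example data
    readings = {"hallway": -73, "stairwell": -68, "back porch": -78}

    for spot, rssi in readings.items():
        if rssi > ROAM_SCAN_THRESHOLD:
            print(f"{spot}: {rssi} dBm, too hot; device won't scan. Lower tx power.")
        elif rssi < MIN_RSSI:
            print(f"{spot}: {rssi} dBm, below minRSSI; device gets kicked here.")
        else:
            print(f"{spot}: {rssi} dBm, inside the roaming window. Good.")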
That type of roaming actually ends up breaking a lot of WiFi implementations — it’s really only super useful for WiFi VoIP phones. I worked with a vendor (Bandspeed) doing that kind of WiFi roaming back in 2005, so it’s not remotely a new concept.
Most other devices will jump to another AP broadcasting on the same SSID if the signal is a lot stronger. It’s not nearly as much of an issue as it used to be, but people expect WiFi to Just Work (tm) so it’s better to let the OS’ network stack manage it.
My understanding is that "fast roaming" and many of the proprietary tricks are just ways to speed up the cryptographic pairing process when you switch APs. I think the same thing happens, but some steps are streamlined.
Traditionally roaming is left to the client device and most implementations are so bad that they won't give up on an old AP so long as they're seeing beacons every year or so. Who cares that I haven't successfully received a packet in the last hour, it's sure better than the 2 second interruption we'll get if we tried to switch.
I'm only mildly bitter at the useless roaming behaviour of most clients. Turning transmit power down doesn't really help either. Some clients will roam better but other clients at the edge of the coverage now have no coverage.
Using software on my smartphone, I can observe that it changes the AP it associates with as I walk around the house, where I have a "traditional" 802.11r WiFi network with a common wired backend. No new DHCP negotiation is triggered either.
So it may not be perfect, but it can’t be that bad.
You need to know the MAC address of each of your access points - also known as the BSSID.
It should be available in the admin UI of the access point, and sometimes also printed on a label attached to the AP.
Knowing that, you need something that shows you details about wifi connections, so you can view the BSSID (MAC address of the AP) you're connected to:
Android, iOS: Install the "Network Info II" app
Linux: use the iwconfig command
Windows: Use the command "netsh wlan show interfaces"
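If you'd rather watch roaming happen than poll by hand, here's a small script that logs BSSID changes (Linux only; it assumes the `iw` tool is installed and that the interface name, wlan0 here, matches your machine):

    import re
    import subprocess
    import time

    IFACE = "wlan0"  # change to your wireless interface

    last = None
    while True:
        out = subprocess.run(["iw", "dev", IFACE, "link"],
                             capture_output=True, text=True).stdout
        m = re.search(r"Connected to ([0-9a-f:]{17})", out)
        bssid = m.group(1) if m else "not connected"
        if bssid != last:
            print(time.strftime("%H:%M:%S"), bssid)
            last = bssid
        time.sleep(2)

Walk around the house with a laptop running this and you'll see exactly where the handoffs happen (or don't).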
802.11r does not fix this. It makes the handoff slightly more efficient but the client still has to decide to actually do the handoff. And that's the part that never seems to happen, as many clients are stupid.
From my testing with iPhones, the stupid behaviour goes away on an 802.11r-enabled network. I noticed the same issues on a legacy network but that all went away after enabling 11r.
I would imagine that iPhones have better roaming preferences overall. But maybe having 802.11r enabled causes devices to be more optimistic in their roaming, since they know they're not switching networks.
I had a few Devolo powerline adapters (which include a WiFi access point) and I reflashed the firmware with OpenWrt as a last resort (the default firmware didn’t support 802.11r and it was hell without it). Works great since then and it saves me from buying a set of enterprise-grade APs.
> The bidirectional connection is symmetrical. It doesn’t matter if the AP has a better antenna or is located higher up. The antennas and amplifiers work symmetrically in both directions.
It does matter if the antenna is located higher up: it will pick up weaker signals from clients. A better antenna is also probably more sensitive.
I think what the author meant to express is that the situation is symmetrical, even if one of the antennas is "better" (eg larger, or located higher up). So, author probably meant: "The bidirectional connection is symmetrical. It doesn’t matter (in terms of symmetry) if the AP has a better antenna or is located higher up".
So, even if the antenna on one side is "better" in some way, it affects both directions the same, and as such it still makes sense to use the same transmit power from both ends.
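Spelled out as a simple link budget (free-space, all terms in dB/dBm; a sketch of the reciprocity argument, not a full model):

    P_rx(AP -> client) = P_tx(AP)     + G_AP + G_client - PL(d)
    P_rx(client -> AP) = P_tx(client) + G_client + G_AP - PL(d)

The antenna gains and path loss appear identically in both directions, so a taller mast or a better AP antenna helps uplink and downlink equally; only a transmit power mismatch makes the link asymmetric.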
Not only that but APs usually have more antennas, which makes a huge difference. The antennas can also be spaced out more (for better diversity) and be physically larger (typically you get more gain the closer your antenna length is to the wavelength of the signal).
You can also have better analog RF components with more space (filters, LNAs, PAs, etc).
It's not necessarily true that the "advantage" of the AP is symmetric, either. Tx power is limited by FCC Part 15, so the difference in tx power between AP and client is probably smaller than the difference in rx sensitivity between the AP and the client.
Even if the uplink and downlink are symmetric, the traffic profiles probably are not: there's probably a lot more data going to the client.
Yeah, that sticks out like a sore thumb in an otherwise accurate article. Antennas have a certain radiation pattern for transmitting, but there is also improved reception in the directions it radiates strongly in.
With wifi, there is usually little you can do since you probably want a spherical radiation pattern, which prevents you from getting any significant gain over a perfect isotropic radiator, but that's a different story.
Elevation probably also gives you a higher probability of line-of-sight, especially in a crowd.
Fewer absorbent walking bags of water (i.e., humans) in the way.
Exactly, that is why we have cell towers, not streetside cabinets. Both indoors and outdoors, people and other obstacles tend to cluster around ground/floor level.
While I generally agree with this article, and turned down the transmit power on my AP in my apartment to be neighborly, coordinating this across the many APs present in a residential area is tricky. In my experience, many of your neighbors are going to be running either the stock AP provided by the ISP, or some "gaming class" thing that advertises very optimistic speeds. In either case, you're going to be fighting with stock firmware. And it seems to me that the stock firmware of most consumer-grade APs doesn't expose options like channel assignment or Tx power, not even behind a secret advanced settings page with a big scary "You will void your warranty" header. To compare the stock firmware on most consumer-grade APs to a dumpster fire is to commit a gross and unjustifiable insult to dumpster fires.
When I tried to coordinate better channel assignment and transmit power with my neighbors in my apartment complex, the effort very quickly died for the reasons above. Most neighbors didn't know or care about the settings on the magic box that made their Netflix or Xbox work, and even if they did, the config options simply weren't there.
I wish there were more I could do to change this situation. As it is, all I can do is vote with my wallet and buy less horrible brands like Ubiquiti, where advanced options like channel assignment and Tx power are available to me if I want them.
Another few possibilities for this which were not mentioned in the article:
* Check that all your APs are broadcasting the same name. No devices I know of will naturally roam to an AP with a different name.
* Check that all your APs are broadcasting identical security features (e.g. WPA1, WPA2, WPA1 or WPA2 .. as well as the encryption methods like TKIP, AES/CCMP, etc). Many devices will check these too, and roam only to matching APs.
Yeah, same name and security (otherwise it would presumably never work.) The problem is the in-between area where there's bad reception for one and good reception for another.
Decades into the future, I will be an old man screaming that wireless will never replace wired networking while some insane people have come up with real-world 10 Gbps wi-fi connections. The realities of wireless communications don't seem to have changed much since the early days of wi-fi at 802.11b and prior. And while wired networking has somewhat hit a wall for consumers, in the datacenter it's still kicking ass and taking names at 400 Gbps, where software is a bigger limitation than the hardware.
2.4 GHz also has 3 non-overlapping channels (IIRC). Most people use their WiFi's default channel, so if you choose a different channel you can get a better signal at a lower power.
Edit: If you want a low coverage area, it would also not be such a bad idea to use the 5 GHz band if your router supports it. Much less crowded, shorter range by default, and higher speeds.
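The arithmetic behind the "three non-overlapping channels" figure, for anyone curious:

    # 2.4 GHz Wi-Fi channel centers sit at 2407 + 5*n MHz, but each
    # signal is ~20 MHz wide, so only every 5th channel is clear.
    WIDTH = 20
    for ch in (1, 6, 11):
        center = 2407 + 5 * ch
        print(f"ch {ch:2}: {center - WIDTH // 2}-{center + WIDTH // 2} MHz")
    # ch  1: 2402-2422 MHz
    # ch  6: 2427-2447 MHz
    # ch 11: 2452-2472 MHz   (no overlap; 5 MHz of guard between blocks)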
WiFi routers will hop to less congested channels automatically these days; the odds that your neighbourhood has lots of WiFi base stations you can see, yet there's still a frequency that's almost entirely free, are _very_ low.
There are effectively three 2.4 GHz channels and about six 5 GHz channels, but because the range of 5 GHz is so much smaller, you should be able to find an open 5 GHz channel even in the densest, tech-heavy apartment building.
As the author said in the comment section: "I usually suggest turning 2.4 GHz off altogether so none of your users will ever connect to it by accident."
As for APs sharing channels: this is great in concept, but our (3-unit) building became so overcrowded in the 2.4 GHz spectrum that devices couldn't connect, which caused people to add more routers and repeaters to "boost" the signal, making it even worse. If you can't connect, you can't share.
You are correct. Most routers these days select channels automatically.
But automatic channel selection is not very accurate, in my personal observation as well as that of a few others on the internet [1]. Wish it were better. I feel there are numerous problems in automating channel selection:
1. We have to make sure that this happens without any downtime, otherwise there is poor QoS. If the router has to reboot to choose a channel, you're out of internet for about 5-10 minutes. That can be very frustrating if it happens often.
2. As if that weren't enough, you have to draw extra power to scan nearby access points and constantly react to the changing signals. (This would also defeat the point of using low-power signals.)
In my opinion it is better to tune the routers manually than to set it on automatic channel selection.
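When tuning manually, it helps to score channels by overlap rather than just counting co-channel APs. A toy scorer (the scan counts below are made up):

    # Number of APs seen per 2.4 GHz channel in a scan -- example data
    seen = {1: 4, 3: 1, 6: 7, 9: 2, 11: 3}

    def score(ch):
        """Lower is better: APs within 4 channels still overlap a ~20 MHz signal."""
        return sum(n for c, n in seen.items() if abs(c - ch) <= 4)

    scores = {ch: score(ch) for ch in (1, 6, 11)}
    print(scores, "-> pick channel", min(scores, key=scores.get))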
> the odds that your [...] is _very_ low
Yes, unfortunately that is true, and there is little we can do about it. But there is one redeeming factor: as I understand it, interference depends not only on the access point but also on the client. In other words, a channel only deteriorates if a given client-AP link actually interferes with the signals of another access point.
At low ranges you can create mini line-of-sight networks that will happily coexist with any number of other signals. Example: WiFi on a laptop working alongside a Bluetooth mouse.
[Router] --------- [laptop]
                      |
                      |
                   [mouse]
It is not such a good example because signals at such a small distance are strong enough to overcome any interference but the principle stands in other conditions as well.
Another solution for the 2.4GHz range, if you have a slightly more advanced router: switch to channels 12 and 13 in low-powered mode. In the US you'll be operating on channels that I suspect 99% of APs don't use by default. (Channel 14 is only allowed in Japan; it's illegal in the US.)
I was just thinking about this. My idea was to set the power low enough that WiFi is restricted to a very small area of the house. It would be like a smoking area, except for WiFi usage. For personal application, I wanted to create a WiFi-free zone for the family in the living room and other common areas, so that we're less distracted, while we have WiFi in the other parts of the house. Of course, we still get a 4G signal, but it is very weak where I live, and we are also on a prepaid Tracfone plan, so we are discouraged from using 4G data.
The article proposes reasons such as interference and generally being nice to neighbours. I actually do this just to reduce microwave-frequency RF radiation in my environment, due to potential long-term health consequences. Research on this has a lot of contradictions, with most governments saying there is no impact but some saying there are adverse health effects from long-term RF exposure. Why take the chance when you don't have to?
Someone should create an app that detects wifi APs that are transmitting at high power so you can track them down.
It would be super useful for someone like me where there are dozens of different APs in an apartment building all competing with each other. Then I could kindly go to my neighbor and ask/assist them in turning theirs down.
I've got 4 UniFi APs spread around the office, and turning the transmit power down has solved the issue of clingy MacBooks not migrating to the closest AP as people move between rooms.
You couldn't pay me enough to live in an HoA. They should be illegal. Land is scarce enough already without those mobs ruining so much of it. If my neighbor wants to cover their lawn in rusted cars and paint their house pink so be it.
I've heard if you install a giant ham radio tower they're not legally allowed to do anything about it. Any little bit of sticking it to those organizations is a good thing.
A good reason for them, at least in Houston, is the lack of zoning. A friend of mine watched every area outside his HOA get taken over and covered with office buildings, apartment complexes, etc. by stupid MetroNational. There's now a very abrupt line between the all-residential pocket and the towers, mall, etc.
In a residential building with no central authority on how APs are distributed, it seems to be an impossible task. I don't see myself explaining this kind of technical stuff to my neighbors and going door to door configuring each of their APs.
But this is a very good reminder for people who set up wireless networks at events, schools, ...
My school had huge thick concrete walls that basically killed any Wi-Fi signal. But for some reason, instead of having more cheap APs distributed more evenly, the guys handling it insisted on putting 1-2 big expensive APs in each building, which meant that the Wi-Fi was unusable in most of the classrooms. Not really clever when most of your classes require you to be connected to the internet.
You're right in that, your neighbours likely won't do the same thing... But you loose nothing by turning the AP power down to a level that covers the necessary area...
In a sense, it's lose-lose, but you can choose the higher ground...
All that said - if you have multiple APs, then you absolutely want to tune the power levels, or you're only making yourself a looser!
*Loser not looser, lose not loose. Lose/loser is to not winning as loose/looser is to not tight. This mistake is prevalent even among HNers. Sorry for being pedantic.
I've noticed this one quite a bit recently on HN and reddit. I'm seeing it far more often than "they're, there, and their" and "your and you're" mistakes, to the point where I wondered if it were something localized (like color / colour or aluminum / aluminium).
One that I've noticed everywhere throughout the tech community is "it's" instead of "its". It seems unrelated to whether the article author is from a majority-English-speaking country. And I see it on the blogs of companies big enough that I would assume they proofread all of their posts.
It sticks out like a sore thumb to me because my brain reads it as "it is", which does not fit.
It could be in the process of regularizing. There's no particular reason why "it's" can't be both things. Language changes.
One I've witnessed in my lifetime is "different than" vs. "different to". I was taught the first one. The latter sounds wrong to me, which is why I notice I'm hearing it more and more. (Note I have phrased this subjectively; I'm not saying it is wrong. I'm generally a descriptive grammarian. But it does sound wrong to me.) It may be a dialect thing, in which case my dialect seems to be shifting. Languages change.
This (and others who pointed out the regularisation effect) might actually help me to accept it more. It's more constructive to think "grammar is slowly changing due to a weird rule that was difficult for even native speakers to remember" vs "people keep spelling this word wrong and they should just be less sloppy about it". I think once you internally accept a change in language, it starts looking/sounding less wrong over time.
When I was in high school, I read a lot of the "classics" of my own free will. I could read pretty comfortably into the 18th century, and I could understand most 17th century English.
I remember thinking at the time that if I lived to be an average 70, then at the rate of change I could see in books from various centuries, I should be able to witness some changes myself in my lifetime. And I mean changes in "real", core language, not merely the rapid churn of local dialects and slang. (Which the Internet has greatly accelerated. I suspect in general it pulls us all together towards a more "core" English, but it also means an incredible proliferation of local slang communities.)
Now that I'm 40, I can say that I'm definitely starting to notice it. It's not a fast process on a day-by-day basis, and it's hard to notice the small differences at first, or assume they're dialect differences (and in some cases they are, after all). But the change is definitely happening. I would be unsurprised within my lifetime that there's only "it's" and it has two distinct meanings, and that it will be officially recognized by dictionaries as such.
The "hacker" style of quoting [1] also seems to be generally accepted. I've on several occasions done something like 'Have you ever said "It's not a tumor!"?' and I'm yet to get jumped by a grammar nazi for the two punctuation marks like that, too. I suspect that'll never quite become the official style (a bit on the complicated side), but it doesn't seem to bother people much.
I think people can accept change in language as it pertains to new contexts, but I really don't like it when people just don't bother (especially with misuse by native speakers). This is especially true for those of us who don't correctly parse poorly-written language. I normally read very quickly, but when something isn't right, it's jarring and I have to go back, scan it several times, and figure out what it means.
I've always found it odd that in a place like HN, most are pedantic in their use of every language but English.
That reminds me of how in the Northeast they say "Quarter of [hour]" whereas in the Midwest it's "Quarter after [hour]". The former confuses me every time, with my immediate thought being "is 'of' before or after"? My wife, who is from the Northeast, just confirmed it's "before"
Edit: I mistyped my comment as "after". The replies are justified.
I'm pretty sure that "quarter of" means 15 minutes before, not after. But confusingly, I've also heard people day just "quarter" (no "of") to mean 15 minutes after.
The rule for possessive its is backwards from all other possessives. As a native English speaker, I still have to look up “its vs it’s” because it’s confusing. I think we should drop use of its without an apostrophe, and always use it’s, and let people figure it out by context. It’s happening anyway, and it’ll become de facto accepted English grammar when enough people use “it’s” differently than the old rule.
Ah, that is a great point. Of course you know I meant that the rule of not apostrophizing its is backwards from the more common case of personal pronouns. Okay, so the possessive pronouns don’t use apostrophes and personal pronouns do. And its and whose are possessive pronouns, or possessive determiners. Is that the whole apostrophe rule? Are there any other cases or exceptions for possession? Do all indefinite pronouns use apostrophes, e.g., it’s anybody’s guess?
I've also learned to read contractions in their extracted form in order to help me when editing. It's a similar trick to remembering spelling versions of every word, like "Wed-nes-day", or "Col-o-nel". Sometimes I think it's more bothersome than it's worth. I could really use my own environment variables for `IGNORE_GRAMMAR` and `IGNORE_SPELLING`.
Does it also bother you that a lot of people tend to say "amount of people" instead of "number of people"? I think of cannibals talking about their food whenever someone says "amount of people".
For me, I understand the difference quite well. However, when typing, it feels like I have this blindness to the words I typed, and occasionally my brain sends the other version to my hands. If I look directly at the word after typing it, I don't even recognize it as incorrect.
In this case, though, note that if you turned power way up, it might cause your neighbors to do the same, and you could be said to loose a power war upon the neighborhood (in which everyone would probably lose...).
Turning power down, on the other hand, probably will loose nothing, just like he said.
I love this[1] website, which illustrates some of the nuances of the prisoner's dilemma in a quite frankly great way. In fact I've deleted the rest of my post, as I believe the website does a much better job of explaining how the correct answer to the prisoner's dilemma changes depending on how the population behaves and how often they make mistakes.
The commons typically concerns a shared public resource. PD normally concerns a decision that affects 2 players in a direct and immediate way. In PD both parties see every outcome as an immediate consequence of their decision.
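For concreteness, the canonical PD payoff matrix (the standard textbook numbers; years in prison, so lower is better):

                     B stays silent    B defects
    A stays silent       1, 1            3, 0
    A defects            0, 3            2, 2

Mutual defection (2, 2) is the stable outcome even though mutual silence (1, 1) is better for both. Each player's payoff hinges directly and immediately on the one other player's choice, which is exactly the two-player immediacy that a many-user commons lacks.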
People may want to look at the "Criticism" section of that article, as the commons wasn't as big of a tragedy as people think it was when Garrett Hardin popularized the notion:
>Hardin blamed the welfare state for allowing the tragedy of the commons; where the state provides for children and supports over-breeding as a fundamental human right, Malthusian catastrophe is inevitable. Hardin stated in his analysis of the tragedy of the commons that "Freedom in a commons brings ruin to all."[1]:1244 Environmental historians Joachim Radkau, Alfred Thomas Grove and Oliver Rackham criticized Hardin "as an American with no notion at all how Commons actually work".[8]
>In addition, Hardin's pessimistic outlook was subsequently contradicted by Elinor Ostrom's later work on success of co-operative structures like the management of common land,[9] for which she shared the 2009 Nobel Memorial Prize in Economic Sciences with Oliver E. Williamson. In contrast to Hardin, they stated neither commons or "Allmende" in the generic nor classical meaning are bound to fail; to the contrary "the wealth of the commons" has gained renewed interest in the scientific community.[10] Hardin's work was also criticized[11] as historically inaccurate in failing to account for the demographic transition, and for failing to distinguish between common property and open access resources.[12][13]
>Despite the criticisms, the theory has nonetheless been influential.[14]
The meaning of commons is here as described in the last sentence of the first paragraph:
In this modern economic context, commons is taken to mean any shared and unregulated resource such as atmosphere, oceans, rivers, fish stocks, roads and highways, or even an office refrigerator.
Hardin was concerned primarily with overpopulation and the welfare state. His intention was to demonstrate that the welfare state contributes to overpopulation, which will doom us all. The more general concept of "tragedy of the commons" outgrew his incorrect assumption, and so his original theory and its criticisms are of little consequence outside of a historical curiosity.
I've seen some videos of people blocking wifi signals (both ways) with a cheap metal mesh. If you don't own the place you can put it on the inside. The only real issue is the image: it's like a tinfoil hat for your house! But you could put something more aesthetic over it again :) e.g. embedding it in the wallpaper.
Exactly, and they all have the crappy Comcast combo, which advertises the extra XFINITY AP that I don't think you can turn off. As a Comcast customer, I have NEVER been able to connect to that and get data. 2.4 GHz is a sh!tshow and everyone is just spamming it with more and more noise.
Don't pay the $10 monthly fee to rent a Comcast cable modem and instead buy your own cable modem and router for $110. Side benefit: no extraneous Xfinity AP clogging up your precious wifi bandwidth.
I've actually had pretty decent luck getting connectivity on the Xfinity networks. If my phone was better at quick handoffs between wifi and LTE while moving, it would be quite helpful in lowering my mobile data usage.
Less an issue with 5 GHz, because drywall does a fairly effective job of blocking signal. Not entirely, but quite a lot. Which is why I have 4 mesh APs in our 1700 sqft house: One in each of the rooms where we use wireless most, and one in the middle.
We went from having spotty Internet to it being really solid. I think we have some neighbors that have devices that interfere with 2.4G (cordless phone? Microwave? Baby monitor?)
He said "apartment 5 GHz is often more crowded than suburban 2.4 GHz".
2.4 GHz is unusable in an apartment building because the range is so great. I live in a 3-unit building, so I am happy with my 5 GHz, but in a very dense building I could see problems.
Aren't there also possible health considerations with running 5 GHz as well? I feel like there is some preliminary data that it's not good to be sitting in a bath of 5 GHz all day long. Can someone comment?
But first I'd like to identify the mode of harm. How would this low-energy, non-ionising radiation have a biological impact?
The theories I've heard (induced currents in the DNA base stack? magnetic stimulation of cryptochrome that modulates intracellular reactive oxygen species?) describe very rare, unlikely, and low (negligible?) impacts.
I am open to the theory of hypersensitivity; however, I've never found anyone who actually suffers from it.
These studies indicate a mechanism of EMF stimulation of voltage-gated Ca++ channels.
Personally, I'd never noticed issues with RF-EMF until I tested my sleep using the Oura ring. WiFi didn't seem to bother my sleep, but shutting off the power to my bedroom at the breaker improved my sleep tremendously. Even if the effects on sleep are small, the negative, non-linear health impact of sleep loss is significant... to me.
Maybe in 20 years we'll view EMFs akin to cigarettes, maybe not. I'm not wearing a tinfoil hat just yet, but it has changed the way I interact with these non-native frequencies. It seems prudent to take simple precautions like avoiding microwave use, utilizing airplane mode, and turning off/unplugging appliances, lights and WiFi when not in use.
I'm not sure, but it's worth noting that we will all be exposed to much higher levels than anyone dealt with in the past. It's possible we just haven't had enough non-ionizing radiation to notice it, but a lifetime of exposure to 5G could cause heretofore unknown problems.