Honest question: how many years do you think it'll be until embedded device manufacturers (routers, various TV boxes, wifi hard drives [1]) ship recent kernels? A $200 modem/router bought 6 months ago ships with a hacked up version of 2.6.36.4 that's barely buildable - mainly because the wifi chipset vendor refuses to open source their code and refuses to update the BSP [2].
You have to look at it from their perspective. Why spend money updating something that works? The number of people who care that their device firmware uses a recent Linux kernel version is so infinitesimally small that satisfying them wouldn't even move the needle on sales - and you apparently bought one anyway, so why should they care?
If you want hardware you can tinker with, build your own from parts you know work with recent kernels, or buy a product specifically marketed at that segment.
It's a mistake from the manufacturer's perspective too.
Because they don't have a proper QA procedure at the software level, they fear change and have to keep maintaining an in-house snapshot of an archaic ecosystem. They spend more and more keeping the obsolete system going, because year after year they have made the cost of upgrading higher and higher.
Eventually they get forced into ground-up rewrites, with all their predictable problems.
I've repeatedly been told I bought the wrong device. Annoyingly, a day after I bought it, I didn't need the ADSL modem any more. For years, I needed Broadcom ADSL chipsets to get >5 megabit downstream at home, but my next upgrade will probably be a separate router and WiFi AP.
The telcos and cable companies don't want to provision subscriber provided modems so they come up with the BS excuse about unsupported equipment to screw their customers over.
I have had good luck with that separation. I use an Apple Airport Extreme for my wifi (running in bridge mode), and a PC Engines "apu" [1] running OpenWRT for my router. I figure the wireless part is a detail I don't care too much about—it just gets my laptop onto the wired ethernet. The router is totally open source and hackable and I like that.
> but my next upgrade will probably be a separate router and WiFi AP
Personally I go the "other" way. Router + WiFi (I use an Airport, but pick your poison), and then the most basic modem possible for the location - this might be a provider-supplied DOCSIS Modem/Router in bridge mode, right now it's a provider-supplied "1 port" ADSL2+ modem/router in bridge mode. With an ADSL setup, you'll probably need to use PPPoE from the router, with a DOCSIS setup, the auth is usually on the MAC address so your router doesn't need to know anything, it just connects via IP to the bridged cable modem.
If the modem ever gives me issues, I'll just replace it with a plain-jane ADSL2+ modem. If the ISPs expand their DOCSIS/fibre network past our new house, I'll just swap the ADSL modem for a DOCSIS modem/router in bridge mode.
This setup has given me the least headaches in every scenario across several houses in both Australia and Thailand. The only hard part really has been trying to express "I want a DOCSIS modem/router where I can turn the router part off" in a way that a Thai-speaking technician or my non-technical English-and-Thai-speaking wife can understand.
With ADSL, I'd just discard the ISP device and use a cheap modem-only device.
As I said, DOCSIS is often MAC-address controlled, so if you can't get the admin password to unlock the device (http://portforward.com has lots of ISP configs/auth details) you could buy your own DOCSIS modem/router that supports a configurable MAC address and simply spoof the ISP-provided device.
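If you end up doing that spoofing from a Linux box or router you control, it's a couple of iproute2 calls. A minimal sketch in Python (the interface name and MAC address below are made-up placeholders, not from any real setup; needs root):

    # Hypothetical sketch: clone the ISP-supplied modem's MAC onto your own
    # WAN interface using iproute2. Interface name and MAC are placeholders.
    import subprocess

    WAN_IFACE = "eth0"               # assumption: your WAN port is eth0
    ISP_MAC = "00:11:22:33:44:55"    # MAC from the ISP device's label

    def spoof_mac(iface, mac):
        # Take the interface down, set the new MAC, bring it back up.
        subprocess.run(["ip", "link", "set", "dev", iface, "down"], check=True)
        subprocess.run(["ip", "link", "set", "dev", iface, "address", mac], check=True)
        subprocess.run(["ip", "link", "set", "dev", iface, "up"], check=True)

    spoof_mac(WAN_IFACE, ISP_MAC)

With DOCSIS you'll usually also have to power-cycle the cable modem so the head-end re-learns the "new" device.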
For fibre-based connections you usually just get an ONT with an Ethernet port, so no modem is required; just plug in your router.
If you can give more info on the specific last-mile tech you're dealing with, we can probably make some suggestions.
Fibre to the home. I think your ONT is my NTD. Box on my wall that I'm not really allowed to touch, then Ethernet straight to my router which handles PPPoE.
I've never seen/used this type of thing before, but then again I've never found my need for mobile data so massive that regular 3G/4G wasn't a reasonable option.
Maybe that's just me though. My Thai phone provider gives me global roaming with unlimited data capped at ฿300 a day (about $10 a day). The last time my wife and I came back to Australia together, we had pre-paid Telstra SIMs. The time after, when I came alone for work, I didn't bother, as the roaming cost worked out better than Telstra's shitty pricing, and I still got to use their network.
As for an actual solution, I doubt there is an ideal one in this case. Honestly, I would want to need access to a lot of other people's WiFi before I signed up for this type of deal - how sure are you this doesn't give the other Telstra members carte-blanche access to your local network, and how sure are you a judge will accept "but your honour, that could have been anyone downloading those pictures, I have Telstra Air"?
I haven't had access to the Thomson gateway device for long enough to see how the traffic is segregated. If I do end up changing to Telstra I'll have a look, although I'm not sure I'd be allowed to solder up the serial-out on their device ;)
$10/day is $300+ a month - I pay $50/month for 7GB of data on Telstra's 3G/4G networks. I haven't actually needed to use Telstra Air because the middle of the city has free wifi by iiNet/Internode (RIP).
> I haven't had access to the Thomson gateway device for long enough to see how the traffic is segregated
Given the generally atrocious default security of most home router vendors, I'd assume it's equally terrible, and maybe you'll get a nice surprise.
> $10/day is $300+ a month - I pay $50/month for 7GB of data on Telstra's 3G/4G networks
As I said, I was using it when travelling from home (Thailand), so it was $10 a day max (if I didn't use it, or used less than the cap amount, there was no/less cost) for unlimited data (still using Telstra's network). The alternative was to use a prepaid Telstra SIM, which from memory at the time gave me maybe 4 GB of data for... $59 or $69?
I wasn't suggesting it's an option for you, just explaining that those "free wifi" things always seem like a shitty deal to me compared to the cost of your own data service.
> I haven't actually needed to use Telstra Air because the middle of the city has free wifi by iiNet/Internode (RIP)
Which ones? Sometimes the configuration is confusing, but I haven't found one I can't disable yet (my Arris/Moto SBG5580 doesn't have a simple 'bridge mode' toggle; I have to disable NAT/NAPT to put it into bridge mode).
You also have to keep in mind that, sadly, most of the toolchain for embedded is... quite old and proprietary. There is a whole ecosystem that forces you to stay stuck on old stuff.
I know a ton of embedded companies that still have Windows 98 or XP machines, because they need them to drive their flash tools or their debuggers.
The whole industry stopped updating when the software side tried super hard to sell them Java... and that did not work that well...
I did see this, has it shipped? At the time I needed a new modem as I was stuck on ADSL, but I've since (luckily) been upgraded to a fibre connection that only requires something that can speak PPPoE.
Do you get to plug the fibre strands directly into your own equipment?? In most places, telco equipment must be installed and you just get an Ethernet connection.
The Omnia has not shipped yet, but production is starting soon. Their kickstarter was wildly successful.
I don't want to deal with blinding light at all. NBNco has contractors come put holes in your walls to install a termination box, then you get Ethernet out.
Around these parts you get an optical SC outlet like this http://i.imgur.com/4n9m0q6.jpg into which you plug the operator-provided equipment. Which means you ought to be able to replace it, but I honestly don't know if there is any third-party equipment for sale...
I'm a big fan of both of those - but their backlogs are massive.
I'm hoping there's another donation matching campaign at LCA2017 for Conservancy next year - I was able to donate somewhere close to $300 via that last year.
Broadcom has a noted history with its support for Wi-Fi devices on GNU/Linux. For a good portion of its initial history, Broadcom devices were either entirely unsupported or required the user to tinker with the firmware. The limited set of wireless devices that were supported relied on a reverse-engineered driver; the reverse-engineered b43 driver was introduced in the 2.6.24 kernel.
In August 2008, Broadcom released the 802.11 Linux STA driver officially supporting Broadcom wireless devices on GNU/Linux. This is a restrictively licensed driver and it does not work with hidden ESSIDs, but Broadcom promised to work towards a more open approach in the future.
In September 2010, Broadcom released a fully open source driver. The brcm80211 driver was introduced in the 2.6.37 kernel and in the 2.6.39 kernel it was sub-divided into the brcmsmac and brcmfmac drivers.
> Trying to get vendors to open source anything is a hilarious exercise in futility.
I hate that this is true. Could anyone explain to me why though? Would it not make life easier for them if they just open sourced their firmware and let other people update/maintain it? (Sorry for the noob question, I haven't looked into this in detail)
Curiously timed question, because when the 4.7 kernel release announcement was sent out I googled one of the contributors on a whim and found that they worked for a company that builds embedded systems. And this guy is contributing to the 4.7 kernel. ;)
But that's just one guy, one company. I believe most companies like the stability of using something old and well proven.
> But that's just one guy, one company. I believe most companies like the stability of using something old and well proven.
Isn't it the opposite with the kernel, though? Doesn't its stability improve over time? That's at least the feeling I get. I make some embedded "devices" as a hobby (none of it is mass-produced), and I feel like things are better now than they used to be ~3 years ago.
> I make some embedded "devices" as a hobby (none of it is mass-produced), and I feel like things are better now than they used to be ~3 years ago.
I guess what they meant was that sticking to old versions gives you a stable target to develop against, not that the resulting product will be stable.
Upgrading to 4.whatever would give you all the improvements, absolutely, but it would also give you all the new bugs, and mean that you have to keep up with all the refactoring that has been done since (and will be done in the future, if you plan to stay up to date).
That's a pretty interesting idea... I suppose you would make a guess at first to allocate containers and then adjust over time automatically? But in a monitored environment you're already going to know those stats so actually I'm not sure how this helps.
I was thinking more of a use case like a CI system, where we can use these values to figure out how many builds to run on a machine based on past history.
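Something like this sketch is what I had in mind - the numbers and the mean-plus-two-sigma sizing rule are purely illustrative assumptions, not a real scheduler:

    # Estimate concurrent build slots from historical peak memory usage.
    import statistics

    def builds_per_machine(past_peaks_mb, machine_mb, headroom=0.2):
        # Pessimistic per-build estimate: mean + 2 stddev of observed peaks.
        est = statistics.mean(past_peaks_mb) + 2 * statistics.pstdev(past_peaks_mb)
        usable = machine_mb * (1 - headroom)  # leave headroom for the OS
        return max(1, int(usable // est))

    # Peaks (MB) seen in the last few runs, on a 32 GB machine:
    print(builds_per_machine([1800, 2100, 1950, 2400, 2000], 32768))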
I see there's support for the new Radeon RX480. I've been thinking about picking up a Radeon card. Can anyone speak to their experience of Radeon support on Linux and whether or not you think it's a good idea?
The kernel driver for newer AMD cards is developed and open-sourced by AMD itself. For userland, you have a choice between the open source driver, Mesa, and the proprietary one, AMDGPU-PRO.
If you want to follow the advancement of the Linux graphics stack (while having reasonable graphics performance), AMD is your choice. This is where the exciting new stuff, like Wayland, DRI3 etc., happens. The NVIDIA proprietary driver indeed has amazing performance, but is falling behind in this regard.
The RX480 w/AMDGPU-PRO works pretty well in most cases for desktop/gaming/OpenCL use.
AMD's Linux team is in the middle of a big push to modernize the driver and software stack. It seems they consider it a mostly working preliminary version.
You can choose-your-own-adventure with the totally open amdgpu+mesa stack or use the proprietary AMDGPU-PRO. You can follow development of the open source parts on this mailing list:
From my experience using AMD cards, the Linux drivers have some serious issues, to the point where on-board Intel graphics would give me much more stability. With AMD cards I would get excessive screen tearing just from dragging or resizing windows in a desktop environment. The advantage of AMD is that their open source drivers are acceptable, where NVIDIA's open source drivers are not great.
In terms of stability, the proprietary NVIDIA drivers are simply the best, and gave me significantly fewer issues. For my daily workstation I even ended up replacing my (expensive) AMD card with an NVIDIA one, because the desktop environment felt like it was running at less than 10 fps, and it triggered me to no end.
Even if AMD gets equal or even better performance than nvidia in gaming environments, I am not willing to compromise the desktop for that.
If you don't mind using the closed-source NVIDIA driver, I strongly suggest NVIDIA.
The open source drivers/cards were solid for a good while on my old computer but I bought a new one in May and I got actual hardware level PCI errors with two different AMD cards on the Intel X99 chipset (within seconds of booting I would start getting error correction messages in my dmesg and the computer would hang within hours). I managed to stop the errors with kernel flags (pcie_aspm=off or pci=nommconf, either seemed to stop them) but my system was still completely unstable as the drivers/cards eventually went nuts regardless.
I bought an nVidia card and didn't have to touch anything and it's completely stable (in the same exact PCI slot).
I might try AMD again in a year or two, as I prefer their model of actually supporting and writing open source drivers, in contrast to the scummy nVidia, whose cards now require signed firmware that they won't release to the open source community after we got the reverse-engineered open source drivers (well, they did release one now for the 9xx series, after they released the 10xx series).
Depends on the hardware and the amount of effort you put in. If you stick to purely Intel chips (CPU, GPU and WiFi) and spend the 5 minutes it takes to get powertop happy, then you end up with battery life equivalent to Windows. (Basing this on a ThinkPad T440 and a ThinkPad T450.)
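The "5 minutes" is mostly just letting powertop apply its suggested tunables at boot - a minimal sketch, assuming powertop is installed and this runs as root:

    # Flip every tunable powertop reports as "Bad" over to "Good".
    import subprocess

    subprocess.run(["powertop", "--auto-tune"], check=True)

The usual gotcha is that auto-tune also enables USB autosuspend, so check afterwards that your mouse/keyboard didn't get put to sleep.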
On my AMD APU laptop, I have been unable to get battery life similar to Windows.
Sadly this isn't quite the case with the Skylake IGP; there are a ton of issues with power management options rendering the system very unstable or downright unusable.
Also worth noting is that NVMe SSD power management is flat-out unsupported, resulting in an additional 3 W of power draw on Dell XPS 13 machines, out of the 6 W that's currently possible with stable power options enabled.
Just to put those words in context - I use a Dell XPS 13 with Linux 4.7, and my power usage drops to a 4.1 W discharge rate, holding an average of 6 W in normal use.
That means that I usually get around 7-8 hours of battery in average use, and if I'm in a battery-sensitive environment like a long flight, I can reduce screen brightness and turn off WiFi and get up to 14 hours of battery.
Right at this moment I have Fx with 2 windows, 10 tabs each, half-brightness, wifi on, and I'm at 5.78 W, 87% battery with 9h 28m left estimated.
I believe Windows may do even better, and I see my NVMe not dropping to the lowest power-saving state, but it's not that bad IMO :)
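For anyone wanting to reproduce those numbers: on many laptops the instantaneous draw is exposed via sysfs. A quick sketch - the BAT0 name and the power_now file are common but not universal, so treat the paths as assumptions:

    # Read the current battery draw (power_now is in microwatts).
    from pathlib import Path

    BAT = Path("/sys/class/power_supply/BAT0")

    watts = int((BAT / "power_now").read_text()) / 1_000_000
    percent = int((BAT / "capacity").read_text())
    print("%.2f W, %d%% battery" % (watts, percent))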
Thanks. Any mailing list threads / bugs I should follow for this? I haven't checked the state of Linux on the XPS 9350 since mjg59 posted that thing about everything being broken.
I am treating Skylake as a generation of Intel chips to completely avoid.
I do have a Surface Pro 4 with a Skylake chip in it, and even Microsoft appears to have struggled with power management on that chip. The Skylake NUCs have also been plagued by multiple issues. I briefly looked into getting one, but the issues scared me away.
I'm a bit out of the loop on kdbus, why didn't it get merged? It was my understanding that it had a brief stint in linux-next, so it must have been getting close...
It was so big that not many people wanted to review it properly. It wasn't split into incremental steps (it was essentially a single-patch dump), and it had some high-level features which don't necessarily belong in the kernel. The kdbus developers also brought up performance gains a few times, even though it's been pointed out that userland was not the problem - it was the actual implementation that was bad.
It's being reworked into bus1, which is a slim, generic bus implementation. Full D-Bus can be implemented on top of it, but it's not forced on people who just want a nice, light bus interface with some security guarantees.
No, because it's an overcomplicated system that puts things into the kernel that definitely don't belong there, on the grounds that the userspace implementation was slow - which, according to Linus, was not an inherent problem with the idea of keeping it in userspace, but was because the code was written by a bunch of monkeys at typewriters.