Before you Dual Boot – MS, OEMs and Linux (eightforums.com)
149 points by dopkew on March 16, 2014 | 233 comments



I'm now happier than ever that I bought one of the last pure-BIOS motherboards of the i7 era, and a laptop (Thinkpad X60) that can run completely free software (https://www.fsf.org/resources/hw/endorsement/gluglug, although I didn't know that when I bought it several years ago). I plan on holding on to these machines and making the most of them for as long as I can, because the future of the PC looks more closed with each new change.

It was only a few decades ago that IBM released the PC AT complete with a full set of documentation, even the full source code of the BIOS. Now you're lucky if you can even get a datasheet for the Super I/O chip or the processor's VRM controller. I've watched this decrease in openness over the years, but it's only recently (post-Nehalem?) that I've noticed it accelerating.

Throughout this time, all the changes that have been made for "security" appear to me more and more like attempts to secure systems against their users, taking control away from them and forcing them to trust some authority instead. Code signing, Secure Boot, TPM, patch protection, etc. It is becoming more difficult to stay in control of the hardware you bought.

It is true that several years ago it was probably much easier to infect systems with malware, but at the same time users had more freedom, including the freedom to explore, use, and modify the OSes of their choosing. Is this freedom something we should really be sacrificing? The well-known quote comes to mind: "Those who give up freedom for security deserve neither."


> Throughout this time, all the changes that have been made for "security" appear to me more and more like attempts to secure systems against their users,

"The right to Read": http://www.gnu.org/philosophy/right-to-read.html RMS was right all along.

"The Coming War on General Purpose Computing" http://boingboing.net/2012/01/10/lockdown.html, Cory Doctorow, 17 years from now most will realize that Cory was also just as right as RMS was 17 years ago.


I don't think it's fair to group measures that have a valid security justification along with those that do not. Code signing, secure boot (including TPM when used as part of that), and patch protection do, and they can be disabled. Linux distros really should have more rigorous protections along those lines, but manage to avoid it because nobody targets Linux (not because it's that much more secure); Chrome OS has them anyway, despite almost everything being open source, including Chromebook bootloaders, and I like that fact.

Meanwhile, proprietary software and closed hardware specifications have nothing to do with security and should be criticized more harshly.


The idea of secure boot could theoretically be a no-tradeoff security positive, but the implementation would have to be quite different from what it is currently. For starters, any non-user-hostile solution is going to have something like a USB device header on the motherboard as the highest-privileged interface to the system (with something like mandatory time delays to protect against evil maid attacks). But to large manufacturers run by business people, the attraction of a hardcoded manufacturer-administered signing certificate for the trust root (which obviously erodes the owner's control) is just too strong for us to see a proper implementation any time soon.


You realize that with UEFI (ed: on x86... yes, Windows RT devices are different), you can either disable Secure Boot or (at least on some systems) replace the root key with your own, without any hardware modification?
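(For the curious: from a running Linux box you can at least check the state with mokutil, and the efitools package is one way to enroll your own keys if the firmware is in setup mode. A minimal sketch; the .auth file name is made up, and vendor support varies:)

  # query the current Secure Boot state (mokutil ships alongside shim)
  mokutil --sb-state

  # with the firmware in setup mode, enroll your own Platform Key
  # (PK.auth is a hypothetical key you generated and signed yourself)
  efi-updatevar -f PK.auth PK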

On Chromebooks, with which I'm a bit more familiar, you cannot, and disabling verification involves a 30-second delay on boot with a beep, to protect against evil maid attacks. However, apparently it's not too hard [1] to flash the ROM on some devices if you can open the thing.

[1] http://krblogs.com/post/63809988096/bootloader-unlock-on-sam...


I once proposed handing the keys to an independent standards body, particularly after the FSF complained about the process.


But why forge a centralized ring of power in the first place? Even completely above-reproach independent bodies can still be co-opted by subpoenas and the like. And the mere existence of that possibility for control is blood in the water for governments.


Gluglug is cool, but it is not a solution for the future - there are only so many working X60s they can salvage. I think the sustainable way is to embrace Linux-friendly companies that cater to general computing.

When my Sandy Bridge laptop dies, I will buy my next computer from System76, who sell Ubuntu laptops. My next laptop may not even be an Intel laptop - I don't need x86 when most programs I use are open source and can be compiled to run natively on anything. I hope we will have a healthy number of Linux ARM laptops to choose from in the near future.


You might not care about free software, but all System76 laptops use proprietary firmware for their Intel wireless cards. The laptops from ThinkPenguin do not.

If you're going to support "Linux-friendly" computing, you should also support free software. If not, you might as well just get a Dell XPS Developer Edition or Asus Zenbook and slap Ubuntu on it.


I think it all comes down to the death of what was, in the 90s, called the "power user."

There have always been two kinds of computer users: Administrators--those who can be trusted to administer their own systems; and Users--those who cannot, who need someone else to Administer their system for them.

The Power User emerged as a gross hybrid because of the limitations of Personal Computer OSes. In Windows 98, for example, everyone was implicitly considered an Administrator, even if they had no idea how to Administer anything. (They were Users with the full Power of an Administrator.)

What would frequently happen is that one person in an extended family would have such knowledge as to be capable of actually Administering computers, and would frequently have to drop in to perform Administrative maintenance for relatives, who had no idea they had to do such things themselves. Nobody would be looking out for the computer in the interstice.

This era is thankfully over. What do we have now? No greater number of people capable of Administering their own computers, certainly.

Instead, what happens now is that the hardware manufacturer provides hooks (the Secure Boot certificate store et al.) such that the computer's Owner can delegate Administrative power to the OS distributor (Microsoft, Apple, Canonical, etc.), leaving the Owner as a plain User. "That guy in the family who knows computers" no longer has to be involved--effectively, the OS distributor is "that guy."

This is only really a problem for people who want to Administer their own computers (though in any form-factor where you can buy a motherboard separately, all these protections can be disabled.)

Frankly, though, I see less and less of a need for people to Administer personal computers. PCs (I'm including both desktops and mobiles in this set), these days, are basically the equivalent of VT-100 terminals: fancy screens you buy, take out of the box, and plug directly into the internet.

This is what I would call the "modern tinkerer mindset":

• "Real Software" runs on Impersonal Computers: servers, either racked in your office, or existing nebulously in "the cloud." When you are developing this software, you keep a VM that simulates such an Impersonal Computer running on your Personal Computer, and interact with it as if it were a network resource.

• Development of new Operating Systems occurs using a hardware emulator (really a more precise VM), like qemu. It then moves to ordinary VM software. Everyone else who tries your hobbyist OS will only want to run it in a VM themselves. Getting it running on Real Hardware is a 1.0 feature.

• If you want to fiddle with hardware yourself, you get a Raspberry Pi, or any of innumerable other devices aimed squarely at tinkerers. You could develop an OS using these, too, although it's a much greater hassle than just using emulation software. These devices are open in ways Personal Computers never were, having everything from schematics to JTAG pins available.

Given that mindset, what do you gain by having Administration rights to your new-age VT-100 Personal Computer?

You might answer "the surety that Microsoft/Apple/etc. will never ban VM software from its platform", but no OS distributor would dream of doing this: after all, how, then, would its first- and third-party developers do their jobs? They rely on VM software just like the rest of us.


> Given that mindset, what do you gain by having Administration rights to your new-age VT-100 Personal Computer?

The freedom and power to control what's yours. A chance to explore, to learn more about the system you're using and possibly adapt it to fit your needs. IMHO giving users that chance is very important: how many experienced developers started out as power users - who themselves were originally only users? By taking away these chances and keeping regular users relatively oblivious, fewer of them will want to go through the increasing hassle of "becoming a developer"; it becomes an abrupt decision instead of the continuum of knowledge it once was. This divide between "users" and "developers/administrators" only makes it harder for users to cross the gap, and takes control away from them.

> If you want to fiddle with hardware yourself, you get a Raspberry Pi [...] These devices are open in ways Personal Computers never were, having everything from schematics to JTAG pins available.

Funny you mention the RPi, as it's nowhere close to being as open as the PC/AT was.


You seem to be conflating "developer" with "administrator", which I just don't understand.

Want to be a developer? Download https://love2d.org/, open Notepad and write some Lua, save it in the same directory and run the executable. Or write some text that starts "<script>", save it as HTML, and double-click it. Or download Racket and follow the tutorials. Or, if you really insist on writing low-level code, download {Visual Studio Express, XCode Command Line Tools, Debian build-essential} and open up a shell. Learning to program has only gotten easier and easier over the decades.
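(To make the browser route concrete, a minimal sketch - the file name is arbitrary, and any text editor works just as well as the shell here:)

  # create a tiny "program" you can run by double-clicking the file
  cat > hello.html <<'EOF'
  <script>document.write("Hello! 6 * 7 = " + (6 * 7))</script>
  EOF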

But wanting to be an Administrator? What for?

Do you want to Administer your DVD player? It's an appliance. It plays DVDs. It works well because it only plays DVDs.

PCs are appliances. They run, in sandboxed environments, "application software", which are binary packages code-signed-by-proxy by the OS developer. They work well because they only do this. And within the framework of these restrictions, you can still program new software, or an entirely new OS. These are tasks this appliance lets you perform.

You can build your own computer. It won't be an appliance. You can also build your own DVD player. It won't be an appliance either. Is it so bad, then, that DVD players exist for playing DVDs? Is it so bad, then, that PCs exist for running sandboxed code-signed applications?


<sarcasm> What do you have a kitchen for? You can microwave TV dinners all you want. If you want to make your own recipe, say because you want to be healthy, or don't like the flavor of TV dinners, you can just make a simulated Real Recipe out of pieces of TV dinners. It's almost as good.

Just let someone else more qualified than you, or well maybe not more qualified, but more in charge than you, decide what you should eat. Selecting a restaurant or frozen dinner has only gotten easier and easier over the decades. But wanting to be a cook? What for?

And why do you need a camera? People should be able to sell you pictures of whatever you want a picture of. There should be enough pictures of everyone on Facebook already. Or, if you really insist on having a picture of something or someone that a Big Corporation doesn't have, just get out some crayons and sketch them. Buying crayons and coloring books has only gotten easier and easier over the decades. But why take your own pictures, what for?

And what do you need musical instruments for? Want to be a musician? Just download some songs that have already been written and play them in a different order on your DRM-closed player. Or, if you really insist on having your own choice in music, just call a radio station and request something. Calling and requesting songs has only gotten easier and easier over the decades. But wanting to make your own music or start your own band? What for?

And that address bar on your web browser? Just go to yahoo.com, and click all the links there. They already know what's good for you. If you really insist on having your own, post to a social network. Starting a web page on Facebook/google has only gotten easier and easier over the decades.

But wanting to go to whatever websites you want - or make your own websites? What for? </sarcasm>


Er, did you miss the part where I said "You can build your own computer. It won't be an appliance"?

Most kitchens have microwaves in them. Sometimes a microwave is the best tool for a job. (For defrosting red meat, for example.) But chefs don't insist on forcing their microwave to somehow do all their kitchen-related tasks; they use other tools, each with its own affordances, each with a best-suited task, together, to get done what they want done.

Likewise, a home-entertainment system usually has a DVD player in it. You don't try to use your DVD player as a PVR, or a video-game machine. You use separate devices for each of those tasks.

A capital-c-Computer is an amazing machine that can do many things equally well--it's sort of like an undifferentiated stem-cell. I completely understand wanting to use a capital-c-Computer, just for the sheer neatness-factor of it.

But a PC? The modern PC isn't a capital-c-Computer. The modern PC is a microwave, a DVD player. Like the microwave, like the DVD player, you likely have one in your toolkit--in fact, you likely use it for a lot of things. But it'd be silly to expect it to do everything.

You can have your capital-C-Computer and a PC too. Neither of them is going anywhere.


My new-age VT-100 PC is still a general purpose computer, and if I want to use it as such I should be able to. I see the advantage of shipping it as a VT-100, but I don't see any advantage in blocking power users from unlocking its full potential.

Nobody wants to work on hobby projects that have no chance of ever running on real hardware. What's the point then? Even if you'll never get to run your hobby OS on real hardware, that possibility still serves to motivate the developer. Nobody likes to work on things they know beforehand nobody will ever use for anything.

As an example, coding a toy shell in JavaScript that runs in a web browser and prints some text to a canvas is not the same as coding a toy shell that can touch the actual file system and be used for real work.

Life is too short to write toy software for emulators, I want to write software that people use and get something out of. I'm an engineer, if I buy a computer it's because I want to make cool things with it and for it, not because I want a dumb terminal to connect to some rented server.


> Nobody wants to work on hobby projects that have no chance of ever running on real hardware.

I never said hobbyist OS projects won't run on real hardware; I said running on real hardware is a "1.0 feature." Which is to say, around the 0.9, you start taking some real test machines, disabling their boot protections, and putting your OS on them. You get it polished.

And then, only after that, you go and apply for a boot-signing certificate from the hardware makers. Which, because you now have a real, polished OS instead of a hobbyist project, they will give you. It's sensible enough: if they gave a cert to every hobbyist developer, one of them might just be a virus-writer; but there are only so many groups who actually end up with real, working, polished Operating Systems that Users will want to run. And it's easy enough to tell when a group has made one of those, and to sign their bootloader.

This is, if you'll note, pretty much the same logic behind the iOS development workflow. First, you deploy to test devices that are specifically configured with an "I am an Administrator, obey me" flag. Then, once you've proven out your software on those devices, you get it signed with a "this is allowed to run on regular User PCs" cert.

---

Also: if a "toy shell in Javascript" can both connect to the network, and use the browser's Javascript data-storage APIs to store data, then what makes it incapable of getting Real Work done? It's basically a (really slow) VM.


So what happens if the people in charge of giving certificates, approving marketplace apps, etc. decide they don't like me and give me the boot even if my product is perfectly fine? They're human, and humans screw each other for petty reasons and out of self interest all the time.

If someone buys a computer, why do there have to be barriers to getting and using software directly from the developer? This has everything to do with milking money from developers and making users' alternative choices as difficult and impractical as possible. It discourages choice and diversity.

--

> Also: if a "toy shell in Javascript" can both connect to the network, and use the browser's Javascript data-storage APIs to store data, then what makes it incapable of getting Real Work done? It's basically a (really slow) VM.

So who wants to play with "basically a (really slow) VM"? Why should hobbyist developers be content with only having access to second-rate capabilities? No thank you.


> So what happens if the people in charge of giving certificates ... decide they don't like me

Then the developers get angry and throw out the people in charge. Imagine, for example, what would happen if IANA arbitrarily stopped just a few people from getting domain names. We'd get a new IANA. (We can't do this with the "apps marketplace" vendors, though, and that is a problem. It's probably something that should be solved with an anti-trust suit or two.)

> If someone buys a computer, why do there have to be barriers to getting and using software directly from the developer?

Because--and this is the whole point of the Users/Administrators distinction--Users don't know enough about computers to distrust malicious software.

We put barriers between people and phishing sites, so they can't be tricked into giving their money away. We put barriers between children and in-app purchases, because they don't understand the consequences.

This is the same idea. Most people would go through whatever series of scary dialog boxes it took to run "cat_videos.exe". If you, an Administrator, were standing right there beside them, you'd grab the mouse from their hand and stop them, for their own good. We can't be there all the time. We want the OS to grab the mouse from their hand for their own good.


> Users don't know enough about computers to distrust malicious software.

Then the solution is to educate them, not mollycoddle them and keep them locked up. But knowledge is power, and educated users are difficult to control and deceive, so the "developers", the ones who want to remain in control, don't want that happening.

> We put barriers between people and phishing sites, so they can't be tricked into giving their money away.

The same people who would then fall for a different scam, before even more barriers are erected, and would think "security software X doesn't think this is a phishing site, so it must be safe."

> We put barriers between children and in-app purchases, because they don't understand the consequences.

A "think of the children" argument? I agree a lot of young ones haven't developed to that point yet, but if you make it so they never experience any bad consequences, they'll never learn from them too.

> If you, an Administrator, were standing right there beside them, you'd grab the mouse from their hand and stop them, for their own good.

No, I'd just tell them "you're very likely not going to like the outcome, but the ultimate choice is yours."

"We do not truly have freedom if we do not have the freedom to make the wrong choices."


> Then the developers get angry and throw out the people in charge.

I'm talking about small everyday people, not rich Silicon Valley rock stars bitching to their Twitter followers to trigger a social wave of justice.

> Users don't know enough about computers to distrust malicious software.

I'm not arguing against secure defaults, I'm arguing against removing the option for power users to use their computing devices as they see fit.


> We want the OS to grab the mouse from their hand for their own good.

Solutions which protect the user from cat_videos.exe can also "protect" them from subversive_essay.pdf (or competitor.com). It's maybe not that practical a vector, at least where there's democracy and competition, but still something to consider.


> Then the developers get angry and throw out the people in charge.

The developers aren't the ones giving these companies money. The users are. The developers aren't their customers. The users are. The developers have little power to "throw out the people in charge" as long as the users keep giving the companies money.


I never dual boot. It's a PITA, UEFI or not, and has been since the dawn of time. It's Linux -or- Windows. UEFI is not a problem though - people need to stop badmouthing something they really don't understand.

I settled on using Windows as a host OS[1] and use Linux in VMs because, to be fair, Windows power management, suspend/resume, hibernate, and driver support are miles better, i.e. they actually work more than once. Oh and they really don't fuck up the kernel every 2 minutes like on Ubuntu and don't throw out buggy shit like Apple do.

I used a 2011 MBP (with VirtualBox) for the last 6 months or so, however, and I had to go back to an older and slower T400, as it was more reliable.

So VirtualBox on Windows 7 it is. And it works really well. I'm pretty happy, and I'm as picky as they come when it comes to hardware and software.

On my desktop (a Dell T3500 with piles of RAM), it's 8.1 with Hyper-V with Linux in it as that works pretty damn well too.

I suspect the problem here is users rather than hardware and vendors.

[1] On my Lenovo T400.


I've reached a similar conclusion but have wildly different experiences.

I've never had issues setting up dual boot. It works fine and I can read/write NTFS if needed from Linux or just rely on Dropbox for simple stuff. The problem is it is too inconvenient to reboot so I usually just run Win 7 from a VM inside Linux. I also had a number of issues with Windows being glitchy (this persisted on three different Thinkpads with Windows 7 and, back in the day, Windows XP) and a pain to maintain, which is why I use Linux as my primary desktop OS.
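(For reference, the NTFS part is typically a one-liner with ntfs-3g; the device name and mount point below are assumptions:)

  # assumes the Windows partition is /dev/sda3 (hypothetical)
  sudo mkdir -p /mnt/windows
  sudo mount -t ntfs-3g /dev/sda3 /mnt/windows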

I also don't have a problem with power management and suspend on Linux, though I know it probably isn't as good as Windows. I don't use hibernate so I'm not sure if it is awful. For my next PC upgrade I'm not getting a laptop, however, since I hardly ever travel with it...

I like OSX fine enough, too, but I rely on a lot of Linux tools so I have no real reason to have an OSX machine.


What Linux tools that you use can you not get for OS X?


It would probably be better to say OSX provides nothing I need outside of Linux but adds a few restrictions (choice of hardware, desktop behavior, etc).

If I were forced to use OSX or someone gave me an OSX machine I think I would get along fine and be productive, though. I can actually get along fine in Windows, too, but it is somewhat frustrating and slower for me to get things done.

Not that Linux is perfect, but I'm used to it.


You could ask "What Linux tools that you use can you not get for Windows?" too, since Windows has Cygwin, MinGW, etc., but that doesn't change the fact that doing a lot of things on Windows or OS X is a lot more painful than on Linux, especially as it relates to development, debugging, etc.

It's exactly the same the other way around for e.g. audio and video editing on OS X vs. other platforms.


Interestingly, video was where Linux worked for me when iTunes failed miserably. I was trying to import videos taken on my Panasonic Lumix, and I simply couldn't import them using iTunes (I tried various options, including trying to repair them, but iTunes simply didn't see them). Granted, this is not the entire OSX, only iTunes, but considering the user is supposed to do all interfacing with their iPhone/iPod through iTunes, it was quite disappointing. On a whim, I tried opening them with Handbrake on openSUSE 12.2 and basically saved them back, and that fixed whatever was wrong with them: iTunes could then see and import them (and Handbrake is available for OSX as well).


I switched to OS X simply because it's as easy to develop for as Linux, for my use cases at least.


Not the parent commenter, but for me apt-get is irreplaceable, as is the ability to apt-get source any package on the system.
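For example, grabbing the exact source of a package you're running is one command (coreutils is just an example, and it assumes deb-src lines are enabled in sources.list):

  # fetch and unpack the source package for coreutils
  apt-get source coreutils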


Homebrew always builds from source.

I have different machines running different Linux distros, so I'm familiar with all the popular package managers (apt-get, yum, pacman), and I must say, Homebrew is really my favorite.


If Homebrew is your favorite, then you are probably a person who also loves to run Gentoo. For me, I moved away from Gentoo in ~2005 because it just wasn't practical to be compiling everything from source all the time.

Binary packages are a wonderful thing and a package manager that doesn't support them out of the box isn't a usable solution for the things I do.


I tried Homebrew on OS X, but it didn't meet my needs. I'm not a fan of having to compile a new version of GCC from source every time I want to install a new library.

Homebrew also doesn't manage OS X itself, which is something I really prefer about apt-get.

The benefit of apt-get source is being able to read the correct version of the source code of any part of the operating system to debug a problem, not necessarily building from source.


> Homebrew always builds from source.

Wrong. Homebrew has so-called "bottled" binaries that get "poured in" unless you tell it to build from source or use a flag mandating a custom build.
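Easy to check yourself (wget is just an example formula, and the exact flag may vary by Homebrew version):

  # pours the prebuilt bottle when one exists for your OS version
  brew install wget

  # forces a local compile instead
  brew install --build-from-source wget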


Homebrew or MacPorts?

I'm using the former and it works pretty well for me. Though I wish either of those would use apt-get like Fink did.


have you tried using brew?


> Oh and they really don't fuck up the kernel every 2 minutes like on Ubuntu and don't throw out buggy shit like Apple do.

It seems my experience with computers has been radically different from yours.

The only kernel I've ever had problems with is the NT kernel, and one of the reasons I stick with Apple portables is that they're generally the least buggy devices I can find.


Apple's been getting slightly worse of late, though. They're spreading themselves too thin.


I've been using GNU/Linux as my only OS for years, and honestly, I have never had any issues with it; of course, I may have other use cases than you - I don't play video games or anything - but when working on a project that, for instance, relies on gstreamer, it's better to use Linux natively. VMs have always been a pain for me: they're slow, they have I/O issues, and you'd waste more battery having to run a VM anyway, so the trade-off doesn't make sense. Also, how can you say the problem is the users? Why should installing another OS be so difficult? You should give GNU/Linux another try, and try fixing your kernel panics rather than giving up and switching to Windows.


One issue though is that people consistently try Linux on a computer built for Windows and say "what the hell, it doesn't work out of the box flawlessly, fuck this".

If you really want to try Linux, you need to buy a computer designed to run it, or at least vet your hardware before a purchase. I haven't bought a computer with a Windows license attached in over a decade, because that isn't buying a Linux-capable machine - it's buying a Windows machine you might be able to run Linux on.

My most recent system was a build I made last year, and I vetted every part for Linux support (and boy, did it take a while to verify that Asus Z87 motherboards had a working EFI that could boot a Linux kernel, albeit with a busted EFI shell and room for only one EFI boot table entry).


Why should I vet my system before buying it? I've seen Linux advertised as "it runs on everything" plenty of times, and for well over a decade. It's rare to see someone who evangelizes Linux to say that hardware support on Linux is inadequate for a non-technical user to just make the switch.

If that is indeed the case I think it would be tremendously important to fix that.


It used to be the case that Linux just ran on everything, until Microsoft started throwing their monopoly weight around again, and insisting upon UEFI (better called "Restricted Boot"). http://www.FSF.org/campaigns/secure-boot-vs-restricted-boot http://TechRights.org/wiki/index.php/UEFI


Meh :/ I started with the techrights link, and first looked at the 3rd link, "Installing GNU/Linux is Still Hard Due to UEFI", to learn more - and in the source article it was based on, the writer actually says there was no problem at all with UEFI + Secure Boot on; his Linux just installed and worked fine on his new laptop. The other two February links weren't much better: at worst an already-fixed bug that did not originate at Microsoft... The FSF link seems more technically accurate as far as I can tell as a non-Linux, non-UEFI user, but most of their problems are hypothetical rather than practical problems for now.

Are there better sources to read up on this, or is the controversy a bit overblown?


It isn't UEFI, it is peripheral manufacturers who hate Linux for whatever reason. Broadcom, Creative, Nvidia, and others all have legacies of horrible device support in the kernel.

You can't blame Linus for that. If a company doesn't want to push what is often only a few hundred lines of C to make their devices work under Linux, that's their right. But you can't blame the ecosystem for the companies' choices not to support it. It is like buying a Nexus 7 and bitching about how Windows doesn't run on it.


All my computers run UEFI if possible, and all of them run Linux.

I have no problems with UEFI whatsoever; in fact, I think it's a nice improvement to the dated BIOS technology. The Microsoft thing called "Secure Boot" might pose a problem, but I never activate that anyway.

People need to stop conflating UEFI with Secure Boot.


> "it runs on everything"

There is a distinction here - Linux, the kernel, runs on pretty much every CPU in the universe. If it is presented with any CPU and chipset ever made, it can run on that.

Your PCI devices, your USB devices, etc are not guaranteed to have Linux drivers for that hardware. And if the producers of said hardware don't release driver documentation or support a Linux driver directly, you can't blame the Linux community for not being magicians that can force private companies to bend to their will.

Hell, Broadcom - one of the worst FOSS companies, in the same class as Nvidia for the longest time - is finally producing scant upstream NIC drivers. They support a tiny fraction of their product range, and they have another 2 proprietary drivers on top of those for Linux and those don't work either, but the situation is improving.

But that is all you can do. There is no "sit down and code" answer for undocumented motherboards, bad EFI implementations, and a 15-button mouse with a 50MB proprietary driver on Windows. Well, for the latter you actually can just wireshark the USB bus and get all the signaling for the buttons, but that is a lot of work to do what the company itself could have done in minutes (publish the opcode manual they obviously have on the thing).


> If that is indeed the case I think it would be tremendously important to fix that.

You can't "fix" people to stop them from talking out of their ass.


There are very few laptops that don't come with Windows or OSX preinstalled. They are out there (like Chromebooks) but there's not a huge selection of them, and you have to hunt to find them.


https://www.system76.com/

https://www.thinkpenguin.com/

https://www.dell.com/us/business/p/xps-13-linux/pd

There are a few others. It's not really a hunt; you just have to use a Linux laptop brand or find an OEM that sells bare notebooks.


Depends where you are. Last summer I ordered a laptop from Germany and there was plenty of choice, at least Lenovo and Acer machines were widely available without Windows (variably shipping with FreeDOS, or a console-only Linux env).

Saved a tidy sum compared to shopping locally. Just pay attention to KB layout.


Also, I don't get the impression that Chromebooks are much easier to install your own Linux distro on.


Chrome OS uses Gentoo as its upstream; if it runs Chrome OS, it will almost certainly run any Linux distro with at least the same kernel version.


I've been using Linux as my desktop OS for more than 8 years. It has its ups and downs, but there is no comparison in user experience to Windows, and that's why I changed. That being said, drivers in Linux are mostly painful; for instance, I've spent several days and nights fighting with this ridiculous Bluetooth stack and I'm still getting nowhere. It is not only Linux's fault, because the developers are doing a wonderful job, but hardware manufacturers just don't give a shit. Almost every piece of hardware out there says "Compatible with Windows XP, Vista, 7, 8!" and that's it...


I also find dual boot painful. On my old machine I switched main OS several times (back and forth from Windows to Linux). I couldn't dual-boot because it just wouldn't be fun. There's no real way to share data properly between the two OSes, I wouldn't have the exact same set of available apps, etc.

Now I have a MacBook with OS X. It combines the things I want from Windows and Linux, so I have no need to change OS. If I really need Linux or Windows for something, hardware virtualisation makes things easy.


A FAT32 partition doesn't do it for you? That's where I have all my music.


That helps, but invariably you want to use both OSes at the same time. It sucks to have to reboot every time you want to shift data about. Much easier to have both machines running at the same time. VMs also have tools for the guest to share clipboards.


FAT32 does not support file permissions.
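True, though you can paper over that at mount time; a sketch, with the device, mount point, and ids as assumptions:

  # present the whole FAT32 volume as owned by uid/gid 1000
  sudo mount -t vfat -o uid=1000,gid=1000,fmask=0133,dmask=0022 /dev/sda5 /mnt/shared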


The generalization that Windows has better "power management, suspend/resume, hibernate, and driver support" is grossly wrong. Noob-friendly Linux distributions like Ubuntu provide excellent support for all the points you mention.


They really don't. Suspend doesn't always wake up all devices, and different kernel releases break hibernate completely. Also, Ubuntu just disabled hibernate for a vast chunk of time; it has never worked properly. And don't get me started on all the playing around you have to do with powertop to get usable battery life.

And that is Ubuntu (12.04 LTS), the noob-friendly edition, on standard Intel Centrino hardware.


Hibernate is a good point. But on my Samsung Series 5 ultrabook, Ubuntu 13.04 got better battery life, out of the box, than Windows 8 did.


So maybe you can explain to me why Ubuntu 12.04 LTS hangs on hibernate on my Asus Netbook.


Yeah, it doesn't work for me, so it must not work for everyone.


Quote:

> Noob-friendly Linux distributions like Ubuntu provide excellent support for all the points you mention.


When was the last time you heard a Windows user complain that hibernate hung for them every time?


Windows has its share of problems[1].

[1] https://www.youtube.com/watch?v=IW7Rqwwth84


Red Hat 5.1 (May 1998) probably had a lot more problems than modern Linux as well.


Well! The growing popularity of Linux-based consumer products and the entry of Steam, GOG, CryEngine, etc. on traditional Linux systems must be an indicator of something. Let's leave it at that.


> suspend/resume, hibernate, and driver support are miles better

While it is not an apples-to-apples comparison, this part is why I actually moved away from Windows - yes, resume and hibernate (mostly) work on Windows, but with the great pain of waiting forever for the process to complete.


I have an SSD. Windows resumes from hibernate in about 8 seconds. A complete cold start takes 12 seconds (less than OSX). That's Windows 7. On Windows 8.1 it's even faster.


Well, I don't have an SSD, but on the same machine Linux was a way better experience (at least for what I had) - and then there was the culprit of it taking forever to scan datastore.edb after recovery, too.


Same here.

I have a small Asus netbook for traveling with Ubuntu LTS, bought from Amazon with Linux already installed.

My home system, a more beefy laptop, has Windows with VMWare for Linux related stuff.

Since 1995, my first Linux experience, there are things that have hardly changed in terms of hardware support.

My latest issue is trying to make MTP support work properly.


You do know that there is life beyond Linux and Windows, don't you?


Are you using any desktop environment with the Linux VMs you run?


No. I run Windows as a desktop. I wouldn't run any Unix derivative as a desktop - they've all been horrible to me (that includes OSX). Server-side, though, it wins a lot of the time.


Windows as a desktop? You reminded me of that torture :) All the installers and updaters popping up in front of you, taskbar icons filling up for no reason, quickly degrading file system/launch performance, no package management, no workspace or tiling window managers, no sane Ruby or Python dev environments... I gave up dual boot a long time ago, and erase completely any remnants of Windows. When I need Windows (seldom), I open it on AWS.


No installers and updaters popping up for me (other than the usual Windows Update one, just like OSX/Ubuntu), the tray has 6 icons in it (fewer than my Mac's bar at the top right), the install is 2 years old with no degradation, package manager - nope, never, I mean we don't use MSIs and MSU packages at all, alt-tab?, Ruby I agree with, Python is nice on Windows (.chm help)...

I can't open windows on AWS when someone digs up my Internet connection...


Hear my fucked up story of UEFI, Secure Boot and Windows 8:

I have a Windows 8 laptop. It had 2 HDDs, so I easily installed Ubuntu 13.04 on the second HDD. However, during a Win8 update the process hung for hours and I force-closed it. After that, even my recovery partition was corrupted. So I wiped my first HDD, and since OEMs don't give out DVDs anymore, I could not reinstall. I live in Turkey, and there is a special version, Win8 Single Language, not found on the internet. You cannot confirm the checksum of such ISOs because the version is not widely used and depends on the region, I think (maybe wrong). So even when I found an ISO on a Turkish forum I could not verify it, but I tried to install the Single Language version anyway. But it could not verify my key, which is buried in my motherboard. I tried to install Windows 8 Pro from MSDNAA, and it said it cannot because of the internal license on my PC. I tried dozens of ways to disable it. When I got Windows 8.1 Pro from MSDNAA as well, it installed without problems. That is really f*cked up. I did not like the flatness of Windows anyway, so I disabled Secure Boot, converted my partitioning from GPT to MBR, and installed Ubuntu alongside Windows 7, as it used to be before these dark times.

Now I'm a teaching assistant for an operating systems course; students need to use Linux for assignments, and almost all of them use VirtualBox, which is not a very good replacement for the native experience. It is fast, but not fast enough.


I think on an OEM Win8 system with an embedded product key, you are supposed to install Win8 Core first, then enter the Win8 Pro product key after setup. Do you mean that Win8 setup still prompted for a product key? I believe there are utilities to find the embedded ACPI MSDM key if you need to. As for the solution you found: your computer comes with only Win8 Core, right? So even if you used ei.cfg etc. you'd get only Win8.1 Core, not the Pro your MSDNAA subscription allows.
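If you're on Linux anyway for the dual boot, one way people dump the embedded key is straight from the ACPI table (a sketch; assumes the firmware actually exposes an MSDM table):

  # the OEM product key is stored as plain text inside the MSDM table
  sudo strings /sys/firmware/acpi/tables/MSDM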


The problem is that my installation was Windows 8 Single Language, not Core. I tried the ei.cfg methods; they weren't helpful. I found my key through a tool, but the installer does not even allow me to enter it. With ei.cfg tricks I could enter it, but it was not accepted. For Pro, it did not even ask for a key and just said that I can't install. Even supplying the key with ei.cfg does not help.


Ah, I am not an expert on single language versions of Win8. It is not surprising that they won't let you install or activate any other version of Win8 using this key, but the installer just failing instead of prompting for a key looks like a bug. But as I said above the solution you found isn't too bad anyway.


What has this got to do with Secure Boot?


> What has this got to do with Secure Boot?

It's got nothing to do with Secure Boot. If he had gone into the UEFI and disabled Secure Boot, the problem would still have occurred.

Secure Boot has become this scapegoat for everything that could possibly go wrong.

As its name suggests, Secure Boot prevents an unverified OS from booting. If the installer is complaining about a product key mismatch, then it's got everything to do with the installer and nothing to do with Secure Boot.


I did not say Secure Boot causes this. The UEFI firmware contains the product key, and it does not allow the key to be deleted, modified, or extracted. Since it is permanently there, the Windows installer refuses to accept another key, or even a legitimate higher-edition Windows installation.


> I did not say Secure Boot causes this,

Here's what you said:

> Hear my fucked up story of UEFI, Secure Boot and Windows 8:

Why mention Secure Boot at all? It's got absolutely nothing to do with your situation.


It is a story about all of them. I did not say Secure Boot caused this problem; I said I disabled it to be able to install Windows 7.


> It is a story about all of them. I did not say Secure Boot caused this problem; I said I disabled it to be able to install Windows 7.

Suppose that a user buys a Windows 8 machine, goes to a website, and downloads malware that slows down the machine and makes it unusable. Finally, a local teenager wipes the machine, disables Secure Boot, and installs Windows 7.

One would hardly say that this is a "f-cked up story of" malware, Secure Boot, and Windows 8. Secure Boot is merely incidental, and has nothing to do with the issue of malware on Windows 8.


Note that it is version-specific. I just flattened my Surface Pro 2 and used a non-OEM version of Windows, which required its own product key. Also, Windows 8 and 8.1 have different product keys, but if you upgrade OTA to 8.1 it will install the same SKU (OEM) for 8.1. Changing the OS is complicated with UEFI and Secure Boot; the usability is insanely bad, with a lot of boundaries being blurred. It's the worst experience I have ever had installing a clean OS.


There are some terrible implementations of UEFI, and Microsoft is copping the blame for a lot of it.


Microsoft deserves the blame. They require UEFI (more accurately called Restricted Boot) for Windows 8.


Bullshit. I can boot Windows 8 from legacy (BIOS) mode on two machines.


Yes, your own installation works, but OEM installations don't work in legacy mode.


I think what your parent comment meant is that Microsoft requires UEFI to grant OEM licenses, otherwise known as "compatible with Windows 8" stickers.


"But It could not verify my key, that is buried in my motherboard."

Secure Boot systems verify your OS against a key stored on your hardware.


He's talking about the Windows product key, which has nothing to do with the Secure Boot key. You could have a properly activated Windows installation get stopped by Secure Boot because of failed boot kernel verification, or you could have a Secure Boot-verified boot fail Windows activation.


I like the way you think.


I think he was referencing secure boot checking the keys as being the main problem/cause of all the issues.


Just download one of those Win7 activation-removed torrents. They just work, and work loads better than the Win8 cesspool.


No way I'd trust an OS like that.


You're right. I don't.

I use Linux.


Then become part of a botnet, or say goodbye to your bank accounts.


Did you try the ei.cfg trick?


Yes, it did not help


Sorry, I was thinking about installing Win8.1 with a Win8 product key.


OpenBSD does not support UEFI, and probably never will. http://permalink.gmane.org/gmane.os.openbsd.misc/196288

Please note that OpenBSD has no problem handling large disks, greater than 2 TB. The problem is with the horrible Restricted Boot (UEFI) system.

The Asus model 1015E is in violation of the requirement that UEFI (Restricted Boot) can be disabled, and that Legacy Boot can be enabled. I am boycotting all future Asus products, because I had to deal with this problem after I had already purchased one. It was difficult and painful to get my money back.

If any manufacturer purposely builds systems without Restricted Boot, I will patronize them (even for non-Arm products). I am hoping that some manufacturer will build a line of Arm-based devices without Restricted Boot. (Yes, I know about BeagleBone, Raspberry Pi, etc. But these are not full-featured laptops. They are embedded machines aimed at embedded uses. I do use them for that purpose. Also note that the Raspberry Pi has the craptastic Broadcom chip, and it is anything but free [as in liberty] and open. http://permalink.gmane.org/gmane.os.openbsd.misc/192942 Also, the recent release of source code from Broadcom is only open source wrappers that call proprietary binary blobs. Fsck Broadcom.)


Please post a blog link to your Asus 1015E and Broadcom experiences. I want to reshare and help build community awareness.


I don't blog.


Ok. I can respect that.


With the locked-down antics that manufacturers are pulling these days, the only way to protect yourself is to root your device and install your preferred environment on it immediately after you obtain the device. Don't delay or procrastinate with excuses like wanting to try out the manufacturer's experience, waiting until you need the functionality, or wanting to prepare more. Every day you put it off, the more likely you are to end up stuck with, and dependent on, a user-hostile device that you were tricked into buying.

You need to run into every possible incompatibility or bricking while you're still well within the return and credit card dispute periods. And if you're actually unsure of how to proceed in making sure that shiny new device actually works for you, please please ask a technical friend for help. The future of society very much depends on it.


Actually, it is not that difficult to dual boot Windows/Linux with UEFI. You just need a better understanding of how the whole thing works. I can recommend the Arch wiki article: https://wiki.archlinux.org/index.php/Unified_Extensible_Firm...

And http://www.rodsbooks.com/efi-bootloaders/

Kernels 3.3+ can be loaded from the EFI partition directly without an additional bootloader - no need to wrestle with grub-efi. This blog explains the required configuration: http://wolfwings.dreamwidth.org/224805.html


efibootmgr makes it really simple to set up. When I get a new computer with Arch, once I install the base system, my bootloader is literally just:

efibootmgr -c -l /vmlinuz-linux -L "Archlinux" -u "initrd=/initramfs-linux.img root=<root partition uuid> ro quiet splash security=tomoyo"

Assuming /boot is my efi system partition.
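For anyone copying this, a slightly more explicit variant (the disk and partition below are assumptions, and note that the initrd path on the ESP conventionally uses backslashes):

  # assumes the ESP is the first partition of /dev/sda, mounted at /boot
  efibootmgr --create --disk /dev/sda --part 1 \
      --label "Arch Linux" --loader /vmlinuz-linux \
      --unicode 'initrd=\initramfs-linux.img root=UUID=<root partition uuid> rw'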

The real problem is that most motherboard manufacturers have absolute garbage bloated proprietary messes of EFI implementations, and we are stuck with Intel jamming this shit down our throats in the first place rather than opening up their chipset documentation so we could have coreboot support on these boards.


We know all about Hacker News "not that difficult".

I smell a simple 26 step process that can't legally be done in some parts of the Middle East and N. Korea, a few assembler and brainf*ck scripts, and a little simple arc-welding.


I learned my lesson long ago. Linux is for VMs. I run vanilla Windows as my boot OS and then use Linux from within a VM, thus getting the benefits of Windows hardware compatibility where I need it. This lets me do things like run nine monitors when my laptop is docked by chaining DisplayLink adapters. Note that Windows with DisplayLink will only support eight monitors so for the ninth I bind the DisplayLink adapter to the VM for dual monitors with my Linux VM. Going above more than two monitors with a Linux host OS is akin to beating one's head into a wall, especially if you move between monitor configurations often, but just works in Windows.

Thus far, with the VM able to bind specific hardware directly rather than sharing it via the host OS, I get all the benefits of a Linux host OS without the hassle of broken driver implementations taking me down for long stretches at a time. As an example, I can bind my ALFA WiFi adapter directly to the Linux VM and run aircrack-ng all day long.
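(If the hypervisor is VirtualBox, as a sibling comment mentions, that binding is a one-time USB filter. A sketch; the VM name and vendor/product IDs here are made up:)

  # auto-attach any matching USB adapter to the "pentest" VM
  VBoxManage usbfilter add 0 --target "pentest" --name "alfa-wifi" \
      --vendorid 148f --productid 3070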

Yes, there are drawbacks. You will need a bigger hard drive (best if you can fit two in your laptop) and more memory, but those are cheap these days. The benefits, however, are massive. I keep multiple VM versions and can instantly recover from anything stupid that I might do in the Linux kernel without impacting my ability to actually get my work done.


I have the problem that on my Windows OS I need to run Hyper-V to use the emulators for Windows Phone and Windows tablets for work purposes.

Why is it a problem? Because if you activate Hyper-V (and you need a reboot to switch it between active and inactive), any other VM, if started, will crash your OS, show a blue screen of death, restart your machine, and show an "I am trying to recover your system" screen for like 20 minutes.

And Hyper-V is like the shittiest VMM ever for running Linux, even with the "officially supported" Linux versions.


Hyper-V user here. It's fine - I run 12-15 VMs (4 of which are Linux) at a time (a full production environment with virtual networking) on a 4-year-old Dell T3500 quad Xeon with 32GB of RAM. Sounds like your hardware is broken/crap.


Hardware problem often means driver problem.

When you have Hyper-V installed, you have to be exceptionally careful about what drivers you have installed. I've even seen a Bluetooth driver bluescreen the management OS when Hyper-V was enabled.

This is just the nature of hypervisors. Most drivers are only tested on an OS that's running directly against the hardware. Run them through a hypervisor, and their behavior becomes unpredictable.

Ultimately, Hyper-V is a hypervisor, and it competes against ESX -- not VMware Workstation. That's why it's best run on workstation-class or server-class hardware: basically, situations where the hardware vendor might actually keep the drivers up to date. On consumer-level hardware, it's a crapshoot.


Yeah which is why if you're doing serious shit with Windows you buy either:

1. Thinkpad T or W series

2. Dell Precision

3. HP Z-series.

Nothing else is worth throwing any money at all unless it's for facebook or pr0n usage (consumer).


On a server, it is fine.

On a notebook? Trying to combine Hyper-V (for WP dev) and Intel HAXM (for Android dev) is a BSOD nightmare. It stopped after I got rid of Hyper-V...


It's fine on a notebook too.

More than one hypervisor per machine is not a good idea. You'll get the same problem with Intel HAXM and VMware as well.

To be fair the problem with this is that the whole Intel virtualization architecture is hacked on the side (as is every other damn feature since the 80186).


I think there is a BCD option to disable Hyper-V. You could have two boot menu options.
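Something along these lines should do it from an elevated Command Prompt - an untested sketch; the GUID to substitute is whatever the copy step prints:

  bcdedit /copy {current} /d "Windows (no Hyper-V)"
  bcdedit /set {<guid printed by the copy>} hypervisorlaunchtype off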


I do the same thing, it works well. Do you use Windows 7 or 8 as your parent OS?

I use VirtualBox and have found it reliable for years.

The Windows host OS can then be kept very clean and it remains fast and relatively secure.

Then, in those rare instances when you need to download something insecure, or something you don't want cluttering your Windows install, you can put it in a virtual machine that acts as a sandbox.

If you organize your files right you can even wipe your windows partition and reinstall with very little downtime and setup (other than the several gigs of updates that will be needed during the first few boots).


I ran into some odd problems with the DisplayLink drivers after a couple years of success on Win 7, so I recently made the jump to Win 8.1. Most of my real work happens in my VMs, so I don't mind it. My laptop has two 500 GB SSDs and has room for two more drives via mSATA cards. When I run out of room for VMs I'll go ahead and start populating the mSATA slots. I recently picked up a USB 3.0 256GB thumb drive and use it for VM portability between machines, so I can leave my big laptop docked at work and take a lighter one on business trips.

And yeah - you are right about very little downtime when reinstalling stuff. While my peers will be out for a week because of an OS or hardware failure, I can be up and running again in no time at all.


9 monitors? May I ask what you use them for?


I figured someone would ask. I do a fair amount of pen testing in addition to development. Because I work with a lot of hardware solutions, I need to run multiple instances of my tools. In a typical assessment I'll have a VM running a raw packet capture and another running an HTTP interception proxy. These will be "watching" the hardware device. I'll then replicate that same configuration to watch host browser traffic. Right there I'm at four monitors just to watch my instrumentation. I'm at a fifth for the web browser to interact with the environment and a sixth for taking notes. I typically like to have a chat going with other members of the team, as well as e-mail, so that's seven right there. Many times I'll need to write rules for on-the-fly packet manipulation, so now I'm at a solid eight. If I must also instrument the server side, then I start compromising on having e-mail open.

For years I dealt with alt-tabbing between windows, as well as using various window managers so I could have multiple desktops, but the problem was that I couldn't keep eyes on what was going on in the instrumentation. This setup saves me a massive amount of time.

Here's a picture with two additional monitors mounted to the wall next to my desk which are connected to another system I use:

http://www.defaultstore.com/mydesk.jpg


Wow, that's pretty impressive. I take it you are in the Seattle area then, with that Windows 8 screen on your laptop? That's what mine defaulted to and I'm in the area.


I'm in the NW but not Seattle. I think that's just its default. That's an HP ElitePad sitting on a dock, not connected to any monitors. That particular day I'd just re-installed the OS to take a look at the TPM.


It may not be of any importance, but I figured I should warn you that the photo you linked earlier contains EXIF data which reveals your location.
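For anyone in the same spot, exiftool can confirm and strip it (file name taken from the link above):

  # list the GPS tags, then remove them all in place
  exiftool -gps:all mydesk.jpg
  exiftool -gps:all= mydesk.jpg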


Yes, it was taken in a parking lot in Boise.


I'm in Melbourne, and our office's sole Win8 machine shows the same thing. I think it's just an homage to the home of MS.


Interesting. I wasn't aware of that at all. Good to know, thanks for the heads up. Now I feel goofy for making such an assumption.


How did you get a taskbar on the laptop and just one monitor? I thought cloning the taskbar to secondary monitors was an all-or-nothing affair.


It is all or nothing. That's a Windows 7 VM on the monitor.


On Linux I use a dual-4K configuration with a tiling window manager. I'm curious what your total screen real estate and resolutions are.


All nine are running 1920x1200 so the easiest way to calculate it is as if they are lined up in a row 17,280x1200.


My neck would never forgive me if I spent even an hour at a setup like that. How do you deal with craning your neck all day long?


Rock climbing. Seriously. It keeps my neck strong, my back strong, and my hands from getting RSI. Also, it's not like I'm looking at the same monitor all day long so I have a fair amount of head movement during the day.


Great setup!

Judging by the scopes, you're also an EE?


The test equipment comes in handy when going after hardware. JTAG can only get you so far, especially when going after an undocumented board.


Have you considered/tried using a hypervisor and putting Windows in a VM, too? If so, what were the drawbacks?

I ask because, considering the popularity of Windows as a malware/spyware target, running it as the bare-metal host OS doesn't sound like a completely good idea to me, even if the owner's really cautious.


If you use Hyper-V, Windows is already running in a VM as well.


Out of curiosity, what kind of work do you do with 9 monitors?


Security researcher.


I really don't understand all these people having issues booting in UEFI mode. Fine, disable Secure Boot (or boot Fedora/Ubuntu if you care about it), load a recent distro and get on with your work/play.


Same here, I boot Linux from UEFI on several computers and it works fine. It boots faster and uses native display resolution instead of VESA. I dual boot Win 8, chainloading from GRUB.

I think there are just a lot of people who don't know how to install GRUB to the EFI partition, or don't know how to edit their EFI boot menu with efibootmgr. UEFI is more complex than BIOS booting and if the only mental model you have is of a boot sector, you will have trouble understanding why a misbehaving system is acting as it is.

I don't enjoy needing to fix my boot loader when I install Linux or update Windows, but I'm always able to make it work. I always had boot loader issues when I dual-booted BIOS systems too, so although it's still unpleasant, it's not any worse. It's just different and requires learning some new concepts and command-line tools.
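
For anyone who hasn't touched efibootmgr, a typical session is short; the entry numbers below are illustrative, and it needs root:

    # List the firmware's boot entries and their loader paths
    efibootmgr -v

    # Boot the Linux entry (e.g. Boot0002) before Windows (Boot0000)
    efibootmgr -o 0002,0000

    # Delete a stale entry left over from a reinstall
    efibootmgr -b 0003 -B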


From some of the replies below me, it looks like there are some horrible UEFI implementations. Your other option is to shove GRUB (or similar) on a FAT32 drive and boot from that when you want to boot Windows.

FWIW, my key/activation is stored in firmware on my Lenovo X1 and I had no trouble when I disabled Secure Boot.
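
If anyone wants to try the FAT32-drive route: grub-mkstandalone can build a single self-contained EFI binary that you drop at the removable-media fallback path. A sketch, assuming the stick's FAT32 partition is mounted at /mnt/usb and you have a grub.cfg to embed:

    # Build a monolithic GRUB image with the config baked in
    grub-mkstandalone -O x86_64-efi -o BOOTX64.EFI "boot/grub/grub.cfg=./grub.cfg"

    # UEFI firmware looks for this path on removable media
    mkdir -p /mnt/usb/EFI/BOOT
    cp BOOTX64.EFI /mnt/usb/EFI/BOOT/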


If you disable UEFI, you need to install your system again, and you cannot use your old license to install it, at least with Win8. You'd have to buy another copy of Win8. So disabling UEFI is not an option for dual boot.


The parent poster said, disable UEFI Secure Boot, not UEFI, in which case Windows will continue to boot normally.


Holy cow, how are we just now finding out about this? So Microsoft is forcing you to buy yet another Windows license, after you already paid for the one included with the laptop, if you want to dual-boot?

This should have been a scandal when Windows 8 launched.


Not really. Installing Linux on a computer running UEFI is perfectly possible. It's just really hard. I'm typing this on just such a machine, and it involved activating BIOS emulation; then, when the purple Ubuntu loading screen came up, I needed to do a magic key combo and set some options I don't remember. Then I needed to run Boot-Repair and instruct it to make Ubuntu UEFI-bootable, and finally I could turn off BIOS emulation.

It's hard as fucking hell.


My last two systems (desktops, I don't even have a working notebook right now, waiting for Broadwell) used Asrock and ASUS boards with EFI implementations that worked pretty well out of the box under Linux. One runs Suse 13.1, the other runs Arch. They both have their bugs - the Asrock system wipes the EFI boot table every firmware update, and the ASUS one can only have one entry in the EFI boot table, and neither has a working EFI shell - but I did my research to know they worked.


Anecdotal, but: I have linux installed on two machines with UEFI, and neither was much harder than a BIOS install. An extra step or two, but it didn't require more than a couple additional minutes.

Sounds like the Ubuntu installer might just suck for UEFI.


I installed Debian 7.4 on a Dell XPS 13 using a live hybrid ISO (`debian-live-7.4-amd64-standard+nonfree.iso`) two days ago. After enabling UEFI and adding the USB stick as a boot device, I had to do nothing out of the normal to get it to work. It was incredibly painless.
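
In case it saves someone a search: Debian's hybrid ISOs can be written straight to a stick. A sketch, assuming the USB device is /dev/sdX (double-check that; it gets overwritten):

    # Write the hybrid ISO to the whole device, not a partition
    dd if=debian-live-7.4-amd64-standard+nonfree.iso of=/dev/sdX bs=4M
    sync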


The Linux experience now extends to before installation ;)


It was, but the number of people spreading misinformation (like the top of this thread, or the article's author just a few days ago) was huge, and since nobody had firsthand experience with it, most people simply ignored it.

Any problem you have with your computer now, and you're required to buy another Windows copy.


I'm confused, could you please elaborate on this? I successfully dual-booted a second-hand (wiped hdd, etc) Lenovo Carbon X1 w/ Ubuntu and Win8 from MSDN media. It picked up the pre-loaded Win8 license without issue.


I believe this is a myth. The product key is stored in an ACPI table, not in UEFI. Even MS mentioned that you can switch to BIOS mode in order to install 32-bit Win8.


Since when can you not use a Windows license more than once?


The license key gets tied to certain system internals upon first use, and is checked against the existence and value of the same internals in subsequent checks. Apparently, toggling UEFI (and/or Secure Boot) modifies one such internal.

EDIT: license -> license key


And then you call Microsoft on the phone and they issue a reactivation, or whatever they call it, and everything is fine. Unless this works differently with OEM versions of Windows (different, more obtuse support people?) than it does with retail, I can't imagine it's actually a problem.


Most people don't realize this.


Don't confuse the technology with the legalities. "The license" is a contractual right that you have legally purchased. If Microsoft's piracy-detecting hair-triggers are firing in a situation where you still have a license per the contract - OEM licenses are usually "on this PC" or the like, so toggling UEFI alone shouldn't breach the terms - that should be their problem. You're entitled to your license.


> at least with Win8. You'd have to buy another copy of Win8. So disabling UEFI is not an option for dual boot.

I don't use Windows; however, that would be the point where I'd just pirate it. If I buy a single-user license for a piece of hardware, I expect to be able to run it on that hardware.

Fortunately, my life and work (web dev) are better on Linux than on Windows.


Let me explain this for you then,

I have a Windows 8 Sony Vaio. If I disable Secure Boot and install Linux on a partition, the BIOS doesn't work. Period. I've seen that a number of people got it to work for a while by modifying the GRUB loader and other hackery, but then try updating Linux. Why? Buggy UEFI that is not "standards" compliant.

If I try to use UEFI with a Linux distro that supports UEFI, I get a message saying something like "Can't find Windows."


I also have a Sony Vaio and I got UEFI working. The trick is, it is hardcoded to load the bootloader from /EFI/Boot/bootx64.efi - it does not respect the EFI boot menu.

Linux is installing its UEFI bootloader to someplace like /EFI/debian/grubx64.efi. Move that file to /EFI/Boot/bootx64.efi and it will boot GRUB. You can chainload the Windows bootloader from GRUB.
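
Concretely, it's just a copy once you know the quirk. A sketch, assuming the ESP is mounted at /boot/efi (mount point varies by distro):

    # Put GRUB at the path this firmware hardcodes
    mkdir -p /boot/efi/EFI/Boot
    cp /boot/efi/EFI/debian/grubx64.efi /boot/efi/EFI/Boot/bootx64.efi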


I had a tough time installing Ubuntu on my Vaio Pro 13, but got there in the end. I had to do a lot of messing around to get the shitty UEFI implementation to read GRUB. I didn't have to disable Secure Boot or do much to GRUB, besides a particular boot flag necessary for the SSD.

It took about a day to get it up and running, but I've had no issues at all since.


I have a Sony Vaio for work, and was completely unable to get it to run Linux even with a live USB or CD. My research turned up the issue you've identified--their UEFI software doesn't work right.


There's another insane thing I've recently heard about - that Intel intends to lock the OS to their own (new) chips, and you can't dual-boot or install another OS. If they do this, yes I expect them to say that "the OEM has the choice" to allow for dual-booting or whatever, but I bet you 90 percent of PCs will be locked to Windows, when this arrives on Windows machines:

https://plus.google.com/+GuidoStepken/posts/bD2VHB4LcEU


I like how they use antivirus/antimalware software as an example, because it's what is almost universally considered "a good thing".

The flip side of it is that this security software can also be used as a tool for mass censorship, and I certainly don't want something like that being impossible to remove and running in "ring -1".


Well, once they do that, then the same way one should avoid Windows 8 if one wants a chance of running Linux, one will also be advised to avoid Intel processors if one wants control of one's system.

Except that Intel has less of a monopoly than Microsoft, so that's way easier.


This would be worse on laptops. It's braindead easy to just snag an Intel-based laptop and know everything will work out of the box. (Other than the Poulsbo netbooks, generally speaking they've done a great job supporting Linux on their hardware.)

With AMD laptops you might end up with a Broadcom wireless card and an AMD GPU.


There's no proof of this, just a Google+ translation of some news article.


The article claims: "One reason all PC's that come preinstalled with a Microsoft operating system is cheaper than regular laptop is that Microsoft subsidizes the cost of the hardware."

I had always thought that PC makers paid Microsoft for Windows. The article claims that, instead, Microsoft pays the PC makers to install Windows. I don't see how this could possibly be true if most copies of Windows are sold pre-installed on PCs.


I've never heard that Microsoft pays OEMs, but they certainly do give them a discounted price. Also, it is my understanding that they have some contract terms that either forbid sale of the same hardware with other OSes preinstalled or forbid sale at a lower price with another OS or no OS.

This is all from years ago, I don't know what terms are these days. Obviously Dell, HP, etc. do have a few preinstalled Linux systems in their catalogs now.


I too don't think this is accurate. What Intel and Microsoft have done is contribute marketing-support dollars which, from a pure business-analysis standpoint, might be considered part of the cost of the system, but it isn't what you or I would consider subsidizing the hardware. In marketing-support programs Microsoft pays for a television advertisement for your product (very expensive) which points out how great it is because it uses Microsoft Windows. Similarly with the Intel Inside advertisements.


Microsoft charges for OEM licenses. PC manufacturers, however, will include other software with Windows for a fee (imagine, e.g., Adobe paying Dell for Adobe Reader to be included on Dell consumer systems).

It is also plausible that Microsoft has a patent agreement with PC manufacturers so that they have to pay Microsoft a fee to ship systems with Linux installed. I know we've seen similar situations with companies that run Linux servers or sell Android devices.


What they probably meant is that Windows is a profit center for HP because of the 3rd party crapware/bloatware loaded with Windows. They don't have similar deals for Linux, and that would probably chase off their Linux customers.


Here's my rant on how Secure Boot should be done. Currently, you have Microsoft's public key certificates in the firmware, and you can either boot with Secure Boot on or off. I would rather the certificates be treated like a web browser's, where you have a list of "official" certs belonging to multiple independent entities. In addition, when in secure mode, you shouldn't be able to boot from unsigned media.

Now here's where I think this can be improved. Usually, hitting something like F12 at boot time gives you a list of devices to boot from (internal drive, USB devices, DVD, network, etc). What I'd like to see added to this is an item labeled "Boot from unsigned DVD/USB/Network [for OS installation mode]". Once this is selected, whatever you boot would have access to adding additional certificates to the firmware. Or, more appropriately, you should be able to select 3 boot options: 1) boot from signed media; 2) boot from unsigned media; 3) boot from unsigned media with write access to certificate key store [OS installation mode]. That way, the end user can still maintain complete control over their hardware, yet still get the benefits of protection against boot sector malware / low level root kits.
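
Firmwares that expose a "setup mode" already let you do a version of this by enrolling your own Platform Key. A sketch using openssl and the efitools utilities (file names are illustrative; the firmware must be in setup mode and efivars writable):

    # Generate your own Platform Key
    openssl req -new -x509 -newkey rsa:2048 -nodes -days 3650 \
        -subj "/CN=My Platform Key/" -keyout PK.key -out PK.crt

    # Convert it to an EFI signature list and self-sign the update
    cert-to-efi-sig-list -g "$(uuidgen)" PK.crt PK.esl
    sign-efi-sig-list -k PK.key -c PK.crt PK PK.esl PK.auth

    # Enroll it; setting PK takes the firmware out of setup mode
    efi-updatevar -f PK.auth PK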


This sounds like secure boot's "Custom Mode." 'Certified for Win8' requires that all non-ARM machines support either Secure Boot Custom Mode or SB disabled. I would hope they would support custom, but that's the manufacturer's prerogative.


Asus does not support disabling Restricted Boot for their model 1015E.


Cite?


I have a simpler solution: the bootloader is kept on a separate storage device, which can only be written to if the computer is booted up in a special mode. No signing needed, prevents common bootloader viruses, respects user freedom.


In this thread: a bunch of people who know nothing about computers spread fear, uncertainty, and doubt.

The reality is: Windows boots fine with Secure Boot disabled. Linux boots fine from UEFI; the Debian installer sets everything up perfectly.


OpenBSD doesn't support UEFI, and will never boot from it.

UEFI (better called Restricted Boot) takes away the user's freedom.

I want to own my hardware. I don't want the manufacturer to own it. I don't truly own it unless I can run whatever software I want on it. I don't own it if it prevents me from doing anything I want to do -- no matter what that is.


I guess I don't understand how UEFI is inherently less free than BIOS. I assume most of the opposition is due to Secure Boot, but every implementation I've used lets the user manage keys for Secure Boot or disable it entirely. Are there other reasons that UEFI is a threat to computing freedom?


The Asus model 1015E does not let you disable Restricted Boot, nor does it allow you to re-enable Legacy Boot.

I don't have direct proof or experience, but I understand that many other models (and manufacturers) are similarly broken.


Buggy BIOS is nothing new.


For me, precisely this was the final reason I needed to simply erase Windows 8 Professional 64-bit (which I paid for half a year earlier) and install Ubuntu 13.10.

I was worried about driver issues and spending too much time administering my system instead of using it to be productive. I cannot emphasize enough how unnecessary those worries were - I f* love it! As an IT guy using R, Git, Python and other Linux tools, this actually made a lot of tasks way easier!

My recommendation - f* dual boot - get rid of Windows and switch to Linux!


If you mention what your system is, it could help others make buying decisions. This would then send a message to hardware vendors that Linux support gives them visibility and appreciation.


https://www.system76.com/

https://www.thinkpenguin.com/

https://www.dell.com/us/business/p/xps-13-linux/pd

And others. Really, if you are buying a new system for Linux, and it isn't a custom desktop, just buy a notebook from a Linux provider so you know modern kernels support the thing out of the box flawlessly.


"My journey into hell began about two months ago when I purchased a new HP laptop computer..." That'll teach you. Next time don't buy HP. Those super cheap laptops come at the price of quality and decent support. You pay for it one way or another. As for the UEFI being bad or not working well with Linux, I disagree. I think you just don't know what you're doing and HP support is bad with no documentation.


I currently use a couple of recycled Core 2 Duo ThinkPads. They're BIOS, so of course GNU/Linux installs easily, either whole-disk or dual boot with Windows.

In the future, I suppose I'll need to look to a Chromebook as a cheap platform for my GNU/Linux.

Unless people actually start making open hardware...


No need to fear. I have a recent ThinkPad (T530) and it boots Arch Linux painlessly from USB or the internal 32 GB SSD. It also boots into Windows 8 from the main 120 GB SSD if necessary. It's UEFI with Secure Boot disabled.


Yeah I did something similar on my T530. Booting into Windows is a pain, compared to Mint. Everything wants to update and reboot.


This isn't about UEFI; I've had no issues with it, primarily because I don't use Windows on it. Microsoft is up to its old tricks here. Then again, try to run OSX on a Dell. I just wish all these guys would get their heads out of their asses.


> try to run OSX on a Dell

I don't get what you're trying to say here, so pardon me if this response is totally orthogonal to your point.

OS X isn't supposed to be run on PCs (laptop+desktop). Nor is it expected. Desktop Linux, however, is.


Except that OSX does run on PCs. The architectures are identical now. My MacBook Pro can run Windows natively because it is an x86/x64 architecture from the ground up.


I am aware of the architecture of Mac hardware. But OS X doesn't have driver (thus hardware) support for all (or even a large range of) hardware that conforms to the architecture. Apple develops (and especially optimizes) OS X expressly for its own selection of hardware, as opposed to general-purpose hardware. Apple also goes so far as to actively try to prevent OS X running on non-Mac hardware[0]. Contrast this with Linux and the *BSDs, which are developed on and for a wide range of (and general-purpose) hardware.

[0]: http://en.wikipedia.org/wiki/Apple%E2%80%93Intel_architectur...


If there wasn't a legal constraint, then hardware manufacturers and the community would make those drivers.


Yeah, it's called the Hackintosh.


"Supposed to" because Apple makes it hard, not because of any technical requirement.


> technical requirements.

Except when OS X doesn't have drivers for your hardware...

For a less snarky remark, please read my response to a similar, sibling comment.


True. I have a UEFI computer that came with no OS, and there were zero problems installing Linux. UEFI != Secure Boot.


I have come to the decision just not to buy anything HP. I had two horrid experiences with two HP laptops, and two printers that died too soon: one in less than a year, the other in less than a week. Off subject, but whenever I hear HP I cringe.


The key there is to only buy expensive HP stuff.

Their office laser printers are great, and their workstations (e.g. the Z820) are pretty awesome.

Their cheap stuff on the other hand... I agree with you on every point.


Back when I did dual boot, I found I seldom used the Windows partition because, for the reasons I'd need a Windows machine, it wasn't a very good one.

If you boot Windows for games, you are far better off buying or building a rig designed for gaming. Conversely, if you need a fast Linux software development system, swap out the hard disk of any decent Core i7 machine for an SSD; since the SSD will usually be smaller than the rotating media, dual booting becomes less attractive. And if you are shopping for a Linux machine that's less likely to be a hassle in any way, look for one that uses Intel graphics and has no 3rd-party GPU.


This is why you should buy machines meant to run Linux, or known to run it well... System76, Dell dev laptop, ThinkPads, and custom parts for a desktop.

If you do your homework, you'll always have a perfectly functioning system. My ThinkPad runs Ubuntu (and Suse) like they were meant for each other.

And dual booting is always a PITA... Just delete Windows...


I want to make this very clear:

Dual-booting with UEFI is entirely possible, provided that your hardware allows entering EFI vars manually, in a SIGNED OS ENVIRONMENT. None of the comments I've seen thus far seem to grasp that this is MANDATORY before you have the ability to finish installing a new bootloader. This is how UEFI protects itself from unauthorized OS signing!

I have my own custom EFI vars set up with GRUB 2 running on a GPT partition table, all booting beside Windows, as purely as the air you breathe, and I'm loading custom Linux kernels daily and haven't run into an issue since I originally set it up. GRUB 2 itself is my primary boot partition, which can then jump into the Windows boot manager on the other disk.

GRUB 2 itself has the capacity to act as a custom-signed EFI boot partition (not sure if I've got the wording on that correct -- but the gist is there!). This means GRUB can be the thing your BIOS checks the signature on. You don't absolutely need your kernel to be signed, provided you can get a signature produced from GRUB, OR you have the ability to write one while in a signed OS.

If you want the easiest approach, look no further than any Linux distribution that purchased a key; that gives you a signed environment that permits the further writing of EFI variables.

Or, even easier -- just write, then boot, GRUB 2's EFI-signed bootloader from a USB key to get started. After that, writing a new OS entry to UEFI should be the most straightforward thing to do in the world!

(You can even boot any kernel in the world with such a signed GRUB 2 USB disk!)
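
To make that last step concrete, writing the new entry from a booted Linux looks roughly like this (disk, partition, and loader path are illustrative):

    # Register GRUB's EFI binary with the firmware's boot manager
    efibootmgr --create --disk /dev/sda --part 1 \
        --label "GRUB2" --loader '\EFI\grub\grubx64.efi'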


The other part that really bugs me is that I cannot buy laptops without Windows from a lot of vendors.

First thing I do is wipe any Windows/Recovery partition from the drive and install Linux. Yet, I know that Microsoft got their cut and there is nothing I can do about it.

How this passes any reasonable antitrust test is beyond my comprehension; on the other hand, what we call capitalism these days doesn't have much to do with actual capitalism, so maybe I should not be surprised.


All of this faff is the reason why my next laptop will be purchased with Linux already on it. System76, most likely, although I want the Dell XPS 13 Developer Edition -- you can't buy one from Australia, which annoys me to no end (and if any lovely soul in the US feels like helping me...)

For me, day to day, Linux makes an amazing desktop and development machine. It's been 7 years since I ran a Windows computer daily (although my iMac has a Boot Camp partition for DayZ), and I don't miss it, so I'm voting with my wallet. If you'd told me I could do that a decade ago, I would have thought you were nuts, but I'm happy that's the case now. I understand the Secure Boot frustration, but without needing to dual boot it's a lot easier.

We've ceded a lot of control, and yet I wonder if we ever had a real say in the matter. Oh well, I can buy Linux laptops, and that's what matters to me. Heck, it might be easier to get them to dual boot than a Windows laptop ;)


Girlfriend's laptop is an HP Pavilion g7, and no number of tutorials can get GRUB to load up on boot. She wants to boot into Ubuntu? Gotta remember to F12 into the BIOS and manually select it...

But it seems like this is less of a UEFI issue and more of an HP shitty UEFI issue, since plenty of other manufacturers don't cause any problems.


This seems to be an explanation of how to install Ubuntu on an Acer laptop with Windows 8 - http://ubuntuforums.org/showthread.php?t=2176273&s=3a5c2ecb6...

I can't believe how convoluted and snafu-prone the process appears to be. I bought my Aspire V7 laptop in December and have never gotten around to installing Ubuntu on it simply because I dread the almost-guaranteed loss of 1-2 working days trying to undo whatever screwups happen.

I can't imagine a lay, non-Linux-lover even thinking of attempting something like this.


I guess you get what you pay for; http://i.imgur.com/wZcxCGZ.jpg


Or more appropriately, you get what you don't pay for! :-)


Strange.

When I built my current PC, I specifically set it up to boot in UEFI mode with Secure Boot disabled, so that Windows wouldn't trash the MBR. Then I happily installed Arch, and everything worked fine (even though the double bootloader, i.e. UEFI->GRUB->Linux, still makes me queasy).

The only trouble was when I tried installing Windows afterwards. At least at the time, Windows 7 DVDs weren't able to boot in UEFI mode, and therefore weren't able to use the GPT-partitioned hard disk. The fix was easy enough, at least - just copy the bootloader to the correct location on the install media and it boots just fine.
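
If anyone needs the same fix: as I recall, the EFI boot manager Windows wants is buried inside install.wim, and you copy it to the removable-media fallback path on a USB copy of the installer. Paths from memory, so treat this as a sketch:

    # 7z can read WIM archives; image index 1 holds the file
    7z e sources/install.wim 1/Windows/Boot/EFI/bootmgfw.efi

    # Place it where the firmware looks on removable media
    mkdir -p efi/boot
    cp bootmgfw.efi efi/boot/bootx64.efi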


I plan to buy a Microsoft Surface Pro 2 http://www.microsoft.com/surface/en-us/products/surface-pro-... and install Ubuntu 14.04 on it, however idiosyncratic that may sound: from reading the reviews, I like this hardware and I want its features (note that 64/12). I chose a model with a 256 GB SSD because it has 8 GB RAM. I'm aware of numerous issues with Surface Pro 2 and Ubuntu http://ubuntuforums.org/showthread.php?t=2183946 but I'm counting on 1) making it all work with the community's support, as some people in that thread have, and 2) things improving in 14.04.

I would use it solely with Ubuntu, but I'm not sure whether I can make Ubuntu run in a usable state right away, so I might dual-boot.

What should I know beforehand about running Linux on UEFI computers before my Surface Pro 2 arrives?


A lot of commentary here and in the article was worrying to me. Directly relevant since I bought a cheap (good value) Lenovo G510 laptop today that came with Win 8.1.

I didn't want to use Windows at all, just boot Clonezilla straightaway, clone the HDD and then install Debian. No go on booting from CD - "not allowed" due to the security setup. Ah, OK... so it then boots Win 8.1, and I had to go through various Windows setups I wanted to avoid before rebooting and trying again.

It was then that I had to figure out the BIOS side of things and saw it was all UEFI. I just turned it off, switched to "legacy" mode (legacy BIOS, I assume) and disabled Secure Boot. Cloned the HDD, booted the Debian Testing CD-ROM and now have a "Jessie" install on it.


Lenovo laptops will boot the beta/new Clonezilla fine. Just disable Secure Boot. Debian will boot in about 5 seconds with UEFI and an SSD.


So, I have a Lenovo laptop, running dual boot (formerly Gentoo, now Ubuntu) with Windows on UEFI using rEFInd. Zero problems so far. Am I lucky or what?!


You are at least lucky that you didn't run into this issue https://plus.google.com/+MarcMERLIN/posts/4RDPCGYCDWq where booting Linux on certain UEFI-enabled ThinkPads would reproducibly brick the machine.


I have that machine, and I run Linux on it. His post was quite valuable, and that was obviously a bit of a horror on Lenovo's part, but it looks as if a subsequent BIOS update fixed it -- or at least I got a newer version of the BIOS than he refers to and I didn't get bricked.

The machine has a terrible single-button trackpad arrangement, but is otherwise fabulously good as a Linux development box.

(I do also run Windows, in a VM rather than dual-booting)


You don't need rEFInd with Lenovo. The built-in EFI boot manager is good enough.


I'm still using a BIOS-based i7 system with a successful and stable triple-boot setup with XP (for some old music software), W7 (for games), and Linux (for work and everything else). It sounds from the comments here like I may be in for an adventure when I upgrade to a UEFI system, though the UEFI workstation I have at work seems to handle dual-booting fine.


I don't get how UEFI is still "getting the kinks worked out." Hasn't Apple been using it since they converted to Intel? How many years ago was that? How long does it take to "get the kinks worked out?" Or is this more of an issue between Windows / Linux implementations of UEFI support?


Apple makes both the OS and the UEFI implementation for their hardware. If the UEFI implementation caused any trouble for OS X, they could fix it before shipping the hardware. They also use a non-standard EFI implementation (IIRC the EFI partitions are HFS+ rather than FAT; whoever thought it was a good idea for EFI to read the partition table whatsoever should be fired).

The issue with PC hardware is, as it ever was, hardware vendors not following the standard. Much like Apple and OS X, they build some implementation and only fix bugs if they break Windows. So if you want Linux to boot everywhere, you have to implement workarounds for all the quirks in different vendors' EFI implementations, and in the meantime you can't install on their hardware.


It's the classical problem of a standard that nobody actually follows. Every UEFI firmware that I have come across has behaved differently.

My current laptop has a UEFI implementation that only boots from a hardcoded path in the EFI partition. You guessed right: the path of the Windows 8 bootloader.

The UEFI in my home server overwrites the UEFI boot manager list every time you save & exit the configuration tool.

If I remember correctly, the UEFI implementation of an MBP that my friend and I tried to dual-boot with Ubuntu required a blessed HFS boot partition for every OS.


I'm really biased against Intel processors now because of this - I endure my distaste for how horribly documented and black-box their parts are, but the fact that their firmware for going on 6 generations has no documentation to enable coreboot on these boards drives me to AMD.

Yea, they don't open up all their stuff, but if you dig around you can usually find a board on most chipsets that works with coreboot. That gets my purchase. Plus, they are doing good work with the radeonsi Mesa driver, even if they still pack binary power firmware with it (I've read a few articles decompiling and inspecting it; it's mostly just init command code to start the hardware).


My Acer requires setting a BIOS password before modifying Secure Boot settings. Fortunately, I can set it back to blank after modifying them.


I ran into this problem with an Acer laptop. Somehow I got it working by disabling something in the BIOS (legacy mode, I think), then installed Ubuntu from a live CD and made that my boot loader. I ended up returning the laptop since I didn't like Win8, got a ThinkPad T530, and mostly use Ubuntu on it.


No problems here; I just don't dual boot. I recently got a new machine at work and the first thing I did was wipe it and install Slackware. It was my maiden voyage with UEFI, which was a pain in the ass but no big deal.


I am forced to dual boot because my colleagues use Skype. Luckily, my laptop has an mSATA slot, so a 128 GB card is dedicated to Linux. Dual booting my ThinkPad has never been an issue.


Skype is available on Linux. On Ubuntu, it's in the "partner" repo.
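
If it helps, enabling it is quick (the release name here is illustrative; match your own):

    # Enable the Canonical partner repo and install Skype
    sudo add-apt-repository "deb http://archive.canonical.com/ubuntu trusty partner"
    sudo apt-get update
    sudo apt-get install skype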


Yep, my only problem with Skype on Linux is that it's an older version. Mumble taught me to use push-to-talk, but it is only available in a newer version of Skype.


Seems like a business opportunity: for companies to make their own Linux-friendly hardware... or to modify existing laptops to be Linux-friendly.


Restricted Boot has been worked around for some time with Shim.
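
For anyone who hasn't seen it: shim trusts a Machine Owner Key you enroll once, after which you can sign your own kernels. A sketch, assuming mokutil and sbsigntools are installed and MOK.key/MOK.crt are a keypair you generated yourself:

    # Convert your cert to DER and queue it for enrollment
    # (you confirm it in MokManager at the next reboot)
    openssl x509 -in MOK.crt -outform DER -out MOK.der
    mokutil --import MOK.der

    # Sign a kernel so shim will load it with Secure Boot on
    sbsign --key MOK.key --cert MOK.crt --output vmlinuz.signed vmlinuz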



