Microsoft disables Spectre mitigations as Intel’s patches cause instability (securityweek.com)
757 points by tomtoise on Jan 29, 2018 | hide | past | favorite | 315 comments



These updates most definitely could've been handled better. I was having a busy week with exams when I got a call about roughly 10 machines not booting (this was before the announcement). Sure enough, the last thing everyone reported was updating. I called the supplier, and apparently they had reports of at least 2000 machines (at that moment) that had to be reimaged across the city (from what I could tell, all were older AMD PCs) because of this dumb update. I was used to not being able to trust software, but if I can't trust hardware now either, farming suddenly appears much more appealing.


Trust me, if you don't like the idea of your entire livelihood hinging on weather or not a piece of equipment boots up, farming is NOT the career for you.


I like what you did with the typo there.


I like the idea of Internet Weather. Seems probability of rain is currently increasing. And so does anticipation of floods and surges.


Do you think there's a market for "Cloud Meteorologists"?


Such data lake. Such pastoral, idyllic landscape.


There was an old website in the 90's that did Internet Weather, I think it was from one of the BBS magazines.


Holy wow. This is prose. This is poetry.


Looks like a field ripe for AI to introduce random auto-correct induced typos into paragraphs where a change of word order could be perceived as poetic.



Fully support lol you can buy land in Colorado pretty cheap.

Had a friend do that... He bought 3 acres, spent a couple years building a cabin in his free time. Then one day just quit and went off to the woods.

Still visit him and he seems pretty happy with it. He doesn't need much income, and he makes enough money off odd jobs that he seems to be doing pretty well.


One of the attempted additions to the list, "Nobody is ever going to sell you goats as a service," is actually wrong. Companies do rent goats for land clearing [1]. Go for beekeeping, then you too can do "bees as a service" [2].

1) http://rentagoat.com/

2) http://americanbeejournal.com/insights-honey-pollination-pac...


But if you look into their eyes, you can still see the progress bars…


Farming is all mechanized these days. Farmers have to deal with OS and firmware updates on their tractors.



Yeah, AWS had scheduled reboots that were supposed to happen around the day this was all announced, so we had to scramble to deal with them manually beforehand to make sure our systems booted up properly.


I lost many hours over this last week. The system was unable to boot and finally a thread on reddit came to the rescue ( https://www.reddit.com/r/techsupport/comments/7sbihd/howto_f... ).

This actually made the system boot, but there are some leftovers that get installed on first boot, which I've been unable to disable, and which also cause the system to fail to boot.

So now the machine is running, but as soon as it is restarted we have to re-image the disk, go through the process of manually removing patches, and then pray that we don't have a power outage, as we'd have to do everything yet again on the next boot.

I'm not convinced that this patch will solve the issue either, because if this update requires a reboot, the fix won't be installed if we can't boot. I might try to install this update from the recovery console to see if that works.

Quite frustrating.


Speaking of which, why do so many things require reboot to update on Windows?


There is a very fundamental difference between how Unix and Windows view open files:

On Windows, once a file is open, it is that filename that is open; you can't rename or delete it. Therefore, if you want to replace a DLL (or any other file) that is in use, you have to kill any program that uses it before you can do that. And if it's a fundamental library everything uses (USER32.DLL, COMCTL32.DLL, etc.), the only effectively reliable way to do that is to reboot.

On Unix, once you have a handle (descriptor) for the file, the name is irrelevant; you can delete or rename the file and the descriptor still refers to the file that was opened. Thus, you can just replace any system file: existing programs will keep using the old one until they close/restart, and new programs will get the new file.

What this means is that even though you don't NEED to restart anything for most upgrades in Unix/Linux, you're still running the old version until you restart the program that uses it. Most upgrade procedures will restart relevant daemons or user programs, or notify you that you should (e.g. Debian and Ubuntu do).

You always need a reboot to upgrade a kernel (ksplice and friends notwithstanding), but otherwise it is enough on Unixes to restart the affected programs.
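
For the curious, here's a minimal C sketch of the Unix side of this (plain POSIX calls; the /tmp path is just an example): the name is removed while the descriptor stays usable, which is exactly why a package manager can swap a file out from under a running program.

    /* unlink_demo.c - a minimal sketch of the Unix behaviour described above:
     * open a file, remove its name, keep using the descriptor. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        const char *path = "/tmp/unlink_demo.txt";
        int fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0600);
        if (fd < 0) { perror("open"); return 1; }

        const char msg[] = "old library contents\n";
        if (write(fd, msg, sizeof msg - 1) < 0) { perror("write"); return 1; }

        /* Remove the name. The file (and our descriptor) stays valid. */
        if (unlink(path) != 0) { perror("unlink"); return 1; }

        /* A new file could now be created under the same name ("the upgrade"),
         * while we keep reading the old contents through our descriptor. */
        char buf[64] = {0};
        lseek(fd, 0, SEEK_SET);
        ssize_t n = read(fd, buf, sizeof buf - 1);
        printf("still readable after unlink (%zd bytes): %s", n, buf);

        close(fd);  /* only now is the old file's disk space released */
        return 0;
    }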


> On Windows, once the file is open, it is that filename that is open; You can't rename or delete it;

This is wrong... there's no clear-cut thing like the "file name" or "file stream" that you can specify as "in-use". It depends on the specifics of how the file is opened; often you can rename but not delete files that are open. Some (but AFAIK not all) in-use DLLs are like this. They can be renamed but not deleted. And then there's FILE_SHARE_DELETE which allows deletion, but then the handle starts returning errors when the file is deleted (as opposed to keeping the file "alive").

To make it even more confusing, you can pretty much always even create hardlinks to files that are "in use", but once you do that the new name cannot be deleted unless the old name can also be deleted (i.e. they follow the same rules). This should also make it clear that it's not the "name" that's in use, but the "actual file" (whatever that means... on NTFS I'd suppose it corresponds to the "file record" in the MFT).

The rule that Windows always abides by is that everything that takes up space on the disk must be reachable via some path. So you can't delete in-use files entirely because then they would be allocated disk space but unreachable via any path.

> What this means is, that even though you don't NEED to restart anything for most upgrades in Unix/Linux, you're still running the old version until you restart the program that uses it.

What I expect it also means is that you'll get inconsistencies when doing inter-process communication, since they'll be using different libraries with potential mismatches. Is this correct? Because it seems to me that the Windows method might be less flexible but is likely to be more stable, since there's a single coherent global view of the file system at any given time.


Yes, in principle what you've said about the Unix approach here is correct: if you upgrade one half of a system and not the other half and now they're talking different protocols, that might not work.

But keep in mind that if your system can't cope with this, what you've done there is engineer in unreliability: you've made a system that's deliberately not very robust. Unless it's very, very tightly integrated (e.g. two sub-routines inside the same running program), the cost savings had better be _enormous_, or what you're doing is just amplifying a problem and giving it to somebody else, like "solving" a city's waste problem by just dumping all the raw sewage into a neighbouring city's rivers.

Now, the "you can't delete things because then the disk space is unreachable" argument makes plenty of sense for, say, FAT, a filesystem from the 1980s.

But (present year argument) this is 2018. Everybody's main file systems are journalled. Sure, both systems _can_ write a record to the journal which will cause the blocks to be freed on replay, and then remove that journal entry if the blocks actually get freed up before then. The difference is that Windows doesn't bother doing this.


> Now, the "you can't delete things because then the disk space is unreachable" argument makes plenty of sense for, say, FAT, a filesystem from the 1980s.

Unix semantics were IIRC in place as far back as v7 (1979), possibly earlier - granted, a PDP disk from that time was bigger (~10-100MB) than the corresponding PC disk from a few years later (~1-10MB), but an appeal to technological progress in this particular example case is a moot point.


Aaaaaand here comes the Linux defending! OK...

> But keep in mind that if your system can't cope with this what you've done there is engineer in unreliability

It's weird that you're blaming my operating system's problems on me. "My system" is something a ton of other people wrote, and this is the case for pretty much every user of every OS. I'm not engineering anything into (or out of) my system so I don't get the "you've made a system that [basically, sucks]" comments.

> [other arguments]

I wasn't trying to go down this rabbit hole of Linux-bashing (I was just trying to present it as objective a flexibility-vs.-reliability trade-off as I could), but given the barrage of comments I've been receiving: I don't know about you, but it happens more often than I would like that I update Linux (Ubuntu) and, lo and behold, I can't really use any programs until I reboot. Sometimes the window rendering gets messed up, sometimes I get random error pop-ups, sometimes stuff just doesn't run. I don't get why it happens in every instance, and there might be lots of different reasons in different instances. IPC mismatch is my best guess for a significant fraction of the incidents. All I know is it happens and it's less stable than what you (or I) would hope or expect. Yet from everyone's comments here I'm guessing I must be the only one who encounters this. Sad for me, but I'm happy for you guys I guess.


> ...but it happens more often than I would like that I update Linux (Ubuntu) and, lo and behold, I can't really use any programs until I reboot...

Ubuntu developer here. This doesn't happen to me in practice. Most updates don't cause system instability. I rarely reboot.

Firefox is the most noticeable thing. After updating Firefox (usually it's a security update), Firefox often starts misbehaving until restarted. But I am very rarely forced to restart the login session or the entire system. I should, to get updates to actually take effect, but as a developer I'm usually aware of the specifics, so I can afford to be more selective than the average user.

Are you sure you aren't comparing apples to oranges here, and are actually complaining about the stability of updates while running the development release, which involves ABIs changing and so forth?


Development release? Currently I'm on 16.04, and I've never been on a development release of anything on Ubuntu. I'm just describing the behavior I usually see in practice (which it seems someone attributed to "D-BUS" [1]). Obviously the logon session doesn't get messed up if all I'm updating is something irrelevant like Firefox, but if I update stuff that would actually affect system components then there's a good chance I'll have to reboot after the update or I'll start seeing weird behavior. This has generally been my experience ever since... any Ubuntu version, really. It's almost ironic that the most robust thing to update in practice is the OS kernel.

[1] https://news.ycombinator.com/item?id=16257060


All I can say is that, based on everything I know, that's not the current experience of the majority of users, so it doesn't seem fair for you to generalize this to some architectural problem. I don't know if you unknowingly have some edge case setup or what.


Also, FYI, update: apparently I'm not the only person in the world experiencing this [1].

But we are the only 2 people in the world experiencing this, so never mind.

[1] https://news.ycombinator.com/item?id=16257935


Are you reading the same comments I'm writing? I was literally point-by-point saying the opposite of what you seem to have read me writing:

> You: All I can say is that, based on everything I know, that's not the current experience of the majority of users

>> Me: Yet from everyone's comments here I'm guessing I must be the only one who encounters this.

???

> You: It doesn't seem fair for you to generalize this to some architectural problem.

>> Me: I don't get why it happens in every instance, and there might be lots of different reasons in different instances. IPC mismatch is my best guess for a significant fraction of the incidents.

???


I have been running Ubuntu since 2004, and except for Firefox which tends to destabilize on update, I’ve observed this twice in 14 years; I update weekly or more often, and reboot every few months (usually on a kernel update I want to take hold)


Maybe it's because you update often so there are fewer changes in between? I update far less frequently. It's not my primary OS, so it's not like I'm even on it every day (or week). I use it whenever I need to.


Have you filed any bug report, or can you point to one? All I have seen is a lot of handwaving about "IPC mismatch"; things will not fix themselves unless people actively report/help fix issues.


Here, on Arch, Firefox updates don't cause me any grief. The only time I've ever needed to reboot is after a kernel or DKMS module update.

For systemd updates, I can just reload it. For core components like bash, and for major DE updates, I can just lazily use loginctl to terminate all of my sessions and start fresh.

I'm not sure why Firefox would be causing instability until you restart (reboot?), though.


> I'm not sure why Firefox would be causing instability until you restart (reboot?), though.

I get the impression that the UI loads files from disk dynamically, which start mismatching what was already loaded.


Firefox with e10s enabled (all current releases) detects version differences between the parent process and a child process started at a later point in time. Until recently it aborted the entire browser when that happened. I think now they have some logic that tries to keep running with the already open processes and abandon the incompatible child.

Ideally they'd just prefork a template process for children and open fds for everything they need, that way such a detection wouldn't be necessary.


For Chrome we had to go through a lot of extra effort to make updates not break us.

http://neugierig.org/software/chromium/notes/2011/08/zygote....


Or you could explicitly design the package so that your pre/postinstall scripts ensure that you install to a separate directory, and rename-replace the old directory, so you can’t get half-finished updates.

Regarding the rest, if your code has incompatible API breaks between two patch or minor version changes, you’ll need to rethink your development model.


Ubuntu user here. Ubuntu is less stable than my second girlfriend, and she tried to stab me once.

Lately, every time my co-worker has updated Ubuntu, it has broken his system. He's like my canary in the coalmine. I wait for his system to not fall over before I will update mine.


Maybe it's time to consider an OS with better maintainers, like Debian. I've had fewer issues on unstable/sid over the past few years than I had on the last Ubuntu LTS release (which was what spurred me to Debian). On my other machine, Debian Stretch (and Jessie prior to upgrading) has treated me well; there just isn't breakage when upgrading to the latest stable release or when applying security patches.


I chose Ubuntu because it was more widely supported by 3rd party software vendors and support companies than Debian. But this doesn't matter, because I still ran into hardware and software compatibility issues, and Ubuntu is more up to date than Debian, meaning Debian would have been even more broken by default.

I don't know of a single Linux distro that works out of the box with my laptops. Maybe if I bought a $2,000 laptop that shipped with Linux it would work. It would still be a pain in the ass to update, though.

I kind of hate Linux as a desktop now. I've been using it as such for 14 years, and it's only gotten worse.


I had very similar reasons for starting with Ubuntu, but when it came right down to it, all the software that I thought would only work on Ubuntu works just fine on Debian.

Hardware support wise, newer kernels generally come to Debian sooner too, as the latest stable kernel generally gets into Sid a week or so after release, then gets added to backports for Debian Stable after a few weeks. Currently you can nab 4.14 from backports on Debian Stable, and 4.15 should be coming down the pike shortly (seeing as it's just a few days old).


Depends what you need of course; a lot of people buy the newest and fastest but do not need it. Most (90%+) of my dev work works fine on an X220, which I can pick up for $80, has stellar Linux support, and still gets really good (14+ hour) battery life. Depends on the use case of course, but when I see what most people around me do on their 2k+ laptops, they could have saved most of that. Also, Ubuntu Unity is just not very good, but Ubuntu or Debian with i3 is perfect. Cannot imagine a better desktop.


> I kind of hate Linux as a desktop now

This is unfortunate. I've been using Linux and suffered, for the lack of a better word, with its warts since 2002.

There was a period between 2013-2016 where Linux was great as my main operating system. It was more stable than OS X and was much better for development.

Is hardware support your main issue with desktop Linux?


No, it's mostly software, but hardware is a big problem.

The software (especially Ubuntu's desktop) is lacking basic features from even 10 years ago. Maybe there's a way to get it to do what it used to do, but I can't figure it out, and I'm not going to research for two days to figure it out. I just live with a lack of functionality until I can replace this thing.

Not only that, but things are more complicated, with more subsystems applying more constraints (in the name of compatibility, or security, or whatever) that I never asked for and that constantly get in my way. Just trying to control sound output and volume gives me headaches. Trying to get a new piece of software to work requires working out why some shitty subsystem is not letting the software work, even though it is installed correctly. Or whining about security problems. You installed the software, Ubuntu, don't fucking whine to me that there's an SELinux violation when I open my browser!

Hardware is a big problem because modern software requires more and more memory and compute cycles. All of my old, stable laptops can no longer perform the web browsing workloads they used to. Browsers just crash from lack of memory, or churn from too much processing. If you don't use modern browsers, pages just won't load.

Aside from the computing power issue, drivers are garbage. Ignoring the fact that some installers simply don't support the most modern hard disks, and UEFI stupidity, I can't get video to work half the time. When I can, there are artifacts everywhere, and I have to research for three days straight to decipher what mystical combination of graphics driver and firmware and display server and display configuration will give me working graphics. Virtually every new laptop for several years uses hybrid graphics, and you can't opt-out or you get artifacts or crashing. Even my wifi card causes corruption and system crashing, which I can barely control if I turn off all the features of the driver and set it to the lowest speed! Wifi!!! How do you screw that up, seriously?

Modern Linux is just a pain in the ass and I'm way too old to spend my life trying to make it work.


[flagged]


Please don't post unsubstantive comments here.


FF usually just CTDs (crashes to desktop) after an update.


> I update Linux (Ubuntu) and, lo and behold, I can't really use any programs until I reboot

Which is almost true. In fact, you're unable to use programs whose runtime dependencies changed or that conflict with current user sessions, init processes, or kernel modules. You can often use other programs, but not ones that in any way touch the ones you upgraded, for one reason or another.

If you have to upgrade, say, a command line utility, that almost always doesn't require rebooting. If you have to upgrade a GUI app, or a tool that depends on some bastardized unholy subsystem designed to "secure desktop sessions", that may very well require relinquishing the session and restarting it. If you have to upgrade a tool used by your desktop (and if you have a complex desktop, that is literally thousands of programs), it's the same story, though you may even need to restart your desktop session manager or even your display server.

Then there are system init processes, kernel modules, firmware, system daemons and the like. You can reload those without rebooting, but it's certainly not easy - you will probably have to change to runlevel 1, which kills almost everything running. You can reload the kernel without rebooting, too - very handy for live patching - but really, why the hell would anyone want to do this unless they were afraid to power off their system?

So, technically, rebooting is not required to update in many cases in Linux, just like in Windows. But it is definitely the simplest and most reliable way.


> If you have to upgrade a GUI app, or a tool that depends on some bastardized unholy subsystem designed to "secure desktop sessions", that may very well require relinquishing the session and restarting it. If you have to upgrade a tool used by your desktop (and if you have a complex desktop, that is literally thousands of programs), it's the same story, though you may even need to restart your desktop session manager or even your display server.

Thanks, I'm glad at least one person agrees I'm not hallucinating. The vast majority of people here are telling me I'm basically the only one this happens to.


I've long since abandoned "bare metal" Linux in favor of VirtualBox and Windows for my home machine and VirtualBox and macOS on my laptop.

Mondays I merge last week's snapshot, take a new one, and run all my updates. Then I do my dev work in my VM. Before I head out on trips, I just ship the entire machine over the network to my MacBook Pro.

This is mostly because, have you literally ever tried to install any Linux on laptops? It's always Russian roulette with those $+#&ing Broadcom wireless chipsets. >.<

So you're not hallucinating. Linux as a desktop/laptop had a sweet spot from like... 2012-ish till 2016. Then 802.11ac went mainstream so Broadcom released new chipsets, graphics cards had a whole thing with new drivers, and Ubuntu's packagers (the people) lost their minds or something.

Nothing feels right, at least in Ubuntu/Arch land right now.


How about just buy stuff that's well supported if you intend to use Linux on it. Been working for me since 2003.


Because I don't buy my laptops. My employer does.


> I don't know about you, but it happens more often than I would like that I update Linux (Ubuntu) and, lo and behold, I can't really use any programs until I reboot. Sometimes the window rendering gets messed up, sometimes I get random error pop-ups, sometimes stuff just doesn't run.

This is less of an issue with Linux, per se, and more to do with proprietary video drivers.

I have multiple systems in my home with various GPUs. The systems running Intel and AMD GPUs with open source drivers don't have this problem. The two desktops with Nvidia GPUs have this problem whenever the Nvidia driver is updated.

I also had the same exact problem with my AMD system back when it was running the proprietary fglrx driver.


>This is less of an issue with Linux, per se, and more to do with proprietary video drivers.

Actually, it IS a problem with Linux. I don't get this behavior on my Windows or OSX machines where NVIDIA has been reliably (modulo obvious caveats) shipping "evil proprietary" drivers for a decade.

Linux is great, but it doesn't need to be coddled.


> It's weird that you're blaming my operating system's problems on me.

No one is blaming you specifically; it is a common way of saying, "if you write operating systems, and you do $THING, you will get $RESULT." Common, but wrong, which is why your high school English teacher will ding you for phrasing something that way.


Why would Linux need ‘defending’ for superior flexibility? The fact that files work like this is an advantage, not a disadvantage. I have never seen the flaw you’ve pointed out actually occurring in practice.


Well, it's not always an advantage. It's just the consequences of a different locking philosophy.

Windows patches are a much bigger pain in the ass to deal with on a month-to-month basis, but Linux patches can really bite you.

Example 1:

Say I have application 1 that uses shared library X, and application 2 that spawns an external process every 5 minutes that uses library X and communicates in some way with application 1. Now let's say that library X v2.0 and v2.1 are incompatible, and I need to apply an update.

On Windows, if I apply this update, everything will keep running on the old version until the system is rebooted. Updates, although they take significant time due to restarts, are essentially atomic: the update either applies to the entire system or to none of it. The system will continue to function in the unpatched state until after it reboots.

On Linux, it's possible for application 1 to continue to run with v2.0 of the shared library, while application 2 will load v2.1, and suddenly your applications stop working. You have to know that your security update is going to cause this breaking change and you need to deal with it immediately after applying the update.

Example 2:

A patch is released which, unbeknownst to you, causes your system to be configured in a non-bootable state.

On Windows, you'll find out immediately that your patch broke the system. It's likely (but not certain) to reboot again, roll back the patch, and return to the pre-patched state. In any event, you will know that the breaking patch is one that was in the most recently applied batch.

On Linux, you may not reboot for months. There may be dozens or hundreds of updates applied before you reboot your system and find that it's not in a bootable state, and you'll have no idea which patch has caused your issue. If you want your system in a known-working state, you'll have to restore it prior to the last system reboot. And God help you if you made any configuration changes or updates to applications that are not in your distro's repository.


No lie. After all, nothing is stopping you from updating once every Tuesday and rebooting after updates. You just won't have to do it 8 times in succession or stop in the middle of doing useful work to do so.

I just don't update Nvidia or my kernel automatically, and magically I only have to reboot less than once a month, and always on my schedule.


I have! We had a log-shipping daemon that wasn't always releasing its file handles properly and kept taking out applications by running the box out of space. That said, I drastically prefer the Unix behaviour.


It is a common tactic of Linux evangelists to state that they never have the problems you're experiencing and thereby disregard any criticism. You'll probably also get variants of "you're using the wrong distribution".


> What I expect it also means is that you'll get inconsistencies when doing inter-process communication, since they'll be using different libraries with potential mismatches. Is this correct?

Only if the libraries that use IPC have changed their wire format between versions, which would be a pretty bad practice, so I wouldn't expect that to happen often (if ever).

If something that's already running has its data files moved around or changed sufficiently, and it later tries to open what it thinks is an old data file (that is, the app was running but the data file wasn't open when the upgrade happened), but the file is either new and different or just missing, that could cause problems.

> Because it seems to me that the Windows method might be less flexible but is likely to be more stable, since there's a single coherent global view of the file system at any given time.

In practice I've never had an issue with this (nearly 20 years using various Linux desktop and server distros). Upgrade-in-place is generally the norm, and most people will only reboot if there's a kernel update or an update to the init system.


*nixes have systems for handing off to a newer kernel, such as kpatch and kGraft.

kGraft, for example, switches each process over to the patched code while it is not using it (e.g. at a syscall boundary). This lets the OS slowly transition to the new kernel code as it is running.

kpatch does it all in one go, but locks up the system for a few milliseconds.

The version that is currently merged into 4.0+ kernels is a hybrid of the two, developed by the authors of both systems.


Runtime kernel patching is a new thing. "*nixes" do not generally have these systems. Some proprietary technologies exist which enable specific platforms to use runtime kernel patching.


> What I expect it also means is that you'll get inconsistencies when doing inter-process communication, since they'll be using different libraries with potential mismatches.

In theory, but Linux systems tend to do very little IPC other than X11, pipelines, and IP-based communication, where the protocols tend to support running with different versions.

In practice you can achieve multi-year uptimes with systems until you get a mandatory kernel security update.


How can you have a multi-year uptime unless you willfully ignore kernel security updates? In this day and age, year-long uptimes are an anti-pattern (if only because you cannot be sure whether your services are actually reboot-safe).


It's easy. You gather information about what the risks and hazards are for each vulnerability and then pragmatically decide whether there are any unacceptable risks after you mitigate with other layers of security.

It's a really common engineering task to do this and I'm not at all surprised that someone trying to maintain uptime would do so. Honestly it's more mature than updating every time because each change also introduces more potential for regression. If your goal is to run a stable system you want to avoid this unless the risk is outweighed.


But with "yum check-update" or the equivalent apt-get incantation saying you have dozens of security updates every week or two, reading the release notes for all of them and deciding which ones can be skipped safely in your environment is too much work. Far easier to just apply all updates every two weeks or monthly or whatever your schedule is, and then reboot.


Fully agree here; a lot (most?) of patches and updates are simply not exploitable in the respective server use case, so why should I incur risk of downtime to apply it?


> you willfully ignore kernel security updates.

If my system is closed to the public world, has a tiny amount of external services, and I am aware of the specific bug delta since system release and what mitigations may or may not be required, I can leave it running as long as I choose to accept the risk. Cute phrases like 'pattern' and 'anti-pattern' are rules of thumb, not absolute truths.


Ksplice or KernelCare


Kernel Live Patching (KLP) has been in mainline since 4.4. I've used it to patch various flaws in my Linux distribution since rebooting the running cluster is more tedious.


kexec?


kexec doesn't keep your services running, it just allows the kernel to act as a bootloader for another kernel.


X11 (or other window-related tooling) was exactly what I was thinking of actually, because every time I do a major Linux (Ubuntu) update I can't really launch programs and use my computer normally until I reboot. It always gets finicky and IPC mismatch is the best explanation I can think of.


Are you sure this isn't more the Desktop Environment/Display Manager than X11? Or otherwise something to do with your use case?

I've primarily been using AwesomeWM for the last few years and occasionally XFCE (both on Arch Linux), and I can't recall ever experiencing what you describe.


That's D-BUS being a terribly specified and poorly implemented mess. Everything underneath should be solid.


I mean, OK, but if Windows's GUI (or PowerShell, or whatever) crashed or misbehaved upon update, would you be satisfied with "that's win32k/DWM/whatever being a poorly implemented mess; everything underneath should be solid"?


No. D-BUS is a travesty and blight upon the Linux desktop, and with systemd, every Linux system. It's the most fragile and nasty IPC system I've encountered. There are several better alternatives implemented by competent people, so there's really no excuse for its many defects.


Roger, I think we both know D-Bus has been rock stable for the last 7 years, or possibly more, and Simon McVittie, its current maintainer, is a highly skilled and competent engineer.

I find the situation different with the systemd maintainers, whose communication style used to be questionable too often and whose software had some nasty bugs, but I must admit that despite those problems they've also built software that is nevertheless reliable, even though it took them a lot of time to get there.

This, honestly, is the most disappointing statement I've read from you in the last couple of years. And I'm saying this as a person who used to respect you a lot. I find your lack of respect, together with the willingness to spread lies like this, quite appalling.


> What I expect it also means is that you'll get inconsistencies when doing inter-process communication, since they'll be using different libraries with potential mismatches. Is this correct?

At first glance this is true, but you can guard against it in several ways. If your process only forks children, then they already inherit the loaded libraries from the parent as part of the forked address space. Alternatively you can pass open file descriptors between processes. Another option is to use file-system snapshots, at least if the filesystem supports them.

Yet another option is to not replace individual files but complete directories and swap them out via RENAME_EXCHANGE (an atomic swap, available since kernel 3.15). As long as the process keeps a handle on its original working directory it can keep working with the old version even if it has been replaced with a new one.

Some of those approaches are tricky, but if you want to guard against such inconsistencies at least it is possible. And if your IPC interfaces provide a stable API it shouldn't be necessary.
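
As a rough illustration of the RENAME_EXCHANGE idea (just a sketch, not anyone's actual update tooling): this assumes Linux 3.15+ and a glibc new enough (2.28+) to expose the renameat2() wrapper, and the directory names "app" and "app.new" are made up for illustration.

    /* exchange_demo.c - atomically swap a staged directory with the live one,
     * a sketch of the RENAME_EXCHANGE approach mentioned above. "app" and
     * "app.new" are hypothetical; both must already exist. */
    #define _GNU_SOURCE
    #include <errno.h>
    #include <fcntl.h>   /* AT_FDCWD */
    #include <stdio.h>   /* renameat2() (glibc >= 2.28) */
    #include <string.h>

    #ifndef RENAME_EXCHANGE
    #define RENAME_EXCHANGE (1 << 1)   /* value from <linux/fs.h>, for older headers */
    #endif

    int main(void) {
        /* Atomically exchange the freshly staged tree with the running one.
         * Processes that kept a handle on the old "app" directory keep seeing
         * the old files; anything started afterwards sees the new tree. */
        if (renameat2(AT_FDCWD, "app.new", AT_FDCWD, "app", RENAME_EXCHANGE) != 0) {
            fprintf(stderr, "renameat2: %s\n", strerror(errno));
            return 1;
        }
        puts("swapped 'app' and 'app.new' atomically");
        return 0;
    }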

> And then there's FILE_SHARE_DELETE which allows deletion

That has some issues when the file is mmaped. If I recall correctly you can't replace it as long as a mapping is open.


> What I expect it also means is that you'll get inconsistencies when doing inter-process communication, since they'll be using different libraries with potential mismatches.

While this is true, I've never seen it be a problem. If two programs use IPC, they usually use either a stable or a compatible protocol.

To make things even more complicated, you can have two programs, either each in its own container, or statically linked, or with their private bundles of libraries, doing IPC, and then they are free to have different versions of the underlying libraries while the users still expect them to work fine.


In principle, yes, IPC can fail between different versions of the same software. However, the chances that communication will fail between new version and some other utility are IMO much higher. A surprise comm failure between two copies of the same software (even different revisions) usually makes developers look pretty bad.

Some versions are known to be incompatible, and most Linux distributions do a very good job of recommending and doing a restart of affected services in a way transparent to users. I have been running Linux at home and at work for years, almost never restart those workstations, and, as far as I can tell, have never had problems from piecemeal upgrades. My 2c.


Here is the simplest way I can put it: When you delete a file in NT, any NtCreateFile() on its name will fail with STATUS_DELETE_PENDING until the last handle is closed.[1] Unix will remove the name for this case and the name is re-usable for any number of unrelated future files.

[1] Note that this is not the same as your "must be reachable via some path". It is literally inaccessible by name after delete. Try to access it by name and you get STATUS_DELETE_PENDING. This is unrelated to the other misfeature of being able to block deletes by not including FILE_SHARE_DELETE.


"Reachable" doesn't mean "openable". Reachable just means there is a path that the system identifies the file with. There are files you cannot open but which are nevertheless reachable by path. Lots of reasons can exist for this and a pending delete is just one of them. Others can include having wrong permissions or being special files (e.g. hiberfil.sys or even $MFTMirr).


I would be kind of surprised if "the system" cares much about the name of a delete pending file. NT philosophy is to discard the name as soon as possible and work with handles. I was under the impression that ntfs.sys only has this behavior because older filesystems led everybody to expect it.


Well if you look at the scenario you described, I don't believe the parent folder can be deleted while the child is pending deletion. And if the system crashes, I'd expect the file to be there (but haven't tested). So the path components do have to be kept around somewhere...


It's true that NT won't let you remove a directory if a child has a handle open. But I suspect you are getting the reasoning backwards. The directory is not empty as long as that delete-pending file is there. Remove this ill-conceived implementation detail (and it is that) and this and other problems go away.

There is also an API that retrieves a filename from a handle. I don't think it guarantees the name be usable though.

It's easy to imagine a system that works the way I would have it, because it exists: Unix. You can unlink and keep descriptors open. NT is very close to being there too, except for these goofy quirks which are kind of artificial.


This has led to some interesting observations for me on Linux when I've had really large log files that were "deleted" while still in use by an app (I think cat /dev/null > file will do this). Tools like du now cannot find where the disk usage actually is. Only on restart of the app does usage show correctly again. Kinda hard to troubleshoot if you were not aware this was what happened.
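
If anyone wants to reproduce that gotcha, here's a small C sketch (the /tmp path and the 64 MB size are arbitrary): the space taken by the "log" only comes back when the last descriptor is closed, even though the name is gone and du can no longer account for it.

    /* space_demo.c - show that a deleted-but-still-open file keeps its disk
     * space until the last descriptor is closed. Minimal sketch. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/statvfs.h>
    #include <unistd.h>

    static unsigned long long free_bytes(const char *path) {
        struct statvfs sv;
        if (statvfs(path, &sv) != 0) { perror("statvfs"); return 0; }
        return (unsigned long long)sv.f_bavail * sv.f_frsize;
    }

    int main(void) {
        const char *path = "/tmp/space_demo.log";
        int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0600);
        if (fd < 0) { perror("open"); return 1; }

        static char block[1 << 20];          /* 1 MB of zeroes */
        for (int i = 0; i < 64; i++) {
            if (write(fd, block, sizeof block) < 0) { perror("write"); return 1; }
        }
        fsync(fd);

        unlink(path);                        /* the log gets "rotated away" */
        printf("after unlink: %llu bytes free\n", free_bytes("/tmp"));
        /* du can no longer see the file, but the space is still in use. */

        close(fd);                           /* the daemon finally lets go */
        sync();
        printf("after close : %llu bytes free\n", free_bytes("/tmp"));
        return 0;
    }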


I agree that this is a drawback or a common gotcha for the Unix behavior which would be more user visible with the NT behavior, but to anyone advocating the Windows way I would ask: is it worth getting this fringe detail "right" by making every unlink(x); open(x, O_CREAT ...); into a risky behavior that may randomly fail depending on what another process is doing to x? On Windows, I have seen this type of pattern, a common one because most people aren't aware of this corner case, be the cause of seemingly random failures that would be rather inexplicable to most programmers. (Often the program holding x open is an AV product scanning it for viruses, meaning that any given user system might have a flurry of race condition causing filesystem activities that may or may not conflict with your process.)


>So you can't delete in-use files entirely because then they would be allocated disk space but unreachable via any path.

Ah. This must be why I can't permanently delete files in use by a program but can sometimes "delete" them and send them to the recycle bin.


> So you can't delete in-use files entirely because then they would be allocated disk space but unreachable via any path.

Isn't there a $MFT\\{file entry number} virtual directory that gives an alternate access path to each file? Wouldn't that qualify as "a way to access the file?"

Also, you might say that in practice Linux abides by the same rule - the old file can be referenced through some /proc/$pid/fd/$fd entry.


>What I expect it also means is that you'll get inconsistencies when doing inter-process communication, since they'll be using different libraries with potential mismatches.

That's why you should restart those programs that were using the library. You can find this out via `lsof`.


Really? Somehow procure a list of all libraries that were updated in a system update, go through each one, find out which program was using it, and kill that program? Every single time I update? You can't be serious.


Apt does this automatically for you on upgrade.


"checkrestart" also exists (on some distributions) for exactly this same purpose.


What you're describing is trivially done on any *nix system with a mature package manager, if it isn't doing this already:

1. Do the upgrade, this changes the files.

2. You have a log of what packages got upgraded.

3. Run the package system's introspection commands to see what files belonged to that package before & after

4. Run the package system's introspection commands to see what reverse depends on those packages, or use the lsof trick mentioned upthread.

5. For each of those things restart any running instance of the application / daemon.

Even if something didn't implement this already (most mature systems do), this is at most a rather trivial 30-line shell script.


In a case where one could get some sort of inconsistency because of different library versions, you restart the applications. That’s the point, that this can be handled by restarting the applications and not the entire operating system.


Is there a way to get a nice helpful popup telling me which applications and services to restart?


Yes, there are many ways to determine that on the command line. As far as a GUI, I couldn’t say as that isn’t how I do system administration.

I’m not sure what scenario you are envisioning. Usually upgrades are handled via the distribution and its package manager, and the maintainers take care of library issues. It’s not like windows where you go all over downloading packages from websites and installing them over each other.


I was responding to your own comment

>In a case where one could get some sort of inconsistency because of different library versions, you restart the applications.

I am envisioning the same scenario you replied to!


Yes, but for some reason you want a GUI alert to tell you important things, which is a foreign concept to me. What I envision is knowing what is running on your server and updating programs purposefully, with knowledge of how they interact and what problems version inconsistency could cause.


You're the one proposing the enumeration of affected programs and services and restarting them as a solution!! How is using a GUI a foreign concept? Are you debating the merits of a GUI in 2018?

In any case, my assumption here is we're trying to help the user and give them an easy way to know what to do, instead of leaving their software in an undefined state.


Great. For one, I was initially confused because I thought you were the author of the post I replied to. Next, yes. I think we should do that.


Heck, on Windows I couldn't even rename audio/video files when playing them in an app, but I can on macOS, without anything crashing (except some stubborn Windows-logic apps like VLC which will fail to replay something from its playlist that has since been renamed, but at least on macOS it will still allow you to rename or move the files while they're being played.)

It's small details like these that make so much difference in daily convenience.


I think a general purpose UX should err on the side of "make it difficult for non-technical users to make mistakes"

Renaming/Deleting files in use is one of those things that us nerds like to complain about, but it makes sense when you think of an accountant that has an open spreadsheet and accidentally deletes a folder with that file. For average non-technical people (on any OS) I would say it makes sense to block that file from being deleted.


I see you’ve never actually experienced it? It is actually more intuitive for the average user, as the file name is updated across every application immediately. In fact, you can actually change the file name from the top of the window directly.


I have experienced this many times in MacOS after deleting a file/folder and replacing it with some other version, like one downloaded from an email. After a while, I realize that I'm working from ~/.Trash and the attachment I just sent didn't include the changes I had been making for the last hour.

I've had this happen in bash also, where I modify some script in an external editor then try to run it, only to realize that I'm running from the trash, even though the bash prompt naively tells me I'm in ~/SomethingNotTrashFolder.

Intuitive would be "Hey, this file you're working on was just moved to the trash. There's a 99.9999% chance you don't want to do this." rather than hoping the filepath is visible, and noticed by dumb chance, in the title bar, since not many people periodically glance at the file they have open to verify it's still the file they want open.


How does the user know that the file is open? Also, is it consistent across network folder renames too?


No idea if the OS design plays into this or if it's just an application design convention, but on desktop Linux the way it often works (for example with KDE programs) is that if a program has a file open which was moved (including moved to the trash), then the program will pop open a little persistent notification with a button offering to save the content that you still have in the program to the location where the file used to be, effectively allowing you to recover from such mistakes without hindering you from moving/deleting the file.


> On Windows, once the file is open, it is that filename that is open; You can't rename or delete it;

I am not sure about deletion, but one of my programs has been using the ability to rename itself to do updates for the last 15 years.


This doesn't fully explain why a reboot is not required on Linux. If a *nix operating system updates sysfile1.so and sysfile2.so in the way you describe, then there will be some time where the filename sysfile1.so refers to the new version of that file while sysfile2.so refers to the old version. A program that is started in this brief window will get mixed versions of these libraries. It is unlikely that all combinations of versions of libraries have been tested together, so you could end up running with untested and possibly incompatible versions of libraries.


> This doesn't fully explain why a reboot is not required on Linux.

Of course there is a theoretical possibility that this will happen; however, in practice, updates (especially security updates) on Linux happen with ABI-compatible libraries. E.g. on Debian/Ubuntu,

    apt-get update && apt-get upgrade

will generally only do ABI-compatible updates, without installing additional packages (you need 'dist-upgrade' or 'full-upgrade' for that).

Some updates will go as far as to prevent a program restart while updating (by temporarily making the executable unavailable).

Firefox on Ubuntu is an outlier - an update will replace it with one that isn't ABI compatible. It detects this and encourages you to restart it.

All in all, it's not that a reboot is never required for Linux theoretically - it is that practically, you MUST reboot only for a kernel update, and may occasionally need to restart other programs that have been updated (but are rarely forced to).


This generally should never happen; Linux distributions don't wholesale replace shared objects with ABI-incompatible versions - sonames exist to protect against this very issue.


I had a program with a rarely reported bug that turned out to be exactly this issue, triggered by lazy loading of .so files. I switched to eager loading and it went away.


> On Windows, once the file is open, it is that filename that is open; You can't rename or delete it

It's simple for any application to open a file in Windows such that it will allow a rename or delete while open - set the FILE_SHARE_DELETE bit on the dwShareMode arg of the win32 CreateFile() function. In .NET, the same behaviour is exposed by File.Open / FileShare.Delete.
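
Roughly, in Win32 terms (just a sketch, not production code; the path is made up):

    /* share_delete_demo.c (Windows) - open a file with FILE_SHARE_DELETE so
     * other processes may rename or delete it while we hold it open. */
    #include <windows.h>
    #include <stdio.h>

    int main(void) {
        HANDLE h = CreateFileW(
            L"C:\\temp\\example.dat",
            GENERIC_READ,
            /* Without FILE_SHARE_DELETE here, DeleteFile/MoveFileEx from
             * another process fails with a sharing violation while we hold h. */
            FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
            NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE) {
            fprintf(stderr, "CreateFileW failed: %lu\n", GetLastError());
            return 1;
        }

        /* If another process now deletes the path, the file goes into the
         * "delete pending" state discussed elsewhere in the thread: the name
         * can't be reopened or reused until this handle is closed. */
        char buf[16];
        DWORD got = 0;
        ReadFile(h, buf, sizeof buf, &got, NULL);
        printf("read %lu bytes\n", (unsigned long)got);

        CloseHandle(h);
        return 0;
    }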


That is incorrect. On Windows it just depends on how you call the Win32 API and what parameters you specify. There are many options - in the end it's just an object in NT kernel space.


> On Windows, once the file is open, it is that filename that is open; You can't rename or delete it

You can rename a file when it's open. And once it's renamed, you can delete it.


Maybe this is also why 'rm -rf /' is so effective in destroying your system, isn't it?


What is really irritating is when, for example, an update that only changes mshtml.dll requires a reboot because a program unnecessarily depends on it. These are not as common as they used to be, though.


> Speaking of which, why do so many things require reboot to update on Windows?

Can't speak for everyone else, but Windows fully supports shared file access, which prevents the kind of file locks that cause reboot requirements.

The problem is that with the default file-share permissions in the common Windows APIs (unless you want to get verbose in your code), the opening process demands exclusive access to, and locking of, the underlying file for the lifetime of that file handle.

So unless the programmer takes the time to research that 1. these file-share permissions exist, 2. which permissions are appropriate for the use-cases they have in their code, and 3. how to apply these more lenient permissions in their code...

Unless all that, you get Windows programs which create exclusive file locks, which again cause reboot requirements upon upgrades. Not surprising really.

In Linux/UNIX, the default seems to be the other way around: full sharing unless locked down, and people seem prepared to write defensive code to lock down only upon need, or have code prepared for worst-case scenarios.


Windows executables are opened with mandatory exclusive locking. So you can't overwrite a program or its DLLs while any instances of it are running. If a DLL is widely used, that makes it essentially impossible to update while the system is in use.

There is a registry key which allows an update to schedule a set of rename operations on boot to drop in replacement file(s). https://blogs.technet.microsoft.com/brad_rutkowski/2007/06/2...
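
For reference, that registry key (PendingFileRenameOperations) is normally populated through MoveFileEx with MOVEFILE_DELAY_UNTIL_REBOOT rather than written by hand. A rough sketch with made-up paths; it needs administrator rights, since it writes under the Session Manager key:

    /* pending_rename_demo.c (Windows) - queue an in-use DLL to be replaced at
     * the next boot via the mechanism described above. Paths are hypothetical. */
    #include <windows.h>
    #include <stdio.h>

    int main(void) {
        /* At boot, before the DLL can be loaded and locked again, the session
         * manager performs the queued rename and replaces the old file. */
        BOOL ok = MoveFileExW(
            L"C:\\staging\\example.dll",              /* freshly installed copy */
            L"C:\\Program Files\\App\\example.dll",   /* in-use target */
            MOVEFILE_DELAY_UNTIL_REBOOT | MOVEFILE_REPLACE_EXISTING);
        if (!ok) {
            fprintf(stderr, "MoveFileExW failed: %lu\n", GetLastError());
            return 1;
        }
        puts("rename queued for next reboot");
        return 0;
    }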


> Windows executables are opened with mandatory exclusive locking. So you can't overwrite a program or its DLLs while any instances of it are running.

This is not always correct; see: https://news.ycombinator.com/item?id=16256483


> Speaking of which, why do so many things require reboot to update on Windows?

We are getting there on Linux too, with atomic or image-based updates of the underlying system. On servers you will have (or already have) A/B partitions (or ostrees), and the same goes for mobile and IoT devices; some desktops (looking at Fedora) also prefer a reboot-update-reboot cycle, to prevent things like killing X mid-update and leaving your machine in an inconsistent state.

macOS also does system updates with reboot, for the same reasons.


It's a miracle that reboot-less updates mostly work in Linux. You need to restart services, etc. to make sure they have the latest libs. Gnome3+systemd does that now:

https://blogs.gnome.org/hughsie/2012/06/04/offline-os-update...

https://www.freedesktop.org/wiki/Software/systemd/SystemUpda...

https://fedoraproject.org/wiki/Features/OfflineSystemUpdates

It's still much faster than a Windows update (at least on my SSD system).


>macOS also does system updates with reboot, for the same reasons.

And speaking from experience, the recent macOS way of applying an update is absolutely insane - on my MacBook Pro (with the stock SSD doing 3 GB/sec reads and 2 GB/sec writes) a small system update can take 10+ minutes to install.


I used to joke that Windows was alone in this issue, my work laptop being a prime example, but even Apple tends toward reboots more often than not, especially as of late. Fortunately both can do it during slow periods, as in overnight, and make updates nearly invisible to users.


So that's a fine question to ask, and you've received many fascinating answers, but can I just suggest that this case - that is, applying patches that relate to the security of your processor cache - is a very fine reason for requiring a reboot, since it will ensure that your processor cache starts out fresh and all behaviors that cause data to be placed there are correctly following the patched behavior.


The main reason is that in Windows executable files and dynamic libraries (.exe and .dll) are locked while a process is using them, while in other systems, e.g. Linux, you can delete them from disk. The only absolute need for reboot should be an OS kernel update (there are cases where a kernel could be updated/patched without a reboot).


I think a better question would be: why does Windows need multiple successive reboots? Too often my experience can be summed up as: update, reboot, reboot, update continues, reboot... ad nauseam.

At least on *nix, even when you need a reboot, once is enough.


To force a reinitialisation of all security contexts. Same reason that many websites make you log in again immediately after changing your password (which, interestingly, Windows doesn't).

Historically (Windows 95 and earlier) reboots were required to reload DLLs and so on, but that's not really true anymore. Still, a lot of installers and docs say to reboot when it's not really necessary, as a holdover from then.


I was under the impression that the reason is what u/beagle3 mentions (in a sibling comment to yours): open system files. I'm curious to see your comment on what he describes, as what you mention (reloading some security context) does not seem to be the whole truth. That websites make one log in again after changing your password has nothing to do with this.


> That websites make one log in again after changing your password has nothing to do with this.

No, it is exactly the same principle: something has changed, therefore invalidate all existing contexts. That is far less error-prone than trying to recompute them; what happens, e.g., if a resource has already been accessed in a context that is now denied? Security 101.


I don't see how changing my password changes a "security context". I don't suddenly get more or fewer permissions.

As for logging other places out, that's a design choice. People change password either because they routinely change theirs (they either need to or choose to), or because of a (suspected) compromise. In the latter case you'll probably want to log everyone else out (though, who says you're logging out the attacker and not the legitimate user?) and in the former case you shouldn't (otherwise changing your password becomes annoying and avoided). The interface for changing the password could have a "log out all sessions" checkbox or it could just be a feature separate from changing your password.

No, it's not as simple as you put it. No need to condescendingly pass it off as "security 101".


Same thing happened to me this weekend, and we are not alone [1]. The worst part is that it's actually the second time on this computer (Kaby Lake i3 on a MB with the Intel B250 chipset). I had the same issue in December last year (exact same behaviour, probably with an earlier version of that hotfix).

I'm running with the Windows Update service disabled until this is fixed for good!

[1] https://answers.microsoft.com/en-us/windows/forum/windows_10...


I have not been able to disable the update service. I'm supposed to be able to, but damn if I don't open my computer in the morning and see everything closed (and lock files all over the place) and all kinds of annoying shit like this.

I actually like Win 10, but it's shit like this that keeps me from becoming a true convert. Oh, for $X00 I can get enterprise update, but IMO that's just Win 10 home being used as ransomware. /rant


[flagged]


Would you please stop posting unsubstantive comments to Hacker News? You're welcome here if you want to use the site as intended, but comments need to be better than this. If you'd read https://news.ycombinator.com/newsguidelines.html and https://news.ycombinator.com/newswelcome.html and take that spirit to heart, we'd appreciate it.


In a related development, there are proposed patches to the Linux kernel (not yet merged) to blacklist the broken microcode updates: https://www.spinics.net/lists/kernel/msg2707159.html

That patch disables the use by the kernel of the new IBPB/IBRS features provided by the updated microcode, when it's of a "known bad" revision. Since Linux prefers the "retpoline" mitigation instead of IBRS, and AFAIK so far the upstream kernel (and most of the backports to stable kernels) doesn't use IBPB yet, that might explain why Linux seems to have been less affected by the microcode update instabilities than Windows.

Also interesting: that patch has a link to an official Intel list of broken microcode versions.
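For those curious what the blacklist actually does, the logic is roughly the sketch below (illustrative C, with made-up names and table entries, not the actual kernel code): compare the running CPU's model/stepping/microcode revision against a table of revisions flagged as broken, and if it matches, simply refuse to use the new IBRS/IBPB features.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    struct bad_microcode {
        uint8_t  model;     /* CPU model from CPUID */
        uint8_t  stepping;  /* CPU stepping */
        uint32_t revision;  /* microcode revision flagged as unstable */
    };

    /* Illustrative entries only -- the real list is the one Intel published. */
    static const struct bad_microcode blacklist[] = {
        { 0x4e, 0x03, 0x000000c2 },
        { 0x5e, 0x03, 0x000000c2 },
    };

    static bool spectre_microcode_is_bad(uint8_t model, uint8_t stepping,
                                         uint32_t revision)
    {
        for (size_t i = 0; i < sizeof(blacklist) / sizeof(blacklist[0]); i++) {
            if (blacklist[i].model == model &&
                blacklist[i].stepping == stepping &&
                blacklist[i].revision == revision)
                return true;  /* don't expose IBRS/IBPB from this microcode */
        }
        return false;
    }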


> In a related development, there are proposed patches to the Linux kernel (not yet merged) to blacklist the broken microcode updates

Linus probably won't pull it until it's truly known to be stable, because of his attitude towards having decent quality code and not causing needless system instability.

Without Linus... who knows what would have happened by now.


They are on the "tip" tree (https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git/...), so they'll probably be sent to Linus as soon as the merge window opens (Linux 4.15 has just been released, so the merge window should open soon). I expect these patches to be on 4.16, and also to be backported to the stable releases (4.15.x and others).

But yeah, upstream Linux kernel development is taking it slow. As far as I can see, variant 3 mitigations (PTI) are already in, variant 2 mitigations are partially in (retpoline) and partially not (the microcode dependent ones), and variant 1 mitigations are still under discussion.


"But yeah, upstream Linux kernel development is taking it slow."

Taking it slow seems very appropriate to me. This seems to me to have been a case of everybody grossly overestimating the short-term portion of the catastrophe, and underestimating the long term.

In the short term, the only people who were going to be plausibly affected in the next three to six months are people on shared hosting of some sort, where you may share a server with somebody else's untrusted code; there an accelerated fix is in order, but it's also something that can be centrally handled. I'm not that worried that in the next three to six months my personal desktop is somehow going to be compromised by either Meltdown or Spectre, and personally, if I see a noticeable performance issue I may well revert the fixes (I'm on Linux), because first you have to penetrate my defenses to deliver anything anyhow; then you have to be in a situation where you're not going to just use a root exploit, which probably means you're in a sandbox or something, which makes it that much more difficult to figure out how to exploit this. For most users, uses, and systems, Spectre and Meltdown aren't that immediately pressing.

Meanwhile, in the long term this may require basically redesigning CPUs to a very significant degree; there is no software patch that can fix the underlying issues. It is difficult to overstate the long term impact of this class of bugs. IMHO the real problem from the jousting match with Linus and Intel last week isn't that Intel's patches today aren't quality code, but that it makes me concerned that they're just going to sweep this fundamental problem under the rug. As I said in another post on HN, I fully understand that remediating this is going to be years, and I don't expect Intel to have an answer overnight, or a full solution in their next "tock". But if they're not taking this seriously, we have a very large long-term problem. We're only going to see more leaks in the long term.


I read somewhere that people have developed POCs of these using JavaScript. At minimum, you'll want to keep your browser up to date as there are mitigations happening there too. Who knew that exposing high precision timers to untrusted JavaScript would be a bad idea?

Apart from browsers, it's fortunately pretty easy to avoid running code you don't trust on your devices.


From what I've seen, what the POCs can actually do is not worth running around with your hair on fire over.

Note I did not say there is no reason to be concerned about Meltdown and Spectre... just that for most users, uses, and systems, it's not that important. In the next three-to-six months, if you care about security at all, unless you are already running a tip-top tight operation, your money and effort are better spent defending against the many already-realistic threats, rather than worrying about the vector that may someday be converted into a realistic threat. Meltdown isn't what is going to drag your business to a halt next week; it's that ransomware that one of your less-savvy employees opened while mapped to the unbacked-up, world-writable corporate share that has all the spreadsheets your business runs on. At the moment, the net risk of applying the Meltdown fix comfortably exceeds, by several orders of magnitude, the risk that Meltdown itself poses.

And my point is precisely that for most users and uses, that panic was not justified. Those for whom that is not true (VM hosting companies) already know they need to be more aggressive. There was no point in pushing out patches that nearly bricked some computers.


I agree with your points. In fact, I made the same argument about removing a Meltdown/Spectre-related patch that caused issues for one of our applications - "the machine doesn't execute any untrusted code, so this patch isn't strictly necessary."


Update: the merge window has started, and that blacklist has just been merged: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...


Everyone would have politely installed their "fixes". This situation shows why Linus was swearing.


Exactly. He's very intelligent about how he manages the kernel, which is precisely why it is preferred by a majority of businesses throughout the world, and is the absolute #1 among the top 500 supercomputers:

https://www.top500.org/statistics/details/osfam/1


I was just pointing out why politeness gets you nowhere when dealing with eejits. Linus improved on Capone's saying about a kind word and a gun.


>However, Intel does not appear too concerned that the incident will affect its bottom line - the company expects 2018 to be a record year in terms of revenue

There is an interesting paradox in our industry. If you pay enough attention (read: money) to security, you will be late to the market, your costs will be high, and you will lose profit. If you don't pay enough attention, you take the market and get your profits, but your product (be it hardware or software) and reputation will be screwed later. And worst of all: there's never enough attention to security.

So by simple logic, an optimal strategy is to forge your product quickly, take your profits within a [relatively] short period, and vanish from the market. I guess we'll see this strategy executed by IoT vendors when the market starts to punish them for their bad security.

For Intel, that "long period" just happened to be REALLY long.


I doubt Intel will see serious punishment in the market. As usual, there will be a lot of wailing and gnashing of teeth but when push comes to shove most people will prioritize nearly everything over security.

All markets work like this. People bitch about the quality of products, but still buy the cheap stuff.


This will be true until something uses Meltdown in the wild to cause massive damage. When a digital superflu comes, businesses and individuals will be faced with a choice: continue to use Intel and be vulnerable to a flu that is literally wiping out businesses, exchanges, hospitals, etc or replace ALL of their hardware with AMD.

Interestingly, I think AMD has a lot of motive to create such a superflu, or at least encourage its creation.


If AMD were proved to have been involved in the creation of such malware, to what extent could they be litigated against?


Well, it would be highly illegal (and immoral) to create such a program, regardless of who did it.


Intel is TBTF (too big to fail). Like the banks, but via a different mechanism: they appear to have reached consequence-immunity by becoming critical infrastructure.


The same could've been said about Microsoft 10-15 years ago. Now, we could probably get by without them.

Intel may be too big for an abrupt failure, but they can absolutely fail in a decade-long slide into obscurity.


Feature of the industry. It would take a new corp entering the chip industry hundreds of billions of dollars and decades to get where Intel is today.


Your conclusion isn't in agreement with the section you quoted, so are you saying that Intel will be punished by the market in the mid to distant future (after 2018)?


"Here's a patch" - "Here's a patch to disable that other patch" - ...

What's next? Repeat? Sounds like this could turn into a maintenance nightmare quickly. Also because I've introduced things like that myself in the past, and that was for normal applications, not a kernel or OS. Somewhere, someday, there's usually this one exception for which none of your rules hold true and the thing blows up in your face. Anyway, I'd love to see the actual code for this. Not a chance, probably?


I'm really wondering: they had more than 6 months to do these patches and they did not bother testing on a good number of systems. It's not like MS + Intel don't have enough money to buy a few thousand testing machines and get some testers on it.


Have you released a bugfix to a large application? I've had one line fixes break some use case I hadn't even heard of before, and it doesn't always show up right away, either. Intel's fix has to work on every application in every version of Windows, macOS, Linux, for multiple versions of processors with multiple different chipsets. And it has to be done yesterday. That's a nightmare scenario.


I came here to ask the same thing. How did these folks squander six months?


I think Spectre may have appeared later, after Meltdown? Remember the investigations into what's possible were proceeding in parallel with the attempted fixes.

Also, CPU design changes take a long time. 6 months may seem like a long time from the perspective of HackerNews node.js-type hackers, but it's a bit harder to patch decades' worth of CPU microcode than a website.


Reading over Google's Project Zero page, it reads as if they told AMD about the issues on 2017-06-01; why would they do this if it were Meltdown only?

Also, look at the exploit numbering:

Variant 1: bounds check bypass (CVE-2017-5753)
Variant 2: branch target injection (CVE-2017-5715)
Variant 3: rogue data cache load (CVE-2017-5754)

According to https://cve.mitre.org/cve/identifiers/ these are sequence-based, so `Variant 2` was assigned its CVE before v1 and v3.

I get it may take a long time (that is fine, even if the patches took a few more days); what I don't get is that they released it to production (server) environments seemingly without testing. Surely even rudimentary testing (deploying on a few thousand different server platforms for at least a few hours) should be something that Intel does for all microcode updates; after all, they are rather more important than Node.js packages, as you point out.


I haven't heard of microcode updates that hurt stability before. Presumably the collapse of the embargo caused them to do an accelerated release, skipping their usual long testing cycle.


Currently, they are not patching a decade's worth of CPU microcode, since we have zero working microcode. And previously, released microcode only went back to Ivy Bridge EP (~2014).


Only reason I can think of is that they didn't immediately realize how much of a headache it would be.


I'm amazed at how Intel's stock price still keeps going UP, despite all these problems... just WOW.


The recent jump is because they released a good earnings report.

https://www.cnbc.com/2018/01/26/intc-intel-stock-jumps-to-hi...


I have a feeling there's going to be a lot of demand for future Intel hardware that's immune to spectre and meltdown. I think it might cause _more_ sales of Intel chips in the future, not fewer.


Regardless of sales, I would think damage to their image would be a concern. Maybe it's not a concern to investors because their "PR nightmare" turned out to be softball for them, and it's been hard to pin anything on Intel when they keep pointing fingers in all directions.

I think it's our responsibility as technology literate folks and decision makers to explicitly highlight their failures so that mistakes and poor handling like this are not normalized.


It would be a concern if real competition were allowed in the x86_64 CPU space. Monopolistic patents, in this case, create the opposite incentives.


What, are you going to stop buying Intel processors?


I will, but I am just a guy who builds his own computer every few years.


I tend to think it's because the folks who trade the stock know very well that these "issues" are actually features. I can also bet that the "grilling" they get from the US government is not about the mess the chip flaws are causing; it's about why the "flaws" were publicly announced.


I think we're just in a bull market, so people ignore bad news.


Do you know someone who is going to stop buying Intel's CPUs?


I know in my company this has indeed resulted in a full stop to buying Intel, going AMD instead. There is also an active "project" to replace the Intel servers.

I work in a bank and they are terrified of the possibility of user processes reading privileged memory. Not necessarily out of actual fear but out of the insane amount of paperwork this will require to satisfy the auditors that it is still safe.

Anecdotally, but you asked for "someone" and here is someone :)


Wasn’t AMD also affected?


Not by Meltdown, which enabled user processes to read kernel memory. And as we've seen in the aftermath, they have had nowhere near as much trouble with their patches for Spectre as Intel has had.


Well, Linus already said perhaps Linux should look at the ARM folks. Should that happen, well, guess what? From my very limited knowledge, 90+% of internet infrastructure runs on Linux.


Then ARM "folk" will find the equivalent of Spectre and Meltdown... then the cycle repeats, ad infinitum.


ARM already did.

https://developer.arm.com/support/security-update

The Meltdown/Spectre class of attacks affects certain CPUs; Spectre is a microarchitectural attack on any CPU that does speculation and uses data caches, regardless of architecture.

Arguably, ARM has been handling Meltdown and Spectre in a much, much more humble and transparent way. See https://developer.arm.com/support/security-update/compiler-s... for work they are doing with the compiler communities to address Variant 1, both on current and future chips.


Spectre affects AMD, too, so there's no competitor to run to... and CERT was saying at one point that only new processors would fully fix it. They're looking at everyone needing to buy a bunch of replacement products, aren't they?


Spectre v2 affects AMD less drastically than it does Intel, because of the architectural differences between Zen and Intel's processors.


So much for the embargo period. I guess the BSD people were not so wrong after all. They might as well have just published it as soon as it was found.


Intel has had literally months upon months to test and stabilize their microcode and kernel-side patches for Meltdown/Spectre... but it seems like Intel just doesn't give half a shit, if they're having major issues now.

Intel's actions seem to shout that they have cared far more about releasing Kaby/Skylake X and Coffee Lake in short order, as a response to Ryzen/ThreadRipper, than actually really digging into fixing their major security flaws. Their actions speak of them preferring to keep their market and mindshare over actually fixing any security issues.

Intel is still so deeply entrenched that they likely believe they can get away with their lazy approach. They make millions upon millions, if not billions, of dollars; why should they give a shit, when their monopoly and half-hearted attempt at a solution will get them by? Intel is being strangled by their shitty management, seemingly...


By my reading of the article, Microsoft is disabling some mitigations for Spectre due to instabilities that Intel's microcode update have been causing.

Intel certainly isn't making any friends these days...


I would love to see the message log between MS, Apple, and Intel on this.


It is not only Microsoft reverting this patch. HP, Dell, and Red Hat are doing so as well.

https://www.bleepingcomputer.com/news/microsoft/microsoft-is...


So what can I do for my next self-built pc? Get some AMD equipment, or is that not enough?


AMD has released patches (to reduce the near-zero risk of exploit to zero) and they are not having any instability issues, so yes, go with AMD.

For servers, Epyc is now finally available to purchase; for desktop workstations, Threadripper; for desktops, Ryzen.

For mobile... not many newer CPUs are out yet; need to wait :(


I'm eyeing Threadripper for my next build but beyond that I'm going to choose a motherboard vendor based on the level of support they offer in this scenario. Some observations that I'm making:

* How promptly did they address the issue via official channels, i.e. did they leave users in the dark as they appealed to vendors in their forums (hint: most of them seem to have gone down this route) or did they share updates directly on their official sites, social media accounts, etc.

* Did they provide some estimates as to when users could expect patches?

* How much of their product catalogue were they willing to cover with security updates? Since this is a unique security issue with high impact I would have expected them to cover motherboards at least 4-5 years old.


For those in the market for an X399/TR4 motherboard, have you come to any conclusions?


MSI seems [1] to be fairly proactive right now, with patches going back to several X99 motherboards. Asus, for example, has so far only committed to providing updates for two X99 motherboards.

[1] https://www.msi.com/news/detail/7yJ7XCklfBXt8mFhG8nkfSurJUz3...


AMD equipment should be fine, current-gen Ryzen/Threadripper is more than adept at workstation tasks and next-gen Ryzen (named Ryzen 2 and Threadripper 2) will edge out any advantage that Intel's CPUs have.


I'll be getting a few of the high-end Intel CPUs that will soon flood the market on the cheap, for home machines running Arch.


Depends on what you are building it for.


I have my MacBook for work and programming; my PC has Windows on it (much to my dislike) and I use it for occasional gaming. I will probably not get new parts anytime soon, though, as performance is currently fine. Just wondering for when someone asks me to build them a PC.


Well, I built my PC with gaming in mind and chose the i7 8700K. But I got it a week before the Spectre/Meltdown spectacle. I decided to keep it because of its superior single-core performance.


I've not been impressed with my Windows 10 installations of late. All my machines that don't have the Long Term Servicing Branch have had wild instabilities and performance issues the past few months - crazy things, like the task manager taking minutes to launch, and the whole shell periodically crashing. The Fall Creators Update was so bad I had to wipe and start over on some boxes. It's not engendering a lot of confidence in their competence of late.


Creators Updates (1703 / 1709) are the culprits. LTSB (1607) has not been upgraded to Creators Update yet and it runs like butter.


Gee, if only they had a Linus Torvalds type around to block irresponsible commits.



I had to read that billg-review thing one more time. It's such a wonderful story.


Didn't Torvalds already accept Intel's "garbage" Linux kernel patches?


I don't think so, they're still under discussion on LKML, including an alternative proposal by Ingo Molnar.


Nah, it's just the well-known issue with the microcode being buggy. There's nothing new really.


Microcode is not reviewed by anyone outside of Intel.


I never got them. The last update on my Windows is from Dec 2017. My antivirus is compliant, the registry key is correctly set up, and yet it refuses to update.

I still haven't had the time to debug it, but I wonder how many people are out there with their OS silently refusing to update.


I had huge problems with Win 10. Updates would fail and install again and again without actually getting installed. Sometimes I would get an opaque error number, but web searches revealed nothing for that number, and it was rare that I would be able to find even an error number. I don't do Windows, and just installed it for VR, and didn't spend that much time in Windows, so I would spend 15-30 minutes a month looking into it, before realizing I had spent more debugging time than VR time that week.

After probably 9 months of this, and with Windows doing ever more intrusive pop-overs whenever I launched it for updates that didn't take, I wiped all boot sectors everywhere and installed from scratch. That seemed to work, but it was incredibly frustrating that the boot process was so buggy, as was the error reporting. I've never encountered a situation like it in the past 15 years of heavy Linux use. Problems there are usually solvable with a couple of web searches, even for extremely obscure kernel bugs with obscure packages. Windows refused to tell me anything, as did the web.


I built a Windows machine for 3D work and VR just over a year ago, after being a Mac only user for 15+ years. Honestly my Win 10 experience has been the total opposite, it's been stable, fast, minimum update nagging. Overall I've actually been shocked how stable and hassle free the experience has been.

Maybe I just got lucky with the right combination of hardware.


So, Linus was right to curse at Intel?


UTTER GARBAGE




It’s ironic that, for all the praise Linus gets on HN, he would have been banned within minutes by these guidelines.


Sorry, my cognitive abilities have been at a real low lately. Took me a while to comprehend your comment. Spot on, m8 ;)


My brother was hit by a recent update to Windows 7 that prevented the machine from booting. He went to Microcenter to buy a hard drive. There were a lot of people doing the same thing for the same reason when he was there.


I don't use the windows side of my machine very often, but decided to update it last night. Booted fine (OS on SSD), but one of the HDDs with all of the windows files was corrupted. No go with ntfsfix, chkdsk, partition table destroyed. Reformatted it as ext4 and windows doesn't get to touch it anymore. Haven't tested it too much yet but seems to be working fine.

Remember to use backups!


My brother did have major data loss. And no real backup strategy.


The amount of fuck-up in this whole issue is mind-blowing. I get more surprised with every piece of news I get.


Intel was called out by Linus Torvalds several days ago for the crappy fixes they delivered for GNU/Linux. I would be very surprised if Intel actually shipped proper fixes for Windows. It's a shame, really.


And then another developer kindly explained why he's wrong. It's fun to listen to Linus' rants, sure, but he's not always correct.


And then yet another developer showed there are way better fixes than what Intel proposed [1]. So he was correct after all.

[1] https://lkml.org/lkml/2018/1/23/25


It's almost as if software development isn't an exact science and there are multiple approaches to any given problem.


And that Intel don't have a monopoly on smart software developers. Truly tragic.


Those were not delivered fixes; that was work in progress, and it still is. And the dude who was "called out" works for Amazon.


FYI - The “dude” from Amazon worked for Intel for 8 years before joining Amazon UK just over a year ago.


The “dude” is also probably working under an insane amount of pressure and being made to feel like he is somehow responsible or at fault for the whole situation. Best not to make it personal from the peanut gallery.


> Best not to make it personal from the peanut gallery.

That's a very uncharitable interpretation of your parent comment, which was simply pointing out his connection and history with Intel.


Are you sure it is not you that has made the uncharitable interpretation?

I read the peanut gallery comment as an agreement that the guy knows what he’s talking about.


FWIW - I took it as a somewhat rude reply but hey it’s the internet so no big deal.


I did not, and do not, lay any blame; I only provided context for the prior comment.


So the person supposedly single-handedly working on this fix doesn't even work for Intel? That strikes me as... Odd?


Where did you get "single-handedly" from? Of course multiple people cooperate on those patches. That is why it was possible for Linus Torvalds to join their discussion; it would all be much less public if only one institution or person worked on it.


So what you are saying is that Intel didn't bother to support Linux even with a crappy fix, like they did for Microsoft? Good to hear the Chinese are well supported though.


That is not what I said, and not what the situation implied. In a situation where you have essentially zero factual information, you went out of your way to make up the most damaging possibility you could have.



I doubt some of the best/most popular players of the USA tech industry dream team will get any real punishment on their own soil. Any fines will give a sense of justice to the public, but they will just be peanuts.


These won't be the same peanuts from the peanut gallery right? I didn't get many to begin with and if we have to share...


They aren't going to get fined. There's nothing they can be fined for.


I wonder how much all this cleanup will cost in hours, downstream, for all the installed users? Judging by all the grief on this thread it's substantial.


>Intel, AMD and Apple face class action lawsuits over the Spectre and Meltdown vulnerabilities.

I sure hope Intel will face a class action suit over this botched update. Many professionals have wasted countless hours dealing with this junk.


Not what I wanted to wake up to this morning. I suspect we’re in for a rough ride for a long time thanks to this mess.


Your comment is even more fitting today (politics)


Just checked for Updates, but there don't seem to be any?


“If you are running an impacted device, this update can be applied by downloading it from the Microsoft Update Catalog website”

https://support.microsoft.com/en-us/help/4078130/update-to-d...


Thanks


Just in case you, like me, missed the memo: Microsoft said they'd stop supplying security updates if you have no AV installed, or an AV that's incompatible with the patches. The fix for the former is creating the registry entry manually.

https://support.microsoft.com/en-us/help/4072699/january-3-2...
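For reference, the entry that KB describes is a single DWORD under the QualityCompat key; here's a minimal sketch of setting it programmatically (key and value names as documented in the KB at the time -- normally your AV vendor sets this, so double-check the KB before doing it by hand). Build as a Win32 program linked against advapi32 and run elevated:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HKEY key;
        DWORD zero = 0;
        LSTATUS rc;

        /* Create (or open) the QualityCompat key... */
        rc = RegCreateKeyExA(HKEY_LOCAL_MACHINE,
                             "SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\QualityCompat",
                             0, NULL, REG_OPTION_NON_VOLATILE, KEY_SET_VALUE, NULL,
                             &key, NULL);
        if (rc != ERROR_SUCCESS) {
            fprintf(stderr, "RegCreateKeyExA failed: %ld\n", (long)rc);
            return 1;
        }

        /* ...and set the sentinel value named in the KB to 0. */
        rc = RegSetValueExA(key, "cadca5fe-87d3-4b96-b7fb-a231484277cc", 0,
                            REG_DWORD, (const BYTE *)&zero, sizeof(zero));
        RegCloseKey(key);

        if (rc != ERROR_SUCCESS) {
            fprintf(stderr, "RegSetValueExA failed: %ld\n", (long)rc);
            return 1;
        }
        puts("Compatibility flag set; Windows Update should offer the January patches.");
        return 0;
    }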


Bizarre.

> Customers without Antivirus

> In cases where customers can’t install or run antivirus software, Microsoft recommends manually setting the registry key as described below in order to receive the January 2018 security updates.


Sounds like Microsoft can't tell the difference between "has AV installed that will break" and "has no AV installed", which makes sense. It's probably infeasible to reliably fingerprint all existing AV software.


> Sounds like Microsoft can't tell the difference between "has AV installed that will break" and "has no AV installed", which makes sense. It's probably infeasible to reliably fingerprint all existing AV software.

For something like this, I think best-effort bad-AV detection would have been best. Seems pretty insane to disable security patching because they can't be 100% certain that you have a compatible AV.


Incompatibility here means an unbootable state.


But it also means that people with perfectly acceptable configurations are left in an insecure state, unless they apply an unexpected magic incantation (a registry hack) that most people will probably never know about.

Disabling security patches is not acceptable in the current year without A LOT of nasty and annoying warnings.


It makes sense, though. Only AV programs that comply may set the setting. Without a compliant AV program, there's nothing to set it - unless you do it manually.


Microsoft won't supply updates even if you have no AV installed, including with the built-in Defender disabled??

I thought stopping updates was only for the case of unpatched AVs that did not set the registry key...


Microsoft does not have any way of knowing whether you have an antivirus or not, and because the Spectre patch causes a bluescreen on boot if you have an antivirus that's not updated, they require the antivirus to set the registry key to say "hey, it's safe to update". Absence of an AV means that registry key doesn't get set.

MS doesn't provide an easy GUI way of disabling the built-in Defender, by the way. If you 'disable' Defender using the control panel on Windows 10, it only stops its activity temporarily, and it can reactivate itself after 24 hours or something like that. You can permanently disable it through registry keys, but editing the registry by yourself is not an officially supported, accepted method. There's a group policy for 10 Pro and other corporate editions, though.

For a normal home user, Defender is never fully disabled. It will deactivate itself if you install a third party antivirus, and reenable itself when you uninstall them. Bottom line, the average user is not supposed to be AV-less.


If you have no patched AV, who's going to set the registry key?


If your AV is not patched, the kernel patches should not be installed because you might get a repeating bluescreen.

So get a patched AV. If you haven't installed another AV, then Defender is there and counts.


Linus was right


Well, he is known for calling a spade a spade, no matter how blunt that may be.


Except for when he is wrong.

Why he gets a pass for being the 'nice guy' and de Raadt gets the bad rep is still a mystery to me.


I thought he was angry about the mitigation being disabled by default, not being unstable.


Nah, it was more than that: The patches do things like add the garbage MSR writes to the kernel entry/exit points. That's insane. That says "we're trying to protect the kernel". We already have retpoline there, with less overhead.
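For reference, the retpoline he mentions replaces an indirect branch with a small thunk roughly like the sketch below (based on the publicly documented construct, not the kernel's actual code); speculation through the return predictor gets trapped in a harmless spin loop instead of an attacker-trained target:

    /* Sketch of a retpoline thunk for an indirect call through %rax
       (x86-64, GNU assembler syntax; compiles with `gcc -c` on Linux). */
    __asm__(
        ".globl indirect_thunk_rax\n"
        "indirect_thunk_rax:\n"
        "    call 2f\n"            /* push the address of the capture loop */
        "1:  pause\n"              /* speculation from the ret lands here... */
        "    lfence\n"
        "    jmp 1b\n"             /* ...and spins until speculation resolves */
        "2:  mov %rax, (%rsp)\n"   /* overwrite return address with the real target */
        "    ret\n"                /* architecturally jumps to *%rax */
    );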


That was about patches to the linux kernel, not the microcode patches.


Yes, but as far as I know Linus has made no comment on the microcode patches, so mm-vorticesoft is probably referring to the Spectre patches in general.


The microcode patches are binary blobs against a proprietary and secret ISA, how can anyone comment on their quality?


When I read it, I believed Linus was implying that the suggested mitigation was so insane that it seemed like Intel MIGHT be hiding how broken they believed their hardware was with such over-the-top reactions, as well as indirectly asking whether they considered the currently accepted mitigation method (retpoline) ineffective.


His overall point was bewilderment at the incompetent and nonsensical patches that were being offered as "fixes" for this issue. Linus was pointing out a particular instance of that, but this news and other behaviour from Intel seem to indicate it is part of an endemic, cultural, administrative issue inside the company.


Aren't we talking about two different things? Linux vs Windows kernel?


OP is probably referencing the 'bullshit patches from Intel' comment from Linus about the patches they were sent, and that Microsoft might have been sent similar obfuscatory patches.


Given that the patches are CPU micro-code delivered by OS drivers, AFAIK, the actual OS won't make much difference.


"Decades old trap/fault software is being replaced by 10-20 operating systems, and there are going to be mistakes made."

- Theo de Raadt


Linus was talking about Linux patches, not these microcode patches. They've been known broken for at least a week


About what? Would you have a link?


Linus Torvalds: “Somebody is pushing complete garbage for unclear reasons.” http://lkml.iu.edu/hypermail/linux/kernel/1801.2/04628.html


Oh, this is the same issue?


It's the same bug, same company pushing patches, but we don't know if it's the same reason.


It's not the same company - David Woodhouse works for Amazon. He used to work for Intel but not for a year or so.

It's also not the same reason. Linus doesn't like the mitigation in the kernel, disagreeing on how Intel intends to implement it. This article is about unstable microcode patches that Intel retracted, and that retraction has been discussed on here a few times. The article is just exceedingly bad at describing the actual issue. It also doesn't help that the kernel mitigation depends on new flags introduced by the faulty microcode update, but the update being faulty is orthogonal to Linus' opinion.


Like always.


He generally is. His rants are almost always spot-on and pure gold. :)


What are you talking about?


I mean, is this an unmitigated disaster on Intel's part? It's like a train wreck in slow motion.

A part of me feels stories like this are going to keep getting worse until Spectre is finally used in the wild.


What a mess!

The day this blew up, we rented our first physical server for the express purpose of running secure critical workloads in unpatched environments. Yes, I know that nothing is truly secure, but not everything we do is running a chunk of logic uploaded by an attacker, so we will take our chances.


What does the Spectre bug mean for a person planning to buy a new windows computer? Should I buy an AMD CPU based computer instead of an Intel based computer?


I'm pretty sure these patches were responsible for my notebook crashing every time I hooked it up to our Thunderbolt 3 docking stations.


Anybody know what the status of FreeBSD is? I googled a bit and found nothing except "we wait"; is that still the case?


Anyone know if people who had disabled the mitigations via FeatureSettingsOverride = 3 were still affected?
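(For context, the override I mean is the pair of DWORD values Microsoft documented under the Memory Management key; a sketch of setting both, assuming those documented names -- run elevated and reboot for it to take effect:)

    /* Sketch: set FeatureSettingsOverride / FeatureSettingsOverrideMask to 3,
       the values Microsoft documented for disabling the Spectre/Meltdown
       mitigations. The key already exists on a stock install; link against
       advapi32. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        const char *key =
            "SYSTEM\\CurrentControlSet\\Control\\Session Manager\\Memory Management";
        DWORD three = 3;

        LSTATUS a = RegSetKeyValueA(HKEY_LOCAL_MACHINE, key,
                                    "FeatureSettingsOverride",
                                    REG_DWORD, &three, sizeof(three));
        LSTATUS b = RegSetKeyValueA(HKEY_LOCAL_MACHINE, key,
                                    "FeatureSettingsOverrideMask",
                                    REG_DWORD, &three, sizeof(three));

        if (a != ERROR_SUCCESS || b != ERROR_SUCCESS) {
            fprintf(stderr, "registry write failed: %ld / %ld\n", (long)a, (long)b);
            return 1;
        }
        puts("Overrides set; they take effect after a reboot.");
        return 0;
    }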


A more accurate title would be that Microsoft disabled the Spectre mitigations due to a flawed Intel update, right? I thought this was all Microsoft’s fault until getting halfway through the article.


Even more accurate: Microsoft joins HP, Dell, Lenovo, VMware and Red Hat in disabling Intel's buggy Spectre patch.

> HP, Dell, Lenovo, VMware, Red Hat and others had paused the patches and now Microsoft has done the same.


It's really telling that even Linus Torvalds was not happy with their "fixes", and now Microsoft isn't either. Intel needs to start taking the situation completely seriously, because their actions don't imply they are.


Intel doesn't care. What choice do we have?


Vote with your wallet. That's really the only thing that you can do. Intel is too comfortable in their position as market leader. Until they start to feel some pressure, they have shown they don't really care. I know AMD is not a perfect company either, but I elected to buy a Ryzen processor for my upcoming build. People need to at least consider the competition without defaulting to "I need a processor, I buy the latest Intel chip."

Yes, I know that most people don't build new PCs or upgrade their processors that regularly. Yes, I know that many people don't have much of a choice because they have some requirement that currently ties them to Intel. However, those that do have that choice should remember this debacle the next time they are buying a CPU/system (even if they are not doing it for a while). Intel is hoping they can sweep this under the rug; we can't let them until they make amends. Do not buy an Intel chip until they've proven they will do better. I am not endorsing AMD either. You can still vote with your wallet by not buying anything at all. If enough people put off their upgrade, it would put a dent in Intel's bottom line. Things won't change until it hurts them financially.


I understand your point, and agree with the spirit of it, but a few consumers buying Ryzen chips isn't going to make one bit of difference. A couple data centers buying dozens of racks of them, however, would be more measurable. Hit em in the B2B not the B2C.


I agree with you, and you are right. However, most people aren't making decisions about what types of chips to use in a data center. It would be wonderful if the people in those positions explored non-Intel options. For the average consumer, all we can do is choose which company we buy a CPU from every few years. People buying Ryzen chips incentivizes AMD to keep making chips and stay in the market. Competition is good for consumers. I totally get that it is a lot more complex than that, but I personally feel like it's the best we can do as the average consumer.


For sure. Like I said, I agree with the spirit, and you can obviously only do things within your own sphere of influence. I'm also planning on doing a Ryzen build for my next PC. I'm just trying to be realistic to say that even thousands of consumers switching won't make a huge dent in their bottom line. B2B is really the only way to influence a company as large as Intel, unfortunately.


You're definitely right. I'm just feeling idealistic this morning :). I hope you enjoy your build and it goes well. I'm getting my 1600 this week.


Yep, nobody gets fired for picking Intel, so to speak.


Before Intel Core processors became the standard hot processor, I was an AMD guy. I'm heading back in that direction. This means no MacBook or Surface Book for my next development laptop. If anyone wants my money, they'd better build a developer-worthy laptop with an AMD processor and a sweet AMD graphics card. Also, AMD is working on providing open source GPU Vulkan drivers.


> Also AMD is working on providing open source GPU Vulkan drivers.

They have already provided them:

https://github.com/GPUOpen-Drivers/AMDVLK

https://www.phoronix.com/scan.php?page=article&item=amdvlk-r...


This one [1] seems like a good choice in terms of performance, but certainly no MBP in terms of portability.

[1] - https://www.asus.com/uk/Laptops/ROG-Strix-GL702ZC


I already have an ASUS ROG laptop so this sounds like I'm sticking to them! Thank you, was hoping they'd add in AMD.


For a portable laptop with AMD, this one will soon be available: https://www3.lenovo.com/us/en/consumer-notebook/ideapad/700s...


For the longest time there was no practical alternative, but nowadays... AMD is back! The whole Ryzen lineup has turned out to be pretty good. You may even save some money in the process.


Have AMD processors been confirmed as not vulnerable? As I recall, the original investigation only covered Intel processors, but hypothesized that AMD would be affected as well, as they more or less have the same fundamentals around branch prediction.


CPUs from AMD are not vulnerable to Meltdown, but are vulnerable to both versions of Spectre.

https://www.amd.com/en/corporate/speculative-execution


According to AMD, Zen is nowhere near as vulnerable to Spectre v2 as Intel's CPUs are, due to architectural differences, so that's something.


Damn! So, practically, no modern processor (or consumer laptop, or enterprise server) is safe from Spectre?

Time to go off the beaten path.


The earliest Intel Atom chips (Nxxx series) are supposedly safe, but they were only ever used in woefully underpowered netbooks and nettops, and they had perhaps half the performance of a similarly clocked (much older) Pentium M. That performance metric is documented, and I've felt it myself when I owned a Pentium M laptop and Atom N450 netbook at the same time a few years ago.

A few ARM SoCs -- including the entire line used in Raspberry Pi boards -- are safe, but the vast majority of recent ARM devices are affected by one or more of the attack vectors. This means virtually any flagship and most if not all midrange smartphones and tablets, even iPhones and iPads, are vulnerable.

This is the most complete list of affected CPUs and SoCs I've found, and they appear to be keeping it updated:

https://www.techarp.com/guides/complete-meltdown-spectre-cpu...


I think it's safe to assume that practical mitigations will eventually surface; the biggest issue is probably the cost in performance. Shaving 30% (or whatever) off the world's computing power in one fell swoop is kind of a big deal.


Arm (starting with Cortex-R7 and higher) and PowerPC are vulnerable to Spectre too.


This is why I always preferred AMD. They gave you either more or the same bang for your buck. I hope they don't slack off on pushing to innovate ahead of Intel now that they're "even" in a sense.


I was speccing out a new machine for my wife just as all this news broke. At this point, I’m obviously going AMD.

I mean, I’m not putting Intel out of business, but I have a -choice-.


Meltdown and Spectre exploits are rooted in the nature of current CPU design. If you want to be safe, go build a RISC-V computer.


I don't think that's true. Meltdown and Spectre are sub-ISA issues, so you could have a RISC-V implementation with them if it handled caching and speculation similarly.


Thanks. We've edited the title above.


Ugh, is this the cause of the weird bugchecks I've been having this week? Just gave myself a 64 GB page file and enabled full memory dumps so I could track it down in WinDbg. I always forget something on fresh installs...


Meanwhile, Intel stock is at a 5-year high.



