In order to exploit the vulnerability, an attacker needs to:
- gain local access on the machine
- break kASLR
- find gadgets in the running kernel in order to use them in the exploit
- potentially create and pin an additional workload on the sibling thread, depending on the microarchitecture (not necessary on fam 0x19)
- run the exploit
The fact that breaking kASLR is required for this, and the considerably more complex exploit chain compared to others, makes me worry about it a lot less than about the exploitable-from-JS ones.
I'll wait for some benchmarks to come out from Phoronix or similar, and depending on how bad the perf hit is, I'll consider disabling the mitigations for this on my personal computer.
If someone has gotten a malicious binary running on my machine - even without root permissions - it's already over anyway.
Breaking KASLR is also required for the original Spectre attack (and Inception can break it by itself). In fact, the threat model of Inception is identical to that of original Spectre. Inception may be more complicated, but the requirements are the same.
What does 'not applicable' mean in this context? "No problems", or "we don't have plans to fix it"? Zen 1/1+ doesn't seem to be present in these charts at all.
Edit: it is actually mentioned, if you do something clever like reading.
> No µcode patch or BIOS update, which includes the µcode patch, is necessary for products based on “Zen” or “Zen 2” CPU architectures because these architectures are already designed to flush branch type predictions from the branch predictor.
The original video from the authors of Inception runs on a Zen 4, but the authors also claim that Zen 2 is vulnerable to the exploit.
However, on Zen 2, KASLR cannot be broken by Inception in a reasonable time. So the authors propose installing a kernel module to get the kernel address, and then passing this address to the exploit:
https://github.com/comsec-group/inception/tree/master/incept...
From the practical point of view, it does not make sense to require the installation of a kernel module to get some information needed by the exploit. It's kind of a cheat. If I can install a kernel module, it means that I already have root access.
So, in my opinion, from a practical point of view KASLR is enough to mitigate Inception on Zen 1/2. The authors' claim that these architectures can be exploited is an overstatement.
> By reducing all returns in the kernel to a single one, it becomes possible to ensure a safe (but still incorrect) branch predictor state each time this return is executed
I'm way out of my depth here; what does this mean? My naive guess is that they ensure there is a single assembly instruction in the entire kernel that can do a return, and every place that wants to return jumps to that return?
> Instead of flushing the entire branch predictor state, AMD proposed a different mitigation for the Linux kernel. By reducing all returns in the kernel to a single one, it becomes possible to ensure a safe (but still incorrect) branch predictor state each time this return is executed. Keeping previous mitigations in mind, this effectively means that AMD opted to have all indirect branches forcibly mispredict to a benign location, preventing Inception attacks.
So there should be no performance impact if you are running the Linux kernel.
My understanding (which could be incorrect) is that the single return location clobbers the branch predictor's leaked data with static, attack-worthless data, without completely clearing the rest of the prediction cache.
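That matches my reading. As a toy model (assuming a predictor keyed by the address of the branch instruction; real branch predictors are far more involved, and the addresses and values here are made up), funneling every kernel return through one shared thunk means the predictor only ever holds a single, benign entry for returns:

```python
# Toy model of a branch-target predictor keyed by branch address.
# Real hardware is far more complex; this only illustrates why a
# single shared return site limits what an attacker can train.
predictor = {}  # branch address -> predicted target

def train(branch_addr, target):
    predictor[branch_addr] = target

def predict(branch_addr):
    return predictor.get(branch_addr)

# Without the mitigation: every return site is separately trainable,
# so an attacker can plant a malicious target per call site.
train(0x1000, 0xdeadbeef)
train(0x2000, 0x41414141)

# With the mitigation: all kernel returns execute the one thunk, and
# the only prediction state it carries is a benign (mispredicting) target.
RETURN_THUNK = 0x9000
train(RETURN_THUNK, 0x0)
print(hex(predict(RETURN_THUNK)))  # 0x0
```

The point being that the attacker's per-call-site training no longer matters, because no kernel return ever executes at those addresses.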
I'd guess it's not restricted to the /etc/shadow file and could leak the contents of "any" memory in the system - that is just used as an example that can be (relatively) easily searched for in memory due to having a rather unique pattern.
I'm a little confused about the presentation of this. I think a fair bit of work has been put into making it look "hacker" rather than into what's actually required for the exploit, which doesn't fill me with confidence, but I guess we'll see when the actual description gets released.
For instance, why is the shell loop re-building the exploit code? The output seems strangely rate-limited per character, but extremely even; if it were rate-limited by the exploit itself, which presumably relies on some probability to read the memory contents, I'd expect more noise. And there's absolutely no description of the technique itself.
> I'm a little confused about the presentation of this. I think a fair bit of work has been put into making it look "hacker" rather than into what's actually required for the exploit, which doesn't fill me with confidence, but I guess we'll see when the actual description gets released.
> For instance, why is the shell loop re-building the exploit code? The output seems strangely rate-limited per character, but extremely even; if it were rate-limited by the exploit itself, which presumably relies on some probability to read the memory contents, I'd expect more noise. And there's absolutely no description of the technique itself.
This team has done stuff like this before: they were behind Retbleed[0], for which they released a similar-style video[1] last year. I expect more details will follow after the USENIX Security Symposium.
With the root hash you can crack the root password using tools like John The Ripper[0]. More generally, I assume, this exploit can be used to read any arbitrary files on the system, bypassing regular access control, and plenty of other stuff you aren't supposed to be able to do as a non-privileged user.
It seems like this would only allow reading memory that is resident for some other reason. Still bad, but I do not think it would allow directly reading files that weren't already in memory.
As for /etc/shadow, it would be loaded any time someone attempts to log in, and might even stick around in the page cache...
> Whats the security implication of leaking the root hash?
That you can try to crack the hash off-line with something like hashcat.
For a random, salted, 22-character password (roughly log2(62^22) ≈ 131 bits of entropy, and no rainbow tables because of the salt), it might not be game over.
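For the arithmetic (a quick sanity check, nothing more):

```python
import math

# 22 random characters drawn from a 62-symbol alphabet (a-z, A-Z, 0-9)
entropy_bits = 22 * math.log2(62)
print(round(entropy_bits))  # 131
```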
On the other hand - if you can make an educated guess at the password - it's now trivial to check a few million variations without having to try to log in.
As others mentioned, reading arbitrary memory is probably worse (e.g. the keyboard buffer, or getting the plaintext password if a user logs in...).
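The "check a few million variations offline" step is trivial once the salt and hash have leaked. A minimal sketch, using plain SHA-256 purely for illustration (real shadow entries use deliberately slow hashes like yescrypt or sha512crypt, and the salt and password here are made up):

```python
import hashlib

# Suppose this salt and digest were leaked from a shadow-style entry.
salt = "WgYDGHkO"
leaked = hashlib.sha256((salt + "hunter2").encode()).hexdigest()

def check(guess):
    # Offline check: no login attempt, no rate limit, no audit log.
    return hashlib.sha256((salt + guess).encode()).hexdigest() == leaked

guesses = ["password", "letmein", "hunter2", "hunter3"]
cracked = [g for g in guesses if check(g)]
print(cracked)  # ['hunter2']
```

With a slow hash the per-guess cost goes up enormously, but the attacker still gets unlimited, undetectable attempts.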
Once you have the hash, you have to use rainbow tables (if they exist for that hash function) or brute-force it.
the authors of yescrypt claim: "Technically, yescrypt is the most scalable password hashing scheme so far, providing near-optimal security from offline password cracking across the whole range from kilobytes to terabytes and beyond. "
In any case, this is a local attack: someone, or some software, on your local machine would need to execute it, so I am not overly stressed. Password hashes leak all the time from all different sources.
Yet it does worry me, because my AMD stock is dropping in value because of this today :D
Rainbow tables are only applicable to unsalted hashes (or possibly to ones with tiny salts). They are so rarely applicable, that I wouldn't even bother mentioning them.
On that list, NT is the only completely unsalted hash, plus DEScrypt and its variants might still be susceptible with their 12-bit salt. Like all decent password hashes, yescrypt is salted.
fwiw, yescrypt uses a salt so it will not be vulnerable to rainbow tables, and it is a slow hash so it won’t be that easy to bruteforce. A good strong password with a good hash function should remain secure even if the hash leaks.
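The effect of the salt is easy to demonstrate: the same password produces unrelated digests under different salts, so a table precomputed over unsalted hashes matches nothing (again plain SHA-256 here purely for illustration, not the construction yescrypt actually uses):

```python
import hashlib
import os

password = b"correct horse battery staple"

# Two random 8-byte salts: the same password yields two unrelated
# digests, so a precomputed table of unsalted hashes is useless.
h1 = hashlib.sha256(os.urandom(8) + password).hexdigest()
h2 = hashlib.sha256(os.urandom(8) + password).hexdigest()
print(h1 == h2)  # False
```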
I suspect that in a decade or two we will see speculative execution as we see shared mutable state today: a dangerous trap that should be best avoided in any new designs.
Or the mistake is that we're running different trust domains on the same CPU core.
All the various mitigations have already been very costly (something like a performance loss of 10% on current-gen CPUs?). Getting rid of speculative execution entirely would cost far, far more.
There is state shared between cores too though, at the very least L3, probably IOMMUs, etc. I wouldn’t be entirely surprised if there are issues we haven’t found that exist there.
> Or the mistake is that we're running different trust domains on the same CPU core.
It's very hard to not make that mistake. Think of all the websites running in your browser on your laptop. Isolating all those trust domains is going to be quite costly.
I feel like that became apparent with Spectre and Meltdown - they really opened the floodgates. I do wonder if there is some way to have formally proven correctness for speculation - a sort of analogue to the borrow checker in Rust if you like.
isn’t every attack based on speculative execution actually just an attack over shared mutable state? i.e., if the branch predictions, cache, and so on weren’t both shared and mutable, speculative execution wouldn’t be so problematic?
Indeed. Not just shared, but shared across security domains, e.g. across OS processes, or across the kernel / userspace boundary.
Speculating within one security domain would not be problematic, because anything such speculation allows to glean is already accessible normally. This is sort of similar to having a single reference to a mutable object, which is also safe.