
Would this entire Meltdown/Spectre thing count as the biggest mess-up of computing history? When yesterday the PoC repo was posted here, the spy.mp4 demo video gave me some chills. And now I can't update my OS before making an installation USB because Canonical can't just follow Linus' releases. Thanks.



>Would this entire Meltdown/Spectre thing count as the biggest mess-up of computing history? When yesterday the PoC repo was posted here, the spy.mp4 demo video gave me some chills.

It must be up there amongst the greats, probably with the "halt and catch fire" op code. Normally they just patch this stuff with microcode and never really tell anybody, this time that won't work.

I'm not entirely convinced it was a mistake at all (dons tin foil hat); Intel has been making suspicious design decisions in its chips for a while now (think this, Intel ME, hidden instructions, etc.). It seems clear to me that this security by obscurity approach is quite frankly crap.

>And now I can't update my OS before making an installation USB because Canonical can't just follow Linus' releases. Thanks.

Linus' releases need some sort of buffering before they can be considered stable; often distributions will apply their own patches on top. Also consider the scenario where Linus releases a bad kernel and it gets rolled out to all Linux users before any testing has been performed.


I think it's absolutely unreasonable to imply that this was intentional. Besides the massive amount of complexity these systems have, there are plenty of more "legitimate" places to hide backdoors than in a performance-oriented architecture decision.

Keep in mind that whatever "evil agencies" would have asked for this would most likely find themselves vulnerable to it as well, and nobody would sign off on that.

I do agree, however, that the "security by obscurity approach is quite frankly crap". The fact that even large corporations (outside the big 5) can't get away from ME speaks volumes about why this is a bad idea. Facebook isn't the only company with important data.


> I think it's absolutely unreasonable to imply that this was intentional.

Amen. It blows my mind that some people think clever techniques like speculative or out-of-order execution must've somehow had nefarious intentions behind them. Come on HN...


The Intel Management Engine is a backdoor. Speed variations in speculative execution are an inherent property of the technology. Until recently, few people thought this was exploitable, and it took a lot of work to figure out how to exploit this.


You do realize those are ideal properties for a backdoor, don't you? If you were writing the spec for a dream backdoor, you would write that down. The only way you could improve it would be "everyone thinks it's impossible, and they never figure it out."


This backdoor is too tricky to be a backdoor. A simpler backdoor would be "Call this opcode 45 times, followed by another opcode 20 times, and you will have activated backdoor mode where these opcodes are now available"...


The ideal properties of a backdoor were made vivid to me the day I hacked into the machine of the author of a widely distributed SMTP mail server, only to find, sitting in his home directory, an unpublished integer overflow exploit he had written years earlier for a version of the software that is still in wide distribution...


That's close to perfect, indeed. The drawbacks in this scenario are that (1) not everybody runs an SMTP server, and (2) if it's open source (and if it's very popular, then it is), some other smart people will look for the bug and publish it for fame. That's quite different from a backdoor built into a processor (although I really doubt Intel was involved in any shady practices; it looks more like they simply weren't smart enough).


Judging from the numerous decades-old bugs recently found, the concept of many eyes needs to die.

And in the case of SMTP, it's basically been a piñata of bugs for the last 30 years, regardless of platform.


It is still far more likely a reasonable design decision made for performance reasons than a backdoor.

The risk alone would not be worth it to Intel. Do you really think the NSA has enough money to compensate for this backlash and news coverage?


Yes, though it's moderately hard to exploit against a specific target. It's more useful for bulk attacks - getting everyone who visits a specific web site to run a DDOS attack, or ransomware.


If any quantity related to what the processor does, outside its intended effect, has a different distribution when X happens versus when Y happens, then the distribution of that quantity is exploitable. Period.

Any nonuniform distribution in any quantity that is not part of the spec is exploitable!


It is only exploitable if one can measure the difference and extract useful information. Until the Spectre authors discovered the double-read technique, the expectation was that speculative execution did not allow extracting useful information, except in extremely artificial theoretical cases.
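To make "measure the difference" concrete, here's a rough sketch of just the probe step (my own toy example, not the actual Spectre/Meltdown PoC; assumes x86-64 and GCC/Clang intrinsics). A load from a line that's already in the cache is measurably faster than a load from a line that was just flushed, and that timing gap is the entire side channel:

    /* Rough sketch of the probe step only: time a single load.
     * Assumes x86-64; compile with e.g. `gcc -O2 probe.c`. */
    #include <stdint.h>
    #include <stdio.h>
    #include <x86intrin.h>   /* __rdtsc, _mm_clflush, _mm_lfence, _mm_mfence */

    static uint64_t time_load(volatile uint8_t *p)
    {
        _mm_mfence(); _mm_lfence();     /* serialize before reading the TSC */
        uint64_t start = __rdtsc();
        _mm_lfence();
        (void)*p;                       /* the load being timed */
        _mm_lfence();
        return __rdtsc() - start;
    }

    int main(void)
    {
        static uint8_t probe[64];

        (void)time_load(&probe[0]);     /* warm-up: pulls the line into cache */
        uint64_t hot = time_load(&probe[0]);

        _mm_clflush(&probe[0]);         /* evict the line */
        _mm_mfence();
        uint64_t cold = time_load(&probe[0]);

        printf("cached: %llu cycles, flushed: %llu cycles\n",
               (unsigned long long)hot, (unsigned long long)cold);
        return 0;
    }

The real attacks combine a probe like this with a way of making speculative execution touch a cache line whose index depends on a secret byte, so the timing difference reveals the byte.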


Adding a backdoor seems unreasonable, but they may have chosen performance over security. Even if this individual bug wasn't intentional, they are responsible for setting their priorities.


There are CPUs available which choose security over performance. They aren't made by Intel, but you can buy them, and they're even cheaper.

Oh, you don't want to do that?


Well, I read somewhere the other day that this form of error/attack was conceived of in the academic literature back in 1992. I won’t believe it’s intentional without evidence in that direction, but this is conceivably the kind of obscure/complex attack you’d expect of a state actor.


This has been a known issue in Xbox 360 hardware since about 2010.

It just keeps popping up, someone finally thought to weaponize it.


>It just keeps popping up, someone finally thought to weaponize it.

Someone published its weaponization, you mean :)


Those undocumented features & byte code? HAP mode - something the NSA doesn't want you to know exists, but that they had put into Intel ME from Skylake onward.

And yet, still, we found out. So yes, this security through obscurity approach is terrible (with a code embargo being the obvious exception).

They only update microcode when they have to, because doing otherwise risks... well, this kind of mess.

You don't wanna know how many times I've rebuilt my Gentoo system chasing retpoline kernel & GCC builds that just... break everything.

It should be interesting to see how it all develops


Yeah, ME is a scary thing also. WRT Linux, well, my Xubuntu 16.04 (Xenial) is on 4.10 and no new kernels are available to me ATM. So if they're going to patch my OS, it's probably going to be a backport to that version, not the latest release integrated into my OS version. I guess that's what caused this bug too, although I admit I only skimmed the conversation linked.


They've put out updates for 4.4 and 4.13 (HWE) for 16.04, if that helps.

See https://wiki.ubuntu.com/SecurityTeam/KnowledgeBase/SpectreAn...


I'm guessing that updates for LTS kernel will come later. I don't know if I can update to 4.13.


LTS for xenial is 4.4, and patches have already been released for it.


Note there won't be fixes for 4.10, as it's reached EOL for Ubuntu, so you'll need to move to the 4.13 patched kernel.


I think that title is currently held (deserved or not) by null pointers.

Tbh it’s not the most meaningful of statements, but it’s food for thought.


A null pointer doesn't hold a reference, though.


It does if you have something at 0x0.

Or, to put it another way, I have no clue what you're referring to - what do references have to do with "The Billion Dollar Mistake"[0]?

[0]: https://en.wikipedia.org/wiki/Tony_Hoare#Apologies_and_retra...

EDIT: my apologies, that joke was actually pretty good.


I think it was supposed to be a joke.


Yes, thank you.


I think it was supposed to be a pun on "hold". As for the word "reference", your own link uses it.


I use it in a later comment! I was confused about the word in context.

However, I completely missed the pun. Cheers :)


Cheers to you :)


What modern systems even map memory to 0x0? Doing so breaks the C standard, among other things.


http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc....

> On system reset, the vector table is fixed at address 0x00000000.

Also, I'm not an expert on the C standard, but in my understanding, it doesn't "break" it. That is:

* Address 0 and the null pointer are distinct

* A 0 literal treated as a pointer is guaranteed to be the null pointer

* The null pointer is not guaranteed to be represented by all zero bits

* If you get a pointer to address zero via pointer math or by means other than a 0 literal, you can still access address zero.
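To make the first two points concrete, here's a toy snippet (mine, purely illustrative); it only compares pointers and never dereferences them, since that part would be undefined or platform-specific:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        int *a = 0;            /* constant 0 converted to a pointer: guaranteed null */

        uintptr_t zero = 0;
        int *b = (int *)zero;  /* runtime integer -> pointer: implementation-defined;
                                  "address 0" need not be the null pointer */

        printf("a == NULL: %d\n", a == NULL);  /* always 1 */
        printf("b == NULL: %d\n", b == NULL);  /* 1 on typical platforms, not guaranteed */
        return 0;
    }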


Yeah, the NULL pointer is a pretty weird part of the standard - it makes some sense, but leads to weird situations. That said, I think your last point needs a bit of clarification. What you've described is actually already impossible per the standard - with a few exceptions, it is illegal to use pointer arithmetic to form an address outside the bounds of an allocated object (because those pointer values may not even be valid for the architecture), so it is technically impossible to use pointer arithmetic on a valid pointer to end up with the NULL pointer - it would require calculating an address outside of the current object.

So the question of what happens when you actually do that is purely up to your compiler and architecture. In most cases, if you manage to get the NULL pointer value through pointer arithmetic, it will still compare equal to the 'actual' NULL pointer and be treated as if it were a literal 0, so that doesn't allow you to get around NULL pointer checks. The only situation where it really matters is when the NULL is only known at runtime, since that may have implications for optimizations. Since dereferencing the NULL pointer is undefined behavior, the compiler can remove such dereferences, but it can't remove a dereference completely if it can't prove the pointer is always NULL. There is nothing preventing the compiler from adding extra NULL checks that aren't in your code, however, which would foil the plan of generating a NULL pointer at runtime in order to dereference it. So unless your compiler explicitly allows otherwise, you cannot reliably access the memory located at the value of the NULL pointer - as far as the standard is concerned, there is no such thing.

Talking specifically about the ARM vector table, that largely works OK because only the CPU ever has to actually read that structure; normally your C code won't have to touch it (if you even define it in your C code - the example ARM programs define the vector table in assembly instead). If you did ever have a reason to read the first entry of that table from C, though, you could potentially run into issues (though I would consider it unlikely, since the location of the vector table isn't decided until link time, at which point your code is already compiled).

On that note, it's worth adding that POSIX requires NULL to be represented by all zero bits, which is useful. Lots/most programs actually rely on this behavior, since it is pretty ubiquitous to use `memset` to clear structures, and that only writes zero bits.
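As a tiny sketch of why that matters in practice (hypothetical struct, and only valid on platforms where the all-zero-bits representation holds, as described above):

    #include <stdio.h>
    #include <string.h>

    struct conn {
        char *hostname;   /* a pointer member */
        int   fd;
    };

    int main(void)
    {
        struct conn c;
        memset(&c, 0, sizeof c);   /* writes zero *bits*, not "null pointers" */

        /* Treating c.hostname as NULL here is only correct because the null
         * pointer representation is all zero bits on this platform - which is
         * exactly the property so much code quietly depends on. */
        if (c.hostname == NULL)
            printf("hostname is null\n");
        return 0;
    }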

(Sorry for the long comment, I've just always found this particular part of the standard to be very interesting)


Oh not at all! Thanks for this; I also find it very interesting, and was glad for the correction.


Again, I am unsure how this relates to "The Billion Dollar Mistake" I linked above and was referring to.

I am not sure I follow your point. The reason modern systems don't map memory to 0x0 is because NULL pointers exist. It is a reflection of a leaky abstraction equating pointers to references. That leaky abstraction has (or so the argument goes) caused >$1B in software bugs.

The other mindset would be "malloc always has to allocate memory or otherwise indicate failure; you cannot cast from an integer to a pointer; you cannot perform arithmetic on a pointer to get a new one; you must demonstrate there are no hanging references when freeing". This is essentially what Rust did for safe code.

The reason why I express so much skepticism is that Rust is the first time I've seen the problem solved well in the same problem space as C. Ada has problems of its own. It's more about how small assumptions can have massive economic (and health, and safety, and ethical) consequences. Certainly comparable to a speculative execution bug leaking memory in an unprotected fashion - in both cases the bugs find their way in through human error in evaluating enormously complex systems for incorrect assumptions :)


WebAssembly.

LLVM won't use it for anything (I think it starts putting things at 8). Trying to access it explicitly in C will generate `unreachable` instructions.



"biggest" by number of affected CPUs? very possibly, yes. the march of time has that effect: there are more cpus potentially and actually affected, worldwide, than at any other time in history.

"biggest" by net financial loss to a single entity? I dunno. How much did that failed NSA launch cost the state again?


It's unclear whether it failed or if Northrop Grumman want us to think it failed; since the second stage actually did one full orbit with nominal performance, they might be trying to slip one past us. We'll know in a few weeks time I suppose, every satellite tracking enthusiast will be looking for it.


> failed NSA launch

I didn't hear about that one



No government agency is officially attached to it, and the "failed" part is more of a rumor. SpaceX said that on the Falcon 9 side everything worked as it should, and NG says they cannot comment on classified payloads. So there's literally no information.



I watched that spy demo. How does it know what memory location contains the password being typed?


I'm assuming that if you know either common byte patterns or string patterns, you might be able to figure out where the password string is being allocated and watch that area of memory for changes.
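Purely as a hypothetical illustration of that idea (the buffer and the marker below are made up), the "find a recognizable pattern in a recovered chunk of memory" step is really just a byte search:

    #include <stdio.h>
    #include <string.h>

    /* Return the offset of the first occurrence of pat in buf, or -1. */
    static long find_pattern(const unsigned char *buf, size_t len,
                             const unsigned char *pat, size_t plen)
    {
        if (plen == 0 || plen > len)
            return -1;
        for (size_t i = 0; i + plen <= len; i++)
            if (memcmp(buf + i, pat, plen) == 0)
                return (long)i;
        return -1;
    }

    int main(void)
    {
        /* stand-in for a chunk of memory recovered via the side channel */
        unsigned char snapshot[] = "....type=\"password\" value=\"hunter2\"....";
        const unsigned char marker[] = "value=\"";

        long off = find_pattern(snapshot, sizeof snapshot,
                                marker, sizeof marker - 1);
        if (off >= 0)
            printf("candidate secret near offset %ld\n",
                   off + (long)(sizeof marker - 1));
        return 0;
    }

Once you have a candidate offset, you'd keep re-reading that region rather than rescanning everything.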


Not sure if Meltdown is the same, but I read that Spectre can recover memory at about 10 KB/s. So it wouldn't be very efficient to scan the entire memory for a known pattern.

I suppose if there was an exploit targeted at a specific program, it would be possible to work out what location the secrets are stored in?


I leave my machine on for weeks at a time. If something was scanning memory, even if it failed to find the location of my password 99.9% of the time before it was erased, eventually it would get lucky and catch it.


Good point. I was only thinking about a single run but that makes sense.


According to the paper, Meltdown can recover memory at about 500 KB/s.


It's still "only" 1.7gb/hour. If programs follow reasonable security practices, it shouldn't be possible to stumble upon secrets in the memory. This underlines the importance of things such as ASLR and not holding your key in memory longer than needed and rotating them as well.


Once you know the location, if the process is not randomized, you can extract from that location. You may assume some things about implementation (e.g. libstdc++ or libc++, glibc memory allocator, general compiler version)

Additionally some hardening methods like stack protector make stack allocated objects stand out a lot from register values.


Meltdown is fast enough to learn everything about the layout of data structures in the kernel or other programs, and then use that to extract information from the particular areas holding the keys.


It appears to be known to the exploit. I feel this is being overblown, and that the exploits we are seeing require more info than something in the wild would have.


Code in the wild would have access to all memory (slowly), so it could eventually find the correct location.

Given that whoever writes it would also have access to the other program, they would have a lot more information on where to look in memory.


I would think it would be. The strange thing is the markets didn't react at all. They actually went up on January 4th.


Because this has largely remained theoretical, unlike Maersk or Equifax.


What are you talking about? We've seen working POCs since last week. This isn't "largely theoretical", this is an actively exploitable hole.


Meh, it's not really very serious in the average case. It's a lot of sky-is-falling rhetoric from the infosec community. Remember Heartbleed and how it was end-of-times bad? Yeah, turned out to be a non-event. Information disclosure bugs like this are difficult to glean useful information from in widely targeted attacks.

(Obviously if you have nation states or serious criminal organizations trying to breach you regularly, this is more serious)


You clearly haven't been paying attention or reading about how this works.

Heartbleed was touted as being bad by those that didn't read too far into it. You could scrape memory, sure. But it was always random fragments. This lets you make targeted address attacks. Force a process to use that memory space through a NOOP and now you can start scraping at will. Or you can just do an entire memory dump and pull things out in plaintext (like scraping Firefox passwords, which we've seen done already).

The only reason this isn't worse is that it requires the ability to execute code on the machine. It has high (near-absolute) impact, but is low to moderate in ease of execution.


"Would this entire Meltdown/Spectre thing count as the biggest mess-up of computing history?"

This title is held by autorun.inf which has caused over 20 years of broken, vulnerable behavior and, AFAIK, is still going strong.


Link to the demo?



YouTube mirror for mobile users anyone?



I think Y2K had more practical impact across the business world. There was genuine fear that it could cause an actual apocalypse with all major computerized systems failing, medical machines killing people, banks being affected and all money and debts disappearing overnight.

It wasn't that bad, because people took it seriously. But there were still tons of practical systems affected and billions of corporate dollars associated with fixing it.

So when you say "biggest mess-up" you gotta define specific qualifiers. Because Meltdown/Spectre is going to be solved by simply... buying a new CPU (and retrofitting the old ones). So it consists mostly of a patch.

A BIG important patch, granted, but it's still just a patch. And some ATMs aren't going to start spewing money like they did on Y2K.


I didn't know Y2K was that big of a deal! I guess I'll have to read a bit more about it, as it seems to be an interesting topic. Thanks!


I can't find any reports of ATMs spewing money after Y2K - or was that a figure of speech?



