Next, we reduced the response hash to one hex digit and authentication still worked. Continuing to dig, we used a NULL/empty response hash (response="" in the HTTP Authorization header).
Authentication still worked. We had discovered a complete bypass of the authentication scheme.
What. the. fuck.
This is not the kind of bug anyone with the barest bit of testing in place should ship in anything, much less a large company like Intel, in an enterprise feature with serious security ramifications, and one which has apparently existed for a long time (years?).
Edit: Also, this is really good evidence for short and hard disclosure deadlines. What's the chance something as simple as this wasn't known by someone else? All they had to do was decide to look, and they found something within minutes. It's not like this is obscure or doesn't get you much; it's about as juicy as they come.
After reading about this over several days, I would never have guessed it was such a gaping hole. This reminds me of a software bug I encountered in the 90s with a version of the Renegade BBS software (a Telegard hack). In one minor revision, you could log in to any account by simply not bothering to enter a password. Though the "sysop" (admin) account often had a custom name, you could, for convenience, log in by user number, and the sysop's user number was always zero. A good friend of mine had his entire system wiped out by a malicious user exploiting this little problem.
It doesn't sound like this is a matter of leaving the password field blank, but rather of sending a request with a tool like cURL and setting the header to an empty/NULL response. Still, it's about as close to just as bad as you can get. Sheesh.
In the case of Intel's AMT, I'm not sure what could have possibly gone wrong, but it sounds like this is something that has happened in other applications that use Digest Authentication.
In the case of Renegade's BBS software, I don't exactly recall the specifics and since it's been several years, I may not have this exactly right but I'm fairly certain this is how it happened because I did a lot of work with the code it was based on[0].
The bug occurred only if the user didn't provide a password. So, effectively, you could log in by not providing a password or by providing the correct password. In the original Telegard source, the login prompt did nothing if no password was provided; it simply discarded the Enter and waited for a password. This was handled by a global function used for accepting input. I'm guessing that function was refactored in the Renegade code (I ended up doing this myself ... it was pretty ugly IIRC) and it became possible to send a blank password.

The login routine was an interesting beast, as well. I remember reviewing it in the original code and scratching my head ... it made positively no sense, so I threw it out and replaced it with a much simpler implementation. Rumors had swirled around the release of the source code that this obfuscation was intentional and that there was a back-door built into the login routine. I'm not sure whether that's true, but the sheer amount of code to do a string comparison was a little ridiculous[1] -- passwords were not hashed; they were stored in plain text in a record (struct-like component) dumped to a file, and were even viewable in the sysop control panel's User Editor.
It's entirely possible that they didn't refactor this bit of logic but changed something in that mess that caused an empty string to evaluate as true along with a matched string. Or it could be that they refactored it like I did and put an OR where an AND belonged. If they refactored that global ReadLine method to allow empty returns, they may have felt a need to check for an empty value at this prompt, though I'm not sure what the motivation would have been. An empty or NIL value[2] wouldn't have caused the application to crash -- there are no null reference exceptions when you're dealing in a non-OOP language -- you're not executing methods attached to the String type, because such a thing didn't exist in the language.
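In C terms, the class of mistake I'm imagining looks something like this (pure speculation for illustration, not the actual Pascal code):

    #include <string.h>

    /* Intended check: */
    int login_ok(const char *entered, const char *stored)
    {
        return strcmp(entered, stored) == 0;
    }

    /* The suspected refactoring slip: an "empty" case ORed in where it
     * should have gated the check, so a blank password always passes. */
    int login_ok_buggy(const char *entered, const char *stored)
    {
        return entered[0] == '\0' || strcmp(entered, stored) == 0;
    }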
The developer should have discovered the problem way before it was ever released in the wild, but I get why it was missed. He probably checked a bad value and a good value but never thought to check no value at all. Hell, it was a mystery why all of these Renegade boards were going down and a bunch of folks assumed it was a back-door, as was suspected about the original Telegard code. It was a few weeks before people figured out that it was such a simple screw-up.
[0] The source code wasn't public; it wasn't open source. In fact, it only existed because an older version of Telegard had its source code leaked, which the Renegade developers used to create their product (there were a number of "Telegard Hacks" in those days, including one I wrote and later rewrote from scratch, which served as the entire reason I decided to become a software developer).
[1] Sounds suspicious at first glance but there were so many really poorly performing things about Borland Pascal's core libraries that it was not uncommon to write several hundred lines of inline assembly to do things like "write text to the screen quickly". I had done this, myself, on a few occasions -- the specifics were something around writing to memory reserved for use by the display capabilities of the 8088/80286/80386 (BIOS reserved memory IIRC).
[2] NIL was Borland Pascal's "null", and I believe it supported empty strings. Unlike ISO Pascal at the time, Borland Pascal had a real string type, stored in memory as a character array with a length field at the 0th index. It didn't use null-terminated character arrays like C/C++, which made working with strings in the language elegant by comparison. Strings, however, were 8-bit ASCII ... this being the 90s with the target being DOS and all.
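For the curious, the layout described works out to roughly this in C terms (a sketch from memory, not Borland's actual definition):

    /* Rough C equivalent of a Borland Pascal short string: a length
     * byte at index 0 followed by up to 255 8-bit ASCII characters.
     * No NUL terminator; an empty string is just len == 0. */
    typedef struct {
        unsigned char len;
        char          data[255];
    } ShortString;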
The code compares the correct hash and the hash response received from the browser, "with N set to the length of the response received from the browser". So if the browser sends "", that's compared with the first zero characters of the correct hash, which is also "". Funny.
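In other words, something like this (a minimal C sketch of the pattern described; hypothetical names, not Intel's actual source):

    #include <string.h>

    /* user_response comes straight from the HTTP Digest "response"
     * field, so its length is attacker-controlled. */
    int check_response(const char *computed, const char *user_response)
    {
        /* BUG: the comparison length is taken from the attacker's
         * string. An empty response compares zero bytes, which always
         * "matches", so authentication succeeds. */
        return strncmp(computed, user_response, strlen(user_response)) == 0;
    }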
Well... I agree with you in principle, but in practice I find developers often forget to test that code fails the way it is supposed to fail, when it is supposed to fail. In the authentication case, everyone remembers to check that when you're supposed to be logged in, you can access what you should be able to access. But it's really common not to test that when you're not logged in, you can't access what you could when logged in, or that when logged in as user X you can't access user Y's stuff.
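Concretely, using the hypothetical check_response() sketched above: run against the buggy version, the last two assertions blow up, which is exactly how this would have been caught.

    #include <assert.h>

    int check_response(const char *computed, const char *user_response);

    int main(void)
    {
        /* Example digest from RFC 2617. */
        const char *good = "6629fae49393a05397450978507c4ef1";

        assert( check_response(good, good));       /* right creds: everyone tests this */
        assert(!check_response(good, "00000000")); /* wrong creds: usually tested too  */
        assert(!check_response(good, "6629"));     /* truncated: rarely tested         */
        assert(!check_response(good, ""));         /* no creds at all: almost never    */
        return 0;
    }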
The article in question mentions they reported essentially the same bug in IBM solidDB [1], and I recall that the first big break in Nintendo Wii application signing [2] was similar.
Intel decided they have the right to put a whole secret computer inside your computer that only they can access. God knows what it does when no one is watching.
That's the problem you should discuss, not this particular exploit.
Having a "management engine" with direct access to the network and to memory is questionable in itself. Its code being secret indicates there's probably something bad going on. If it only does what Intel says it does, it doesn't need to be secret.
Unfortunately, a "management engine" with some degree of control over the CPU is necessary for, well, management. As in remote management, which is something that big corps with thousands of machines want, and the more control it gives the better.
The code itself is as secret as the code of any proprietary Windows-based remote administration tool they could supply as a poor man's substitute if the ME didn't exist. It's just how this industry works.
This doesn't indicate that there is anything "bad" going on. What is bad is that Intel, being the cheap bastards they are, combined remote management, DRM, virtualization, TPM, CPU initialization, and hell knows what else into one blob running on one MCU, with no way to separate out and disable the unneeded/unwanted/buggy/vulnerable garbage from the actually useful functionality. And that such a critical part is closed to third-party scrutiny.
This is a bit of a fig leaf. If it was just for enterprise users there would be no reason to impose it on everyone. It would be positioned as an enterprise exclusive with a price premium.
The fact that both AMD and ARM integrated similar technologies at around the same time is too much coincidence.
All the signs point to bad actors, but for some the bar of evidence is either another Snowden-level sacrifice or Intel providing a signed confession. Both improbable and unrealistic. In many ways, the detail, scale, and scope of the revelations of the past 5-10 years make skepticism and hard questions essential. The benefit of the doubt has long since moved the other way. The alternative is a kind of forced naiveté and denial.
> The fact that both AMD and ARM integrated similar technologies at around the same time is too much coincidence.
Don't believe the FSF's FUD. TrustZone is really not comparable at all to Intel's Management Engine or AMD's Secure Processor:
* TrustZone is an operating mode of the CPU, not a separate processor. Fundamentally, it's not all that different from supervisor mode; it's just more privileged. (If you really wanted, you could probably write an OS that ran parts of the kernel in TrustZone; see the sketch after this list.)
* You don't have to have anything running under TrustZone. Indeed, most processors which support TrustZone (e.g., most Android phones) aren't using it at all.
* The TrustZone specification is publicly available [1]. You can read about it all you want. (If you're brave enough and have the right development tools, you can even write code to run in it.)
* ARM's reference implementation of a TrustZone OS is also publicly available [2]. If you're curious how it works, you can see for yourself. (This doesn't include the application code which may be present in specific implementations, of course.)
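To make the first point concrete: for privileged normal-world code, entering the secure world is just one instruction. A rough sketch (AArch64 GCC inline asm; must run at EL1, i.e. in the kernel, and the function-ID convention here is made up):

    /* Sketch: the same core trapping to the EL3 secure monitor via the
     * SMC instruction -- a mode switch, not a second processor. */
    static inline long smc_call(unsigned long function_id, unsigned long arg0)
    {
        register unsigned long x0 asm("x0") = function_id;
        register unsigned long x1 asm("x1") = arg0;

        asm volatile("smc #0"
                     : "+r"(x0)
                     : "r"(x1)
                     : "memory");
        return (long)x0; /* result comes back from the secure side in x0 */
    }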
Don't believe anti-FSF FUD. If you think they have an issue with TrustZone itself, as opposed to devices using it without the owner's control, I'd love to see the links.
> If it was just for enterprise users there would be no reason to impose it on everyone. It would be positioned as an enterprise exclusive with a price premium.
AMT is positioned as an enterprise exclusive with a price premium; that's how Intel's pricing usually works. The underlying hardware (the ME) is not, because it's used for other things too. On old chipsets you can clear the ME firmware and the computer will miss some weird features but otherwise work; on newer chipsets it won't work at all. That's why every chip has the ME even though many chips don't have AMT.
> The fact that both AMD and ARM integrated similar technologies at around the same time is too much coincidence.
For the sake of argument more than anything, were AMD even remotely competitive with Intel in the enterprise sector at the time they introduced their version of this technology? Sure, they might need it one day (which may be soon) and it'd be nice to have it out there and supported, but I'm not totally convinced by this argument.
Unfortunately though, too many of them still seem to make the connection. E.g., looking through Shodan quickly just now still shows potentially 1k+ examples. Ugh. :(
But you're right that it's a lot less than is likely present for this AMT problem.
Intel ME has a DRM app called "Protected Audio-Video Path" [1], which obviously has to be secret.
As to whether anything actually uses the PAVP functionality, I have no idea. I wouldn't be surprised if it was something Intel included to try to push Atom-based set top boxes or whatever.
This is incorrect. The management engine is used for a wide variety of tasks, from DRM to providing a TPM to anti-theft code. The AMT functionality (which is where this vulnerability is) is intended for remote management of laptops and workstations; among servers, it's usually present only on low-end ones.
Sure, but say you're Intel and pitching the technology to Hollywood. Open source would make the entertainment industry nervous. And Intel isn't in a bargaining position, since Hollywood would be more than happy to just shut x86 PCs out of content.
(Needless to say, I'm offering an explanation as to why the Intel ME is what it is, not defending it. I think it's pointless. After all, it's likely that PCs are just not going to get content in the future, given that consumers use set-top boxes that use embedded architectures instead of x86 PCs and Intel has failed in mobile.)
I'm pretty sure only its keys really need to be secret, but hiding the code may provide some extra security by obscurity if the code happens to have bugs.
Well, even if you completely trust Intel to never be nefarious, never act against your best interest, never do anything shady whatsoever with that power, you can't trust them to keep that power only for their own use. This vulnerability is proof of that.
So you don't have to claim that Intel might be acting in bad faith. You can claim, with proof, that Intel has less-than-perfect success at security, and that less-than-perfect security is reason enough why this "secret computer inside" is a horrible idea.
As usual with AMT, there's a lot of noise, but these vulnerabilities have, to date, only been exploitable with AMT activated. And if you've activated it, you can patch it, etc.
And as I always point out in these stories, if Intel AMT freaks you out, Google "absolute software embedded bios".
Black Hat Briefings 2009 - researchers show that the implementation of the Computrace/LoJack agent embedded in the BIOS has vulnerabilities and that this "available control of the anti-theft agent allows a highly dangerous form of BIOS-enhanced rootkit that can bypass all chipset or installation restrictions and reutilize many existing features offered in this kind of software."
Black Hat 2014 - Kaspersky demonstrates local and remote exploitation of the first-stage Computrace agent (a small agent used only to install the full version of the rootkit after activation of LoJack or after a reinstallation of Windows)
I think they even excluded iLO when they locked firmware updates down to paid customers, just after https://lkml.org/lkml/2013/11/11/653 (they're the ones who ported UEFI to RISC-V, too!)
Yes, AMT is for remote management. Management Engine, the platform on which AMT is an application, hosts other applications, including at least one for DRM.
Hey, at least they didn't read past the submitted buffer.
edit:
Note that this is only pseudocode, and rumor has it that the ME firmware is actually written mostly in Java. It's not immediately clear to me how to create an equivalent bug in Java; the obvious String.equals() method doesn't ignore a length mismatch.
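Right -- String.equals() checks the lengths before the contents, which is exactly what the fix looks like in C terms (a sketch, same hypothetical names as above):

    #include <string.h>

    /* Take the length from the trusted side and require the lengths to
     * match, as Java's String.equals() does, before comparing bytes. */
    int check_response_fixed(const char *computed, const char *user_response)
    {
        size_t n = strlen(computed);
        return strlen(user_response) == n &&
               memcmp(computed, user_response, n) == 0;
    }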
> [...] how to create equivalent bug in Java, the obvious string.equals() method [...]
Java Card, the Java version made for smart cards, does not have strings, and thus no String.equals().
Thought I remembered this from a CCC conference about EMV chips or SIM cards (don't remember which) a few years ago. Googling seems to confirm it: https://community.oracle.com/thread/1751610?db=5
From what I remember in one of the deep teardowns it's possibly Java ME (which I think is one step above Java Card in terms of features). The file format is JEFF.
The article discusses an actual partial prefix match.
> we tested out a case in which only a portion of the correct response hash is sent to the AMT web server. To our surprise, authentication succeeded!
> Next, we reduced the response hash to one hex digit and authentication still worked.
This doesn't imply that "no password" works - an empty password would still result in a non-empty HTTP Authorization Digest response hash, which would not allow you to log in. An empty/truncated digest response hash is not the same thing as an empty/truncated password.
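For reference: ignoring qop, the RFC 2617 Digest response is MD5(HA1 ":" nonce ":" HA2), so it's always 32 hex digits no matter what the password is, empty included. A sketch, where md5_hex() is a hypothetical helper that writes the 32-char hex MD5 of its input:

    #include <stdio.h>

    /* Hypothetical helper: writes the 32-char lowercase hex MD5 of `in`
     * into out (plus a trailing NUL). */
    void md5_hex(char out[33], const char *in);

    void digest_response(char out[33],
                         const char *user, const char *realm,
                         const char *password, const char *nonce,
                         const char *method, const char *uri)
    {
        char buf[512], ha1[33], ha2[33];

        snprintf(buf, sizeof buf, "%s:%s:%s", user, realm, password);
        md5_hex(ha1, buf);                               /* HA1 */
        snprintf(buf, sizeof buf, "%s:%s", method, uri);
        md5_hex(ha2, buf);                               /* HA2 */
        snprintf(buf, sizeof buf, "%s:%s:%s", ha1, nonce, ha2);
        md5_hex(out, buf);   /* 32 hex chars even when password == "" */
    }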
Quite a bit of Java's memory safety comes from the JVM. Array bounds checking, for example. If they're embedding an entire Oracle JVM, then it's probably pretty safe. On the other hand, if they're compiling down to a home-made VM with a home-made compiler, well, who knows? Dalvik did that, and it had some problems.
It seems really hard to test from the point of an outside observer. I'd strongly suspect it's hard to test internally as well, which would indicate there are a bunch of bugs lurking in there.
The Oracle JVM is a huge and complicated beast; it's a ridiculous thing to embed in a chip. On the other hand, it's thoroughly tested, and security vulnerabilities are (generally?) fixed in a timely fashion.
A home-made, completely unauditable, built-in JVM (that apparently can't be updated, based on your comments) seems crazy dangerous.
Just saying "it's probably OK, because it's Java", as the OP alluded to, is a very dangerous line of thinking. That only works with one of the public, auditable implementations.
I dunno. It's a devil-you-know vs. devil-you-don't problem. How do you feel about the security of Intel software in general?
To put this another way, JVM as a conceptual processor is pretty solid and I doubt there are many massive errors in the design. Your homebrew JVM implementation certainly may contain errors, and the software built on any JVM almost certainly does.
Probably not? Most of the "insecure Java" bugs you hear about are due to exploiting the runtime loader by feeding it fun binaries (jars or whatever). The rest of the problems are things that can occur in any framework, and Java is probably safer due to being a memory-safe language. The exceptions are when they do unsafe things for speed, like font/image processing, but again, that can happen in any lib.
I'd be surprised to hear that using a Java, say, socket or HTTP lib, exposed you to more risk in general than using any other language/runtime/lib.
So the AMT vuln was related to a lack of security on their web service? Somehow this does not increase my confidence in the rest of their code - if they didn't get this right, what else is wrong?
The Intel note mentioned a local vulnerability that allowed non-root local users to provision AMT as well. That sounds like at least one more, different, issue.
That's more of an argument about inauditable, non-disableable, hostile code (the CPU shuts itself off if you blank out the appropriate data structures) running below ring 0.
Closed source custom Java ME and ThreadX blob probably maintained by interns, running all the time with unfettered access to every resource in the system even when the machine is turned off, integrated into almost every enterprise computer network in the world.
What are the reasons for having this, I mean good business reasons? I get that designing CPUs is expensive and they reuse as much as they can, and that businesses would want the benefits of remote management. But when that's weighed against the damage to trust in the company, is it worth enough that they won't offer a line of chips without this pseudo back door present?
> What are the reasons for having this, I mean good business reasons?
So, I'll avoid writing anything about NSA collusion or related speculation and talk specifically about what you've asked -- business reasons. The main reason is that customers are already purchasing technology like this through third-party expansion cards, and building it into the CPU makes those (expensive) adapters less necessary. At a past company I worked at, every HP ProLiant server we purchased came with a PCI Express adapter that was a full computer on a card. It came with a special cable that passed the 15-pin VGA output through the card and allowed complete control of the server (and we purchased the "Lights Out" edition, which included battery power so that the server could be powered on remotely).
It allows one to do things you can't do through software remote management, like install the operating system, modify the BIOS settings, and see POST messages at boot. It sounds like AMT offers a lot of this functionality without the need for that (I think $700) board, which would be appealing to a lot of enterprises, almost all of whom consider folks screaming about possible vulnerabilities in AMT to be tin-foil-hat-wearing security geeks, or have otherwise convinced themselves that "it'll never happen to me (or Intel, etc.)". I, personally, tried in vain to point out the security threats these third-party boards posed to the organization: the software could not be audited, the boards were often used for years past the hardware's support date (when security patching via firmware updates was no longer provided), and they allowed complete and total access to the system in ways that even the worst OS vulnerability wouldn't (that was the point, after all). But I was basically told that the benefits outweighed the risks[0].
The argument for using a management board, however, was a little easier to make than the argument for using AMT. Our security standards required that any device with a management interface be segregated to a high-security management network that, while not air-gapped, was protected via additional VPN and two-factor authentication and no access to the internet from within. It's unhelpful if the laptop of the administrator with the appropriate token is infected with something, but at least this allowed for one extra set of firewalls between those interfaces and the internet. Now...whether or not these boards were actually on said management network is anyone's guess and in the case of AMT-style software, there'd be no way to do something similar. These PCI-e boards were complete computers with their own isolated[0] network adapters.
[0] I'm not positive about this one and in all likelihood a sufficiently bad vulnerability might make that separation irrelevant, but AFAIK, the network adapters on these boards were not accessible to the OS and were used by the management interface, only. If that wasn't the case, that would have made a great argument for their elimination since the management network would then be accessible via corporate (which is what the server was plugged into) without the VPN connection.
If AMT provides remote management of devices that are turned off, what program provides authentication of remote management requests when the machine is powered off? Has that interface been audited for authentication vulnerabilities?
If there is an OS-independent, network-accessible AMT management service, when would an admin need to switch from that service to the Windows-hosted web interface? Why can't all AMT operations be performed without an OS?
There is no Windows-hosted web interface; the web interface is provided by AMT. LMS is for accessing AMT from the local machine (AMT listens on the network hardware, so trying to connect to the web UI locally won't work - the OS will shortcut the network hardware, and the traffic never reaches AMT).
So now that there's more info out, this is only a threat if you have remote management provisioned? And to provision it, you first need code execution on the box?
Is the impact of this bug basically nothing for most users? And for provisioned users, is it just as bad as any bug in a remote management system? That is, does the fact that it's built into the CPU make no difference?
Sure but they'd be affected regardless, due to enabling a remote management system. All I'm asking is if there's any real damage because this is built in by Intel. If Intel didn't ship this, then OEMs would, just like e.g. Dell DRAC, right? And that'd have the same attack surface.
I'm not familiar with DRAC, but if it's something added by an OEM wouldn't it have to be in the UEFI/BIOS layer or higher?
AMT/ME and its ilk are a physical coprocessor built into the CPU, whether it's "enabled" or not, not something that can be added or removed after the fact.
For now it's just that: remote management authentication bypass. Whether the ability to power up your machine at night and install Windows Millennium Edition for the lulz qualifies as "real damage" is up to you, I guess :)
Anyway, you can't do anything Intel's management software doesn't normally support because this would require gaining arbitrary code execution on the ME and it's not what this exploit is about.
So does this cut down the attack to people on your network? Would a simple NAT protect me here?
Also, it's bizarre that they're disclosing this so soon, given that there are bound to be Lenovo (at least) customers who are not business customers, who don't read Hacker News, and who aren't exactly going to update their BIOS as an everyday thing.
This feature is only active if you set it up and configure it for your company/management system. Random Lenovo customers aren't at risk. The only people at risk are companies that set AMT up, and then they should be looking for security issues with all the vendors they use.
If you're not a business customer you won't have AMT provisioned, so this hack won't work remotely. Apparently a local attacker could provision AMT and then perform the attack, but that's substantially less bad.
The article says that a Local Management Service (LMS) must be installed for the bug to be demonstrated[0], and describes a Windows package that provides that. Is there a Linux equivalent?
[0] I say "demonstrated" instead of "exploited", since I don't understand the details sufficiently to rule out exploitation in the absence of LMS.
I haven't looked into the exploit, but if the attack uses AMT's http(s) interface, you could also simply access the AMT port(s) from a remote machine. The LMS service just allows one to speak to the ME from the local machine.
FTA: "the discovery of a possible zero-day in widely distributed firmware"
Is this CPU firmware? Or microcode? Or is it logic board firmware (UEFI base)? And if it's CPU firmware, is that replaceable or is it permanently baked into the CPU?
I've read literally dozens of articles about how the Intel ME is a potential attack vector and how it's problematic to have, particularly when it's unneeded on consumer devices (a number of them here on HN). There are whole discussions about it from people like the Libreboot folks and others who work on fully open systems. Every security professional I've worked with has been aware that there's a potential hardware-level backdoor whose firmware you can't wipe out without bricking your machine, and has opinions about it.
I typed "Intel management engine second computer" into Google and found articles calling it a privacy/security threat and a potential backdoor ranging back to the 2010/2011 timeframe on the first page of results. (That's not even a good phrase to search with; I just wanted to prove the point that you can find pieces with literally the first thing that comes to mind.)
It seems super useful to me. It lets you do OS installs without a keyboard/screen using VNC. Note that stuff like IPMI is standard in the server world.
I certainly see the benefit in the server world (e.g., managing a data center) and even in the corporate one (e.g., managing a lot of workstations), and didn't mean to imply that everyone thought it was a negative. (Though, even in that space, some people do, because of its closed-source nature and the fact that they don't have full control of the system.)
Rather, I meant that everyone who was serious about security was aware that it was there and included it in their threat modeling. It's not worse than other remote management technologies (and may be better, depending on your needs and trade-offs).
However, for certain systems there's never a need for the remote management capabilities, and so it represents a threat with no upside when included. (I would argue most consumer systems fall under this.)
There are of course, a range of opinions depending on ideological bent, and my main point was that there was a discussion about it happening.