It makes me so unhappy that this is what things have come to. They make hardware we can't control, there's no real alternative to buy, and now we have to rely on volunteers and wiki pages for instructions that might work, or might just brick your machine.
I wish there was more widespread outrage over ME and PSP and "trusted computing" so we could collectively tell them to stop selling this garbage. There's so much cynicism out there, though, that I think the public would hardly bat an eye if they knew that all hardware since 2008 or so has secret backdoors. We're just used to this kind of abuse and control.
I haven't bought a new computer since 2007 because I don't want backdoored hardware. If it really is For My Own Safety, as they advertise it to us, then let me control it!
It seems strange to me that the European Union is allowing this. The fact that a foreign company has a backdoor in every computer running in European countries is so concerning on so many levels. Defense ministers from European countries should wake up and outlaw this.
Of course, none of this is specific to the EU -- it applies to Russia, China and other countries too.
Edit: maybe Intel is already liable for a multi-billion-euro fine from the EU based on this? (Disclaimer: I have no clue how such fines work.)
Both Russia and China (afaik) are working on fully domestically sourced CPU production, with designs that are completely open for review by their defense ministries.
This comes at a cost in performance, energy efficiency, price, etc., but defense and homeland-security offices (or their equivalents) are prepared to tolerate it.
The same applies to OSes, though there are several mature open-source OSes to choose from.
The fines you are probably thinking of are mostly based on antitrust law. I don't see grounds for making this an antitrust case, and as far as I can tell, nobody else is arguing that either.
You are also talking about defense politics, i.e. the military sector. That sector, like the intelligence sector, is not covered at all by EU legislation or the EU executive.
Also, looking at the interests involved, I don't see why banning Intel from doing this would be advisable. Most relevant EU countries cooperate closely with the US in the military and intelligence sectors.
That said, defence ministers usually don't outlaw things. It would be the parliaments. And EU law might actually hinder them from doing so, given that it first and foremost focuses on enforcing open competition and free flow of goods.
It should be noted that ASML pretty much has a monopoly on the EUV lithography market... At least that's what they said on the CYMER (a subsidiary of ASML) tour I went on recently. I'm not exactly sure about the DUV market, so I won't comment on it.
I'm not sure that's a negotiation point for Intel, as they own 15 percent of ASML. Might be a rough ride for everybody if the EU had any intention of playing that card.
> It seems strange to me that the European Union is allowing this. The fact that a foreign company has a backdoor in every computer running in European countries is so concerning on so many levels. Defense ministers from European countries should wake up and outlaw this.
I can't find a source for this, but a few months ago I read that US (NSA, CIA) and German (BND) agencies get the same Intel CPUs WITHOUT the IME; moreover, the NSA made it a requirement that Intel completely remove the IME.
Disclaimer: I conducted personal research on politics and government between 2003 and 2014; I've been exposed to non-public facts and discussions of the kind WikiLeaks has since made familiar; however, I do _NOT_ have "insider knowledge" of any kind regarding these issues. I've merely learned to read between the lines, so to speak (read: Snowden speaks a fairly intelligible language to me; like many others privy to how government works, I'm rarely wrong about my assumptions in the long run; and believe me when I say it's easier than one thinks: as adults we tend to overcomplicate matters and obscure the underlying 'whys' from ourselves). At best, these are educated guesses from historical and modern facts, which represent my _opinions_ and only mine. Also, by and large, I do _not_ subscribe to conspiracy theories. Rather the opposite: I contend that governments run pretty much as they should, near the best of their ability, most of the time. The real question is indeed less about intent and more about capability (in a broad sense, including intellectual/philosophical 'enlightenment').
With this out of the way, here's the TL;DR: like most corporate executives, the fundamental goal of most officials in place isn't to carry out their mission/mandate but to further their own power; two motivations which ideally go together but in actual fact seldom do. Only the illusion of the former is actually necessary to achieve the latter.
I won't elaborate much because this view amounts to a borderline revolutionary thesis of sorts (calling for a new kind of regime, in place of the disingenuously called "representative democracy"). It's interesting but it's a book, not a comment.
Just recall history and ask questions about:
- The subtle and yet vast spectrum of 'corruption': where does it begin, how far does it go, even after every party involved has died?
- The illusion of safety or 'acceptable' conduct that officials (or execs) can convince themselves of, regarding their organization and themselves: how cognitively biased can human beings be, paradoxically increasingly so as the stakes go higher?
- When we could predict with reasonable confidence that things could go very wrong very fast, did we always manage to prevent just that from happening? Did we even try to the best of our then-capabilities? (Don't even go Godwin on that, not even wars, think market crash or viruses).
- How often in history have we, as whole societies, needed the lessons of examples and dire consequences before we got our act together? What is the limbic value of 'could' versus 'hasn't yet'?
Once you realize, as I did some time ago, that we are much less rational beings than anything else, every mind-blowing decision you've witnessed in your own life by people who should have known better becomes not just possible but likely and even understandable, if not outright predictable.
My personal contention is that we'll basically need to go through a 'black swan' of sorts to really, deeply learn about electronic safety; much like knowing about germs and viruses did not automatically produce decent health procedures even among health professionals, let alone the public. But now we do wash hands and sanitize rooms. Just consider how hard a lesson that was for humanity to learn.
We are in the infancy of the electronic age; I'm afraid it will get worse before it gets better. I just hope it takes the form of a temporally 'short' catastrophe rather than a whole new medieval age of sorts, "dark ages" as seen from a hypothetical future; but the more I observe human beings today, the less confident I feel about us taking the high road directly. Especially when lives are threatened not clinically but socially and psychologically: our survival instincts tend to be numbed when life/reproduction isn't directly threatened.
At this point you may go Orwellian, because it's almost less frightening than what's potentially facing some of us on this planet, and our societies appear to be both very resilient and very susceptible to flaws (the two probably go hand in hand). And when there's a flaw... be it legal, psychological, or otherwise... you know someone someday will exploit it. Circumstantial aspects like this ME merely serve as paths to what will eventually become history, but alas it's almost impossible to predict which particular aspect will be the trigger, as impossible as it is to shield ourselves from all angles. But whatever; it, too, shall pass.
This isn't to say we can't mitigate, because we absolutely can and should (I do try personally), but we can't really alter the shape of things to come, because that would amount to changing human nature before the fact. That has never happened in history; I don't think evolution works like that, period. Sorry for taking the discussion in a rather philosophical direction, but that is my best answer to the 'whys' you asked and seem concerned about; concerns that I share, though they no longer astonish me.
> but the more I observe human beings today, the less confident I feel about us taking the high road directly.
Oh, we are incredibly diverse and adaptable. It matters very little that almost all of us are wrong, until it becomes "absolutely all".
I just hope that black swan happens before we start directly plugging our brains into the network. If it does, I am confident we will be fine after a few decades.
Next time I change laptops (which will probably happen soon; mine is also 2007 hardware), it will be a Librem 13 from Purism. I want to send money to a company that cares about this issue, and Purism has demonstrated that it does multiple times already[0][1].
I know Purism is just neutralizing the IME, so it's the same as this, but they're the best option right now, and I hope that by supporting them I'm helping send a clear message to the industry that I don't want their undocumented black box on my hardware. Who knows? Maybe one of the major players will take the hint and try to remove it, or document it. Or maybe a new player will displace them.
The alternative presented is a laptop with 2008 hardware. Sure, I can do that. But that's not sending a message to the industry. That's just living in a bubble, while still giving Lenovo some money. Don't get me wrong, I love minifree and what they're doing, I just don't believe it to be a viable alternative. There are lots of people who can't or won't use outdated hardware. Purism is a sane solution to this problem.
Also, Purism is taking steps to get coreboot to run on their laptops[1][2], so the graph on this website isn't even right anymore.
And I just found out that, interestingly, the T400 and X200 both come with the IME. The only difference is that AMT can be fully disabled on older Intel chipsets[0].
First, Lenovo doesn't receive any money when you buy a used laptop. Second, using a ten-year-old laptop sends a much stronger message than buying a new, slightly alternative laptop. Third, security is like history: if you want to get it right, you have to wait for the rigors of time to reveal the truth. We think that Purism is at least as secure as Libreboot, but we know a lot more about ten-year-old hardware than about something fresh off the fab in China.
Are they actually selling laptops without IME? It seems to me like they have published how it can be removed, not that they are selling products with IME disabled.
You can't buy a mass-market computer that doesn't have either an obvious backdoor or proprietary firmware right now, except for the one Chromebook that Libreboot supports.
And even then, your RAM has proprietary code running on it (I'm not aware of any DDR4 that doesn't have embedded SoCs for initialization, none of that is done in CPU firmware anymore), your hard drives can have up to entire embedded SoCs with proprietary code running on them, and there isn't an unencumbered 802.11AN wireless chip in existence. The only bright side to that is none of that hardware has system-wide access the way these backdoor coprocessors do.
I doubt we can get Intel to change directly. Most people don't buy from them directly anyway. My belief is that we can get OEMs to consider this a serious issue, and have them turn into a market pressure on CPU manufacturers. After all, OEMs' lives would be simpler if Intel simply didn't include the IME that everyone wants removed.
> After all, OEMs' lives would be simpler if Intel simply didn't include the IME that everyone wants removed.
Most people don't know that the IME exists, and they won't find out on their own. You won't see an uproar from consumers, because few care (us among them). You are better off informing the general public, in my opinion.
EDIT: I'd actually put money into RISC-V products instead.
IMHO (not speaking for Skyport, of which I am an employee, but using us as an example), there is a huge mismatch between how technical people (on Hacker News and elsewhere) perceive and _hypothetically_ value security, and how enterprises actually value it. That mismatch is the source of your unhappiness/surprise.
We sell an x86 server where the management engine is unreachable; there is no path from the external network to the ME. I can tell you point blank that most enterprise customers are unaware of even hypothetical issues there, or for that matter with IPMI implementations. We stopped leading with this in customer conversations because they just couldn't care less about it until they hand you off to their deeper technical team (if they even have one, which is not as often as you'd think if you haven't realized that security in enterprise datacenters is almost always an afterthought). Otherwise it just doesn't come up very often.
Until high-volume customers start to care, no amount of outrage or paranoia is going to make any difference. You might as well be outraged that, for example, some BIOS implementations are so poor that the measured boot (hashes into the TPM) doesn't cover all of the configuration items, or that vendors fail to correctly lock down their firmware flash or update path. This is pretty common, and you don't hear about it because most people can't make measured boot work and generally don't even try.
Security, in general, appears to be mostly viewed as optional. Obviously I think getting this right is important, in our case for other reasons, but in and of itself people generally view security as an expensive vitamin.
Hell, they may even consider it a boon. After all, the official aim of this system is to give admins a way to access a system outside of a potentially compromised/broken OS.
And yes, security will be seen as optional. Because security is not the product, and thus not the revenue source, of most of these companies.
Could it be that high-volume customers stay high volume because they keep buying new machines before malware developers have time to find a security fault and the exploits trickle down into exploit tools?
I think that's true but orthogonal. IPMI stacks had huge issues for a decade and no one ever upgrades that or the other lights-out management solutions (idrac, etc.). Buying cycles and planning are such that no one ever does responsive buying in the short term, but customers demonstrate a lack of valuing security-by-design regardless of that.
I'm not pushing our solution, just sharing what I have observed. Management engine in practice will _always_ be unpatched. It is _inherently_ unsafe. Anyone who thinks they will keep their patches up to date, unless they work at Amazon or some other entity where they can institutionalize it, is crazy. I had a CIO directly tell me their mean-time-to-patch CVEs in their exposed servers was two years. The statistics on Linux bug lifetimes show that 5Y is not uncommon.
Security is our underpinning but not how we sell (for the reasons I am explaining); there is a very good reason for that which is that people don't actually buy real security, they buy tools to pass audits. You can see shadows of this in the way that companies continue to deploy vulnerable agents even when they are revealed to degrade host security, or in the resistance to internal secure communications (and the reaction in general to TLS1.3).
Security practices inside large enterprises are bizarre and the result of perverse incentives. Let me give an example. Internal Websense proxies will tend to block encrypted content within TLS connections using entropy detection; a malicious actor will simply use an entropy-reduction strategy (bananaphone or whatever trivial solution), while a legitimate user sending an encrypted document will have that document blocked. People justify these tools as meeting the need to allow inspection of traffic; however, your Websense administrator is not actually qualified to see all of the traffic that flows over the proxy (for example, does being able to read the CEO's mail make the admin an insider?), and protocols that carry content that is both safe-to-inspect and unsafe-to-inspect (the CFO's password, for example) do not differentiate the two, making inspection inherently dangerous. A decade and a half ago, I knew someone who peeked at the loading dock, invoicing and shipping so that they could predict how the quarter was going and time stock sales accordingly.
That's a comical example but it's one we have directly encountered in the secure exchange of cryptographic material; but similar issues crop up all over, from the use of transparent proxies and corporate CAs (to "allow inspection") to all manner of craziness.
The facts lead you to a world view that is even more cynical than the idea of security as theater: instead, security as _ritual_. "I need this server to be secure, I'll install some agent"; "We need this server to be inspectable; I'll disable all forward-secrecy protocols and decrypt and inspect the content." And so on.
So... I don't think it's high volume or anything else like that; those are outcomes of process rather than choice. It's that many companies long ago adopted a world view in which any kind of real security is kept as far from consideration as possible.
I hear you! It's frustrating to me, too. Having said that, I think it's but one manifestation of a larger phenomenon. People, in general, believe in a just world. They implicitly trust the mechanisms of society and government. They either can't or don't want to take responsibility for every detail of their lives. They trust the police. They trust big companies like Apple, Facebook, and Google. They trust the government!
I honestly believe that a lot of people would be capable of taking responsibility for all of this themselves. I think they just don't want to. They have neither the time nor the energy to put constant pressure on all of these organizations. They have so much other stuff to worry about in their lives.
I think most people trust their government, large companies etc. to some degree, but much more so they trust others who are in "their tribe".
e.g. if I read a story on HN about some police brutality, posted by some familiar, karma-rich HN user, I'd quite possibly give it more credence than a report from said police department denying it. Just because I'm in the HN tribe.
This effect is why posting deliberately inflammatory fake news on platforms like Facebook is so damn effective. The typical pattern is (a) establishing the author's tribe (e.g. gun-toting 50-something male living in a southern state) and then (b) launching the attack.
I really like to think that, as a rational person, my bullshit detectors would fire... but it's an interesting thought experiment as to what I'd give credence to just because it was uttered by a respected HN poster.
A 15" MacBook Pro scores around 15000 on Geekbench, and Apple's A11 scores almost 10000. We are a generation or two away from ARM chips matching top-of-the-line laptop processors, which could free us from Intel and the ME.
> Apple's A11 ... We are a generation or two away from ARM having the same power as top of the line laptop processors
Apple's SoC's, including the CPU, are designed by Apple, not ARM (the company). They run the ARM instruction set but the CPU core is not ARM's design. The last Apple chip to have ARM-designed CPUs was the Apple A5(X) with the ARM Cortex-A9 from 2011, every Apple chip after that has had a CPU designed by Apple. Additionally, the SoC most likely contains "hidden" core(s) not unlike Intel ME (like most SoC's do) that boot up the system and perform power management and other peripheral duties.
ARM is not the answer because they don't do their own chips. They license the technology and their clients are free to add whatever management engine hardware to their SoC.
An ARM-based SoC could be built without hidden hardware or firmware blobs but I am not aware of a SoC that could run on completely free and open software.
The chipset firmware will still be proprietary, and ARM chips are very often configured with a "secret" core that runs only firmware-based code invisible to the OS. Add in the fact that almost all ARM SoCs integrate modems that are themselves Turing-complete systems with total system access and priority control over the host CPU, and you have less power there than on x86, where you can at least kill the IME and, rarely, run coreboot.
I don't care that such things exist, as long as there's a way to completely disable them. It should also be possible to verify that they are disabled, and be sure that they can't be re-enabled by rogue software.
The approach of Intel and AMD is completely hostile to anybody who cares about such things.
The ASUS Chromebook C201 and Pine64 are interesting, but I suppose a bit low powered. I'd be keen on replacing my ageing AMD-based desktop with an ARM system if the performance can be improved a bit.
Is that (our inability to influence this) a result of a monopoly? If so, should it be addressed using anti-monopoly institutions (FTC)?
Update: I know other vendors (AMD) do this too, but it still looks monopolistic. First, Intel has a huge CPU market share. Also, the vendors together arguably constitute an effective monopoly.
Nah because AMD ships something similar with its processors as well. There was an effort to get AMD to open source its management coprocessor which could have given AMD a competitive advantage but they resisted, which has really fueled the conspiracy theories that some outside organization is responsible for this industry-wide stubbornness.
It's indicative of an extremely high barrier to entry into the CPU market, and certainly a large part of that has to do with both industry collusion to keep competition down and the behavior of corrupt government actors on their payroll.
It also shows that the market of people willing to put money where their mouth is on this are fairly few and far between. The Raptor workstation crowdfunding campaign promised to get rid of all the proprietary firmware in the system and failed by a large margin.
Maybe it is a business opportunity for a system builder -- disable IME, eat bricking risk (minimal if you set up a rigorous process), then sell the end system to the consumer.
> I wish there was more widespread outrage over ME and PSP and "trusted computing" so we could collectively tell them to stop selling this garbage.
It was almost 20 years ago that the public convinced Intel to remove a feature from their processors that would seem almost harmless today: https://news.ycombinator.com/item?id=10106870
However, the serial number was not heavily marketed as a security feature, unlike ME and all the other new and oppressive functionality today. The security argument is impressively powerful --- and more scarily, it seems the companies and government have almost perfected how to harness it.
The problem is not that everything is shit, but that everyone just points at it, and expects other people to fix it.
There are multiple efforts to design and create a 'better' future for computing, in operating systems but also in hardware: people designing new CPUs which are open and well documented, etc. If you want things to change, invest time and effort into such projects, or start one of your own. Be the change!
Yes, the state of things is currently worse (i.e. no known way to disable it) as far as the AMD PSP is concerned, as not much research has been done on it due to AMD's smaller market share.
"If it really is For My Own Safety, as they advertise it to us, then let me control it!"
I do not believe the argument that these changes are undertaken out of concern to protect users, whether it is made with respect to "features" of Intel chips (including Intel SGX), corporate-controlled OS, advertising-supported browsers, etc.
It is difficult to prove such organizations are deliberately trying to make things more difficult for users to control, unless of course employees inside these organizations speak out. The organizations can always provide alternate reasons for doing what they are doing.
However, I recall reading of a comment made by Gates in the 1990s, at a time when he wanted Windows installed on every PC sold, that he did not like the idea of users potentially changing BIOS settings. I can only guess as to why, but it is curious to me that we are left with certain strange historical artifacts, such as Windows always needing to be installed in the first partition. Other OSes do not have this requirement.
While that sort of deliberate subterfuge may be difficult to prove, I think it would be easier to argue these organizations believe (or pretend to believe) that users are "incapable" in ways that have no basis in fact.
You, dear user, are incapable of controlling your computer.
We, benevolent organization, can help. Just sit back and we will take care of everything.
It's reassuring that you can now actually read the IME's assembly, thanks to https://github.com/ptresearch/unME11. For instance, using the Gigabrix-BSi5ha-6200 IME firmware update archive:

3. After unpacking, the uncompressed modules are located in image/00004000.FTPR/*

4. You can e.g. load image/00004000.FTPR/kernel.mod in IDA as 80486 in 32-bit real mode, or use

    objdump -m i386 -b binary -D kernel.mod --adjust-vma=0x80000

with the entry point at 0x80000, or

    objdump -m i386 -b binary -D bup.mod --adjust-vma=0x2d000

with the entry point at 0x2D04C.

You can see that there is an extra function call added that tests for response.length.

Given the easy availability of the disassembly, you can expect progress towards demystifying the IME.

I'm not a professional in the security field, but I sense there are lots of possibilities in just running

    strings image/00275000.NFTP/amt.mod

Gigabyte might be special, but they have left their assert prints in the code, and you can get a sense of what the thing is doing...
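If you want to see what those objdump flags do without unpacking a real firmware image first, here is a minimal sketch: it disassembles a tiny hand-made stand-in binary (not an actual ME module; the bytes and the 0x80000 base address are purely illustrative) using the same raw-binary invocation.

```shell
# Demo of the raw-binary objdump invocation used on unpacked ME modules.
# A tiny hand-assembled i386 function stands in for kernel.mod, since the
# real module first has to be unpacked from a firmware archive with unME11.
printf '\x55\x89\xe5\x5d\xc3' > fake.mod   # push %ebp; mov %esp,%ebp; pop %ebp; ret

# -b binary:    treat the file as a raw blob (no ELF headers)
# -m i386:      disassemble as 32-bit x86
# --adjust-vma: print addresses as if the blob were loaded at that base
objdump -m i386 -b binary -D fake.mod --adjust-vma=0x80000
```

The same pattern then applies directly to kernel.mod or bup.mod once unME11 has unpacked them; only the file name and the base address change.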
I bet you that the next generation of Intel processors will have patched this workaround, and may go as far as removing the ability to kill the IME unless you use some kind of rotating encryption dongle. Unfortunately for consumers, there's no way to escape this, as even AMD has its own equivalent of the IME.
I'm pretty sure at this point that what Intel has done is create a way to bypass everyone's encryption schemes by hijacking the CPU, which might be prosecutable under the DMCA's anti-circumvention provisions...
If I were in charge of collecting encrypted data at the NSA, I would make sure that whatever keys were used by the CPU's AES instruction set would be copied and saved using the CPU firmware. Then that data could be sent out with the ME, or simply stored in case that computer was ever an object of interest. In that way, anything encrypted (using hardware) could be decrypted. Seems like it would be malpractice by the NSA not to collect AES keys at the hardware level.
Not sure that's correct. If you disable ME then DRM that depends on it just won't work. So it would be like saying that a power button is a DRM circumvention tool.
I'm saying that the ME, by nature of the access it has to the CPU, memory and IO, is likely capable of circumventing almost all software encryption run on the CPU through surreptitious gathering of information as the system runs encryption/decryption routines. Whether Intel intended to or not, they've built the mother of all circumvention tools, and circumvention is specifically not allowed under the DMCA.
Intel ME is used as a component of DRM (see PAVP), which is what I was referring to. But this is quite an odd argument; by that definition your CPU would be a circumvention measure, and while you can use a CPU to circumvent DRM, just having a CPU isn't enough.

Also, a normal user of a computer cannot do whatever they like with Intel ME, so it's the least useful circumvention measure I've ever heard of. GDB is infinitely better.
Not to mention that most DRM depends on the encryption being done outside of the computer's CPU. HDMI's DRM relies on the local computer not being aware of the contents of the stream and the monitor itself does the decryption with burned-in keys.
No surprise; I mean, verifying the integrity of the BIOS is PlayStation 1 levels of crypto chainloading madness. A great scheme that will always fail, because whatever software you are verifying will be imperfect. The chance of getting secure software from companies whose "day job" is putting flashing LEDs and DC-DC converter heatsinks on reference designs is zero.
> I bet you that the next generation of Intel processors will have patched this workaround, and may go as far as removing the ability to kill the IME unless you use some kind of rotating encryption dongle. Unfortunately for consumers, there's no way to escape this, as even AMD has its own equivalent of the IME.
What would be the justification for Intel going through all that trouble to do that (besides a conspiratorial "the NSA needs it to spy on everyone")?
The impression I've gotten, to this point, is that Intel just doesn't care enough about people in the general public who are bothered by the IME to publicly support ways to disable it. Governments had enough buying-power to get Intel to implement an unsupported workaround, but I'm not convinced Intel has a motivation to make accessing that workaround hard.
Some people seem to be in denial about NSA. The body of evidence about the ties to private industry via programs like Prism and others is overwhelming for anyone with the barest curiosity.
Why would Intel go to all the trouble of creating the ME, and AMD the PSP, at around the same time? This is hardwired in silicon, is not cheap to do, and definitely needs a huge business justification to initiate, implement and maintain. What's the business case?
Are they charging a premium for these features and disabling it for everyone else as they would if it was actually a 'premium' feature? Are they giving an option to those who don't want it to disable it without ado?
No, they are pushing it on everyone. That in itself compromises the ME. How do we know there is no NSL in effect forcing both Intel and AMD to implement this? Nobody does. But it is an intrusive piece of tech that has complete authority over your PC.
It's absurd to suggest that the NSA is the guiding hand behind the IME. For one thing, the enterprise use cases for the thing are quite obvious, so it's not like the NSA would have had to compel Intel to design and implement it in the first place. And if there weren't a demand for it, there's no way the US government could COMPEL them to make the business decision to undertake a massive change to their core product. Nor could they realistically bribe them to do it; not because the government lacks access to funds for a sufficiently large hypothetical bribe, but because Intel is a huge publicly traded multinational. How many employees do you think worked on the IME? How much money do you think it cost? Maybe the NSA has access to some unaccountable billions to throw around; would Intel be able to explain that windfall in its books? Perhaps. But how many people would have to keep quiet in perpetuity about all of this? It just keeps stretching believability the more you think about it, like all conspiracy theories that involve more than a handful of actors.
Do you have any specific evidence that the NSA is directly involved in pushing implementation of IME for the purposes of espionage? That's an extraordinary claim; the mere existence of the NSA isn't sufficient evidence that it's true.
Yes, the NSA exists. Yes, it's their job to spy on people. And yes, it's perfectly valid to include mass surveillance in your threat model. But these constant, uncorroborated claims that the NSA boogeyman is hiding behind every rock and tree are starting to become quite tiresome.
> they have backdoor-ed every ISP, nation, router, HDD, what-have-you in existence
This is exactly the kind of hyperbolic nonsense I'm talking about. You can't claim that every HDD in existence has an NSA-installed backdoor without providing any evidence.
If the NSA's capabilities were nearly as extensive as you're claiming, the US wouldn't need to bother maintaining a military. They could just remotely command all of North Korea's computers to shut themselves off, then sit back and wait around for their surrender.
I'm not saying mass surveillance isn't a problem, or even that the NSA doesn't have a backdoor installed in IME (I certainly don't have evidence to the contrary); just that people need to stop exaggerating the threat, or making claims about the NSA having compromised any specific system without providing any evidence to back those claims up.
Remember PRISM? How about Stuxnet, and the couple of other viruses that are almost surely the NSA's?
Sure, those are not backdoors installed by the OEM, rather they are malware that the NSA can use to infect (lots of) systems. But let's not kid ourselves, the NSA does have a ridiculous amount of power, and we have lots of evidence of it.
At this point I'd be rather surprised if the NSA haven't hacked my fridge yet somehow.
And I agree, those are some pretty impressive feats. Whenever your threat model includes state actors, it's probably not a bad idea to be paranoid. Let's not carry that paranoia to ridiculous extremes though.
_Could_ the NSA have the ability to utilize IME somehow as a means to infect computers? Certainly. Do they _actually_ have that capability? We have no idea. Same goes for your motherboard's firmware, your hard drive's hardware, and any number of other possible vectors. They _could_ be compromised somehow, yes, but let's not claim that any specific motherboard firmware, HDD model, or CPU processor brand definitely _is_ compromised without evidence.
There is an equivalent article from Ars Technica, if "theregister" isn't to your liking. As for the evidence on routers and Internet infrastructure, you should pay more attention to leaks of state-sponsored tools, like the Equation Group leaks and the CIA one.
I don't see how that's related to my comment. I'm not discussing whether IME is a good idea or not, I'm just objecting to the unsubstantiated claim that the NSA is involved in pushing it.
Actually, it is quite well substantiated that the NSA has a mandate to usurp, and has in fact usurped, the technology in its purview for its own purposes; American information technology corporations, manufacturers, and so on are bent to its will. It would be treated as treasonous for Intel to allow the IME to be completely disabled; cf. the Yahoo case against the NSA, etc.
The secrecy around this issue is the only thing to be doubted. It's no secret to non-Americans and concerned citizens; if you choose to remain ignorant, I set you this challenge: go and find out for yourself just how bad it really is with the NSA. (It is atrocious.)
The idea of the NSA working with a major components supplier isn't an extraordinary claim. Intel will have a major presence in all sorts of data centers of interest to the NSA.
If I were an American and believed in what the NSA was doing, I'd want to know what they were doing with their budget if they didn't at least try to get a hardware back door into Intel CPUs. Why waste budget on finding exploits in patchable software when they can go straight to the silicon?
> What would be the justification for Intel going through all that trouble to do that (besides a conspiratorial "the NSA needs it to spy on everyone")?
Well, some companies and users use the IME to enable theft-prevention technology. If you can circumvent the IME, you can easily steal a laptop and disable this theft-prevention technology.
If that’s worth building an ever more closed walled garden is another question.
Wouldn't it be possible to isolate the processor from the network interface? E.g. by using a network card that requires special access codes? I imagine you could build such network interface using an FPGA module.
It already can't communicate with a non-Intel network card IIRC. It would need some kind of driver, and it's not like it can use the OS' network card driver, unless the IME is automagically able to use Windows, OSX, and Linux drivers.
Are you sure? I suppose any network card on the PCI bus can be accessed by the CPU in principle. And leave it to the people of the NSA to install clever software in the IME module that doesn't disrupt the OS's communication with the network card.
So I guess it comes down to what you think the point of the ME is. If it's for what Intel claims, then having the ME access the NIC through the OS is useless; the whole point is to bypass the OS, so you can manipulate the computer regardless of its state (as long as it's plugged in, anyway).
If you were worried about NSA spyware that tunnels through your OS... well, then you may as well just be worried about NSA spyware in your NIC.
Because in the end, even if the ME were directly connected to the NIC, it still needs to know how to talk to the NIC (thus why NICs have drivers...). When you use an Intel NIC, the ME knows how to talk to it, and everything is hunky-dory. In practice, Intel could probably build a database of common non-Intel NICs and load all of their drivers into the ME. But really, if we assume that the ME exists for business reasons, then it's obvious why Intel would keep it all Intel. They are nominally selling something that is advantageous to large organizations, and would like said large organizations to buy as much Intel gear as possible. Said large organizations believe that the ME provides sufficient value to go with Intel NICs instead of other NICs.
If it's for what Intel claims, then they would trivially offer an option for the customer to disable it. Hell, they could even make it a selling point for higher-margin chips like Xeon.
Does ME have business purposes? Sure. But their staunch refusal to allow disabling by nongovernmental customers does not pass the sniff test. There is clearly some other, non-business reason for it.
> then you may as well just be worried about NSA spyware in your NIC
If you own my NIC but not my CPU, I can encrypt my traffic and blind you.
If you own my CPU (especially my AES-NI ISA), my options are more limited. Much higher-value target.
>If you own my CPU (especially my AES-NI ISA), my options are more limited. Much higher-value target.
>especially my AES-NI ISA
why aes cpus specifically? you're free to not use aes instructions. besides, if you own the cpu, you load whatever shellcode you want, no need to backdoor the aes instruction.
Good luck getting a system compiled that does not use them at all. It might be possible with Gentoo and the right configuration, since it compiles everything, but with a dominantly binary distro like Ubuntu or Debian you're SOL.
I was just saying my options are more limited, and the options for someone unwilling to recompile their own software are very severely limited.
But even if I'm running, say, software-only Salsa20 with all compiler optimizations off, if my CPU is still owned, my keys are still at risk of silent compromise.
The only real defense is to ascertain the processing limitations of the IME, and then choose crypto primitives and implementations whose processing and memory hardness exceed what the IME's lower transistor count and/or clock speed can keep up with.
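To make that idea concrete: a memory-hard KDF like scrypt lets you dial the memory cost (roughly 128 * r * N bytes) beyond what a small embedded core could plausibly hold. A minimal sketch in Python; the parameters and passphrase here are illustrative, not a vetted recommendation:

```python
import hashlib

# Derive a key with scrypt. With n=2**14 and r=8 the memory cost is
# 128 * 8 * 16384 bytes, i.e. about 16 MiB -- tunable upward if your
# threat model demands it. maxmem is raised so OpenSSL allows it.
def derive_key(passphrase: bytes, salt: bytes) -> bytes:
    return hashlib.scrypt(passphrase, salt=salt,
                          n=2**14, r=8, p=1,
                          maxmem=64 * 1024 * 1024,
                          dklen=32)

key = derive_key(b"correct horse battery staple", b"example-salt")
assert len(key) == 32
```

Whether any of this actually outruns a given coprocessor is an empirical question, of course; the sketch only shows where the tuning knobs are.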
How about a foreign (to the US) power? Imagine the kind of damage they could do if they were sitting on a 0-day. North Korea, for example, could steal financial data to fund their nuclear program, or manipulate critical infrastructure. As a system designer, I'd want to mitigate that kind of threat, and I don't know if I have the knobs and switches available to me to do that with this design.
Hmmm, this stumped me for a while. Your argument makes complete sense, but then why do I need drivers for my OS to access the NIC, when it should be able to use the same API exposed to BIOS? So I looked it up: https://en.wikipedia.org/wiki/Preboot_Execution_Environment
It's fairly light on details, but my interpretation is that for a NIC to support PXE boot it has to have firmware that exposes a standardized API so that the boot loader can function as a DHCP client and a TFTP client. As far as I can tell, the NIC doesn't have to provide some standardized method for general access, just those two things. The NIC could even have a completely different, completely separate API that it expects the OS to use, and just have some firmware that acts as a shim to expose the PXE client API to the boot loader.
Assuming this is true, Intel ME could access the NIC using the PXE boot API, but this only gives it the ability to act as a DHCP client and TFTP client, not arbitrary networking operations.
I could be completely wrong about that, though. This is, at best, an educated guess.
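For what it's worth, the TFTP half of that API is genuinely tiny. A rough sketch of building a TFTP read request (RRQ) per RFC 1350 -- the filename is just an example -- shows how little a PXE client actually has to speak:

```python
import struct

# RFC 1350 RRQ packet: 2-byte opcode (1 = read request), then a
# NUL-terminated filename and a NUL-terminated transfer mode.
def tftp_rrq(filename: str, mode: str = "octet") -> bytes:
    return (struct.pack("!H", 1)
            + filename.encode() + b"\x00"
            + mode.encode() + b"\x00")

pkt = tftp_rrq("pxelinux.0")
# pkt == b"\x00\x01pxelinux.0\x00octet\x00"
```

Everything else in a TFTP transfer is just DATA/ACK packets with a block number, which fits the theory that the PXE-exposed surface is narrow.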
The bios and uefi obviously can also access the hard disk, otherwise you can’t boot. The OS still has its own driver though. That’s because (at least the BIOS version) is optimized for a small code size and a limited environment, while in the OS you want speed and features.
PXE booting loads a lightweight OS; the IME can access the network later, once the full OS is loaded and has generic or specific drivers for low-level network access. The IME doesn't use PXE though, AFAIK.
Sort of? They do have a simpler core off to the side for SoC management on their newer 64bit design.
You're going to be hard pressed to find a moderately complex SoC without something like that. At a bare minimum, reset and power state sequencing is complex enough to offload to a microcontroller-style core these days. The i.MX6 is the biggest, most recent SoC I can think of without that.
What caught my eye was "removes (...) Java VM" - I had imagined the ME to be some kind of very basic maintenance task runner, not a full-blown dynamic app environment.
Same here. I was like "what, there's a JVM in there?". But then again, apparently there's a JVM in most (all?) SIM cards (https://en.wikipedia.org/wiki/Java_Card)
Before Java Card, all smartcards were programmed in assembly, each brand incompatible with the others. And even though smartcards were introduced as security devices, their "security" was actually a joke.
Then there was an effort to create a secure, interoperable platform for smartcards: GlobalPlatform, which uses Java Card to implement its goals. All post-2000 smartcards are GP-compatible.
Minix? Are you kidding me? That is amazingly funny and depressing at the same time. So Andy Tanenbaum got the last laugh on Linus after all. Thanks to this ME garbage, Minix is running simultaneously on the same hardware as the vast majority of modern Linux machines. Cue Alanis Morissette.
It's basically a tiny computer, with its own OS. Seeing a JVM used on it, though, yeah, that's surprising. I guess it was cheaper to beef up the IME to run Java than it was to hire someone to re-code whatever functionality the Java app(s) provide.
(Edit, in response to all of the comments in reply to this) Wow, I had no idea so many tiny devices ran java..
JVMs can be simple and tiny. Pre-smartphone handsets ran Java apps on 20 MHz 32-bit ARM application processors. Some of the JVMs are so simple there is no thread preemption, just round-robin. A 64 KB JAR is big for that kind of device.
I think it's very likely that the order to backdoor all hardware came from the government and Intel decided to market it as some kind of advanced management benefit for the consumer. Evidence in favour is reverse-engineered functions in Intel ME that have NSA's name in them:
That's a blatant misinterpretation of that article. Here's the main quote:
"According to a highly technical blog post, Positive Technologies experts revealed they discovered a hidden bit inside the firmware code, which when flipped (set to "1") it will disable ME after ME has done its job and booted up the main processor.
The bit is labelled "reserve_hap" and a nearby comment describes it as "High Assurance Platform (HAP) enable."
High Assurance Platform (HAP) is an NSA program that describes a series of rules for running secure computing platforms.
Researchers believe Intel has added the ME disabling bit at the behest of the NSA, who needed a method of disabling ME as a security measure for computers running in highly sensitive environments."
TL;DR the evidence is that the NSA made Intel give them a way to DISABLE the management engine on their machines, not a backdoor.
So, your comment is as good a place as any to put this. There's a ton of allegations about ME being a backdoor for the NSA or some other government entity, but very little in the way of good evidence, at least that I've seen. Many proponents seem to provide circumstantial evidence or simple accusations as opposed to what I would consider substantial evidence.
Here's some examples of solid evidence
- Leaked documents either from Intel or the NSA documenting the NSA pressing Intel to provide backdoors, and Intel accepting
- One of the APT groups believed to be associated with the US government is found using an ME vulnerability in the wild, before the vulnerability became publicly known
- Refusal or extreme reluctance on the part of Intel to fix a discovered vulnerability
- A vulnerability which is clearly intentional, like having ME automatically execute any shell script sent over the network signed with a particular certificate or something.
Examples of circumstantial, but still quite compelling evidence:
- A vulnerability which appears intentional, as in it doesn't require any buffer overflow or shell code injection or abuse of certain registers, but rather seems to be a part of normal operations.
- Intel pushes a change to fix an ME vulnerability, but that change simultaneously introduces another vulnerability.
- Evidence that Intel, while giving the US government the ability to disable ME, refuses to give such an ability to other large customers (Google, Amazon, other governments, etc.), for any amount of money.
Seriously, if someone comes forward with evidence like the above, I'm completely open to accepting the possibility Intel ME is an active backdoor for the US government.
Re: Evidence that Intel, while giving the US government the ability to disable ME, refuses to give such an ability to other large customers (Google, Amazon, other governments, etc.), for any amount of money.
I'm not intimately familiar with the situation but I heard some chatter a few years ago about Google (which likes to use Coreboot on its Chromebooks) asking Intel for documentation and source code for various firmware components including the IME and being turned down.
Researchers recently found an undocumented bit can be set in recent IME versions with a name ("High Assurance Program (HAP)") said to be associated with the US government which disables most functions of the IME.[1]
I'm completely aware of the hap bit, I explicitly linked it in the GP, and so did the OP.
If you have a source to back up the claim that Google offered money for access/documentation for disabling ME and were rejected, that's something, though.
Ah, I was reading quickly and didn't realize that "while giving the US government the ability to disable ME" was a given, not one of the things that had to be shown.
I did some quick searching (I mostly skipped the coreboot mailing list which is probably the most interesting place to look but I didn't want to spend too much time) and every reference I've found alludes to someone from Google or Coreboot _asking_ for Intel's assistance in the form of documentation or source code, which Intel has plenty of non-NSA-related reasons to refuse to provide for free (licensed code, preserving competitive advantage, DRM shit, etc.). I couldn't find any references to anyone offering to _pay_ Intel for ME firmware source code or similar but then again such an offer would hardly be public. So this is weak circumstantial evidence at best.
I find AMD's refusal to cooperate with the FOSS community on the matter of its management coprocessor to be slightly suspicious as well as it's an area where it could possibly gain a significant competitive advantage vs Intel but a lot of Intel's reasons for keeping the IME locked down apply to AMD's insistence on black boxing its PSP as well.
I know, I didn't say that they found an NSA backdoor smoking gun, but it still seems suspicious to me that the NSA got to dictate anything about what should happen in ME. I remain a conspiracy theorist. I really don't see a market demand for all hardware since 2007 to have this built-in by both duopolists.
So the fact that the NSA doesn't like ME on their machines either is evidence that they put ME on the machines in the first place? Those two things are unrelated at best.
The NSA gets to dictate to Intel what it wants in the chips because the US government is an absolutely huge customer. Hell, Intel has an entire subsidiary devoted to selling chips to the federal government (https://washingtontechnology.com/articles/2011/08/30/intel-f...). And if the NSA says computers must meet these standards to be able to interact with classified data, well then Intel better make it possible for their chips to meet those standards or they stand to lose a lot of money. That's why they get a special kill switch for ME.
Which isn't to say that the NSA could use that influence to inject a backdoor. They can't refuse to certify a computer for handling classified information because the manufacturer didn't do something to their other computers.
If it's good enough for the NSA, it's good enough for me. I want control over that bit on my own hardware. Why should I not have that? So far, I'm not hearing any good reasons.
First off, I completely agree. This really should be something you can disable, because often times it isn't used and it increases attack surface.
You can control that bit, as shown here: http://blog.ptsecurity.com/2017/08/disabling-intel-me.html . Of course, that method isn't documented or officially supported by Intel, and you could brick your machine if you mess up, so that's not exactly what you meant. I think Intel definitely should document it and provide official support. As to why they don't currently, I have some guesses. The first is general lack of interest: it costs money for them to support this new feature for everyone and make sure it plays nicely with everyone's setup, and maybe there isn't enough customer demand for the feature. The second is money: I wouldn't be surprised if they sell their chips to the government with the bit already set in firmware at a markup. Making that easily available to everyone means less money.
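For the curious, the edit such tools perform is conceptually just a one-bit change to a dumped flash image. A toy sketch in Python; the offset and bit position below are invented for illustration, since the real HAP bit lives in the flash descriptor and has to be located by parsing it (which is what tools like me_cleaner actually do, along with the module removal):

```python
# HYPOTHETICAL offset/bit -- for illustration only. Writing to a real
# SPI flash image at a guessed offset is a great way to brick a board.
HAP_OFFSET = 0x102
HAP_BIT = 0

def set_hap_bit(image: bytearray) -> bytearray:
    # Set the chosen bit in place and return the modified image.
    image[HAP_OFFSET] |= 1 << HAP_BIT
    return image

fw = bytearray(0x1000)        # stand-in for a dumped SPI flash image
fw = set_hap_bit(fw)
assert fw[HAP_OFFSET] & 1
```

The hard parts in practice are dumping the flash safely, finding the right bit for your exact platform, and reflashing, none of which this sketch covers.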
We need to make Intel hurt for this. I'm losing some percentage of my power to keep this IME running inside my computer, and yet it serves me no purpose at all.
So, at the very least, a class action suit to cover the cost of running it...
This is off-topic, but the Gentoo installation guide that this page is a part of is one of the most comprehensive and accessible Linux guides I've ever read. It taught me a lot of Linux concepts that I never needed to use before when setting up cloud VMs, and now I have a fully working installation of Gentoo + GNOME (with an encrypted root partition) that I'm confident in managing and upgrading. I definitely recommend checking out the rest of the guide.
Disabling IME and similar bugging subsystems is only a temporary solution: vendors will create a different one with next gen CPUs and all those brave folks dedicating their time to the task of removing/disabling it will be forced to go back to square one: study again, reverse engineer, reflash, brick, try again, etc. That way CPU makers will always be ahead.
We instead need a platform (CPU + peripherals) which is open by design; no more ME, closed device drivers, blobs, etc. No matter if it's 10 times slower than the equivalent from Intel or AMD, or draws 10 times more current than the corresponding ARM processor; the point is that funding such a development, producing it, and selling it even to a small fraction of users would send a heck of a message. A well-crafted PR campaign could do the rest (does your boss know that all his/her files and communications can be accessed by Intel, AMD, and every government with ties to them? What about making him/her aware?).
If someone starts a project like that, I'm pretty sure I won't be the only one ready to donate a few quid right now.
New Intel chips have only had minimal improvements to previous chips, so there isn't a huge incentive to upgrade if you don't mind giving up a little performance in exchange for security.
Can I get a serious alternative view on this? What purpose does Intel have for things like this exactly? Also, are AMD chips an alternative that can help here?
The IME, like Computrace which is similarly awful[1], is intended to be used for enterprise management. It's nice for someone in IT to be able to remotely wipe a company laptop that was left in an airport, but there's absolutely no reason why consumer devices should be burdened with these backdoors.
AMD has something called the PSP, which essentially serves the same purposes as the IME and is likewise undocumented and cannot be disabled.
> there's absolutely no reason why consumer devices should be burdened with these backdoors
Intel's advantage is built and defended on economies of scale. Making a hundred million of one thing [1] is more profitable than making 50 million of two things. Particularly if enterprises' demand for IME is stronger than consumers' demand for non-IME chips (assuming no strong externalities, e.g. NSA corruption).
I think the Intel Management Engine is crap. But there's a comprehensible reason it's there.
Intel doesn't have to ship IME-less chips. All it has to do is provide a way for users[1] to disable features that compromise their security but Intel doesn't, for whatever reason.
[1] I'm aware that a "High Assurance Program" bit exists that disables most functions of the IME but you can't expect the average consumer (or even the majority of technically-inclined consumers) to own an SPI flash programmer and to be willing and able to use one on their motherboard. Furthermore, researchers believe that setting this bit would cause any computer with the Verified Boot fuses set (i.e. pretty much any recent laptop) to fail to boot[2].
> there's absolutely no reason why consumer devices should be burdened with these backdoors.
Beyond the obvious economy-of-scale argument, why don’t consumers want that? I thought things like Apple’s remote lock/wipe feature were selling points for anyone concerned with theft.
(As an aside, it doesn’t feel accurate to term this a backdoor without evidence that it’s being used without the owner’s consent)
The article mentions they've already found a remotely exploitable security hole. Backdoor or not, nobody wants that.
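That hole was in AMT's network services. If you're curious whether a given machine's AMT ports are even reachable from your vantage point, a rough probe of the TCP ports Intel documents for AMT (16992-16995) is easy to write; note that reachability says nothing either way about whether the ME itself is running:

```python
import socket

AMT_PORTS = (16992, 16993, 16994, 16995)

def probe_amt(host: str, timeout: float = 1.0) -> dict:
    # Try a plain TCP connect to each documented AMT port.
    # True = something accepted the connection; False = refused/timed out.
    results = {}
    for port in AMT_PORTS:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                results[port] = True
        except OSError:
            results[port] = False
    return results

# e.g. probe_amt("192.168.1.10")
```

Only probe machines you own or administer, obviously; and a closed port from one network segment doesn't mean AMT is unreachable from another.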
We would all be fine with IME if we had access to it, could define the keys, and enable/disable features of it. Locking users out of their own hardware is not acceptable. I'm the only one that should be able to remote lock/wipe my phone. It should not be possible for Apple. They shouldn't have the keys. It's my phone. Even if I did trust Apple, I don't trust the government.
Look, I’d also like this to be open and documented but hyperbole isn’t helping that. If it’s undocumented, say that rather than pretending anyone is locked out of their own hardware — nobody has lost the use of their device due to IME. Similarly, calling it a backdoor and implying that the NSA is behind it without any proof is only going to reflect badly on the person making those claims.
The other thing to remember for effective advocacy is thinking about what normal people experience. If you say Apple shouldn’t have the keys, you’re also saying the average person has to be good at key management; most people are happy to outsource that. Conflating the issues of mass surveillance with control over your own hardware is great if your goal is confusion but I don’t see it producing results.
Whether it is actually being used is irrelevant. Security is about confidence that unauthorized access is impossible, not about whether unauthorized access is happening right now. An unlocked door is not secure against unauthorized entry merely because no one has opened it yet; what makes it insecure is the fact that anyone who wants to can walk through it whenever they want.
It is not even relevant whether the door was left unlocked intentionally. An unlocked door is a security problem, whether it was put in place intentionally or due to negligence.
That’s not what we’re talking about, though, is it? Unless the claim is that vendors are shipping IME enabled with secret credentials it’s not a backdoor.
A backdoor does not necessarily have to be in the form of a door, a wall made from cardboard will do just fine as a backdoor.
Putting a complex piece of software that you cannot disable, that is potentially reachable from the network, and that won't be updated into every machine is the software-equivalent of building a safe with a cardboard wall: Even if it's not intended as a backdoor, it still has to be treated as a de-facto backdoor for security planning purposes, and whoever uses cardboard to construct a safe wall is to blame.
Also, you do not judge security based on what has already happened nor on what you can prove to be insecure. The default assumption is that things are insecure, unless you can demonstrate that there are good reasons to believe that it's not--just as everywhere else in reliability engineering. A bridge is not assumed to be safe to use until it collapses or is demonstrated to be unsafe--a bridge is assumed to be unsafe to use until it is demonstrated that to the best of our current understanding of how to build reliable bridges, there is no reason to expect failure. A bridge where the builder makes a secret out of how the bridge was built is never considered safe.
Imagine your PC vendor shipped Windows with Remote Desktop enabled, you couldn't disable it, and you couldn't see whether there were any local accounts in the group, which could also be administrators. And it's far worse than even that, because the computer doesn't have to be on or connected to a LAN. Imagine if Dell shipped with LogMeIn and WoL, but it was permanent. How are these not security vulnerabilities?
They're backdoors because they allow for remote access[1] and we (as in the users who ostensibly "own" our own devices) can't turn them off without going through great difficulty.
Please read the piece linked above on Computrace. I am not very familiar with the inner workings of Find My Mac and other AppleID-associated features but I'm going to go out on a limb and say that those features 1. don't involve injecting code into system processes in the exact same manner as a BIOS-level rootkit and 2. aren't preactivated without the owner's knowledge or consent so the attack surface they create is likely much smaller than that of something like Computrace.
Also, if you like, please elaborate on why the economy of scale argument is "obvious".
Edit: Management coprocessors are also rightly called backdoors because they operate below ring 0 meaning that if we cannot trust them (and we can't because they are largely black boxes to us) then we cannot trust our systems as a whole.
Management and security. It started as just management, and then it added security as demand for that service grew. There are very real needs that it fills, though it doesn't need to sacrifice end-user control:
Management: Imagine you manage and support 10,000 desktops and laptops. Remote access is essential (otherwise you'd effectively pay something like 2/3 of your support staff to cover time spent walking around), but typical remote access depends on an available processor, memory, OS, etc. For the many cases where those components aren't all available (something failed, OS is being updated, need to disable a component to diagnose it, etc.), you need out-of-band remote access, such as what Intel ME provides. It's a high-value service for corporate IT.
Security: You need an out-of-band machine to perform crypto functions, to protect the crypto functions against in-band attacks (e.g., in the OS, BIOS, applications ...). The out-of-band machine can then be used to verify the integrity of BIOS, OS, and other important things. It's also useful for DRM. If you're in corporate IT, you suddenly have a way to provide reasonable security guarantees across your 10,000 computers, a huge step forward.
If you are in corporate IT, or if you're a vendor wanting to enforce, protect, or hide your media or other proprietary bits, end-user control is undesirable. Obviously, that could be optional for owners of the computers who have other needs, but somehow it never works out that way ...
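The integrity-verification part is conceptually simple: record a measurement (a hash) of each component at provisioning time, then compare later. A minimal sketch of just that core step, using a stand-in byte blob rather than a real BIOS region (real measured-boot schemes chain many such measurements, typically anchored in a TPM):

```python
import hashlib

def measure(blob: bytes) -> str:
    # A "measurement" is just a cryptographic hash of the component.
    return hashlib.sha256(blob).hexdigest()

# Recorded once, at provisioning time, for a known-good image.
provisioned_good = measure(b"\x00" * 4096)   # stand-in firmware region

def verify(blob: bytes) -> bool:
    # Later: re-measure and compare against the provisioned value.
    return measure(blob) == provisioned_good

assert verify(b"\x00" * 4096)
assert not verify(b"\x00" * 4095 + b"\x01")
```

The value of doing this out-of-band is that in-band malware can't easily lie about the measurement, which is exactly the property corporate IT is buying.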
EDIT If you really want to learn about it, save your time and go to the source:
Platform Embedded Security Technology Revealed: Safeguarding the Future of Computing with Intel Embedded Security and Management Engine by Xiaoyu Ruan, a security researcher with the Platform Engineering Group at Intel
The purpose is in the name, it's used for management. When Intel sells you their expensive management solution for your enterprise deployment of machines with Intel CPUs, this is one of the technologies that is used.
Neither AMD nor modern ARM chips are much better in this regard, both have some form of management system (AMD has a "Secure Processor" based on ARM's TrustZone, and ARM has TrustZone).
Configured in BIOS menus by smaller shops, or preconfigured from the factory for the specific machines that enterprise customers are buying in bulk. (Enterprise accounts with Dell, HP, etc. can also get their corporate image preloaded at the factory to avoid any onsite provisioning).
If you don't mind a fairly slow connection, there's PLIP or SLIP. The latter is point-to-point only, though at 9600 baud it is indeed slow. If you're relying only on text-based transfers, these may be acceptable.
(I've run systems, at least for a time, relying on both.)
Otherwise, airgapping and read-only media might be an option, for transfers you need / want to perform.
If you are serious, I would recommend using local firewall rules to modify your outgoing packets so that they are in some way non-standard, and then filtering all but the non-standard packets at your networks firewall.
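One cheap way to mark your own traffic as "non-standard" is an unusual IP TTL, which the border firewall can then require on all outbound packets. A minimal sketch; the value 65 is arbitrary, the firewall rule itself isn't shown, and note the obvious caveat that this only marks traffic going through the OS network stack, which is exactly what ME traffic would bypass:

```python
import socket

MARK_TTL = 65  # arbitrary non-default TTL to act as the "mark"

def marked_socket() -> socket.socket:
    # Create a TCP socket whose outgoing packets carry the unusual TTL.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, MARK_TTL)
    return s

s = marked_socket()
assert s.getsockopt(socket.IPPROTO_IP, socket.IP_TTL) == MARK_TTL
s.close()
```

So at best this catches traffic from software running under your OS, not from firmware below it; it's defense in depth, not a fix.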
I don't know why everyone thinks doing this much is safe. If I was the NSA/CIA director, I would put this in as a first level and if anyone figured out how to hack this backdoor, I would have a 2nd and 3rd hidden backdoor or maybe more. Maybe a particular sequence of instructions which opened a backdoor.
It's entirely possible that there exists something analogous to the HAP bit[1] or that something like me_cleaner could be developed for the PSP but AMD is not the market leader (and is frankly negligible when it comes to laptops) so much less research has been done on the AMD PSP. I remember that when excitement was building for the release of Ryzen there were petitions for AMD to open source the PSP and some talk that it might actually happen but they eventually shut the idea down during a Q&A.[2] It's a shame because it would have given AMD something to really distinguish itself from Intel and would have been viewed very positively by a lot of different communities/market segments. Really makes you think about why they didn't do it...
Can someone point me to an explanation of exactly what this is, and whether I need to worry about it? Particularly on a home server I built myself from parts?
If you ever do something revolutionary that would upset the apple cart, yes: you do need to worry about this. It's a computer within your computer, which you do not control, and it can be used for any secret purpose your enemies may desire.
Don't have any enemies? Well then, you've got nothing to fear. But let's say you do something big, which will change society, perhaps disrupt an existing American military-industrial player... well then, that small component is going to keep you down, brother.
If the machine has two network interfaces, the management engine will only be active on one of them (always the lower numbered interface). If you don't need both, disable NIC #1 and only use NIC #2.
You will get the benefits of the management engine without any external exposure.
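If you go this route on a Linux box, keeping the ME-attached interface administratively down is a one-liner; the interface names below are assumptions and will differ per board:

```shell
# Keep the NIC the management engine can reach administratively down,
# and carry all traffic on the second NIC instead
ip link set eth0 down
ip link set eth1 up
```

Disabling the first NIC outright in the BIOS/UEFI setup is the more robust version of the same idea, since it survives reinstalls.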
The management engine is the little computer that is used to get the big computer running.
Back when I was a kid, after hiking to school in 4 feet of snow uphill with no shoes, if I were building a PC or adding a new card, I would spend a bunch of time flipping little DIP switches to set things like addresses and assign IRQs. I'd reboot and my Gravis Gamepad controller would work, but the SoundBlaster wouldn't. So I'd power down, flip some switches and try again until I could get everything working.
Those switches went away, but the underlying issues remained. The new method was some firmware that did a bunch of pre-boot configuration. That was refined over the years, and now today there's an entire computer running its own OS that manages all this stuff. It works amazingly well.
However, once the machine is up and running, most people (especially consumers) have no need for it after that. It would be nice if it just powered down and waited for the next reboot. However, a little hidden computer was too useful to be ignored and it's used for a bunch of things including DRM (it can have secure-enclave-like functionality) and remote management. I'm not sure we have an exhaustive list of what it can do.
> Those switches went away, but the underlying issues remained. The new method was some firmware that did a bunch of pre-boot configuration. That was refined over the years, and now today there's an entire computer running its own OS that manages all this stuff.
I'm really impressed by how thoroughly people manage to convince themselves about something they don't actually know, to the point that they can explain their imaginary tech stack with such aplomb.
The ME is not needed to do PnP configuration (or whatever it has been renamed to these days), and to the best of my knowledge it is not used to do that and never has been.
ACPI/EFI and their friends are entirely hosted on the main CPU, and can run platform code at so-called negative privilege levels (e.g. SMM) at runtime. I expect computers with the ME disabled to run just as well as if the ME had not been disabled, including if you add or remove an expansion card.
However, you are right about remote management (that's the main advertised application of the ME) and probably the DRM stuff.
So, as I understand it, the first thing the ME does on boot is run a module that configures everything. It's called the bring-up (or BUP) engine. I thought that it was doing IRQ and other conflict resolution.
With due respect, I think the parent conflates the Management Engine with something else. It does not replace the BIOS, to which UEFI adds many features, or dip switches. See this post for a description:
If you really want to learn about it, save your time and go to the source:
Platform Embedded Security Technology Revealed: Safeguarding the Future of Computing with Intel Embedded Security and Management Engine by Xiaoyu Ruan, a security researcher with the Platform Engineering Group at Intel
I'm wondering if someone could clear something up for me. There's the me_cleaner project that the above guide relies on in order to generate the new BIOS image. However, I thought me_cleaner could be run directly without dumping, modifying, and reflashing an image using the Pi. What's the difference in efficacy between the above guide and just using me_cleaner directly?
me_cleaner is a Python script that operates on a dump; it can't modify the firmware on the chip without the help of other tools. This guide just explains the complete process of using me_cleaner in detail.
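For reference, the dump/clean/reflash cycle such guides walk through looks roughly like this; the SPI device path, speed, and file names are placeholders, and every board behaves a bit differently, so don't run this blind:

```shell
# From a Raspberry Pi wired to the flash chip: read the image twice and
# make sure both reads agree before trusting the dump
flashrom -p linux_spi:dev=/dev/spidev0.0,spispeed=512 -r dump1.bin
flashrom -p linux_spi:dev=/dev/spidev0.0,spispeed=512 -r dump2.bin
sha256sum dump1.bin dump2.bin   # the two hashes must match

# Neutralize the ME in the dumped image, writing to a new file
# (-S = set the HAP/AltMeDisable bit and strip most code modules)
python me_cleaner.py -S -O cleaned.bin dump1.bin

# Write the modified image back to the chip
flashrom -p linux_spi:dev=/dev/spidev0.0,spispeed=512 -w cleaned.bin
```

Keeping the untouched dump1.bin somewhere safe is what lets you recover with the external programmer if the cleaned image doesn't boot.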
It doesn't, but because the IME is so undocumented and most firmware is so horribly written, it's dangerous to remove these parts of the firmware image: it might brick your system for completely nonsensical reasons (all it takes is the firmware randomly misaddressing into an IME area where there is now no data and failing, leaving your system unrecoverable without the ability to operate on the flash chip by hand).
Make sure someone else has managed it with your same board, so you know it works.
Imagine the havoc if (when?) Intel's code-signing keys for the IME are leaked. Sure, it might be possible to update the keys in all the world's post-2006 Intel computers. But in reality it'd be a free-for-all that makes botnets of home routers look like a needle on a football field...
I remember when it was found that the ME had a "kill switch"; wasn't it also found that this kill switch still leaves rather a lot of power in the hands of the IME?
I don't see anything that would make it specific to version 3, except that you might have to build your own version of Gentoo. It would be nice if anyone else has an idea, because I'd like to do this with the Raspberry Pi 2 I already have.
Even if enthusiast Gentoo wasn't a thing, Google's ChromeOS is a customized Gentoo[1] which has grown in market share fairly drastically in K-12 schools especially in the US[2].
You've posted many uncivil and/or unsubstantive comments to HN and we've asked you before to stop. If you can't or won't stop, we will ban you. Please read https://news.ycombinator.com/newsguidelines.html and fix this going forward.
Yes it bloody well is. This laptop runs it for starters (and keeps my lap warm).
You also get to see some damn fine hackery in the OP, in the finest spirit of Gentoo. Although it is pretty advanced, even for those of us who habitually end up repairing broken toolchains without breaking a sweat 8)