I'm not much into hero worship, but if you guys don't know Bunnie you should really take 5 minutes to understand who wrote this article. Bunnie is a hardware monster of the best kind and an EFF 2012 Pioneer award winner.
It's amazing how much firmware has these back doors, where the engineers responsible have one or more of the following justifications:
- "I don't care, this is just my job. And I was told to do it by management." [what can I say? This sums up a lot of grunt coders I know]
- "What are the chances that anyone will find this?" [lack of appreciation for how smart and dedicated attackers can be]
- "So what if they do? It's not like it's useful" [lack of proper analysis]
- "How else are we going to run tests?" [poor design / fear]
- "Huh?" [absolutely oblivious about security]
I've worked on projects where we made the very conscious choice to leave doors like this open, but I doubt that most firmware shops are that intentional about it.
Strange that at this level of hardware the HN zeitgeist views firmware replacement as a flaw†. At a higher level, say the cell phone or desktop computer, there is a sentiment for "if you can't replace its software, it isn't yours".
There seems to be a threshold under which a device ought to just do what you expect, what the manufacturer decreed it would do and no more, even if you own it and it could do more. I propose that this level varies widely across individuals.
† Granted, this capability is undocumented. But if it were documented on that scrap of paper that fell out of the packing materials, in 4pt type, using gray ink on not terribly white paper, would it be that different?
I don't see it as a flaw, but now that it's brought to light I think it deserves more investigation. I'd really like to see an open-source SD controller, maybe even one of the SD-to-raw-flash type.
But on the other hand, I think we should be enjoying the relative freedom of today (and trying to preserve it for the future). It seems too many are trying to spin "security" as something beneficial, when what they are really saying is: "we're making things secure against you and taking away your freedom so we can control what you do; it's also effective against attackers, which is all we're going to promote". If this line of thinking continues we may see devices in the future that are even more locked-down and user-hostile.
(FYI I've worked with embedded systems for quite a while and also knew SD cards had firmware in them that could be modified, but never really investigated it - just put it in the back of my mind as one of those "I'm curious enough that if I had the time I'd have a go at it" things - along with several dozen others.)
Consider this: someone hands you an SD card when you need to transfer some data between devices. You think you delete the data afterwards, but the SD card has hacked firmware that just pretends it's been deleted. You hand the SD card back, and your "benefactor" now has your data.
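To make that concrete, here's a minimal sketch in C of what such a handler could look like (all names hypothetical; real controller firmware looks nothing like this):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NUM_BLOCKS 16

    static bool    trimmed[NUM_BLOCKS];   /* what the host is told  */
    static uint8_t flash[NUM_BLOCKS];     /* stand-in for real NAND */

    /* Malicious erase handler: acknowledge the erase and hide the
     * block from normal reads, but never touch the NAND itself. */
    static void handle_erase(int block)
    {
        trimmed[block] = true;
    }

    /* Normal reads honor the trim; a secret debug command wouldn't. */
    static uint8_t handle_read(int block)
    {
        return trimmed[block] ? 0xFF : flash[block];
    }

    int main(void)
    {
        flash[0] = 0x42;                  /* "your data"              */
        handle_erase(0);                  /* you think you deleted it */
        printf("host sees 0x%02X, NAND still holds 0x%02X\n",
               handle_read(0), flash[0]);
        return 0;
    }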
It's a flaw if people are not aware of it. Most people see things like SD cards and USB sticks as "dumb" storage devices with no real ability to run software, and are totally unaware of the risks they pose.
People (well, most reasonably aware people) understand that a cellphone is basically a small computer, and it has a processor and memory and storage, and executes code, etc. And it's nice to be able to update/modify that code to change how the device operates, and somewhat obnoxious when you can't. (To a limit: it's also obnoxious and dangerous when someone else can change that code without letting you know.)
In the case of SD cards, many people assume that they are "dumb" devices. They don't realize that they have a processor (microcontroller) which executes code, and that it's not functionally equivalent to a floppy / Zip disk / CD / pick-your-favorite-dumb-storage-metaphor.
I don't think this is the last time we're going to run into this issue ... an increasing number of devices have embedded, potentially-reprogrammable microcontrollers (laptop batteries, power supply bricks, headphones, to name just a few) that could be used as attack vectors, or as platforms for cool hacks.
The solution, IMO, is not to further obfuscate the programming method, but to make the code easier to inspect/validate (and maybe even reflash), so that users can ensure their devices are running what they think they're running.
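As a trivial example of the "validate" half: assuming you already have some way to dump the controller's firmware to a file (which is the hard part), a byte-compare against a known-good image is all it takes on the host side:

    #include <stdio.h>

    /* Byte-compare a dumped firmware image against a known-good one.
     * Getting an honest dump out of the controller is the hard part
     * and is assumed here. */
    int main(int argc, char **argv)
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s dump.bin golden.bin\n", argv[0]);
            return 2;
        }
        FILE *a = fopen(argv[1], "rb");
        FILE *b = fopen(argv[2], "rb");
        if (!a || !b) {
            perror("fopen");
            return 2;
        }
        long off = 0;
        int ca, cb;
        do {
            ca = fgetc(a);
            cb = fgetc(b);
            if (ca != cb) {
                printf("mismatch at offset %ld\n", off);
                return 1;
            }
            off++;
        } while (ca != EOF);
        printf("images identical (%ld bytes)\n", off - 1);
        return 0;
    }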
Note that all of the above arguments can be fixed by a "disable backdoors in production builds" policy. It's requiring field-updatable devices that really kills you here. (You can implement high-quality public-key crypto, but that is hard.)
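The minimal version of that policy is a compile-time guard, as in this sketch (PRODUCTION is a hypothetical flag your release builds would define):

    #include <stdio.h>

    /* Hypothetical debug hook that must not survive into a release
     * image; PRODUCTION is a made-up build flag (-DPRODUCTION). */
    #ifdef PRODUCTION
    #define handle_debug_cmd(cmd) ((void)(cmd))   /* compiled out */
    #else
    static void handle_debug_cmd(int cmd)
    {
        printf("debug command %d accepted\n", cmd);
    }
    #endif

    int main(void)
    {
        handle_debug_cmd(42);   /* a no-op in production builds */
        return 0;
    }

Of course this only helps if someone audits that every back door actually sits behind the guard, which is where the "caring in an effective way" comes in.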
My point is that the engineering organization has to care in an effective way; a build policy like that is great, but you have to back it up with intent.
Before Microsoft's big security push (whatever else you want to say about it, they made a huge effort) most of the above attitudes existed. Now you can't turn around without going through a security review . . . some more effective than others, but at least they're trying.
Armoring a system so that it will accept only signed updates isn't that hard (just check signatures and refuse updates that fail). This is different from armoring a system against hardware-level attacks, which Bunnie and the NSA and a LOT of other people are good at.
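A sketch of that check, using libsodium's Ed25519 detached signatures as a stand-in (the update layout here, image followed by a 64-byte signature, is invented for the example; the real trick is keeping the verify routine and the public key somewhere the update process itself can't touch, like mask ROM, or the first malicious update just patches the check out):

    #include <sodium.h>
    #include <stdio.h>

    static const unsigned char vendor_pk[crypto_sign_PUBLICKEYBYTES] = {
        0 /* the vendor's real public key would be baked in here */
    };

    /* Refuse any image whose detached signature doesn't verify. */
    static int accept_update(const unsigned char *image,
                             unsigned long long len,
                             const unsigned char sig[crypto_sign_BYTES])
    {
        if (crypto_sign_verify_detached(sig, image, len, vendor_pk) != 0) {
            fprintf(stderr, "bad signature, update refused\n");
            return -1;
        }
        /* only now erase and program the new image */
        return 0;
    }

    int main(void)
    {
        if (sodium_init() < 0) return 1;
        unsigned char img[4] = {0xde, 0xad, 0xbe, 0xef};
        unsigned char sig[crypto_sign_BYTES] = {0};   /* bogus */
        accept_update(img, sizeof img, sig);          /* refused */
        return 0;
    }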
Armoring a system against intentional holes is not an engineering problem, it's a people slash attitude problem.
Armoring a system against bugs (buffer overruns, etc.) requires that you solve the people / attitude problem first, and then do meaningful security engineering. This might be really easy for a flash drive, which should have a really small attack surface.
But it may still be an 8-bit 8051 core, from around the same era. You may also find that your monitor's embedded controller is a 6502. These old architectures just won't die; they tend to turn up everywhere.
WDC, run by Bill Mensch (the "other" 6502 designer next to Chuck Peddle), claims on their website hundreds of millions of licensed 6502-compatible cores per year from WDC alone. The claim isn't backed up by anything and it's not clear how current it is, but it doesn't seem impossible given the presumed price point for a volume license, and it would still put the 6502 near the top of the top 10 CPU architectures by core volume (ARM is apparently in the 3 billion range; Power/PPC, MIPS and x86 in the 300-500 million range). Quite amazing.
The C64 had a 6510 (later the 8500) that differs from the 6502 pretty much only in having 8 general-purpose IO lines, some of which were used for bank switching (to map the ROMs in and out of memory over the RAM), and some for the tape connector at least. I don't remember what all 8 lines were used for.
They're pin-compatible enough that in some devices you can swap them around and get things to work. Putting a 6510 in a 1541 has a decent chance of working, unless the GPIO pin register clobbers something important in the 1541 memory map. Going the other way you'll have more problems, though a 6502 in a C64 might work if you only run things like cartridges and/or put the right voltage on the right pins to map the ROMs into place.
Also, the Amiga 500 keyboard had a 6502-compatible CPU with built-in PROM and RAM as well (MOS 6570).
For me, the big take-away here is not that SD cards have firmware that can be reprogrammed, but that there's apparently an opening for a comparatively high performance, cheap Arduino competitor. Being decidedly on the software side of things, I have to admit I was surprised to see that a 100MHz core with loads of memory could be produced for just a few cents now. There are probably dozens of low-cost places where fabrication of such a SoC would be only a minimal departure from churning out flash cards. I'd say let's do exactly that!
So, this has potentially interesting value for implementing secure storage (assuming one can replace the whole firmware with something trusted).
I assume it would be possible to, for instance, make every "delete" operation a secure delete operation...wherein data gets overwritten a specified number of times. Shortening the useful life of the device, sure, but if security matters, that's a small price to pay.
Going further, what about a handler that serves one set of data to any random person who plugs the device in (an empty filesystem, or a few harmless photos or something), and another set to someone who has a key? Sure, a high-capability attacker might even know about this kind of firmware magic and how to circumvent it, but it would make it very unlikely that some random person picking up your device would find anything you want to keep secret.
Obviously, if your data is encrypted on the host system before writing to the card, that's reasonably safe...but for people in really dangerous situations, where torturing someone to obtain their key is not out of the question, making it seem like there's no data to obtain a key for is the best of all possible solutions.
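For what it's worth, here's a toy sketch of the decoy idea (all names and the unlock scheme are made up; a real design would need a properly authenticated unlock, not a hard-coded string):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define SECTOR 16

    static bool unlocked = false;
    static const uint8_t decoy[SECTOR] = "vacation-pics";   /* benign */
    static const uint8_t real[SECTOR]  = "actual-secrets";  /* hidden */

    static void handle_unlock(const char *key)
    {
        if (strcmp(key, "hunter2") == 0)   /* placeholder, NOT a design */
            unlocked = true;
    }

    /* Reads serve the decoy data until the magic unlock arrives. */
    static const uint8_t *handle_read(void)
    {
        return unlocked ? real : decoy;
    }

    int main(void)
    {
        printf("before unlock: %s\n", handle_read());
        handle_unlock("hunter2");
        printf("after unlock:  %s\n", handle_read());
        return 0;
    }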
Is there any reason to overwrite data multiple times on flash storage? I thought that the principle was due to head-alignment on spinning disks (and data theoretically being recoverable from the 'edge' of tracks). Even this is considered overkill for just about everyone. How does it make any sense on flash storage, which operates on completely different principles?
No, it wouldn't need to be overwritten, but you'd want to ensure that it was zeroed rather than just trimmed (marked as empty in the physical block allocation table).
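A toy illustration of the difference (names hypothetical):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* A trim only flips a bookkeeping bit; an erase actually resets
     * the cells (real NAND erases to all 0xFF). */
    #define BLOCK_SIZE 16

    struct block {
        bool    in_use;
        uint8_t data[BLOCK_SIZE];
    };

    static void trim_block(struct block *b)
    {
        b->in_use = false;                   /* data still physically there */
    }

    static void erase_block(struct block *b)
    {
        b->in_use = false;
        memset(b->data, 0xFF, BLOCK_SIZE);   /* cells actually cleared */
    }

    int main(void)
    {
        struct block b = { true, "sensitive" };
        trim_block(&b);
        printf("after trim:  %s\n", (char *)b.data);   /* still readable */
        erase_block(&b);
        printf("after erase: 0x%02X...\n", b.data[0]); /* 0xFF */
        return 0;
    }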
I don't actually know much of anything about how flash memory works. I just read the bit in the article about data sticking around, and assumed there would need to be some special action to make it actually delete stuff. However it works, this seems like something worth knowing about your flash storage: since nobody documents the behavior of their flash drives, having one with your own (or open-source) firmware would let you know what it does in a given circumstance, which is the only way to securely use any tech.
With process sizes where they are today, there's basically going to be nothing remaining after one erase + program. Raw flash today is already struggling to hold the contents of one write cycle as it is, never mind remanence from several...
But if you wrote sensitive data to a block that then went bad, there's a good chance that a large fraction of your data is still there and will never be erased, no matter what high-level commands you send the card.
It's kinda scary how many microprocessors and different firmware images are needed/used in today's computers/hardware, and how each one of them adds a new point of failure.
Just today I was reading a similar article, but involving HDDs instead of MicroSD cards (and even with a PoC): http://spritesmods.com/?art=hddhack
I have a few servers at work with one Intel CPU on the motherboard, but two PPCs sitting on RAID controllers, and about a dozen ARM controllers on the hard drives (that article scared me - consider patching the drive controller to modify your boot process by transparently injecting stuff into your reads; it means it's insufficient to just reformat if someone gets root). Plus whatever CPUs handle the IPMI cards (monitoring/KVM/reboot functionality).
There might very well be more micro-controllers in them that I don't know about. And these are quite run-of-the-mill rack mountable servers...
It’s as of yet unclear how many other manufacturers leave their firmware updating sequences unsecured. Appotech is a relatively minor player in the SD controller world; there’s a handful of companies that you’ve probably never heard of that produce SD controllers, including Alcor Micro, Skymedi, Phison, SMI, and of course Sandisk and Samsung.
Which raises the question: why target Appotech rather than Sandisk or Samsung?
If you watch the presentation, it's pretty funny why they used the Appotech chipset:
They managed to read out the embedded raw flash on one device, and when they searched for the vendor/device, the third link that popped up on Baidu brought them directly to a download for the Windows-based firmware-update tool (in Chinese, of course)... talk about a head start in analyzing the firmware :-).
This and the article on Der Spiegel [1] mentioning how the NSA has a whole catalog of custom firmware for all major HDD makers tells me never to yield to the temptation of relying on built-in hardware-based full disk encryption.
I've always been afraid of how self-encrypting drives work, since it's really not transparent to the user what's going on. I'd be fine using one as an additional layer (since it's generally "free" from a performance perspective), but I'd trust CPU-based encryption (with AES-NI) for bulk disk crypto like FileVault, and then application-specific crypto (or more "trusted" apps like gpg) for things which actually matter.
Yeah, the pressure of the NSA demanding access to anything resembling an HSM is obvious. Anything that's not open source has the potential to hide undesired behavior.
Also, more fun would be "cryptolocker"-style disk-based malware. The capability already exists elsewhere today, as mentioned in the article, and CryptoLocker has taken in $15 million USD and counting.
Also also: is there any HIDS yet that checksums the various chipset/peripheral firmware images?
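If one existed, the host-side half might look like this sketch (using libsodium's SHA-256); the hard part is getting an honest dump in the first place, since compromised firmware can simply lie to whatever readout command you use:

    #include <sodium.h>
    #include <stdio.h>
    #include <string.h>

    /* Host-side half of a hypothetical firmware HIDS: hash a dumped
     * image and compare it against a recorded baseline. */
    static int check_firmware(const unsigned char *dump,
                              unsigned long long len,
                              const unsigned char baseline[crypto_hash_sha256_BYTES])
    {
        unsigned char h[crypto_hash_sha256_BYTES];
        crypto_hash_sha256(h, dump, len);
        return memcmp(h, baseline, sizeof h) == 0 ? 0 : -1;
    }

    int main(void)
    {
        if (sodium_init() < 0) return 1;
        unsigned char fw[4] = {1, 2, 3, 4};          /* stand-in image */
        unsigned char base[crypto_hash_sha256_BYTES];
        crypto_hash_sha256(base, fw, sizeof fw);     /* record baseline */
        puts(check_firmware(fw, sizeof fw, base) == 0
                 ? "firmware matches baseline"
                 : "FIRMWARE CHANGED");
        return 0;
    }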
I've only read part way through, but good grief, you owe it to yourself to read this. Also, in retrospect, it seems obvious. Nonetheless...
Not having finished the article, one of my initial thoughts: I guess my intuition was right. It's not time to throw away those optical disks (and drives) yet.
With optical disks, you don't connect new microcontrollers (of unknown provenance) to a main I/O bus every time you get data from somebody. You always use the same set.
No, but they do have "I'm really a hub with a keyboard and a mouse (and a mass storage device) behind it". Or, if you go for simple but (too often) effective, "please autorun evil.exe". (Also, how well-secured do you think your USB stack is? It's been exposed to tons of shitty devices, of course, but proper attacks?)
Unless someone invests time into creating a safe, open-source USB passthrough device. I imagine it wouldn't be that hard to do for specific USB classes. It could even sport a "charge-mode" switch that cuts the data lines, as an option.
IIRC either the original Xbox or the Xbox 360 was sometimes modded/jailbroken by using a modified firmware for the internal DVD drive. Not exactly the same thing, but in the same vein.
I could definitely see it being easy to write bugs where verification code assumes that nominally read-only devices always return the same data for two subsequent reads of the same location, and then getting up to mischief by taking advantage of that assumption.
At least it's not the case that each inserted storage device (i.e. "disk" or "card", as opposed to "drive") necessarily includes arbitrary execution (Microsoft's "AutoRun/AutoPlay" and the like -- now more constrained if not disabled -- aside).
I'm not too concerned about the vulnerability, just amazed at the technology. Those tiny little microSD cards contain a microcontroller running at the equivalent of 100MHz. I never really considered that.
"I'm shy on the idea of just selling it to anyone who comes along wanting a laptop. I'm worried about buyers who don't understand that "open" also means a bit of DIY hacking to get things working, and that things are continuously under development. This could either lead to a lot of returns, or spending the next four years mired in basic customer support instead of doing development; neither option appeals to me. So, I'm thinking that the order inquiry form will be a python or javascript program that has to be correctly modified and submitted via github; or maybe I'll just sell the kit of components..."
I hope he chooses the latter option.
If Bunnie is a "hacker's hacker" as someone else suggested in this thread, then I am confused why he believes the proper hoop to make a fellow "hacker" jump through is making sure they know some JavaScript or Python and how to upload to Github.
I thought "hacker's hackers", especially hardware hackers, were not the type to follow the path of least resistance, namely, JavaScript, Python and Github. Whereas, assembly and C (and FORTH, APL, Lisp, etc.) are the languages of the "hacker's hacker".
But that's just me. Maybe I am the only one. If so, pay no mind.
As I see it, the point is to set a minimum bar to filter out people who know nothing and end up coming back to complain. Presumably tinkerers who know at least the basic stuff would be willing to put the time into learning/doing the DIY hacking.
To set the bar any higher reeks to me of elitism (i.e. creating an "exclusive" club of "hackers"), which doesn't seem like his intent.
From my experience with flash, uC operating current is at least an order of magnitude below flash write current. If you're going for seriously low power in, for example, a uC logging values from a sensor, the thing to do is buffer readings in memory until memory is full and then commit to flash in one burst (rinse, repeat). Any write smaller than the block size is extremely wasteful, and simply preparing the flash for a write burns a lot of juice too.
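Something like this sketch of the buffer-then-burst pattern (flash_write_page() and read_sensor() are hypothetical stand-ins for a part's HAL, stubbed here so it runs):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define PAGE_SIZE 256

    static uint8_t  buf[PAGE_SIZE];
    static size_t   fill;
    static uint32_t next_addr;

    static void flash_write_page(uint32_t addr, const uint8_t *page)
    {
        (void)page;                        /* stub so the sketch runs */
        printf("burst write @ 0x%06lX\n", (unsigned long)addr);
    }

    static uint16_t read_sensor(void) { return 1234; /* stub */ }

    /* Buffer samples in RAM; touch the flash only once per full page. */
    static void log_sample(void)
    {
        uint16_t s = read_sensor();
        memcpy(buf + fill, &s, sizeof s);
        fill += sizeof s;
        if (fill == PAGE_SIZE) {
            flash_write_page(next_addr, buf);
            next_addr += PAGE_SIZE;
            fill = 0;
        }
    }

    int main(void)
    {
        for (int i = 0; i < 300; i++)      /* 600 bytes -> 2 bursts */
            log_sample();
        return 0;
    }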
Apparently it can be as low as 50mA maximum now [0]. However, bear in mind that this is at maximum speed; for a low-power sensor, you could bring the clock speed down a lot to reduce consumption on writes. It's also unlikely you will be writing 40MB/s constantly :)
Excellent article. I wrote a simple SDIO driver for the STM32F4 and have three different MicroSD cards to test it with (they all behave slightly differently), and it's clear that such systems "working" at all is a small miracle in itself :-) All the vagaries of implementation.
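For flavor, here's roughly what just getting a card into SPI mode looks like as a driver fragment (spi_xfer() and the chip-select calls are hypothetical HAL stand-ins; the frame format and the fixed 0x95 CRC for CMD0 come from the SD spec):

    #include <stdint.h>

    extern uint8_t spi_xfer(uint8_t out);
    extern void    cs_low(void), cs_high(void);

    /* Send one 6-byte SPI-mode command frame and poll for R1. */
    static uint8_t sd_cmd(uint8_t cmd, uint32_t arg, uint8_t crc)
    {
        uint8_t r1 = 0xFF;
        spi_xfer(0x40 | cmd);               /* start bits + command index */
        spi_xfer(arg >> 24); spi_xfer(arg >> 16);
        spi_xfer(arg >> 8);  spi_xfer(arg);
        spi_xfer(crc);
        for (int i = 0; i < 8 && (r1 & 0x80); i++)
            r1 = spi_xfer(0xFF);            /* poll for the R1 response */
        return r1;
    }

    int sd_go_idle(void)
    {
        for (int i = 0; i < 10; i++)
            spi_xfer(0xFF);                 /* 80+ clocks with CS high */
        cs_low();
        uint8_t r1 = sd_cmd(0, 0, 0x95);    /* CMD0: GO_IDLE_STATE */
        cs_high();
        return r1 == 0x01 ? 0 : -1;         /* 0x01 = idle, no errors */
    }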
I'm not quite sure what is so special here. It is a device, it has firmware, the firmware can be upgraded. The same is true for your HDD or SSD. Why is an SD Card any different?
If someone hands you an SSD in an external enclosure do you automatically suspect it too? A similar hack is known to work there, witness the number of SSDs that needed a firmware upgrade after their field release.
I do applaud figuring out how to do it and proving that it really works. It's nice work in that regard, and I have a few SD cards whose firmware I'd be happy to hack for fun if nothing else (damn fake SDs - if they at least advertised their real capacity they could still be useful).
We always knew that the implementation of these devices incorporated a uC, simply because the way you interact with them (the SPI or SDIO interface) involves a state machine that would take up lots of space, or cause upgrade headaches, if done in pure hardwired logic.
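For instance, even the "dumb" SPI side has to track where it is in a 6-byte command frame, which is already a little state machine (a sketch, with names that are mine, not the spec's):

    #include <stdint.h>
    #include <stdio.h>

    enum state { WAIT_CMD, IN_FRAME };

    static enum state st = WAIT_CMD;
    static uint8_t    frame[6];
    static int        n;

    /* Feed in one byte off the bus at a time. */
    void on_byte(uint8_t b)
    {
        switch (st) {
        case WAIT_CMD:
            if ((b & 0xC0) == 0x40) {   /* '01' start bits of a command */
                frame[0] = b;
                n = 1;
                st = IN_FRAME;
            }
            break;
        case IN_FRAME:
            frame[n++] = b;
            if (n == 6) {               /* full frame: index + arg + crc */
                printf("CMD%d received\n", frame[0] & 0x3F);
                st = WAIT_CMD;
            }
            break;
        }
    }

    int main(void)
    {
        uint8_t cmd0[6] = {0x40, 0, 0, 0, 0, 0x95};
        for (int i = 0; i < 6; i++)
            on_byte(cmd0[i]);
        return 0;
    }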
What a lot of us thought, however, is that the uC would be in the form of what's found in other single purpose devices with similar interfaces (e.g. temp/humidity sensors) : code exists in some ROM table whose mask is set in production.
Secondly, flash is a highly competitive product with narrow margins. Check out some other posts on his blog to get an idea, esp. the ones about the ghost runs.
It's only after you read up on the complexities of bad cell management in flash that you get a sense of this problem. And that it involves complex on-device logic. In the end, the devices (uC) become so high-spec that the firmware update feature is a no-brainer. Compare it to cell phones that increase in complexity until one day they're capable of running Linux, at which point a floodgate of possibilities opens up.
Just another reason why we need to start getting direct access to the underlying flash instead of relying on vendors to provide a bunch of unupdatable translation software. This is particularly the case with SSDs where the end result of all this is "just buy Intel SSDs if you value your data" with the corresponding price premium.
That USB flash drives have reprogrammable firmware is a fact that manufacturers even sometimes advertise. It's not exactly well documented, but certainly better documented than with SD cards; you can often even get a meaningful datasheet for the controller.
So, could this mean that one could theoretically wire a MicroSD card directly into an ethernet plug and, with some voodoo, harness PoE to create an ethernet plug with busybox on it?
No. For PoE you need a PoE controller, and then you need a MAC and PHY to convert the ethernet twisted-pair signals into something the microcontroller can read (the PHY decodes the signal for the MAC, which handles packets, etc.).
You can theoretically use something like [1] or [2] to connect the SPI bus to an ethernet controller.
He's a hacker's hacker.