Neat little device, looks like a Yubikey clone. One could get a similar device by hacking a Logitech Unifying receiver, which contains a ~16 MHz 8051 clone, and a radio to spare.
The 8051 uses several clock cycles (typically 4 or 12, etc. [1]) for one actual machine cycle. So an instruction that takes "2" cycles can actually take 24 clock cycles.
Of course, some modern 8051 clones are more efficient. Some of them can execute one instruction per actual clock cycle.
Even then, one ARM instruction can often do the work of 2-10 8051 instructions. The 8051 is particularly bad at pointer arithmetic (except incrementing a pointer by one) and, being an 8-bit CPU, at 16/32-bit math.
"Microcontrollers (and many other electrical systems) use crystals to synchronize operations. The 8051 uses the crystal for precisely that: to synchronize its operation. Effectively, the 8051 operates using what are called "machine cycles." A single machine cycle is the minimum amount of time in which a single 8051 instruction can be executed, although many instructions take multiple cycles.
A cycle is, in reality, 12 pulses of the crystal. That is to say, if an instruction takes one machine cycle to execute, it will take 12 pulses of the crystal to execute. Since we know the crystal is pulsing 11,059,000 times per second and that one machine cycle is 12 pulses, we can calculate how many instruction cycles the 8051 can execute per second:
11,059,000 / 12 = 921,583
This means that the 8051 can execute 921,583 single-cycle instructions per second. Since a large number of 8051 instructions are single-cycle instructions it is often considered that the 8051 can execute roughly 1 million instructions per second, although in reality it is less--and, depending on the instructions being used, an estimate of about 600,000 instructions per second is more realistic."
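The arithmetic in the quote can be sketched in a few lines (a toy calculation; the commonly used crystal is actually 11.0592 MHz, i.e. 11,059,200 Hz, which the quote rounds to 11,059,000):

```python
# Classic 8051 timing: 12 oscillator clocks per machine cycle.
CRYSTAL_HZ = 11_059_200          # the common 11.0592 MHz crystal
CLOCKS_PER_MACHINE_CYCLE = 12

# Peak throughput for single-cycle instructions:
single_cycle_per_sec = CRYSTAL_HZ // CLOCKS_PER_MACHINE_CYCLE
print(single_cycle_per_sec)      # 921600

# A "2-cycle" instruction really costs 24 oscillator clocks:
print(2 * CLOCKS_PER_MACHINE_CYCLE)  # 24

# The ~600k instructions/sec figure corresponds to an average of
# ~1.5 machine cycles per instruction in a realistic mix:
print(CRYSTAL_HZ / (CLOCKS_PER_MACHINE_CYCLE * 1.5))  # 614400.0
```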
One key (pardon the pun) requirement of a 2FA key is that it can't be cloned - how would this be prevented? Can the microprocessor be locked to prevent reading its flash memory?
The micro in the Tomu is like most other modern micros, in that it has multiple lock bits controlling access to flash, for various purposes. Specifically, one bit prevents anything but the running program - i.e. it blocks the debug port - from reading flash.
The only way to clear that bit is to erase the entire flash, with the notable exception of the user data section, so don't put the secret in there :).
Years back I used to work for a company that built 8051 clones. It was possible to prevent the microprocessor from reading the flash memory because the 8051 has a separate program memory and data memory space.
Genuine question: if you can't clone your 2FA key, how do you make spares, like for a house key? If there is a way to get a spare, what's the way to deal with key loss or shared access?
Unfortunately the analogy fails slightly, because the answer is that you allow another key to <do whatever> - each key can independently unlock the door.
Like having several doors on your house (though not a 'back door'..!), each with a different lock/key. _Not_ like having several locks on your one door, or multiple copies of the key for one lock.
Essentially you have to configure the account/device you are authenticating with to accept multiple keys, either permanently (so you can have spares) or temporarily (replacing a key by registering a new one then revoking the old).
In the case of key loss on a properly secure service, registering a new key could be problematic if you don't have any other key that is still appropriately registered - you might be permanently locked out unless there is an admin who has a key registered and can do it for you.
Look at multi-key options for encrypted filesystems for one way that this can work. Often the filesystem or block device has a symmetric key that is in turn encrypted by each of the keys that you wish to be able to open it. It is the same symmetric key every time, though you can't unlock my copy of it with your key, nor can I unlock yours. Once unlocked, we could both add a third user by encrypting the base key with their public key (PKI is not required, but is not uncommon).
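A toy illustration of that wrapped-key structure (this is NOT real disk-encryption code - it XORs the master key with a PBKDF2-derived keystream purely to show the keyslot idea; real systems like LUKS use proper AES-based key wrapping):

```python
import hashlib
import os

def derive(passphrase: bytes, salt: bytes, n: int) -> bytes:
    # Stretch a user passphrase into n bytes of wrapping material.
    return hashlib.pbkdf2_hmac("sha256", passphrase, salt, 100_000, dklen=n)

def wrap(master: bytes, passphrase: bytes) -> tuple[bytes, bytes]:
    # One "keyslot": the master key XORed with per-user material.
    salt = os.urandom(16)
    ks = derive(passphrase, salt, len(master))
    return salt, bytes(a ^ b for a, b in zip(master, ks))

def unwrap(salt: bytes, wrapped: bytes, passphrase: bytes) -> bytes:
    ks = derive(passphrase, salt, len(wrapped))
    return bytes(a ^ b for a, b in zip(wrapped, ks))

# One master key, independently wrapped for two users:
master = os.urandom(32)
slot_alice = wrap(master, b"alice's passphrase")
slot_bob = wrap(master, b"bob's passphrase")

# Each user recovers the SAME master key from their own slot:
assert unwrap(*slot_alice, b"alice's passphrase") == master
assert unwrap(*slot_bob, b"bob's passphrase") == master
# Adding a third user later is just another wrap() of the same master key.
```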
Ok, so if you want 9 keys (3 people in your family have access, one local spare, and one off-site), and 4 services, you need to do 36 key registrations?
You don't need one key per service, so that is as little as 5 keys (3 for active users plus the on-site and off-site spares).
You may choose to have more than one key per person, to reduce the amount of re-registering needed if one key is lost, though remember that this is the second factor so you also already have passwords that vary by service (and if you give people multiple keys they will most likely carry them together to lose them all at the same time rather than individually anyway).
Other than registering multiple keys with a service (as others have explained), you can generate the private key off-device, import the key onto your device(s), then optionally delete the off-device key and/or archive it to external storage (e.g. disc stored in a safety deposit box at your bank).
At a basic level, the key is just a number, so there's no fundamental reason why it can't be duplicated.
However, other responses seem to be leaving out a discussion of why one would want to make identical keys in the first place. If you had identical keys, losing one would mean you'd need to revoke permissions of all copies. If the keys are instead unique, then only the lost key needs to be revoked.
Copying keys would also imply that the secret information needs to pass between devices, which adds some combination of risk and complexity. If the key doesn't need to support copying, then the secret information could be generated inside, and never leave, the key.
It seems better if the permissions granted to unique keys are easy to replicate, rather than the keys themselves.
But for traditional PKI schemes there's always at least one key (your personal master PGP key, your company SSH/X.509 CA) you want copies of, yet it's precisely the kind of key you absolutely want to keep off the network or within a secure hardware token; more so than authentication keys.
There are tokens that permit exporting (aka wrapping) a key for off-device storage or for transferring. If for transferring presumably you specify at key generation time a list of public keys to export to. But I've never used these tokens as the software was just too complex to bother with, proprietary, and poorly supported in the open source ecosystem. In the PGP world the typical advice (for better or worse) I've heard is to archive your master key on a disc and rotate your subkeys occasionally, necessitating brief exposure of your master key. Theoretically you would sign the subkeys from an old, non-networked computer.
Hi there - neat project! I'm always interested in open hardware for cryptography, so thanks for contributing to the space. :) That said, this quote concerns me:
>The SC4-HSM is designed to defend against a compromised client machine, i.e. an attacker who pwns your laptop or desktop machine. If you think about it, this is the only threat model that makes sense for dedicated secure hardware. If you can trust that your client machine is secure, you don't need an HSM.
From my perspective, that's the bare minimum threat model for an open secure hardware device. I also include unsupervised physical access to the device. My ideal HSM would also provide robust protections against a myriad of complex hardware-level attacks - JTAG debugging, power & RF analysis, glitching, and de-encapsulation, just to name a few.
Of course, a cheaper HSM which lacks advanced hardware level protections can still offer a lot of protection against common threat vectors, and physical access threats can be mitigated to some extent by keeping the device in a secure location or on your person.
> that's the bare minimum threat model for an open secure hardware device
Yes, I agree. But if you think about it, that can't be done by a device that does not have dedicated I/O, and two LEDs are not enough. At a minimum you need something capable of displaying a cryptographic hash if you want to protect against a pwned host.
This is really cool! Thanks for sharing it here; I'll definitely dig in deeper on it. I've been following Tomu development for over a year now (two?) and I've got one on my desk right now. The ideas that mithro cares about, the open hardware, are very important to me, and I'm glad to see as much development in the space as possible.
Here is my more compact layout [1]. Twice the size of the Tomu, but at a certain point that just makes it easier to extract. The big advantage of the Tomu's size is you can leave it in a laptop port, but that is the last thing you want to do with a security device.
At some point won't you run into IP issues with the ARM SoC? Plus how would you deal with things like power analysis attacks that require modifying hardware to fully mitigate?
ARM IP is bought by the manufacturer of the device, in this case Silicon Labs.
What IP issues can you imagine happening considering ARM SoCs are the most widespread processors in existence with many, many different manufacturers?
Regarding side-channel attacks such as you mention, all processors are subject to this vulnerability to various degrees. Do not expect a project such as this to have any mitigation for that type of attack.
Maybe apply some pressure instead of tapping? Same effect, but with less sound, so you'd need a really good mic, or something crazy like pointing a laser at the device/hand - and if that's the kind of attack you're facing, you don't go for a device that costs $30 xD
Having bought a number of maker projects where the PCB is designed to plug into the USB port, like this one, I find that the mechanics generally don't work all that reliably. Not sure about the exact cause, maybe manufacturing tolerances on PCBs are not tight enough.
This looks like it's supported by a 3D printed enclosure. Otherwise I agree - you need >2.0mm thick PCB to prevent falling out and these are not offered by the usual cheap low-volume PCB manufacturers.
Hold on - doesn't having this live permanently in the USB port reduce the security possible with a 2FA device? If the user has to get the key from their pocket and plug it in, it will at least prevent an attacker from accessing the user's account in a remote-desktop scenario. Certainly the requirement to press the button will mitigate this risk to a degree, but might there be exploits that can trigger this button-press event using a carefully crafted USB signal?
I understand the security vs usability thing, just beware of the risks of something like this (perhaps bluetooth-type keys like this are more usable as you don't need to bother plugging them in, assuming bluetooth decides to play nicely)
My assumption is that if Google (which deploys the Yubikey Nano across their employees) is willing to make the leap that there'd be no way to trigger the button press via USB, then assuming Tomu has done their due diligence, it's impossible to do so on here.
I'd say it depends on the security model. I have a yubikey nano permanently in my mac, I use it for example for vpn. Of course, I don't use it to unlock my mac, and I have a pretty long passphrase for that (and an apple watch to make it easier).
For 2fa in my own gmail/facebook, I use external keys, similarly as you're saying. But I could be comfortable using this yubikey, it's just that one is personal, the other is work in this case.
The button press is shorting two contacts on the device; there's no way to fake it. If you're thinking that there might be a bug you can trigger to execute the "print code" routine, the good news is that the firmware is simple and small enough that you can audit and verify it (and the USB stack).
I admire your diligent concern, but I thought the same thing for a split second and dismissed it.
I can't imagine even a corporate churn machine with the most reckless abandon designing a device like this and missing the most basic obvious attack vector.
It might be possible if you find a hardware or software bug in the USB interface on the chip, but the Yubikey uses a chip designed specifically for security applications. Those sorts of chips are designed and tested explicitly for resistance to all sorts of attacks and exploits, both physical and in software. I doubt it's even physically possible to craft a USB packet that is able to interfere with the ADC on the chip enough to look like a touch event, but even if it was, it'd be exceedingly difficult due to the nature of how USB signaling works. You don't have much direct control at the electrical level of what actually goes down the wire.
I can recommend the DigiSpark as a cheaper, slower, alternative that works with the Arduino IDE [1]. Somebody could easily get it into a smaller form factor.
I've bought some of the official versions in their KickStarter and whenever I buy through some company/research funding - but can also recommend the cheaper Chinese implementations to be just as good for projects.
>That doesn't fit entirely within the USB port like the Tomu,
No, but it would be possible to get something more low profile with some work. For most projects I would imagine it's low profile "enough".
For a practical joke (because I'm cool/evil), I plugged one of these devices into the back of somebody's desktop PC, and it would occasionally output a random character (either G, H, J or K). I thought the same would be amusing for mouse control too.
You can get it to ultra-low-power states too if you replace the power regulator (which, from memory, consumes about 10mA) and build a low-power monitoring device.
>and it's out of stock (with no eta).
That's a shame. Erik is currently working on a 3D printer, it's likely this takes up most of his time now [1].
I was amazed to discover a few years back that my wifi SD card also had a full ARM system on it that you could get access to, and if you powered it with batteries it all worked standalone.
GnuK (http://www.fsij.org/doc-gnuk/) implements an OpenPGP Smart Card. Unfortunately, it targets STM32 chips with 128KB flash and 20KB RAM, and the EFM32HG309 in the Tomu has only 64KB/8KB. I don't know how much work it would be to squeeze the code into the Tomu.
You should be able to use other chips in the EFM32 family as a drop-in replacement, though that obviously requires soldering your own board. They seem to be available with up to 128K flash and 16K RAM.
Of course the GnuK site lists some STM32 boards designed specifically to run GnuK. Or if you don't care about open hardware you can buy a $2 STLink clone on aliexpress and flash GnuK on it.
The MCU used has a lot more ports available, but size poses some limits. However, an IR LED + photodiode/transistor could be a nice viable addon, so that the device could be used as a bridge to any circuit implementing IR serial communications, or to allow communicating with it when it's plugged into a power-only USB port.
well.. I'd probably disable the flashing LEDs. But, that aside, I too thought "oh cool... a complete ARM system in my USB.. what could possibly go wrong here" But, the other side of this is: this is precisely what every closed-box USB device I plug into my USB potentially IS. This is just the overt "hi I'm an awesome tiny computer" state we're actually living in, all the time
TL;DR how do we know we aren't exposed to these devices all the time?
I loved pointing out to people that my Amiga 500 and Amiga 2000 both had a 6502-compatible CPU as the keyboard controller (it had an onboard PROM and a tiny amount of RAM, so there wasn't much you could do with it, but it was awesome at the time to upgrade from a 6502-compatible machine to one that had a similarly powerful CPU in the keyboard). But of course the C64 floppy drive (the 1541) had an actual 6502 in it, and enough memory and ability for you to download programs over the serial port...
In other words, we've been in this world for a very long time - a lot of peripherals used to have fully fledged CPUs in them already in the 70's in some cases, and some of them were user-programmable. The reduction in this in the early PC era was an exception, not the norm.
Each time I do the OSX "need to reboot" thing and it sits at the equivalent of the BIOS blowing driver updates for things like this, I think "yeah, I hate BIOS, but at least I get told which devices are being re-coded" - with OSX, it's a bit more opaque.
This is exactly what the vast vast majority of the world hasn’t realized. ARM has made almost EVERYTHING a tiny computer, where tiny == early 90s desktop computers or better. I’m frightened and excited for what happens when the world catches up.