TKey is a RISC-V computer in a USB-C case that can run security applications (tillitis.se)
204 points by jandeboevrie 9 months ago | 78 comments



This is really neat!

We've been working on some research to formally verify the hardware/software of such devices [1, 2]. Neat how there are so many shared ideas: we also use a PicoRV32, run on an iCE40 FPGA, use UART for communication to/from the PicoRV32 to keep the security-critical part of the hardware simple, and use a separate MCU to convert between USB and UART.

Interesting decision to make the device stateless. Given that the application keys are generated by combining the UDS, USS, and the hash of the application [3], it seems this rules out software updates? Was this an intentional tradeoff, to have a sort of "forward security"?

In an earlier project I worked on [4], we had run into a similar issue (no space for this in the write-up though); there, we ended up using the following approach: applications are _signed_ by the developer (who can use any keypair they generate), the signature is checked at application load time, and the application-specific key is derived using the hash of the developer's public key instead of the hash of the application. This does have the downside that if the developer is compromised, an adversary can use this to sign a malicious application that can leak the key.
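To make that concrete, here's a rough Go sketch of that derivation (BLAKE2s and Ed25519 are just illustrative choices; the function name and concatenation order are mine, not the actual implementation from [4]):

    package sketch

    import (
        "crypto/ed25519"

        "golang.org/x/crypto/blake2s"
    )

    // deriveAppKey sketches the approach described above: the application key is
    // tied to the developer's public key rather than to the application binary,
    // so signed updates keep the same key. Hash choice and framing are assumptions.
    func deriveAppKey(deviceSecret []byte, devPub ed25519.PublicKey, app, sig []byte) ([]byte, bool) {
        // Check the developer's signature over the application at load time.
        if !ed25519.Verify(devPub, app, sig) {
            return nil, false
        }
        // Key = Hash(deviceSecret, Hash(devPub)): independent of the app contents,
        // so a compromised developer key can sign a malicious app that gets the same key.
        pubHash := blake2s.Sum256(devPub)
        h, _ := blake2s.New256(nil)
        h.Write(deviceSecret)
        h.Write(pubHash[:])
        return h.Sum(nil), true
    }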

[1]: https://github.com/anishathalye/knox-hsm
[2]: https://pdos.csail.mit.edu/papers/knox:osdi22.pdf
[3]: https://tillitis.se/blog/2023/03/31/on-tkey-key-generation/
[4]: https://pdos.csail.mit.edu/papers/notary:sosp19.pdf


Thank you. Interesting paper!

As you've already noted, the TKey's KDF is Hash(UDS, Hash(TKey device app), USS), which means every device+application combination gets its own unique key material. As you conclude, this means an update to the loaded application changes the key material, which changes any public key the application might derive. This is a hassle and not very user-friendly.

However, nothing prevents the loaded application (A1) from loading another application (A2) in turn. This is a key feature, as it allows A1 to define a verified boot policy of your choice. The immutable firmware would do the KDF using A1's machine code. At runtime, A1 accepts a public key, a digital signature, and A2 as arguments. A1 measures the public key as context, verifies the digital signature, and then hands off its own contextualized key material to A2. In this example A1 is doing verified boot using some policy, and A2 is the application the end user uses for authentication: FIDO2, TOTP, GPG, etc.
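If it helps to see it spelled out, the derivation can be sketched in a few lines of Go (the input framing/order here is illustrative; the firmware's exact construction is documented by Tillitis):

    package sketch

    import "golang.org/x/crypto/blake2s"

    // deriveCDI approximates the KDF described above:
    // CDI = Hash(UDS, Hash(device app), USS).
    // The concatenation order/framing is an assumption, not the firmware's exact format.
    func deriveCDI(uds, appBinary, uss []byte) [32]byte {
        appHash := blake2s.Sum256(appBinary)
        h, _ := blake2s.New256(nil)
        h.Write(uds)
        h.Write(appHash[:])
        h.Write(uss)
        var cdi [32]byte
        copy(cdi[:], h.Sum(nil))
        return cdi
    }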

Regarding key compromise of the developer's key you might want to look into transparency logs. Another project I'm a co-designer of is Sigsum - a transparency log with distributed trust assumptions. We recently tagged it v1, and it should be small enough to fit into a TKey application. We haven't done it yet though. Too many other things to do. :)


Very cool! That's a nice design that gives the developer the choice on the trade-off between being upgradeable and being future-proof against developer key compromise.

Transparency logs indeed are a neat ingredient to use here. I've heard of other software distributors (e.g., Firefox) using binary transparency logs but hadn't heard of anyone using them in the context of HSMs/security tokens/cryptocurrency wallets yet.


Thank you! We think so too. It is inspired by TCG DICE, which came out of Microsoft Research if I recall correctly. This approach has several other benefits as well (ownership transfer etc) which I've outlined in another comment in this thread.

Here's a cool application we've yet to make: Instead of only using the transparency log verification for the verified boot stage, use it in the signing stage as well - imagine a USB authenticator that only signs your software release if the hash to be signed is already discoverable in a transparency log. You could also rely on cosigning witnesses for secure time with distributed trust assumptions, and create policies like "only sign stuff if the current time is Monday-Friday between 09-17". That would require a challenge-response with the log though.
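Sketching the policy part of that in Go (the arguments and types are placeholders, not a real Sigsum or TKey API; the log inclusion proof and cosigned-timestamp verification are assumed to happen before this check):

    package sketch

    import (
        "errors"
        "time"
    )

    // signingAllowed sketches the authenticator policy described above: only sign
    // a release hash if (1) it is already discoverable in a transparency log and
    // (2) the cosigned timestamp falls within working hours.
    func signingAllowed(inclusionProofVerified bool, cosignedTime time.Time) error {
        if !inclusionProofVerified {
            return errors.New("hash not discoverable in the transparency log")
        }
        wd := cosignedTime.Weekday()
        if wd == time.Saturday || wd == time.Sunday {
            return errors.New("outside the Monday-Friday signing window")
        }
        if h := cosignedTime.Hour(); h < 9 || h >= 17 {
            return errors.New("outside the 09-17 signing window")
        }
        return nil
    }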

Regarding binary transparency I think Mozilla only considered doing it, but never actually did it. In part this was probably because CAs and CT log operators didn't want CT to be used for BT as well. Speaking of transparency, you might be interested in another project I'm involved with - System Transparency - which aims to make the reachable state space of a remotely running system discoverable.


Which FPGA are they using?

Edit: There's a lot to like here, but a lot that is confusing. An FPGA-based version of PicoRV32 could be really secure. Your attack vector would be the FPGA vendor doing something at the hardware level (hard to pull off), or the toolchain being compromised.

But it matters which FPGA it is, and also which toolchain they are using to program the FPGA (Yosys is not mentioned ...). The whole "locked down" FPGA bitstream sounds very fishy as well.

PicoRV32 does fit into Lattice FPGAs, and they are fully reverse-engineered and supported by Yosys.


FPGA: Lattice ice40up5k

Toolchain: Yosys.

For the convenience of most end users we configure and lock the FPGA. This allows them to start using it right away. The core cryptographic technology relies on a Unique per Device Secret (UDS). If we didn't lock the FPGA's configuration memory from reads, a physical attacker would be able to read it out in seconds.

Users that want to provision their own hardware design / FPGA configuration / bitstream can simply buy the TKey Unlocked and the TKey Programmer. They can then configure the on-die OTP NVCM and lock the FPGA themselves. Configuration and locking of the ice40up5k was not possible to do with open tooling until we made it happen, as part of the project to create the TKey.

Since you seem knowledgeable it might interest you that:

* The OTP NVCM uses antifuse technology, so it's most likely not possible to read out the UDS with an electron microscope. The physical attacker will have to circumvent the locking mechanism and read out the NVCM through probing.

* One of the pins can be used to toggle SPI slave configuration mode even after the NVCM has been configured and locked. This allows a physical attacker to configure their own bitstream. Unfortunately EBR and SPRAM also keep their state across warm reboots. As mitigations we (1) store the UDS in LCs until it is used by the KDF, (2) use our TRNG to randomize when the UDS readout happens, (3) accelerate the hashing (BLAKE2s G function) in LCs, (4) randomize address and data layout using a non-cryptographic PRP, and some other things I don't remember at the moment. Depending on the user's security concerns we recommend the use of a user-supplied secret in addition to the UDS. In that case the TKey by itself doesn't contain all the key material, making a physical attack insufficient. The KDF is described in the manual.

Edit: Clarified _physical_ attacker. Added details about the chip.


Supply chain attacks can be mitigated with "golden chip analysis": you destructively analyse a known-good chip after measuring various power and timing benchmarks across adversarial configurations, then repeat those measurements on all future chips and check that they are within the margin of error.


Genuine question: how many transistors and/or logic gates do we need to perform ECDSA and feel more secure than using an FPGA or other secure elements? I see that the secure element used by Ledger crypto wallets is [1]; would an ASIC be better to reduce the attack vector? And do they have memory inside, or just cache? I don't know much about electronics.

[1] https://octopart.com/stm32wb55ccu6-stmicroelectronics-100293...


The answer is, as always, it depends. I'll do my best to characterize the problem:

If we only care about minimizing logic gates we could use SERV, the world's smallest RV32 core, and run a bare metal ECDSA implementation on it. Let's use it without the M extension, so RV32I. I'm not sure what SERV's max clock frequency is, but assuming we configured it on the ice40 and it runs at 40 MHz, I'm guessing a single ECDSA signature would take hours to compute.

Due to the math involved in ECC it is quite challenging to make a "hardware-only" ECC signer. The ones I've seen are effectively ECC accelerators with some kind of state machine or microcode to run the algorithm.

In the case of TKey we use picoRV32 configured as RV32ICZmmul (multiply without divide). We use the FPGA's DSPs to accelerate multiplication. On the TKey an Ed25519 signature takes less than a second, which we believe is acceptable for many use cases, and I'm willing to bet there is no Ed25519 signer that is more open source hardware and software than the TKey.

As GP points out using an FPGA is in fact an excellent way to mitigate various supply chain attacks. It's like hardware ASLR, to paraphrase bunnie in his CCC talk.


Thank you! BTW I assume you will support U2F, Crypto, etc. in the future? Or do you expect third parties to develop on it?

From a quick glance at the product it seems I should buy the Unlocked version to have full control of the device. And could a future device have a display and some more sensors and/or buttons, so I know what I am signing?

I am currently in South America so waiting to travel to one of your shipping locations to buy several TKeys.


I believe we already have a U2F prototype for Linux. In general we are quite selective about which applications we take on development and maintenance responsibilities for.

Given that this is the most open source hardware USB authenticator we hope the communities that value this level of openness, design assurance and design verifiability will adopt the TKey and build whatever applications they need for it. Having said that we see lots of opportunities for us to make it easier for developers to build what they need.


If they could add some hardware tamper-evidence/tamper-response (design-to-meet FIPS 140-3 level 3 or EAL whatever, although no need to actually get certified), this could be super useful as a cheap application-specific HSM (which can run app logic inside, rather than the lobotomized/zombie signing oracle PKCS "HSM" use which is common.)

Obviously can prototype without this functionality, and can build out the toolchain/etc. I'm a lot more excited about Tropic Square than most alternatives, but this is shipping now.


Regarding FIPS 140-2 level 2 tamper evidence and level 3 tamper response I'm interested to hear what you (and others here on HN) value and why.

Level 2 can be accomplished with nail polish and glitter, or plastic potting. Yubikey's potting is a good example of level 2 tamper evidence.

For level 3 tamper response, e.g. of a rack-mounted server case, it is enough to put micro-switches under the lid. For that type of product level 2 tamper evidence could be accomplished by covering the server case's screws with copper cans stamped with serial numbers.

When I first learned what is actually required for FIPS 140-2 level 2 and 3 I thought it was security theater. Then I realized that in practice such a server case will be (1) in a locked server cabinet, in a server room with access control, within a building with access control. Add a two-person requirement for entering the room and unlocking the cabinet, and the attacker will have to be quite sophisticated and capable to not be discovered way before any tamper evidence or tamper response comes into play.

In other words, if you don't care about FIPS for regulatory reasons, or insurance reasons, or company policy reasons, then how much does level 3 tamper response really matter?

Regarding CC EAL, I'll simply say that I don't believe there is any other hardware security product that is more open source hardware and software than the TKey. Thanks to it being FPGA-based it is somewhat protected from various attacks on the supply chain. If you're looking to do an Ed25519 signature I don't believe there is a single device more simple or easily verifiable than the TKey and its Ed25519 signer application. I'd love to be proven wrong.


I was at ACM Copenhagen although more downstairs at ASHES (which my company sponsored) and DeFi. Ryan Hurst who was there is probably the best person to ask about the specific certification use cases.

But overall -- I work for an insurer. Having to document how something was done a year+ after it was done to convince auditors/insurers/large clients is a LOT easier if you can point to a specific piece of hardware vs. a bespoke process. The downside of not having tamper-evidence or tamper-response (ideally) at the hw level is you need to build a lot of other controls around it, and that is expensive, impractical, and difficult to document in a lot of environments. e.g. in a datacenter it's unlikely you can enforce "two man from security/senior executive team in the cage"; it's probably going to be (at best) two ops staff, often contractors, and two-man rule isn't actually that widely used outside CAs themselves. The use cases I'd like for this are application logic, not traditional CAs (who do need FIPS). Add to this lifecycle management of the hardware from production to pickup/delivery to provisioning, pre-deployment storage, deployment, operation, routine audit, replacement, audit, and decommissioning, and it's way easier to put protection into the module. Most of the time I've just seen hardware protection inherited from single-chip vs. module-level protection created, but if there's no certified IC available, module protection it is.

Actual FIPS certification would be nice (or a better standard), but challenging in deployment because almost every vendor FIPS certifies with a specific application load and as soon as you run custom applications (which is badly supported in the HSM world), you lose certification anyway. "FIPS rated hardware running a non-FIPS load, audited in appendix A" is insurable, but ideally there would be something better. 99+% of HSM deployments are just used as dumb key signers which are trivially exploitable when you pop the host, though, so I'd certainly take the non-FIPS option.

(Today, the state of the art is pretty much SGX or another TEE on the host running custom application logic to evaluate and instruct signing which talks to the FIPS HSM to do key control and actual signing, e.g. what Anchorage does.)


Thank you for the detailed response! Lots of good perspectives.

Ryan Hurst is great. I ended up spending several hours with him at CCS talking about HSMs.

I was mostly upstairs at CATS due to two other projects I'm involved with - Sigsum and System Transparency. Sigsum is a minimal transparency log design which uses proactive cosigning witnessing to mitigate split view attacks. In collaboration with TrustFabric (Al, Martin et al) and Filippo Valsorda we have achieved consensus on the cryptographic semantics of the witnesses, so all of us met at CATS to discuss the bootstrapping of a witness ecosystem.

I'd love to learn more about some of the things you mention!

* What standards do you consider better than FIPS in this context?

* When you say "ideally there would be something better", what would you like to have?

Something else to consider: Performance. TKey is RV32ICZmmul running at 20-ish MHz, with the on-board 16x16 bit DSPs to accelerate the multiplication instructions. If you need more than a few ECC signatures per second then you need something more powerful than the TKey.

You've already mentioned the USB Armory II elsewhere in this thread so I'm sure you've already thought of it. I'll point it out here explicitly so others might benefit as well: The USB Armory II looks to be a fantastic flexible HSM substrate. Andrea Barisani et al's work on getting it to run Go is genius. Go as firmware is genius for that matter (see TinyGo).


I created the TKey together with my colleagues. AMA. :)


Very cool project, will certainly dig deeper!

If you need code for u2f/fido2 I hope you can find something useful from our solokeys (v1 for C, v2 for rust). https://github.com/solokeys

P.S: the usbc plug looks a bit brittle (been there). not sure about your plans to scale production but you prob want to address that sooner rather than later :)


Thank you! I absolutely love what you've done with the Limited Edition Solo 2. The epoxy + nail polish and exposed PCB is a great artistic expression of the openness of the device. I appreciate the tip about the USB-C plug. As I recall we've improved the structural integrity through the case and glue.


Would appreciate it if you tested for compatibility on a Chromebook running Linux via Crostini. The Crostini distro is fairly ordinary Debian, but Crostini has an unusual system mediating USB-device access, so it's hard to predict whether any given device or device feature will work.

(And even if a feature works today, it might break tomorrow. It's too bad the Chrome OS team doesn't appear to dogfood with Yubikeys, for example; GnuPG smart cards will work for a few CrOS releases and then inexplicably break for months/years. VSCode runs amazingly well on a Chromebook, but if I can't reliably sign a git commit, it doesn't matter.)


The TKey enumerates as a CDC ACM (serial port) to the host, so I'm afraid it doesn't work on the Chromebook from what I can remember. My colleagues would know better. In any case it's something we aim to support one way or another, even if it means new hardware.

I suspect using Yubikey's FIDO2 functionality rather than its GnuPG smart card functionality would be reliable. I think Crostini's motivation comes down to security, which they enforce by only passing through certain USB device classes. A device enumerating as USB HID FIDO2 should work reliably, given that Google uses those kinds of USB authenticators internally.


What assurance do I have (I'm speaking cryptographically) that the per-device secret isn't known to you?


None. You'll simply have to trust that we have configured your TKey with a UDS of good entropy, and that we haven't saved it - intentionally or unintentionally.

However, the device's flexibility allows you considerable control. For instance you could have its KDF mix in a user-supplied secret in addition to the UDS.

Of course you'd still be trusting that the device actually performs the KDF we claim it does. That's why we also sell the TKey Unlocked.


Seems like that secondary derivation is something that can be attested to, if it is done homomorphically.


Genuinely curious, what’s the argument for locking down the FPGA?


To protect the UDS that we provision for you. See my other responses in this thread for more details. Also see the TKey Unlocked.



Whether TKey or Precursor is better depends on your needs. Here are some differences between them:

* TKey uses open tooling for everything. Precursor uses Vivado.

* Precursor has a screen and a keyboard, allowing the user to interact directly with the trusted device in a completely different way than the TKey.

* The Precursor is bigger than the TKey.

* The Precursor costs much more than the TKey.

Finally I'd just like to mention that bunnie and xobs' work on Precursor, as well as their other projects, has been a great inspiration to the TKey project.


Thank you!


I'm not affiliated with either but I just looked at the precursor page because I'm a huge nerd and love crowdfunded projects.

They seem to be in slightly different leagues. Tkey is small and meant to be easily carried with you to authenticate with ssh, gpg and more. While precursor is a full development board that can probably do everything tkey can, and more.


Different product at a different price point with different objectives. Security keys are suitable for mass deployment across an entire workforce; Precursor aims to be a launchboard for general-purpose secure and trustable computation. Even at mass production prices, Precursor would likely still cost more than $100.


For security-only end-user applications, what are the benefits from spending €80 on your device instead of €60 on a Yubikey 5?


It depends on what you value. If you want a computer that can produce Ed25519 signatures with a minimum of software and hardware complexity and a maximum of inspectability then you probably can't do better than a TKey.

Yubikey, on the other hand, has a lot more features than TKey, for now.


Can you consider OpenBSD support?


Yes. :)


I'm kinda skeptical of security devices like this that don't have their own screens that let you know what you're authenticating. Obviously there's some applications where it's fine but if the PC it's plugged into is compromised it could be MITMing the auth.


It's a trade-off.

We've thought about adding a screen to a future product, but it raises the question of how much information you want displayed in order to make your authentication decision.

And even if a screen gives you more information, are you authenticating an action where the cryptography terminates in the device? If you are authenticating an SSH session for instance, the screen helps you ensure you are authenticating a login to the right server. Once you have authenticated though, nothing prevents the attacker in your local machine from using the established SSH connection.


Very good point. "What you see is what you sign" is a powerful property, but it does often need some support on the side of the signature verifier to make it possible.

Reading through an entire email (maybe MIME multipart encoded) on a small OLED screen isn't fun; on the other hand, something like "confirm transfer of €x to IBAN y from your account z" would be great – but needs support of your bank.

That support is incredibly hard to get. Android has supported "protected confirmation" [1] for many years now (on Pixel devices), but I have yet to see support for it by any real-world service I'm using, and I'm not holding my breath, since I can't even use FIDO with any of my banks...

[1] https://developer.android.com/privacy-and-security/security-...


Have a look at Bunnie Huang’s Precursor project, but it’s much more expensive.


Yes agree the end user needs feedback at the authenticator. This device has an RGB LED and a touch sensor, so there are still some possibilities.


I'd put a key inside my app which then talks out of band to multiple devices (including maybe a trusted one?) -- e.g. "run app on the device, request input using the directly-connected PC, message tunneled out to my phone for confirmation".

A screen and 2-3 buttons would be nice, though; also a tamper-evident/tamper-responding package. But my main desired use case for this is a lightweight app-specific HSM so I'd just need to validate integrity of device before connecting it to host. USB Armory II is the main alternative.


I don't understand the usecase. If the application is loaded from the host, why is this better than the host running the application directly?


The program running on the device is able to access a special Compound Device Identifier secret, which is only available by running that specific program on that specific device. Any different program you run on the device gets its own unique CDI secret, so it's not possible to load a secret-dumper program onto the device and grab the CDI available to a different program.

You could use the device to implement a TPM/HSM with application-specific logic, even like a cryptocurrency wallet. A program on the device could be made to generate its own private keypair using the CDI, announce the public key to the connected device, and offer to sign messages using its keypair under some specific conditions enforced within the program.
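For instance, the skeleton of such a device app might look like this in Go (real TKey device apps are written in C/assembly; the names and the approval hook here are hypothetical):

    package sketch

    import "crypto/ed25519"

    // walletApp sketches the idea above: the keypair is derived deterministically
    // from the 32-byte CDI, so the same app on the same device always gets the
    // same keys, and no other app can reproduce them.
    type walletApp struct {
        priv ed25519.PrivateKey
    }

    func newWalletApp(cdi [32]byte) *walletApp {
        return &walletApp{priv: ed25519.NewKeyFromSeed(cdi[:])}
    }

    // PublicKey is announced to the host so it can register and later verify signatures.
    func (w *walletApp) PublicKey() ed25519.PublicKey {
        return w.priv.Public().(ed25519.PublicKey)
    }

    // Sign only signs messages that pass the app's own policy, enforced on-device.
    // approved() stands in for whatever conditions the app enforces (hypothetical).
    func (w *walletApp) Sign(msg []byte, approved func([]byte) bool) ([]byte, bool) {
        if !approved(msg) {
            return nil, false
        }
        return ed25519.Sign(w.priv, msg), true
    }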


Stoked to see Tkey back on the front page! Bought one when it launched and it's been great fun tinkering with my own apps using their Go library, and the community seems pretty active.

Also hardware wise the new injection molded case is a huge step up!

I'm really hoping for fully fledged FIDO2 support soon, perhaps on Android as well?


Thank you for your support! I'm pleased to hear you've been tinkering with it. We're working on FIDO2 and will get there one way or another. The current TKey uses CDC ACM as USB device class, which presents some compatibility challenges. I suspect we'll have to change that to USB HID FIDO2, possibly with additional endpoints to simultaneously support loading of applications from the host. We'll get there. :)


Can anyone comment on the design decision to not store device firmware? Is this common?

I would’ve assumed it would generally be safer to have permanent but updatable firmware, to reduce the attack surface of malicious firmware loadings


Sure. Specifically regarding not storing firmware: The TKey does have roughly 6 KB of immutable firmware which is responsible for loading device applications sent by the host. This firmware receives and then hashes the application together with the Unique per Device Secret. The resulting hash is handed to the loaded application as key material, which is now unique per device+application combination. This model doesn't prevent storing applications on the device, but to enable the user to have flexibility to load other apps we would need to change things a bit.

Regarding security: This is a question of measured vs verified boot. Until the TKey, users of USB authenticators had to choose between security and flexibility. Yubikey is not updatable at all, but several USB authenticators have updatable firmware using signed firmware updates. Some of them are open source software, and of those a few are open source hardware in the sense that they provide BOM, schematics and PCB design. We do all of the above, and in addition we provide the HDL / "chip design" / FPGA configuration. In any case, having a USB authenticator which is updatable using a signed update begs the question of who is allowed to sign updates, which then leads to central control and a long tail of interesting use cases not being signed. TKey aims to combine security, openness and flexibility. We accomplish this thanks to unconditional measured boot as a KDF, as opposed to verified boot.

There is more information on our website.


If you think about it, secure computing devices of this form factor don't usually store their own firmware:

Apple's Secure Enclave and (I believe) Google's Titan security coprocessors both load their firmware from untrusted flash and validate it on-chip against an embedded signing key.

Sometimes, they do have a very limited amount of on-chip storage, but that's usually only used to prevent rollbacks to older, potentially compromised firmware versions or implement rollback-proof PIN or biometric authentication; sometimes this is also solved by burning e-fuses.


If you really want to get into the details, effectively all modern CPU chips do have some firmware in ROM on-die. (I'm sure you know this but I'm writing it out here for the benefit of others)

As the name Read-Only Memory implies it is immutable, unchangeable. These are the first machine instructions executed by the CPU once it comes out of reset. It is the ultimate Core Root of Trust for Measurement and Verification.

In my opinion the industry should move away from embedding verified boot (digital signature verification) firmware in this ROM. What I would like to see instead is the ROM stage doing unconditional measured boot a la TKey / TCG DICE. The second (mutable) stage could be loaded from Flash, and include the preferred verified boot policy for verifying the third stage. This has several advantages:

* It reduces the software complexity of the immutable firmware on-die. This reduces the likelihood of vulnerabilities that can only be fixed by creating a new mask set.

* It reduces the size of the mask ROM.

* It causes the identity / attestation of the device to change if the second stage is modified, allowing some attacks to be detectable that are not detectable today.

* It allows the manufacturer and its customers more flexibility in upgrading the verified boot policy. Intel BootGuard is stuck on a single RSA 2048-bit key. Why not m-of-n signatures, or m-of-n Ed25519 with a transparency log requirement (see the sketch after this list)? As we've seen from several firmware vendors they don't seem to protect their signing keys very well. How many of them are compromised? Having transparency log verification very early in the boot process, and thereby making key compromise discoverable, would be fantastic. This type of experimentation / innovation / improvement, or post-quantum verified boot for that matter, is very unlikely to be placed in the mask ROM of an IC anytime soon. That's another reason to change to the TKey / TCG DICE approach.

* It allows more flexibility in deciding how to enforce owner control and how to do ownership transfer. Intel BootGuard blows fuses that represent a hash of the RSA key. The same can be done by the measured boot firmware in ROM, effectively making the second stage immutable as well. Some will want that, while others are satisfied with something equivalent to a fuse but reversible through physical access, allowing for easier ownership transfer.
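As a concrete sketch of the m-of-n idea mentioned in the list above (purely illustrative Go, not BootGuard or TKey code; a transparency-log requirement would be an additional proof check in the same place):

    package sketch

    import "crypto/ed25519"

    // verifyStage3 sketches a mutable second-stage verified-boot policy:
    // accept the third stage only if at least `threshold` trusted keys have
    // signed its hash. Each trusted key may count at most once.
    func verifyStage3(stage3Hash []byte, trusted []ed25519.PublicKey, sigs [][]byte, threshold int) bool {
        used := make([]bool, len(trusted))
        valid := 0
        for _, sig := range sigs {
            for i, pub := range trusted {
                if !used[i] && ed25519.Verify(pub, stage3Hash, sig) {
                    used[i] = true
                    valid++
                    break
                }
            }
        }
        return valid >= threshold
    }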


> In my opinion the industry should move away from embedding verified boot (digital signature verification) firmware in this ROM.

Interesting, is it possible to perform attestation without an embedded signing or verification key that way as well? If so, how is it possible to tie an attestation statement to a vendor that way?


Yes. I've described some of the flexibility characteristics elsewhere in this thread, but here's how you could do it:

1. The ROM stage measures the first mutable application stage A1, and makes the Compound Device Identifier (CDI) available to A1. CDI=Hash(UDS, Hash(A1), USS).

2. A1 creates a key pair for itself using CDI.

3. A1 measures A2 and creates a key pair for A2.

4. A1 signs A2's public key. This is the attestation.

5. A1 scrubs CDI and its private key from memory while ensuring that the attestation as well as A2's key pair is available to A2. Note that the TKey only supports two-stage boot chains at the moment.

A1's key pair could also be signed by the vendor before shipping. We do it like this: https://github.com/tillitis/tkey-verification
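Put into code, steps 2-4 might look roughly like this (a Go sketch assuming BLAKE2s hashing and Ed25519 keys; how A2's seed is derived from the CDI and Hash(A2) is an assumed construction, not necessarily what the firmware does):

    package sketch

    import (
        "crypto/ed25519"

        "golang.org/x/crypto/blake2s"
    )

    // attestA2 follows the steps above: A1 derives its own keypair from the CDI,
    // measures A2, derives a keypair for A2, and signs A2's public key.
    func attestA2(cdi [32]byte, a2Binary []byte) (a1Pub ed25519.PublicKey, a2Priv ed25519.PrivateKey, attestation []byte) {
        // Step 2: A1's keypair from the CDI.
        a1Priv := ed25519.NewKeyFromSeed(cdi[:])
        a1Pub = a1Priv.Public().(ed25519.PublicKey)

        // Step 3: measure A2 and derive a keypair for it.
        a2Hash := blake2s.Sum256(a2Binary)
        h, _ := blake2s.New256(nil)
        h.Write(cdi[:])
        h.Write(a2Hash[:])
        a2Priv = ed25519.NewKeyFromSeed(h.Sum(nil))

        // Step 4: the attestation is A1's signature over A2's public key.
        attestation = ed25519.Sign(a1Priv, a2Priv.Public().(ed25519.PublicKey))

        // Step 5 (scrubbing the CDI and a1Priv from memory) is omitted in this sketch.
        return a1Pub, a2Priv, attestation
    }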


Ah, I see, so rather than embedding a shared private key or a signed certificate, the private key is essentially deterministically created using UDS, USS, and the application hash?

Very interesting, I’ll have to wrap my head around the idea a bit more, but I think this also answered a couple of questions I had around PUFs.

Thank you, and good luck with the project, I’m always happy to see open projects in this space!


Yes, that's correct. Thank you!


I guess as a follow-up, you possibly can’t actually “update” firmware for this, without it creating entirely new keys that would have to be registered with every application


You're mostly correct. The immutable firmware derives unique key material for each device+application combination. However, nothing prevents the loaded application from loading another application. This allows the developer to construct their own update logic. For instance, the first application could simply exist to verify a digital signature over some hash, then load an application, hash it, compare it to the trusted hash, and then execute it. The first application could hand over its own key material.


"There is no way of storing a device application (or any other data) on the TKey. A device app has to be loaded onto the TKey every time you plug it in."


Hardware-wise, this looks to be a USB-C version of Fomu.

https://www.crowdsupply.com/sutajio-kosagi/fomu


One major difference between TKey and Fomu is that Fomu uses the ice40 for USB hardware logic as well as USB device firmware. TKey uses a dedicated USB chip (CH552, which comes with open source firmware).

Fomu could likely not fit USB logic as well as the security-related cores we have. That, and the fact that we don't want the attack surface of the USB firmware running in the same core that handles the KDF.

They are very different products for very different use cases.


So that would be the Chinese WinChipHead CH552. You have surely vetted the firmware, but what about the hardware?

I understand that for some people this is an agitating stance and question to ask, but it's after all a security-focused USB device meant to be plugged into users' computers.


It's an excellent question to ask. We chose the CH552 in part due to the chip shortage at the time we locked in the design.

According to the specification: It is effectively a USB PHY with a small microcontroller attached. Firmware updates are done through the plug but can't be done from a USB host due to special electrical requirements that USB can't satisfy. One of our greatest concerns regarding the CH552 is any remote code execution vulnerabilities in the firmware that would allow an attacker from the host to gain persistence and use that to traverse an airgap.

For what it's worth we have the same concerns for other dedicated USB chips as well. Many of them are in fact trivially upgradeable from the USB host. It should also be noted that while the lower layers of USB are easily designed in hardware logic, the upper layers would be quite expensive to do in logic. In other words, any USB IC that provides USB to UART or USB to SPI likely has a microcontroller in it.

In our case I would love to find a USB IC with OTP memory. Our security model would even allow remote code execution vulns. A compromised host can upload anything they want to the TKey anyway by design. It is attacker persistence that would get annoying.


thanks, that's an important distinction.

i like the conceptual purity of fomu, but it does take quite a bit of work to get to the point of talking to it over usb.


Same here. It was a source of inspiration when we started working on the TKey.

Could you expand on what you mean by "take quite a bit of work"? I thought Fomu came preconfigured?


yes, sorry, i was referring to writing gateware from scratch. you have to implement a usb device before you get past a blinky.


Right. Speaking of which, do you have any recommendations for good USB cores?


Way too expensive.


Qty 1, $80 seems tolerable for a dev device. They should explicitly list 10 for $500 and 100 for $3000 or something though.

This is competing with $5-30K HSMs as well as $30-400 "hardware wallets" or second factor keys, as well as (free) embedded secure enclaves.


I would guess because of the FPGA. The advantage is it's (somewhat, arguably) more secure than a RISC-V ASIC from some random vendor.


Do not forget that they also need to recoup the $$$$$ spent getting it certified. For niche electronics this can be a big percentage of the price. Making something into an actual commercial product, especially globally, results in extra markup.


And they couldn't even put application ROM on there.


> Note well: In the end-user version (not TKey Unlocked) the FPGA configuration is locked down. This means you cannot change the FPGA bitstream or read out the bitstream (or the Unique Device Secret, UDS) from the configuration memory, even if you break the case and insert it into a programmer board.

That seems like it's only useful for "security" against the owner, rather than for any legitimate form of security.


That's the whole point of a TKey as a security device: the secret available to an application depends on both the device it's running on and the application, and can't be extracted, so you can do things like "sign a blob only if it follows these rules" and enforce it in hardware. If the device wasn't locked, you could just... change the rules.

How is that not a legitimate form of security?


If it's my device, then I should be able to change the rules.


If you can change the rules then someone else can also change the rules.

And any ability to change the rules will greatly increase the surface area of attack.


You're more than welcome to change the rules. Please read my other comments in this thread, and you'll hopefully find answers to your concerns. The TKey Unlocked gives you all the control you're asking for.


Buy the unlocked version, program it and lock it down yourself.


How so?


Consider a DRM system that does challenge/response against said secret so that only one computer at a time can use something. As the owner of this device, if I want to clone it, I should be able to.


But then the device could be used by 2 computers at once defeating the security. Another use case would be as a second factor of authentication. Making it impossible to clone is essential for a "something you own" factor.


You can buy an unlocked version and clone it, no?




