Case study: fake hardware cryptowallet (kaspersky.com)
206 points by freerk on May 15, 2023 | 156 comments



Another nasty supply chain attack exists, way simpler (unlikely to work on knowledgeable users though)... A legit hardware wallet is shipped, but with fake documentation accompanying it. Some evil people working for delivery companies would swap the legit hardware wallet for the exact same model, but with documentation using the official company's logo and font, saying, basically:

"Here's your hardware wallet, initialize it with the seed written on this piece of paper, it's the only one that's going to work for this hardware wallet. Do not lose this seed or you'll lose access to your funds!".

Several unsuspecting users, not aware that a random seed is supposed to be generated by the hardware wallet itself (or by throwing dice, or whatever), have been pwned this way.


There have also been cases of software using malicious seed generators with semi-predictable outputs. People assume it's safe because they see what looks like random seeds, combined with no network activity. But the attacker can then just scan over the whole possible key space and check for funds.


Even more concerning than predictable wallet seeds are covert channels in the form of nondeterministic signature outputs.

Most wallets let you provide your own seed words, which users can derive using diceware themselves, but DSA (and its elliptic-curve variants) need a secure random input, and I'm not sure if all wallets commonly use a deterministic (i.e. provably free of covert channels) construction (like in RFC 6979) for that.


From the outside, you can't prove that RFC 6979 was used; i.e., RFC 6979 doesn't provide provable security. If you want a proof that there are no covert channels, you need to implement some kind of interactive protocol between the signer and the verifier -- I'm not aware of any standard/popular way of implementing that.

What you can do is use dice to generate a key and then sign a bunch of messages with both your hardware wallet and a piece of software that you trust. You can then compare the two outputs. This gives you a probabilistic trust level (the more messages you check, the higher the likelihood of there not being a backdoor). (Note: I implemented this logic [1] to check that three different RFC 6979 implementations were returning the exact same bytes.)

[1] https://github.com/alokmenghrajani/decv/
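
A minimal sketch of that comparison in Python, assuming the third-party `ecdsa` package (pip install ecdsa); the key below is a placeholder for a dice-generated one. RFC 6979 derives the nonce from the key and message, so a correct implementation must return byte-identical signatures every time:

    import hashlib
    import ecdsa

    # Placeholder for a dice-generated 32-byte private key.
    priv = bytes.fromhex("01" * 32)
    sk = ecdsa.SigningKey.from_string(priv, curve=ecdsa.SECP256k1)

    msg = b"message 1 of many"

    # Deterministic (RFC 6979) signing: same key + same message
    # must yield the same bytes on every run and implementation.
    sig_trusted = sk.sign_deterministic(msg, hashfunc=hashlib.sha256)
    assert sig_trusted == sk.sign_deterministic(msg, hashfunc=hashlib.sha256)

    # Compare sig_trusted against what the hardware wallet outputs for
    # the same key and message; a mismatch hints at a covert channel.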


The best defense against potentially malicious hardware wallets is to set up a multisig scheme. If designed properly (with careful planning related to backup/recovery), you end up with better security properties (i.e. defense in depth).


A classic from 2008. Probably not malicious, but no way to prove a negative.

https://en.m.wikinews.org/wiki/Predictable_random_number_gen...


For a better known, actually malicious one:

https://en.wikipedia.org/wiki/Dual_EC_DRBG


There's a shelf life to this attack for each distributor though. You'll eventually distribute to users who _do_ understand what's happening, and they'll raise alarms.

With the attack in the article, it can go unsuspected for years, even simply waiting for maximum distribution and then mounting a coordinated attack.


Sounds like easy money


The attack surface is so big that no one should use hardware wallets for anything more than beer money.


And you shouldn't keep your keys on a regular computer, because that has an even bigger attack surface, nor should you use an exchange which may rugpull you.


Incredible. This is so sophisticated and takes so much effort that it makes you wonder just how many other wallets are compromised before you even use them. There are so many lower-effort attacks you can run that the fact that people are doing THIS really makes me wonder just how many wallets out there are 100% compromised.

It would be trivial for any iOS-based software wallet to compromise your seed before your private key is even created. You don't even need fancy spyware that calls home. If the seed is generated from a method that isn't random, you'd never know. It will appear random to you, but the author of the software could simply increment on a known value and be able to recreate every private key ever created with that app. No one would ever know. The attacker could sit silent for years or even decades, and if they DID drain a wallet there would be no way to prove it and no one would believe the victim. It would just be a case of, "Well, you must have leaked your seed, it's your fault."
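
A hypothetical sketch of such a scheme (names and construction invented for illustration): each seed passes an eyeball randomness test, yet the author can regenerate every seed the app ever issued.

    import hashlib

    ATTACKER_SECRET = b"hardcoded in the app, known only to its author"

    def backdoored_seed(counter: int) -> bytes:
        # Output looks uniformly random to anyone inspecting it...
        return hashlib.sha256(
            ATTACKER_SECRET + counter.to_bytes(8, "big")).digest()

    # ...but the author just increments the counter to enumerate
    # every seed ever handed out.
    all_seeds = (backdoored_seed(i) for i in range(10_000_000))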

I can even see something like Coinbase Wallet being 100% compromised. The apology post is probably already written in a draft somewhere.


There was a recent draining of many wallets, even old untouched ones, on Ethereum. I don't think it was resolved. Your scenario is likely imo, and the fictional quote is basically what I saw.


It isn't resolved, but there is a clue... a lot of the users had keys stored in LastPass.


I think hardware hacking is becoming increasingly sophisticated. The way car thieves managed to unlock luxury cars using a custom device built out of a JBL speaker also blew my mind.

https://kentindell.github.io/2023/04/03/can-injection/


This recently happened to the Trust Wallet browser extension due to using a Mersenne Twister to generate private keys. The issue is that this PRNG is not cryptographically secure. I think Trust Wallet is more popular than Coinbase Wallet as well.

https://community.trustwallet.com/t/wasm-vulnerability-incid...
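
For illustration, Python's own `random` module is a Mersenne Twister, and a quick sketch shows the failure mode: a key derived from a guessable seed is fully reproducible, which is why key generation needs a CSPRNG like `secrets`.

    import random
    import secrets

    random.seed(1337)                   # MT19937 with a guessable seed
    weak_key = random.getrandbits(256)

    random.seed(1337)                   # attacker guesses/brute-forces seed
    assert random.getrandbits(256) == weak_key   # key fully recovered

    strong_key = secrets.randbits(256)  # OS CSPRNG: no such shortcut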


Pre-generate the seed, sell the wallet purporting the seed is random when it's not, steal crypto.

How many people can or will verify the key is truly one-of-a-kind?


Title seems misleading (and isn't the article title). It implies that Trezor is a fake wallet. The article is actually about a wallet that purports to be made by Trezor but is in fact not (hardware supply chain attack).


It does uncover a vulnerability in Trezor's design that allows attackers to fake a Trezor without the user knowing it. It should have been defended via attestation, and software downloaded from the official website should have checked the attestation signature so the user knows the firmware hasn't been tampered with.


Agreed -- the title should say (Trezor Impostor) to make it clear that Trezor is not the fake.


Or even better, it should just say “Case study: fake hardware cryptowallet”, which is the exact title, and in accordance with the guidelines. No need to append “Kaspersky” on the front, or mention Trezor at all; let the reader click through and form their own opinion.


Or... even better! "Case study: fake"


"choose models with special versions of protected microcontrollers"

I don't see how this is helpful advice.

The whole point of the article was how the look and feel of a legitimate hardware wallet was cloned.

Under these circumstances there is no way to tell what is in the device (clear housing, perhaps?). All it has to do is act like the real device. It does not matter how good your security chip actually is if all I have to do is copy the correct interface.

Unrelated: the use of that particular version is a strangely shoddy mistake. It should have been very easy to use a version string that exists, in which case that version would never have been skipped. Perhaps at one point it was a real version and Trezor pulled it due to its use in a batch of clone units.


> the use of that particular version is a strangely shoddy mistake. It should have been very easy to use a version string that exists

Perhaps the attackers wanted to discourage users from trying to upgrade the firmware/bootloader before first use by using a version one step higher than the officially released one. If they had used an older version number, a savvy user might try to flash the newest firmware and discover something isn't quite right. Using a nonexistent but plausible-looking version number can also help explain minor discrepancies in behavior between the fake and the original unit, if some are introduced by mistake.


> It does not matter how good your security chip actually is if all I have to do is copy the correct interface.

A security chip actually deserving the name (i.e. a tamper-proof one) can protect a private key even against physical attacks, with the corresponding public key marked as authentic by the manufacturer.

If the interface contains a challenge-response interaction with that private key (and ideally ties that to any further communication with the trusted applications on it), you can't copy/emulate that.
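
A minimal sketch of that challenge-response check, assuming the third-party `cryptography` package; the in-memory key generation below stands in for factory provisioning, where the private key would live inside the tamper-proof chip.

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    # Factory provisioning: key pair created on-chip, public half
    # published (or certified) by the manufacturer.
    device_key = ec.generate_private_key(ec.SECP256R1())
    published_pubkey = device_key.public_key()

    # Host-side check before trusting the device:
    challenge = os.urandom(32)
    response = device_key.sign(challenge, ec.ECDSA(hashes.SHA256()))  # on-chip

    # Raises InvalidSignature for a clone lacking the real private key.
    published_pubkey.verify(response, challenge, ec.ECDSA(hashes.SHA256()))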


I foresaw this years ago, which prompted me to build this:

https://sc4.us/hsm/

It's an HSM which you can flash yourself. Unfortunately, it never generated much interest and so I had to fold up the tent. But maybe it was just ahead of its time.


I remember when this came out, and was interested in getting one!

Unfortunately I was aiming to use it to generate TOTP codes (and replace my authenticator app), but IIRC it needed a real-time clock and thus a battery, which was not part of the design.

Great project though.


You could feed it time info through the USB interface.


That was actually my first line of thought as well, but I could never find a way to do that.

My low-level development expertise is pretty limited, so perhaps there is a way, but after looking through the USB specification and other USB development docs, I just could not figure it out, unfortunately.

The closest thing I found was to do something like this (https://stackoverflow.com/questions/13335402/unable-to-sync-...), but I never did.

Do you think that would have worked?


It has been a long time since I touched that code, but the SC4-HSM came with several demos, including a FIDO U2F token, all of which used the USB interface. It would not be at all difficult to make a TOTP application that got its time from there.
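
For reference, a minimal sketch of that approach: standard RFC 6238 TOTP, with the timestamp supplied by the host (e.g. over USB) instead of an on-board RTC.

    import hashlib, hmac, struct, time

    def totp(secret: bytes, unix_time: int, digits: int = 6, step: int = 30) -> str:
        # HOTP (RFC 4226) over the 30-second time counter (RFC 6238).
        counter = struct.pack(">Q", unix_time // step)
        mac = hmac.new(secret, counter, hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # The device holds the secret; the host merely ships its clock over.
    print(totp(b"device-held shared secret", int(time.time())))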


I was a huge fan of what was promised here. I ordered one but every time I tried to work with it I had some catastrophic unrelated incident - like a curse lol.

Anyway, I suspect the problem is the nature of crypto. For this to actually take off, you would have needed to hand a bag of money to Jake Paul or John McAfee or a BitBoy, and I'd highly suspect a really good product has a hard time competing against those that do.


Neat! But is the microcontroller used tamper-proof? If not, your customers are still vulnerable to supply chain attacks such as the one in the article.


The core hardware is actually very similar. Both use more or less the same STM32 SoC. The difference is that the Trezor comes pre-flashed in a sealed package designed not to be opened, while the SC4-HSM is designed to be flashed by the user, and the case is not sealed so it can easily be opened to inspect the hardware. So while I can't say it would be impossible, launching a supply chain attack against the SC4-HSM would be a lot harder to execute and conceal.


Hm, but as a regular user, how would I make sure that what I received in the mail is actually an SC4-HSM and not an identical device that looks the part, but is running some pre-flashed firmware emulating all responses expected by a blank SC4-HSM (i.e. "I'm empty", "writing firmware with hash xyz succeeded" etc.) with high fidelity?


Anything is possible, but this would be extremely difficult. You can't program an off-the-shelf unit to emulate itself. The flashing sequence is a hardware function. There is an actual button that determines whether the system is coming up in flash mode or run mode after a reset. To fake the flashing sequence you would need to have a custom chip, or a custom PCB, or you would need to rewire the stock PCB so that wire went to a different pin. Not impossible, but very hard.


> You can't program an off-the-shelf unit to emulate itself.

You don't need full emulation, just protocol emulation should be enough, right? This might involve having more storage than the authentic device (or getting very clever with compression) in order to e.g. be able to authentically provide a "firmware dump", and maybe run at a faster clock speed so that the timing isn't suspicious, but it still seems easier than full emulation.

> Anything is possible, but this would be extremely difficult.

I agree that it would be very difficult, but unfortunately this property sort of caps the "maximum desirable popularity" of a solution: Nobody will go through that effort for a niche/hobbyist HSM, but as soon as people start protecting serious/expensive secrets with it, somebody might just do it.

Shipping each unit with a private key only known to the vendor, and providing a one-time attestation service, could make this attack much harder to pull off at scale (as you would need to physically extract one key per fake device produced as an attacker).


> You don't need full emulation, just protocol emulation should be enough, right?

Yes, but you would need to bypass the hardware button that puts it in DFU mode, and you would need to do that in a way that isn't visible. Possible, but difficult. (Look at the photos of the hacked Trezor. It's obvious that it has been tampered with.)

> as soon as people start protecting serious/expensive secrets with it, somebody might just do it.

Sure, but remember, this was a self-funded one-man project. (Well, I hired a contractor to do the hardware design, and I had some code contributed by another developer, but other than that it was just me.) The idea was to test the market to see if there was any interest at all in this sort of thing. If this had gotten any traction at all I would have had the resources to put additional mitigations in place.

But even as it stood it would have been extremely difficult for an attacker to compromise these devices. They were shipped to me from the manufacturer in sealed anti-static bags, and I did the final assembly myself. By far the biggest security weakness in the process was me. If I wanted to backdoor these devices I probably could, but only because I controlled the manufacturing process. I really don't think anyone else short of a state actor could do it, not because of the technical difficulty, but because they would have to get physical access somehow without being detected.

> Shipping each unit with a private key only known to the vendor, and providing a one-time attestation service, could make this attack much harder to pull off at scale

Yes, that's a very good idea. If I had sold more than a few dozen I probably would have done something like that.


> The bootloader checks the digital signature of the firmware and, if an anomaly is detected, displays an unoriginal firmware message and deletes all the data in the wallet.

This seems like a horrendous design, like a safe that burns the money inside if you try to tamper with it. Sure, it might stop a malicious thief from absconding with the funds, but it is also an attack vector for any bad actor who simply wishes to cause you harm.


The user is meant to keep a backup copy of the seed written down somewhere safe.

If the firmware had been tampered with, there is no safe way to extract the key. Better that the user uses the recovery seed on a fresh device.

Which means the weakest link of your fancy hardware wallet is how well you hide that bit of paper with your seed phrase.


If the attacker's goal was to erase the user's data, and the firmware _didn't_ erase data on invalidation, then the attacker could simply write a firmware that erases the user's data.


Why complicate things? Just smash the device. This is only effective if the user doesn't have their seed phrase.

Edit: Looks like I was beaten to this down thread.


I think in this case the idea is that the attacker isn't physically in possession of the device, but rather has tricked the user into running a malicious firmware updater for the device.


Ah, yeah, that makes sense. Hadn't considered that angle. It would track with the user behavior exposed here, namely getting their stuff from an unofficial source (be it the device itself or the firmware).


Thus the importance of backing up the seed phrase. A bad actor that wishes to cause harm can use a hammer.


If an attacker succeeds in tampering with the firmware on a crypto wallet (and more generally any secure authentication/transaction confirmation device), losing authentication/signature capabilities is very likely the second worst outcome.


Good point.


Same design as the HDD on your laptop. The solution is a backup.


Unlike a safe, a hardware wallet doesn't store money, it stores private keys. These keys are derived from a seed phrase you are supposed to back up offline.
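
For reference, that derivation is fully deterministic: BIP-39 fixes the mnemonic-to-seed step as PBKDF2-HMAC-SHA512 with 2048 iterations and the salt "mnemonic" plus an optional passphrase. A standard-library-only sketch (the phrase below is the well-known BIP-39 test mnemonic):

    import hashlib

    def bip39_seed(mnemonic: str, passphrase: str = "") -> bytes:
        return hashlib.pbkdf2_hmac("sha512", mnemonic.encode(),
                                   ("mnemonic" + passphrase).encode(), 2048)

    phrase = ("abandon abandon abandon abandon abandon abandon "
              "abandon abandon abandon abandon abandon about")
    print(bip39_seed(phrase).hex())  # same words in, same keys out, always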


> The housing was difficult to open: its two halves were held together with liberal quantities of glue and double-sided adhesive tape instead of the ultrasonic bonding used on factory-made Trezors.

Other than having x-ray vision, one easy (but by no means perfect) verification to thwart these types of attacks is to weigh your devices.

Manufacturing should be consistent enough that resealing a device like this would add some grams that shouldn't be there. And unlike something like a Cisco router, there's nothing to cut out to make up for the added weight.


the problem is the sorta person to buy a wallet from a classifieds website isn't willing to spend $30 on a scale to weigh it, because if they had that money they'd just buy it from the official store instead


Lifehack: a post office will weigh whatever you want for free. Also many grocery stores have accessible scales.

Best part is they pay for the certifications!

Then there are friends that, ahem, buy/sell materials in gram quantities. A counted handful of newish coins is a reasonable way of verifying accuracy in those cases. Be sure to weigh different quantities, lest the absolute and relative errors cancel out.


The Post Office's scale likely only has ounce resolution, or at best 0.01 lb (0.16 oz) resolution; i.e., you won't notice a couple grams of glue...


Indeed, you need a drug dealer's scale!


A jeweller’s scale!


Ah, they’re all grams here, but that still might not be enough precision on a small device.


I’m far from an expert and don’t own any cryptocurrency, but I can’t imagine buying a hardware wallet from a “popular classifieds website”, e.g. eBay.


Yeah, it's basically a good market for scammers: whoever is looking to buy one of these is almost guaranteed to be looking to store some large amount of money, so as a scammer your chances of a big score are very high.


It is possible that the buyer of this wallet had no better option. For example, the official place to buy these devices might refuse to ship them to his country.


This is just another example to me of how absurd it is to use crypto as a practical currency. There is no recourse if your private key is compromised, and no way to know for sure that only you have the private key.


> The main safeguard is to buy your wallet directly from the official vendor and choose models with special versions of protected microcontrollers (even original Trezors aren’t ideal in this sense: there are other brands’ wallets with better protected chips and extra protection mechanisms).

Yet another hilarious example of where the solution to security in an allegedly trustless system designed to subvert authority comes down to... trust and authority.


You have to trust somebody when it comes to hardware devices.

If you don't do anything, that includes the OEM, their supply chain, your delivery courier, an evil maid etc.

If you have the choice of reducing that list to only the OEM, isn't that a win? That's what attestation does.


Might as well trust a bank.


> You have to trust somebody

I know, all the time, and thus the entire premise of crypto is flawed, as are the libertarian ideals that birthed it.


> trustless system

Crypto lets you choose who to trust. You can build your own wallet, you can buy one, or you can choose to let someone hold your assets for you.

Many people will choose to trust large centralized parties, and some will choose to generate their own keys offline with code they've verified.

Do they have to trust that any cryptographic libraries they use generate seeds properly? Yes, but there are plenty to choose from that are well known, well tested, and the developers are funded.

It's not as simple as saying "the entire premise of crypto currency is flawed because you have to have some trust." The people that much of the crypto community don't trust are large bankers and governments.


Nobody can build their own wallet directly from raw materials. Even in the very unlikely event that they had the know-how, they would still require highly specialised equipment manufactured by third parties. Therefore users of "crypto" have no option but to rely on goods and services provided by third parties just like everybody else. And the extent to which consumers can choose which parties to rely on (or "trust") depends entirely on the degree of competition in the market. Crypto isn't special with regards to trust. Calling it "trustless" is false advertising.


It’s not that hard to build your own wallet software, or if you really want, a paper wallet using dice and a pen.

But frankly it’s not that different from cryptography as a whole: nobody implements ECDSA themselves, or builds the computer that runs it, or smelts the metal and assembles the transistors that run the computer, or whatever. There is no such thing as “absolute lack of trust”, but some protocols can be “less trust-requiring” than others, i.e. more “trustless”.


How does a "paper wallet" work? I thought a wallet in order to work had to interact with other wallets?


Surprisingly, no. All a "wallet" has to do is compute a signature using a private key. Then the resulting transaction has to be sent to a "mining" node (or the "mempool" of a group of nodes) and wait until one of them incorporates it into a block, computes forty trillion hashes and then throws all but one of them away, and broadcasts the resulting signed block to the network.

Because there is no confirmation on sending bitcoin "into" a wallet, no action is required at all to receive and store it. It's only cashing out where it gets difficult. It also makes it possible to send to inaccessible or nonexistent wallets.


Right, it doesn't interact directly with other wallets, but it has to interact with other parties using an electronic protocol in order to be useful. So a piece of paper isn't enough. Some kind of electronic machinery is required to build an actual cryptocurrency wallet.


Wallets tend to have two main features: A) generate random private keys and B) given some private key, sign a transaction and broadcast this message to the network.

Pen, paper, and some dice (and a bit of work) can generate a private key for step A, which you can input into a hardware wallet, and which would have prevented the problem in the OP.

It’s also possible to write your own wallet software or use a “trusted” tool (eg: openssl or node) to create a private key, rather than rely on a random app or device off eBay to generate it for you.

The B) part is harder to do with pen and paper or an off-the-shelf tool, as it involves a fair bit of protocol-specific math, but it's also harder to target in a hardware wallet supply chain attack.
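
A minimal sketch of step A with dice (the rolls below are hypothetical placeholders; 100 rolls of a fair d6 carry roughly 258 bits of entropy, enough for a 256-bit key):

    import hashlib

    # 100 hand-rolled d6 faces, transcribed from paper (placeholder values).
    rolls = ("31522661345412362514321356465436213165142513261625"
             "34163223415465162341265345126341256321452613432561")

    # Hash the transcript down to a uniform 256-bit private key.
    private_key = hashlib.sha256(rolls.encode()).digest()
    print(private_key.hex())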


So clearly pen and paper don't work, since it isn't possible to sign a transaction and broadcast the message to the network using only pen and paper.

Writing a software wallet would involve using third-party compilers, operating systems and hardware, which means it isn't "trustless".


As far as the OP is concerned, a wallet generated via a dice roll and pen would have worked to circumvent the vulnerability.

And we probably have different definitions of “trustless.” See here for a common understanding within crypto:

https://www.preethikasireddy.com/post/what-do-we-mean-by-blo...

It doesn’t mean “you can perform some action without trusting anybody or anything at all.” Protocols, software, hardware, and even your environment will all require various degrees of trust.


From the interactions that I've had with many supporters of cryptocurrencies on Twitter and Reddit, I don't think that this is a common understanding of the word "trustless" (which literally means "without trust", by the way) within this community.

Even if we take "trustless" to mean "not trusting a single, centralised party", it's not clear at all that blockchains are trustless, or even that they're more trustless than other payment systems such as Visa. That's a question that can't be answered from abstract principles. It would need to be answered empirically.


Of course it will depend who you ask; but most Ethereum developers at least would probably agree that the word “trustless” shouldn’t be interpreted literally as “without trust” to the extent your comments suggest, just as “serverless” systems might still involve servers. Call it a misnomer; there’s plenty in the English language.


I wouldn't say that.

Pure trustlessness, start to finish, is impossible. All information exchange requires shared protocols, and this necessitates trust. The idea here is to design protocols in which, once the initial setup is complete, trust is no longer a factor.

This isn't just limited to cryptocurrency, it applies to all cryptography, and more broadly, to all security measures of any kind. Key exchange requires initial trust. The idea is that you do the due diligence to get set up, then you don't have to sweat it after. To say the entire system is flawed because setup requires trust is to say that all security measures are pointless.


“Trustless” is one of those crappy words that implies there is zero trust in the system. Obviously this is not true - you trust the protocol, the contracts, the hardware wallet supply chain. Hell, you have to trust that ECDSA is not broken.

Still, when we talk about ECDSA and other cryptographic protocols, you can use them without being forced to place your trust in the hands of a single person or private company. There isn’t really a great term to describe that ethos, so “trustless” is often used in place.


For physically hardened devices, this attack vector can be mitigated quite efficiently by including an attestation key with each device and validating that after taking possession (or ideally before any interaction). At least one competitor does that.

To my knowledge, current Trezor devices are unfortunately not (sufficiently) key extraction proof, though; in that scenario, attackers might be able to extract the private attestation key of a legitimate device and then go on to impersonate it in their own version.

This again could be mitigated by e.g. making the attestation key device-unique and offering an online validation service (which could keep track of unusual verification patterns and alert users), but it's not an easy problem to solve.


How secure is the attestation key against the wallet CEO's kids being held hostage?


Everyone would know it and the attestation key would be obsolete. New wallets would be made with another key, and for old wallets, users already know they are genuine anyway.


Hopefully the attestation (root) key is itself stored in secure hardware (i.e. an HSM or similar) that can't be reprogrammed unilaterally, even by a privileged actor.


Obligatory $5 wrench xkcd: https://xkcd.com/538/

Still, physically threatening/kidnapping somebody is an entirely different threat model, although it's very common in the Bitcoin world: https://github.com/jlopp/physical-bitcoin-attacks


This is not specific to Bitcoin though.

In Latin America there are “flash/lightning kidnappings” where they take a person hostage and drain their bank account over a period of time.


They can’t take a bank hostage and drain all of its customers’ funds though.


This was solved technically with the invention of multisig wallets.

Whether the custodians choose to support them or not is another matter.


If you want a hardware wallet, I recommend software on an air-gapped machine instead. Unless you can buy the hardware directly from the manufacturer, and ideally you walked into the factory and bought it at the source, the risk of compromise is too great.


> Unless you can buy the hardware directly from the manufacturer, and ideally you walked into the factory and bought it at the source, the risk of compromise is too great.

That's an awful idea. If you're the type of person to worry about being supply-chain-attacked, then targeted supply-chain attacks are far more likely to happen to you than untargeted ones are. Specifically, you are more likely to be supply-chain attacked by an entity who has the power to either compel or blackmail the OEM into giving you a first-party-adulterated device (think: Huawei network switches), than by an entity who's supply-chain-attacking random strangers. This doesn't just include governments, mind you, but also any sufficiently-wide-reaching criminal gang.

Showing up in person to the factory — or to a retail store — means the intelligence operative planted there can recognize you, and give you the "special" device prepared just for you; or the employee can be compelled by certain training (required to be allowed to sell such devices in certain countries) to follow the special instructions that come up when they swipe your credit card.

So what to do? Don't show up in person. Send a one-time proxy buyer to show up in person. And have the proxy buyer pay in cash, or using their own card.

Think what an American diplomat stationed in China would do if they absolutely needed to get e.g. a new smartphone right away. Normally they'd just wait for something like that to be sent over from America via diplomatic courier, specifically to avoid this problem. But if they couldn't — then proxy-buying at retail is the next-best solution.

(Funny enough, this is also the same thing that computer-hardware reviewers have to do to avoid getting a "reviewer special" binning of the hardware. Counterintelligence is oddly generalizable!)


How do you feel about Yubikeys and HSM systems that corporations heavily rely on?


It’s like apples and bowling balls IMO. If the Yubikey directly stored hundreds of thousands of dollars of bearer assets that could be stolen remotely from an attacker anywhere on earth, then it would be a lot more risky. But that’s not typically what the Yubikey is for, unlike a crypto hardware wallet.


Installing a general-purpose hardware or software backdoor on OEM hardware enables general-purpose attacks, and in my view isn't necessarily less lucrative than attacking a cryptocurrency wallet's supply chain specifically.


Theoretically a nice idea, but where do you get your trusted software and hardware from?

The trusted codebase and set of OEMs seems an order of magnitude larger, and I'm not sure whether the lower likelihood of being specifically targeted as e.g. a crypto user by a supplier can make up for that.


I don't understand this. If you ever want to do anything with the funds in that wallet (e.g. sign transactions using the private key), you're going to need to connect it to a machine that can connect to the Internet. Otherwise, how is this any better than a cold storage paper wallet?


> If you ever want to do anything with the funds in that wallet (e.g. sign transactions using the private key), you're going to need to connect it to a machine that can connect to the Internet.

Not commenting on GP's point but... No, you don't.

You can prepare your transaction on an online machine, without signing it, with full access to the blockchain: the balance of every address, the "counter" needed so that your tx is valid (the nonce, in Ethereum's case), which address you want to spend from, etc.

Then you transfer that transaction, without using the Internet, to the offline computer and sign it there and transfer the transaction back to the online computer to broadcast it.

The computer preparing the transaction, the one signing the transaction and the one broadcasting it can be three different computers.

You can even do that with a hardware wallet: the hardware wallet does not need to be plugged into a computer that is online. It can be plugged into a computer that is offline.

There are still many issues, even when using air-gapped computers. For example, it's possible that a hardware wallet vendor is using non-determinism in the "random" parameters chosen to sign transactions to exfiltrate the seed, hidden among the signed transactions. So even an offline/air-gapped computer with a hardware wallet hooked to it wouldn't help.
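
A minimal sketch of the prepare/sign/broadcast split described above, for Ethereum, assuming the third-party `eth-account` package; the addresses and key are dummies, and only the middle step, which can run fully offline, ever sees the key.

    from eth_account import Account

    # Online machine: gather chain state (balance, nonce) and build the tx.
    tx = {"nonce": 7, "to": "0x" + "11" * 20, "value": 10**18,
          "gas": 21000, "gasPrice": 30 * 10**9, "chainId": 1}

    # Offline machine: the only place the private key ever exists.
    signed = Account.sign_transaction(tx, private_key="0x" + "22" * 32)

    # Online machine again: broadcast the raw bytes through any node;
    # no secret material is present at this step.
    raw = signed.raw_transaction  # `rawTransaction` in older eth-account
    print(raw.hex())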


I am somewhat technically savvy and find what is described hard. Now I imagine all those crypto bros saying banks are zeros and no one needs to use them and everyone should go into crypto; regular people would just quickly lose all their money to scammers. I can't imagine a grandma following all the procedures to store her crypto, or trying to send crypto to someone without messing up a letter in the wallet address. I don't know of a solution that is not centralized and yet secure and easy to use and understand for regular people.


Easy as one, two, three!


You don’t need to connect it to a networked machine to sign a transaction though? You sign the transaction in the airgapped machine, a signed transaction is just a hex string. Move it to the networked machine and broadcast it.


You can generate the coin movement operation in the air gapped machine, write it down on paper, and then use a normal, connected computer to transmit it to the network. The private key never left the air gapped machine, with this method.


The method I’ve read about is to print “the request” onto a QR code, have the air-gapped machine scan it, sign it and print off the signed transaction QR to be scanned into the networked computer to propagate to the network.

A bit more to trust but a lot less to type.


And then you get hacked anyway because the QR code generator was compromised and switched around a few bytes before creating the QR code.


Ideally it’s a functionality of the wallet, but I think only Bitcoin Armory has this function.


You don't, actually. Coldcard works without ever having to be online. You sign transactions on an SD card and just swap it.


I would be immune to this attack because I always generate my own seeds, on a trusted computer. So I set up hardware wallets to import my seed, instead of trusting their seed generation algo. Of course this procedure doesn't protect against other hardware attacks, for example the wallet exfiltrating the private key somehow (RF signal), but it certainly raises the bar for hackers.


Although you do open yourself to vulnerabilities in how you generate your random entropy; for the average user, it might be worse than relying on a hardware wallet.

The safest play here for an average user is to just not buy hardware wallets off eBay, as seems to have been the case in the OP!


you still trust that the hardware will use the seed you provided and not one of the pre-generated ones


You don't have to trust that at all. You can verify that the wallet signs a message with the correct private key that was generated offline.

When you send funds to the wallet, you don't need to send them to the address that the wallet presents, you can send them to the address you calculated during offline key generation. As long as you use the Trezor derivation path on your offline machine, it's predictable what the first address will be.


No, because I feed my wallet's public key (xpub) to scripts that derive the wallet addresses, and I verify that the derived addresses match what my wallet generates. I'm paranoid like this :)

Granted that would be really inconvenient to do if one used the wallet on a daily basis. In my case I use it rarely enough (large transactions only) that it doesn't bother me that much.


What software do you use to generate your 256-bit seed, and to convert that into the 24 words that the hardware keys require as input?


A custom script that essentially reads /dev/urandom and feeds the hex seed to bip39tool from pybitcointools.
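
For anyone wanting to replicate that flow without pybitcointools, a sketch using Trezor's `python-mnemonic` package (pip install mnemonic); the entropy source is the OS CSPRNG either way:

    import os
    from mnemonic import Mnemonic

    entropy = os.urandom(32)  # 256 bits straight from the OS CSPRNG
    words = Mnemonic("english").to_mnemonic(entropy)  # 24-word BIP-39 phrase
    print(words)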


What would have prevented this attack is the following:

Use a little bit of Python (there are libraries for this, or you can do it yourself; see the sketch below) to make sure that the addresses generated in the HW wallet by the 12-word mnemonic are indeed the correct addresses. For example, the first segwit address using your private key and the derivation path 49h/0h/0h/0/0 should be deterministic. This way you know your 12 words are the ones being used and the wallet is using known standards and not some homebrew crypto.

In fact you should always do that anyway in case the HW stops working and/or the company goes under. This way you can be sure that you can recreate your private keys from your mnemonic and access your funds no matter what.
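
A minimal sketch of that check, assuming the third-party `bip_utils` package (pip install bip-utils): derive the first BIP-49 address (path m/49'/0'/0'/0/0) from your mnemonic on a trusted machine and compare it against what the hardware wallet displays. The mnemonic below is the standard BIP-39 test phrase.

    from bip_utils import (Bip39SeedGenerator, Bip44Changes,
                           Bip49, Bip49Coins)

    mnemonic = ("abandon abandon abandon abandon abandon abandon "
                "abandon abandon abandon abandon abandon about")

    seed = Bip39SeedGenerator(mnemonic).Generate()
    acct = Bip49.FromSeed(seed, Bip49Coins.BITCOIN).Purpose().Coin().Account(0)
    addr = (acct.Change(Bip44Changes.CHAIN_EXT)
                .AddressIndex(0).PublicKey().ToAddress())

    # Must match the first address your hardware wallet shows.
    print(addr)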


That would not have stopped this attack. I think you're misunderstanding the attack: the seed is pre-generated.

The only thing that would have stopped this attack would be to generate your seed off the device. And then, since the device is counterfeit and there may very well be a way to exfiltrate seeds from it, to use a genuine device; but that's a different attack.

What you're saying, though, is a good idea, provided the device you're running this on (and therefore entering your seed into) is secure. Since this cannot really be guaranteed, it is often advised to never enter your seed into a computer, for good reason.


> That would not have stopped this attack. I think you're misunderstanding the attack: the seed is pre-generated.

HW wallets support entering any mnemonic; you don't have to generate it on the device itself. So if you create a mnemonic yourself, check what addresses it should generate with a third-party tool, then enter it into the wallet and see different addresses, something is fishy.


It seems kinda foolish to buy a second hand crypto wallet.


Somewhat related, I was recently pointed to a cool video about someone hacking a Trezor One. Very enjoyable watch.

https://www.youtube.com/watch?v=dT9y-KQbqi4&pp=ygULdHJlem9yI...

> I was contacted to hack a Trezor One hardware wallet and recover $2 million worth of cryptocurrency (in the form of THETA).


My #1 argument against the feasibility of cryptocurrency: can my parents not get their money stolen?


Can cryptocurrency be used easily? Yes, through centralized entities like the ones where your parents currently store their money.

Some people may forgo ease and aim for self-custody because they value decentralization over convenience. Others will choose a middle ground where they and a centralized entity both hold part of the key, ensuring that the third party can't move their funds without permission.


This is an argument against any form of direct payment... I mean, you can get your cash stolen very easily...


Yes. And so cryptocurrency is a reversion to a model of payment that’s been rejected by most people for its poor security.


Cash has been rejected by most people due to its poor security? I have to ask for a source on that. Cash is still the most popular form of money, and if you have low trust in your government or institutions/live in a corrupt society, is more secure than other options.


Does it mean that at the moment of releasing 2.0.4 the Trezor team already knew there was fake firmware circulating?

I wonder if the Trezor team communicated that in some way other than that line in the CHANGELOG. Not blaming them of course, just wondering.



None of the methods proposed by Trezor would frustrate the attack mentioned in the article:

Validate the holograms: Most users aren't forensic experts and don't have an authentic physical sample to compare their evaluation target to, only photos of one.

Only buy from authorized resellers such as the official Amazon shop: Fake products have been introduced into Amazon's supply chain before [1].

The bootloader validates the firmware and displays a warning otherwise: Sure, but so does the fraudsters' bootloader.

[1] https://www.redpoints.com/blog/amazon-commingled-inventory-m...


From that article, it sounds like this wouldn't be commingled inventory, as it's both private-label and an opt-in process.

That said, the obvious way to conclusively avoid Amazon commingling is to buy directly from the Trezor shop.


If I were Trezor and became aware of a fake firmware, I would:

* Offer rewards to anyone able to send me the fake devices or clues about who is making them.

* Tell my clients to upgrade the firmware on devices before use. Make sure every new firmware is distinctive in some way, for example the boot screen, and tell users to check for that to ensure they are actually running the firmware they think they just flashed.


A more sophisticated version of the malicious firmware could try to patch the new OTA firmware image on the fly. Once compromised, always compromised.


It's hard to reliably binary-patch something unknown ahead of time.

All Trezor would need to do is change the compilation options on a fairly regular basis, and any patching would fail.

Combine that with the reward for sending in devices, and they can analyze any evil devices and make sure their instructions to users will reliably detect all the evil devices they're aware of.

Still doesn't stop supply chain attacks, but makes them far harder.


Seems like this could also be an insider threat, where someone at Trezor knew all the BOM details and could pull this off.


Trezor is mostly open hardware and open source firmware to begin with.


> Easy to steal and cash out, cryptocurrency is one of the most attractive digital assets for attackers.

Has the author tried cashing out crypto? KYC, anyone? It's harder than ever to cash out, especially large sums. So many restrictions due to fraud.

Hardware wallets are never safe. The only safe way is to generate your own entropy and do your own key derivation. Why would you ever trust a third party to generate your keys?


With crypto, you get to cut out the middleman and be your own bank.

You have a fraud team and an IT security team on your staff, right?


What if each genuine unit had its entire PCB covered in glitter nail polish at the factory? Based on the serial number of your device, you could check if the pattern on your device matches the one photographed by the manufacturer right after assembling the device.


Would someone be able to spell out how this attack works after initialisation? I don't really understand hardware wallets. How does the information about the user and their key make its way back to the people who created the device?


The attackers know the key to begin with, before the user even gets their hands on the device. The compromised device pretends to generate a random key but instead generates one of twenty keys provided by the attackers.


Other than generating a small set of known seeds, some signature formats (including ECDSA, which is used for many cryptocurrencies) also allow exfiltrating data through some of the values they consist of and which are required to validate them.

If the wallet uses deterministic ECDSA, or the algorithm used is deterministic by definition (such as EdDSA), this can be detected, but doing so requires validating some generated signatures on a second, trusted device.


> Intentionally skipped this version due to fake devices

uh oh! does this imply something is up that the trezor developers know of?


Yes, this implies that trezor developers know of fake devices claiming to run that version so they skipped that version number to ensure that no one who sees that version number would confuse it with an actual version.


What does that achieve? Surely the fraudsters wouldn't dare to use one of the "actually secure" version numbers in their attempts to deceive Trezor users...?


Would the firmware update fail if the user decided to update it? Wouldn't that raise suspicion?


Probably. It seems it was designed to run off with the crypto after one month. Trezor's security in the physical realm is pretty good, but this is just a good attack.


How can a successful firmware update possibly be validated if you can't trust the device in the first place?

An attacker can just implement whatever "install firmware version xyz" command by returning "ok, did it!" and remembering that version number if it ever needs to be displayed.

A more complex attacker could emulate the entire firmware in more powerful hardware of the same physical profile and selectively intercept any input and output.


And how is this better than just using regular money?


Casa


Trezor has additional checks that aren't covered here. I'd really like to know how those were defeated. Especially:

> All Trezor devices are distributed without firmware installed - you will need to install it during setup. This setup process will check if firmware is already installed on the device. If firmware is detected then the device should not be used.

>The bootloader verifies the firmware signature each time you connect your Trezor to a computer. Trezor Suite will only accept the device if the installed firmware is correctly signed by SatoshiLabs. If unofficial firmware has been installed, your device will flash a warning sign on its screen upon being connected to a computer.

https://trezor.io/learn/a/authenticate-model-one

There seems to be an element of user carelessness and naivety here. Anyone who follows Trezor's hardware verification checks surely needn't worry about these attacks.


> All Trezor devices are distributed without firmware installed - you will need to install it during setup. This setup process will check if firmware is already installed on the device. If firmware is detected then the device should not be used. [...] The bootloader verifies the firmware signature each time you connect your Trezor to a computer. Trezor Suite will only accept the device if the installed firmware is correctly signed by SatoshiLabs.

This is an absurd security model. Where's the root of trust here? How do I know I am initially talking to an authentic "blank" device, and not a malicious one pretending to be one?

> If unofficial firmware has been installed, your device will flash a warning sign on its screen upon being connected to a computer.

Hopefully, malicious firmware won't meddle with this feature in any way...

The vendor here is either completely clueless, or is trying to paint a better picture for prospective customers despite knowing better.


>Trezor Suite will only accept the device if the installed firmware is correctly signed by SatoshiLabs.

...?

Although I'll concede that I'm now wondering what's preventing compromised hardware from faking this part too. A complex malware could even receive firmware updates, dump them in an unused partition, and report to the connected host that it promises that it's definitely running that firmware, right? Hmmm.


Yes, it could absolutely do that.

The only way around that would be for Trezor to ship their devices with some sort of attestation function (e.g. a private signing key for which they publish the public key, or one signed via a PKI with an included certificate) and for the host to validate that, not just the statement "I promise to be running the authentic firmware", a hash over the firmware, a complete firmware dump, or anything else not involving a challenge-response or unclonable function of some sort.


Similar problem to Trusted Platform Module / Secure Boot, right?

In that case the golden keys can leak, but it's better than nothing.


A leaked key seems worse, in that people will think they have this security measure working for them while they don't. Without this measure, there is at least no illusion.


Both of these checks seem to rely on the device playing along nicely. During the setup process it can just pretend to be empty, and completely ignore the uploaded firmware. Similarly, the warning sign depends on the device to show it - which the article mentioned was patched out by the attacker.


How does the setup process check for firmware, anyway? If there's malicious firmware preinstalled, I'm guessing it could just lie to the host computer and pretend not to be there until setup is complete. Once an attacker has hardware control, no software can save you.


They replaced the whole microcontroller. I doubt these checks could catch that if the attackers were sophisticated enough.


Ah yes, flashing my own firmware. The future of finance.


It's an automatic update. All you have to do is watch the progress bar fill up. Even you could manage it.


I can manage it because I’m a nerd. My technologically inept father on the other hand, could not.


He can't wait a minute or so while the completely automated firmware update completes? Literally all he needs to do is exist. I'm still sure he could manage.


Does Kaspersky still work for the Russian government?


Nice article, but are we sure we want to elevate the status of the FSB-founded and -funded Kaspersky Labs on the front page of HN?


If they're good at what they do, and provide value sharing their knowledge, yes.



