
> This program measures end-to-end latency: it emits keypress events and captures the screen to note when it gets updated (as actually visible to the user)

> [...]

> The screen resolution was 1920x1080 and the LCD refresh rate 60 Hz

How exactly can xterm have an average end-to-end latency of 2ms, if on average you need to wait 8ms for the next frame?
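For reference, the arithmetic behind that 8ms figure:

  frame period at 60 Hz = 1000 / 60 ≈ 16.7 ms
  mean wait until the next refresh ≈ 16.7 / 2 ≈ 8.3 ms

And that's before counting the panel's own pixel response time.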


By sending all key-press events an average of 2ms before the next frame is rendered


But the next frame takes a while to actually show up on the screen, so the "as actually visible to the user" part simply can't be right. This may be the isolated latency of just the program, but it's definitely not something reflecting real-world user experience; it's 1-2 orders of magnitude off in that regard.


The "security" of this device is a joke, just look at how randomness is derived:

  unsigned int analog1 = analogRead(ANALOGPIN1);
  RNG.stir((uint8_t *)analog1, sizeof(analog1), sizeof(analog1)*2);
  unsigned int analog2 = analogRead(ANALOGPIN2);
  RNG.stir((uint8_t *)analog2, sizeof(analog2), sizeof(analog2)*2);

(See [0] for a comprehensive summary of why this is a terrible thing to do)

And yeah, analogRead() is a function from the Arduino library because... well, apparently there's an Arduino-compatible chip inside that does all the cryptographic operations. Meaning that there is no hardware security whatsoever and it's trivial to extract all your keys from the device if you ever lose it. Whoops.

[0] https://arxiv.org/pdf/1212.3777.pdf


It seems this is literally written in the horrible Arduino "everything in one huge file" style:

https://github.com/trustcrypto/OnlyKey-Firmware/blob/master/...

The funny thing is they have a "Source code reviewed by Codacy" badge on the readme claiming the code is grade A... but if you actually click through, of course Codacy didn't pick up the .ino file at all, so in fact nothing of substance is being reviewed. That .ino file wouldn't pass any style review... it's a mess.

Anyway, looks like that firmware is incomplete (e.g. "onlykey.h" is missing). Just a quick scroll through the code gives me zero confidence in this thing, code quality wise. Someone who can't consistently indent code almost certainly isn't qualified to be writing security-critical software.

Edit: looks like the rest of the code is here, and yeah, it doesn't inspire much confidence (7000+ lines of code in okcore.cpp, ouch): https://github.com/trustcrypto/libraries/tree/master/onlykey


> The funny thing is they have a "Source code reviewed by Codacy" badge on the readme claiming the code is grade A... but if you actually click through, of course Codacy didn't pick up the .ino file at all, so in fact nothing of substance is being reviewed. That .ino file wouldn't pass any style review... it's a mess.

Meanwhile Nitrokey has actually been audited by Cure53.


The "security" of Nitrokey is a joke.

As others have mentioned, they were actually hacked: https://old.reddit.com/r/crypto/comments/bis3pf/extract_pgp_...

Nitrokey does not support half of the features OnlyKey does, and even the users on their own forum prefer OnlyKey - https://support.nitrokey.com/t/nitrokey-vs-onlykey/638


I understand the Arduino model is different from other projects, but we proudly use Arduino as it's open source and has lots of great features. As we use the Arduino model, our source consists of the .ino you mentioned here https://github.com/trustcrypto/OnlyKey-Firmware as well as libraries here https://github.com/trustcrypto/libraries. Our code is reviewed by Codacy and yes, it does receive a grade of A. For the .ino grading you will need to look at the OnlyKey-Firmware GitHub repo, and for the libraries check out the libraries repo. I think some of the confusion in your comment here may be related to how Arduino works; all source can be found on GitHub.


It seems you're confused as to what Codacy is reviewing. Look at their dashboard for the OnlyKey-Firmware repo. They are not reviewing your .ino file at all, because they do not consider that file extension to be code. Only the top-level C files are covered.


The .ino file is included in Codacy review and receives a grade of A. You can find that here - https://app.codacy.com/manual/onlykey/OnlyKey-Firmware/dashb...

All libraries are included and also receive a grade of A.


You just changed that. It was not included when I looked, and this fact is obvious from the "OnlyKey-Firmware has decreased 1% in quality in the last 7 days." banner. You have made no commits to the repo since Oct 23, so the only way the quality would decrease in the past week is if you changed the settings to include the .ino file.


I wanted to make sure I clearly address these comments. One of the issues with reading a post like this in an online thread is that the most upvoted post can also be the most incorrect and misleading.

#1 > The "security" of this device is a joke, just look at how randomness is derived:

Unfortunately, this commenter posted this without reviewing any of the security documentation available for OnlyKey. Had they reviewed it, they would have seen that we specifically address how analog input alone is not sufficient entropy for a cryptographically secure random number generator, and that one of the unique features of OnlyKey is using capacitive touch input for our RNG. This random input is generated every time you touch a button on OnlyKey, it's different for every person, and it's truly random. https://docs.crp.to/security.html#cryptographically-secure-r...

#2 > Meaning that there is no hardware security whatsoever and it's trivial to extract all your keys from the device if you ever lose it. Whoops.

Again, had the commenter taken the time to read a bit they would see that this is completely false. As others have already mentioned, OnlyKey is not an Arduino; OnlyKey uses some of the great Arduino software libraries that are available open source, along with the Arduino IDE. This is completely unrelated to hardware. As for the OnlyKey hardware security, we use Freescale Kinetis flash security to securely lock data on the key. As for side channel attack countermeasures, we list several that are in use. For full details read this - https://docs.crp.to/security.html#hardware-security

When it comes to security questions, trust an expert, not the top post on a thread. For more information about CryptoTrust, the makers of OnlyKey, you can find our team, with internationally recognized security credentials, here - https://crp.to/t/

For more info on OnlyKey:

Get started - https://onlykey.io/start

General documentation - https://docs.crp.to/

FAQs - https://docs.crp.to/faq.html

Compare to Yubikey - https://crp.to/p/

Setup and User's Guide - https://docs.crp.to/usersguide.html

Features - https://docs.crp.to/features.html

Support - https://forum.onlykey.io/

List of supported services - https://onlykey.io/pages/works-with-onlykey


If you're so confident in your experts, maybe respond to my comment where I point out a major bug? https://news.ycombinator.com/item?id=21889302


Sure thing. Thanks for reviewing the code; we are always happy to get additional eyes on it. For your major bug, I have to disagree about the "major" part: the RNG works well, though yes, it could work better. I will put the long answer in your comment below. As for the short answer, I created a video showing how OnlyKey uses capacitive touch for the RNG. The blue arrow in the video points to the values that change as the buttons are pressed; you will see the four values per button providing random entropy, and this is what goes into RNG.stir. Keep in mind the RNG is slowed down for the video; actual entropy gathering is much faster in use - https://vimeo.com/381733010


And I am going to have to give Codacy a grade of F, if this is what they consider grade A.


As a Codacy user for $DAYJOB I can guarantee you that Codacy deserves worse than a grade of F.


I can safely say that a lot of proprietary crypto code (as in, stuff that is in very widespread use and costs $$$) is not unlike this either. In some ways this is actually more straightforward to read and understand since it's in one file and not wrapped in a dozen layers of abstraction.


Definitely true; anyone who has ever seen proprietary crypto code knows this. Reviewing one file that is 7000 lines long is more straightforward than reviewing 7000 lines of code split across multiple files. It's open source and we will continue to make it better. If the biggest criticisms here are the large file size, the RNG complaint (the top post is incorrect about analogRead; they missed that we also use 6 touch buttons to seed the RNG), and code style, then it's a safe bet that the OnlyKey source is better than most of the proprietary security keys out there. Of course it's not possible to know for sure as they are closed source, but you can look at past vulnerabilities. Like this one https://crocs.fi.muni.cz/public/papers/rsa_ccs17 - it's not a theoretical RNG issue like the criticism here has been, it's an actual exploitable vulnerability that affected Yubikey and tons of smart cards. This exploit was on devices that were already FIPS and CC certified. Another thing to consider is that the researchers found this by statistically testing a bunch of keys; they didn't even review the source, so you can imagine how many more security vulnerabilities they would find if they did.


The 7000 lines don't bother me as much as the complete lack of refactoring, heavy use of magic numbers repeated throughout, and logical expressions that duplicate logic over and over again.

Some examples...

Compare these two blocks of assignments and memcpy calls:

https://github.com/trustcrypto/libraries/blob/5bd1f8eb15eb04...

https://github.com/trustcrypto/libraries/blob/5bd1f8eb15eb04...

Yes, they are as identical as they appear. The only differences (other than a couple of lines commented out in one) are the use of 'data' in the first and 'large_resp_buffer+offset' in the second, along with some arbitrary whitespace differences. (The first uses spaces around the + operators, the second does not.) And all the hard coded numbers! What do they mean?

Or this block of code that appears to be a limited version of a decimal number formatter:

https://github.com/trustcrypto/libraries/blob/5bd1f8eb15eb04...

Or this code that keeps checking the same flags over and over again instead of combining the tests:

https://github.com/trustcrypto/libraries/blob/5bd1f8eb15eb04...

(Scroll horizontally to see all the repeated tests!)

Or this code with the same logic repeated 24 times:

https://github.com/trustcrypto/libraries/blob/5bd1f8eb15eb04...

The next function after that one also has 24 copies of duplicate logic.

Well, the logic isn't entirely duplicated. The individual cases call functions like onlykey_eeget_urllen1, onlykey_eeget_urllen2, ... onlykey_eeget_urllen24, and onlykey_eeset_urllen1, onlykey_eeset_urllen2, ... onlykey_eeset_urllen24. Here are those functions:

https://github.com/trustcrypto/libraries/blob/527113dfeeb20e...

Yes, they are all identical except for the different constants each one uses:

https://github.com/trustcrypto/libraries/blob/527113dfeeb20e...

This pattern of "24 copies of the same logic with different constants" occurs all through the code. Look through okeeprom.h/cpp for several other examples.
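For illustration, the whole urllen family could presumably collapse to one indexed get/set pair. A minimal sketch using the stock Arduino EEPROM API, with a hypothetical base address standing in for whatever layout okeeprom.h actually defines:

  #include <EEPROM.h>

  #define EEPOS_URLLEN_BASE 100  // hypothetical; the real offset lives in okeeprom.h
  #define URLLEN_SLOTS      24

  // One getter/setter pair with a bounds check, instead of 48 near-identical
  // functions; the slot number selects the EEPROM address.
  int onlykey_eeget_urllen(uint8_t *ptr, int slot) {
    if (slot < 1 || slot > URLLEN_SLOTS) return 0;
    *ptr = EEPROM.read(EEPOS_URLLEN_BASE + (slot - 1));
    return 1;
  }

  void onlykey_eeset_urllen(uint8_t *ptr, int slot) {
    if (slot < 1 || slot > URLLEN_SLOTS) return;
    EEPROM.write(EEPOS_URLLEN_BASE + (slot - 1), *ptr);
  }

Two functions instead of forty-eight, and the constants live in exactly one place.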

None of this inspires confidence that the code can be trusted.


> None of this inspires confidence that the code can be trusted.

It's a shame this code isn't so good out of the box, but for all we know there are proprietary devices purporting to do the same job which also have poor code. The difference between the devices is that we can review, edit/improve, share, and run the improved code for this device. The software freedom is a feature unto itself. So one is still better off with this device (or another device that runs entirely on FLOSS) than with any proprietary device that purports to do the same job.


You have no access to hardware schematics. You have no idea what hardware defects are present that may compromise security no matter how much code you write. FLOSS means shit here.


This is incorrect; a schematic only shows what the electronics should contain. It doesn't provide any proof of what the hardware actually contains. For that, the best way to verify is to visually inspect the hardware, and we made OnlyKey hardware easy to verify with a clear transparent coating. When you look at OnlyKey you will see one Freescale K20 MCU; you can read the manufacturer number on it and know exactly what is in your key.


The microcontroller isn't the only thing that matters in your design. For example, since you're dependent on the ADC for seeding the RNG, it'd be nice to know what is connected to those pins, which a schematic would reveal. I can't tell that just by looking through your clear epoxy.

Even if I did drill holes in the casing and probe components, I have no way of knowing if what I'm seeing is expected or not without a schematic.


I have to agree with you.

What impresses me even more is that they are already selling it, and marketing it as "open-source". I would leave a note here: if anybody is interested in doing something similar, please get some feedback from the community before starting commercialization.


There's a dead simple "quack/crank" test for security products. If it hasn't been publicly discussed and analyzed for at least a year, but is already for sale as a "usable device" rather than a "prototype", the seller is either a fraud or a fool, and regardless of which, is not to be trusted.


OnlyKey has been in use for about 4 years. It has thousands of active users and is in use in over 40 countries worldwide. This is not a new product, and it has a great user community which is not afraid to test, hack, and prove the security of devices.


>OnlyKey has been in use for about 4 years. It has thousands of active users and is in use in over 40 countries worldwide. This is not a new product, and it has a great user community which is not afraid to test, hack, and prove the security of devices.

Like literally the first issue was already linked above. Using a pseudo-RNG with some analog pin seed isn't really acceptable. It should have a true RNG IC that can generate real random numbers from diode bandgap noise or other sources.


Like literally the first post's issue is completely incorrect; that's one of the problems with reading a post like this in an online thread. That user copied part of, but not all of, the function that is used for the RNG. The part they copied uses analog input as one of the sources of entropy; they failed to also include the 6 capacitive touch inputs that also feed the RNG. Those touch inputs literally change every time you press a button, and even with atmospheric changes, i.e. it's cloudy out today, your RNG has changed.

  RNG.stir((uint8_t *)analog1, sizeof(analog1), sizeof(analog1) * 4);
  touchread1 = touchRead(TOUCHPIN1);
  RNG.stir((uint8_t *)touchread1, sizeof(touchread1), sizeof(touchread1));
  delay((analog1 % 3) + ((touchread1 + touchread2 + touchread3) % 3)); //delay 0 - 6 ms
  integrityctr1++;
  touchread2 = touchRead(TOUCHPIN2);
  RNG.stir((uint8_t *)touchread2, sizeof(touchread2), sizeof(touchread2));
  touchread3 = touchRead(TOUCHPIN3);
  RNG.stir((uint8_t *)touchread3, sizeof(touchread3), sizeof(touchread3));
  touchread4 = touchRead(TOUCHPIN4);
  RNG.stir((uint8_t *)touchread4, sizeof(touchread4), sizeof(touchread4));
  touchread5 = touchRead(TOUCHPIN5);
  RNG.stir((uint8_t *)touchread5, sizeof(touchread5), sizeof(touchread5));
  touchread6 = touchRead(TOUCHPIN6);
  RNG.stir((uint8_t *)touchread6, sizeof(touchread6), sizeof(touchread6));
  unsigned int analog2 = analogRead(ANALOGPIN2);
  RNG.stir((uint8_t *)analog2, sizeof(analog2), sizeof(analog2) * 4);
  // Perform regular housekeeping on the random number generator.
  RNG.loop();
  delay((analog2 % 3) + ((touchread6 + touchread5 + touchread4) % 3)); //delay 0 - 6 ms
  integrityctr2++;
  if (integrityctr1 != integrityctr2) { //Integrity Check
    unlocked = false;
    CPU_RESTART();
    return;
  }

https://github.com/trustcrypto/libraries/blob/5bd1f8eb15eb04...


Even if that analog pin provides a reasonable amount of entropy (which I'm skeptical of), you have a major bug: you're casting the ADC reading to a pointer, and then dereferencing it inside RNG.stir.

Let me say it again: you're taking an ADC reading (in the range of 0-1023) and accessing it as if it's a memory address.

To make things worse, addresses 0 through 1023 on the Kinetis you're using are the vector table. Take a look at that part of your firmware: it's extremely predictable, and only contains a small number of possible values.
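For reference, the fix is presumably a one-character change: pass the address of the reading, not the reading as an address. Sketched against the call quoted at the top of the thread:

  unsigned int analog1 = analogRead(ANALOGPIN1);
  // stir in the bytes of the ADC reading itself,
  // not the memory at address `analog1`
  RNG.stir((uint8_t *)&analog1, sizeof(analog1), sizeof(analog1)*2);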


Here is the long answer to the comment provided above. As mentioned there, it's probably easier to take a look at the video first; the blue arrow in the video points to the values that change as the buttons are pressed, and you will see the four values per button providing random entropy. This is what goes into RNG.stir - https://vimeo.com/381733010

You will notice that, as you mentioned, the analog read values don't change much; that is because it is reading the memory address. Keep in mind that the analog read is only an additional source of entropy, not the primary source; that comes from the capacitive touch buttons. The RNG does not need or require this entropy, but you can never really have too much entropy, so that's why it was included. So with reading the analog address values, what you get is only a small amount of entropy. These address values do change based on user behavior, so it's still an unpredictable source of entropy; you wouldn't know on any given day how a user will use their key. I.e. if I log in to two sites in a different order on two days, it's going to mix in some non-predictable data.

But you are absolutely right, it would be better to mix in the analog read value. For our next firmware release we will update this to include mixing in both the value and the memory address. Thanks again for bringing this up and feel free to create an issue on Github if you see anything else.


You're right. I tried to give this code a charitable read, but it's horrifically wrong, and not just the ADC, but seemingly all inputs here.

At least we now have a better sense of what an A grade from Codacy actually counts for.


To be fair to Codacy, it's not even checking the file that people are pulling all these examples of bad code from.


Just because on your testbench they changed enough for you to guesstimate they provide enough entropy doesn't mean they provide enough entropy for everyone under all circumstances. They are not designed for that purpose and unless you have performed extensive adversarial testing to gain confidence that they can be used as such, you cannot guarantee anything.

You seem to have zero runtime sanity checks too, so if for whatever reason they are not providing entropy for someone, they will be none the wiser.

Sorry, but this is a terrible RNG.


Your "delay 0-6 ms" only delays 0-4 ms.

Not to mention the fact that the only obvious effect of that delay is to expose entropy information to timing analysis.


The delay is 0-2 + 0-2, so yes, combining them gives a possible delay of up to 4 ms. The delay inside of an RNG loop obviously does not expose entropy to timing analysis; it does the opposite. As the loop has a small random delay interval, the RNG seeds are never predictably read in, adding to the unpredictability of the RNG, which is a good thing.


You might want to check out this - https://docs.crp.to/security.html#cryptographically-secure-r...

If you read further into the source you will see that analog read is only one of the sources of entropy; it uses capacitive touch from a user's skin, and this TRNG passed dieharder tests - https://webhome.phy.duke.edu/~rgb/General/dieharder.php


Passing dieharder doesn't mean anything at all with respect to cryptographic security. It's trivial to define a random bit generator that passes randomness tests and has no real security.
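To illustrate with a minimal sketch: splitmix64 seeded with a public constant will sail through statistical batteries, yet anyone can reproduce the stream byte for byte.

  #include <stdint.h>
  #include <stdio.h>

  static uint64_t state = 0;  // fixed, publicly known seed: zero entropy

  // splitmix64: statistically excellent, cryptographically worthless
  static uint64_t next(void) {
      uint64_t z = (state += 0x9e3779b97f4a7c15ULL);
      z = (z ^ (z >> 30)) * 0xbf58476d1ce4e5b9ULL;
      z = (z ^ (z >> 27)) * 0x94d049bb133111ebULL;
      return z ^ (z >> 31);
  }

  int main(void) {
      for (;;) {                       // pipe into dieharder's raw stdin mode
          uint64_t v = next();
          fwrite(&v, sizeof v, 1, stdout);
      }
  }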


You can literally run the number 0 through a modern hash function and pass dieharder without having any entropy.


> Passing dieharder doesn't mean anything at all with respect to cryptographic security.

Technically, doesn't not passing dieharder mean something with respect to cryptographic security, though?


I would like to learn more about practical cryptographic issues, and I need some help: what are the tests that can prove or disprove stronger guarantees for cryptographic security of a PRG than diehard? The Wikipedia page doesn't give me much info about which ones provide stronger guarantees, or by which criteria:

https://en.wikipedia.org/wiki/Randomness_tests

Also, what level of security for a PRG is sufficient for keys like this?


There aren't such tests, at least not that work like dieharder. You analyze a CSPRNG the same way you'd analyze a cipher construction (they are essentially the same thing, and often we draw our conclusions about the strength of a CSPRNG by noticing that it's built on and thus inherits the formal security commitments of ciphers and hashes run in modes and constructions that themselves have been shown to be trustworthy).


There are no automated tests for this, since cryptographic randomness requires unpredictability. A statistical test can only tell you when a random number generator is broken, but no statistical test can tell you whether a random number generator is cryptographically sound.


And so the only test is to search all known knowledge for any method that is capable of predicting some portion of the random numbers. If none exists, the process is "random".


> what are the tests that can prove or disprove stronger guarantees for cryptographic security of a PRG than diehard?

None. The difference between a good CSPRNG and a broken one might not even be in the construction at all, but in who knows the seed. For example, a keystream generated using Chacha20 or AES-CTR makes for a good CSPRNG... except if the attacker knows the key.


The responses you've gotten so far are pretty bleak, even if they are accurate. It's true that once you start mixing randomness, or get into algorithms, the only thing statistical tests can really tell you is if it's broken.

Those tests can be used on raw sources to learn about the quality of those inputs. In this case, applying those tests directly to the analogRead() on a specific piece of hardware (your entire circuit and manufacturing process will affect this, and it will even vary from board to board) can give you an estimate as to how much entropy you can expect from each call.

Understanding where that entropy is coming from is significantly more important. Gate voltage breakdown, fluctuations from the pins acting as antennas, and the current temperature and humidity are largely where the entropy of analogRead() on a floating pin comes from. Other sources can be radioactive decay of particles, or the timing of events that are outside of the system (such as the time between a device being plugged in and the first time a person touches a key).

These all provide small amounts of entropy (except for radioactive decay, that's a really good one). The next step is mixing entropy. There is a lot of good math showing that with proper mixing, even adding known inputs from an attacker into an entropy pool doesn't decrease the entropy in the pool (it's no less random). If time isn't an issue you can add in a large number of readings from the same source, though sampling faster than the source changes won't get you anything.

That mixing allows you to get up to a minimum threshold of randomness (the seed) where you can use a cryptographically secure pseudorandom number generator (CSPRNG). These also have proofs of a different type, showing that input bits have an equal chance of modifying any bit of the output, which can then be mixed back into the seed, yielding a very large amount of effectively good randomness that can be used for keys and the like.

The trick here is that you're effectively at war with attackers: the more of your entropy sources an attacker can predict or control, the weaker your overall input to the CSPRNG is going to be. If they can get this down to a small possibility space, they can predict the input to the CSPRNG and in turn fully predict its output, which will reveal your keys.

If an attacker has a way to measure timings on the device a large number of times they may be able to infer the internal state of the system and once again get your keys.

So the quality of the final output isn't really the problem, and yet that's largely what people doing these projects analyze with these tests.

One final bit I'd like to cover. These tests can provide you some information about the final quality of the output (mostly whether it's broken or not), but even for that they're usually used incorrectly. If the CSPRNG is implemented correctly but, say, you always seed it with the value "0", it will pass the tests with flying colors.

For devices like these, they should be fully reset, a small amount of randomness sampled, fully reset, sampled again... thousands to millions of times. This will help you determine if the range of possible inputs to the system is inherently flawed, and most projects I've seen (including this one) don't seem to do that.
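To make that last paragraph concrete, here's a toy sketch of the reset-and-sample idea, using the most-common-value min-entropy estimate (the idea behind NIST SP 800-90B); the device read is simulated here by a deliberately flawed source:

  #include <math.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <stdlib.h>

  // Stand-in for "fully reset the device, read its first output byte".
  // This fake source only ever emits 8 distinct values.
  static uint8_t first_byte_after_reset(void) {
      return (uint8_t)(rand() % 8);
  }

  int main(void) {
      enum { N = 100000 };
      long counts[256] = {0};
      for (long i = 0; i < N; i++)
          counts[first_byte_after_reset()]++;
      long max = 0;
      for (int v = 0; v < 256; v++)
          if (counts[v] > max) max = counts[v];
      // min-entropy = -log2(probability of the most common value)
      printf("min-entropy estimate: %.2f bits/byte\n", -log2((double)max / N));
      return 0;
  }

A healthy byte source should estimate near 8 bits/byte; this flawed one comes out near 3, which is exactly the kind of defect the reset-and-sample procedure catches and a dieharder run on the post-CSPRNG output never would.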


  from hashlib import sha256
  for i in range(1, 1000):
      print(sha256(("lol what's entropy %d" % i).encode()).hexdigest())

This passes dieharder. Completely meaningless.


According to the K2x family guide, some devices have a hardware RNG available -- is it right that this project uses a K20 without one?

It seems pretty bad that merely grounding 8 pins on this device would reduce its entropy to basically a handful of noise bits from the ADC?


Yes, it uses the K20. I think you may be confusing the threat model here, though. If you grounded the 6 capacitive touch buttons the device would not work at all, so there would be no need for an RNG. The RNG is used for things like creating keys; in order to get to the point where you are creating keys, you would have to be able to enter a PIN on your device by physically touching the capacitive touch buttons. As you do this, the readings from your skin are input to the RNG. I hope this explanation makes it clear why this attack isn't possible.

https://docs.crp.to/security.html#cryptographically-secure-r...


It's not 'trivial' to extract the keys - all modern uCs have flash readout protection bits. It's probably easier to do than to read the secure element from your iPhone or extract keys from your SIM card or your credit cards, but it's not something you can do without specialized skills and equipment (although there are companies that provide commercial flash readout services).


Flash readout protection on most microcontrollers is a joke. They are almost always vulnerable to attacks ranging from power/clock glitching to asking nicely with the right combination of flash management commands (I'm looking at you, some PICs from the PIC18 series with blockwise erasable Flash including protection bits). I've seen some things disable their read protection by accident because the power supply wasn't hooked up properly and they glitched themselves.

There's a reason we have real secure elements with anti-tamper mechanisms. The problem is that as far as I know there aren't any that you can develop for without signing an NDA.


I've signed a lot of these NDAs. Dirty little secret: most of them are DUAL_EC_DRBG, which is backdoored. None of them have any meaningful protection, and usually they have side channels the size of mountains. There is not one secure element chip I would consider to be stronger than cryptography in software. They're the same as passing certifications: good for corporate management, but a joke to anybody who knows what they're talking about.


Can you talk us through a scenario where you'd exploit Dual_EC to break encrypted flash storage?


It's indicative of weakness, more than a direct break of encrypted storage.

For example, the ATECC508A, a common secure element chip used in a lot of designs. It does ECDSA signing using DUAL_EC_DRBG (inferred from the description; it's not explicitly stated) and produces non-deterministic ECDSA signatures. You can establish this by asking it to sign the same message twice: the nonce selection is random rather than static across the two requests. This is a very strong indicator that the chip is significantly weak, as it's not using the RFC6979 standard, which was specified in 2013.

Commonly, a lot of "secure" software implementations use the output of the STM32's "TRNG" as a source of entropy, such as many Bitcoin hardware wallets. I don't believe that this is a strong design, based on the documentation that has been made public. It is supposedly based on the output of multiple synchronized ring oscillators which are XORed to produce an output into a 32-bit buffer. The documentation goes to huge lengths to try and justify it as a secure source of entropy, but the speed of it (the RNG RDY flag) is much too fast for that to possibly be true.

    uint32_t random32(void) {
      static uint32_t last = 0, new = 0;
      while (new == last) {
        if ((RNG_SR & (RNG_SR_SECS | RNG_SR_CECS | RNG_SR_DRDY)) == RNG_SR_DRDY) {
          new = RNG_DR;
        }
      }
      last = new;
      return new;
    }
This snippet is a common implementation of reading the output of the STM32 RNG; it has a single bit of bias, which is enough to break things like ECDSA signatures if used for the selection of k.

The general comment is that people seem to be far too trusting that these devices actually implement what they say they do, or they use output from hardware RNGs in a way that directly exposes the application if the RNG were to fail or produce predictable output.


I don't really trust any of these microcontroller designs, but the comment I replied to, on a thread about Flash protection, said that the designs weren't trustworthy because they used Dual_EC. I'm wondering if there's some direct connection between Dual_EC and storage protection. It's clear to me how Dual_EC compromises cryptographic protocol handshakes, where its output, which can be decrypted to reveal RNG state, is exposed to attackers.


For my comment, it's just indicative of design issues. Some designs do trust these devices to make RSA and ECDSA keys though, which we've seen in the past can be majorly screwed up by accident.

https://www.ria.ee/en/news/possible-security-vulnerability-d...


I don't know which SE you're talking about, but the ones I've worked with are pretty secure; for one, side channel attacks are extremely difficult.


I believe this is mostly down to their obscurity rather than good implementation. The ECDSA implementations are almost always not constant time, which directly leaks the size of the nonce that has been chosen. That none of them implement RFC6979 deterministic nonces is a very good indication that they have put zero care into their implementation.


Can you comment, perhaps vaguely, on the Infineon SLE 78 series?


I don't have any information about this secure element.


This brings up an interesting conversation. As a user, which should you pick: a device like Yubikey that is closed source and unverifiable, or a device like OnlyKey that is open source and verifiable but without a traditional secure element? It's not a new question, as this is essentially the Trezor vs. Ledger debate. We try to provide information here https://docs.crp.to/security.html that is clear and gives users the ability to make a choice. There are actual exploitable vulnerabilities that have occurred with "secure elements", while only potential and theoretical vulnerabilities have been mentioned in this HN post.


I̶ ̶t̶h̶i̶n̶k̶ ̶i̶t̶'̶s̶ ̶u̶n̶l̶i̶k̶e̶l̶y̶ ̶f̶l̶a̶s̶h̶ ̶r̶e̶a̶d̶o̶u̶t̶ ̶p̶r̶o̶t̶e̶c̶t̶i̶o̶n̶ ̶i̶s̶ ̶e̶v̶e̶n̶ ̶s̶e̶t̶ ̶f̶o̶r̶ ̶t̶h̶i̶s̶ ̶p̶r̶o̶d̶u̶c̶t̶,̶ ̶a̶s̶ ̶i̶t̶ ̶a̶p̶p̶e̶a̶r̶s̶ ̶t̶o̶ ̶b̶e̶ ̶p̶r̶o̶g̶r̶a̶m̶m̶e̶d̶ ̶u̶s̶i̶n̶g̶ ̶t̶h̶e̶ ̶d̶e̶f̶a̶u̶l̶t̶ ̶A̶r̶d̶u̶i̶n̶o̶ ̶I̶D̶E̶.̶ And even if not, most are trivially attackable with hardware access, for example the ESP32 secure boot stack: https://limitedresults.com/2019/09/pwn-the-esp32-secure-boot...

EDIT: Spoke too soon, claims Kinetis Flash Security is enabled (https://docs.crp.to/security.html#flashsecurity). This looks like it also disables JTAG access, so that is a plus ("8.3.2 Security Interactions with Debug", https://www.pjrc.com/teensy/K20P64M72SF1RM.pdf).

Other than that, this C code has a lot of smell - for example, the repeated use of the ptr variable looks like something someone unfamiliar with the C type system would write: https://github.com/trustcrypto/OnlyKey-Firmware/blob/c71d207...


Trivial for a motivated attacker, true. But also do note that you don't need to read out the entire flash; it's enough to be able to extract the hash of the PIN (or cut power before the EEPROM is updated after an attempt), and that is fairly easy given there are next to no side-channel protections.


Given that they're using the Arduino APIs, what are the chances they know what flash readout protection is?


You probably will be getting at least one bit of randomness out of it if the input is not saturated (as in, not below 0 or above the reference voltage the ADC is using), just because pretty much all ADCs are noisy enough that the last bit will be flipping. Of course that doesn't excuse every other problem with it.

> Meaning that there is no hardware security whatsoever and it's trivial to extract all your keys from the device if you ever lose it. Whoops.

Most micros, including the ones used in various Arduinos, have fuse bits, so there is at least a minimal level of protection. The question is whether they even used it...


Doing a quick look through the library repository, I spotted another, scarier function. It doesn't appear (at a quick glance) to be used anywhere, but still...

https://github.com/trustcrypto/libraries/blob/master/randomb...

For anyone wanting to try this out (it will compile with plain GCC if you add):

    #include <stdlib.h>
    #include <stdio.h>
to the start of the file, and declare a main function:

    int main(void) {
      unsigned char* buffer;
      buffer = malloc(32 * sizeof(char));
      randombytes(buffer, 32);
      for (int i=0; i < 32; i++) {
        printf("%02x", buffer[i]);
      }
      printf("\n");
      free(buffer);
      return 0;
    }
And you'll (of course) get some rather deterministic output. As I say though, doesn't look to be used (that I could see), but strange to have something like this there.

RE the RNG implementation, looks to be at https://github.com/trustcrypto/libraries/blob/master/Crypto/..., and looks to have some support for hardware RNGs on certain boards, but not others. Does sound like there's no hardware protection involved.


If you search the repo you can see the randombytes library is just there for reference and isn't used; that library was created by the same guy who made NaCl https://en.wikipedia.org/wiki/NaCl_(software)


The linked paper breaks analogRead-based encryption in section 5 at the end.

But I wonder if you could get better randomness by, instead of naively pulling one low-entropy 10-bit value from analogRead, pulling 128 bits from successive analogReads and keeping only the least significant bit of each.
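Something along those lines could work. A minimal Arduino-style sketch of the idea (the pin name is a placeholder; whether the LSB actually carries entropy, and at what sampling rate, still depends on the specific circuit):

  // Harvest the noisy LSB of successive ADC reads, von Neumann debiased:
  // for each pair of bits, 01 -> 0, 10 -> 1, 00/11 discarded.
  static int read_noise_bit(void) {
    for (;;) {
      int a = analogRead(ANALOGPIN1) & 1;
      int b = analogRead(ANALOGPIN1) & 1;
      if (a != b) return a;
    }
  }

  // Collect 128 debiased bits into 16 bytes.
  static void collect128(uint8_t out[16]) {
    for (int i = 0; i < 128; i++)
      out[i / 8] = (uint8_t)((out[i / 8] << 1) | read_noise_bit());
  }

The debiasing assumes successive reads are independent, which back-to-back analogRead calls may well not be; sampling slower than the noise actually changes would be needed for that to hold.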


A very simple hardware entropy source is a noisy diode. I wonder why so few people are using it.


Did I misunderstand the article or is it just wrong? Although a certain entropy source might not be completely random, it can still be part of a complete solution, right?

Network packet timings aren’t random either and might be attacker controlled as well.


Uh that is treating the value from the A/D conversion as an address, and reading that location for data to stir in.

Unless HN markup ate the &s that are missing.

That is very bad code indeed.


WireGuard is a kernel module; it already uses what the kernel gives it, which is get_random_u32(). Note that this is NOT used for anything crypto related; it's just used in the hashtable code.


Amazing. Apologies to the Wireguard folks: first for casting aspersions, second that they have to deal with such a trash heap of an API.


Quite frankly, this site's examples are pretty horrible.

Case in point, Idiom #46 in C[0] uses strncpy and predictably fails to correctly terminate the buffer (see the sketch after the links).

Idiom #55[1] converts an integer to a string with itoa, never bothering to mention that it isn't part of the C standard or POSIX, while for some reason using a 4096-byte output buffer.

Idiom #39[2]'s first example uses clrscr(), which I assume is some sort of old DOS function? The entire example looks unrelated to the problem.

The idea is certainly good, but you need some serious vetting system to make this usable.

[0] https://www.programming-idioms.org/idiom/46/impl/477

[1] https://www.programming-idioms.org/idiom/55/impl/438

[2] https://www.programming-idioms.org/idiom/39/impl/2042
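To spell out the strncpy pitfall from #46 in isolation (a minimal sketch, not the site's code):

  #include <stdio.h>
  #include <string.h>

  int main(void) {
      char buf[5];
      strncpy(buf, "hello", sizeof(buf)); // 5 chars copied, no NUL written
      buf[sizeof(buf) - 1] = '\0';        // termination the caller must add
      printf("%s\n", buf);                // prints "hell"
      return 0;
  }

Whether any given strncpy call is safe depends entirely on the allocation around it, which is exactly what makes it a trap.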


%d for strlen is cute as well (strlen returns a size_t, which wants %zu).

That’s the fate of all such sites. They start with a nice idea and good intent, but the crowd fills them with crap, nobody cares or is able to screen it, and they instantly become an unreliable source of nonsense.

Ed: btw, #1 is terminated by calloc(6), but it is indeed a trip mine looking for prey. It is unknown if strncpy was used intentionally or out of ignorance and if the next person will be aware.


> #1 is terminated by calloc(6)

Yet this seems to be the perfect place and time to remind the kind audience that sizeof measures the number of char-sized chunks, thus sizeof(char) is always the constant 1, no need to spell it out.


> Case in point, Idiom #46 in C[0] uses strncpy and predictably fails to correctly terminate the buffer.

calloc returns zeroed memory.


Plus #46 casts calloc, and uses the unnecessary sizeof (char).


5/5, but I don't think this test is very good at capturing the more obscure features of C; they all just deal with the fact that platforms have different datatypes/alignment requirements, except for the last one. I think a better example would be the following:

  int a=1, b=2, i, j;
  i = a += 2, a + b;
  j = (a += 2, a + b);
What's the value of a, b, i, and j? Hint: i and j are different.

Which raises the question: why does C have these features in the first place? The only C code where the comma operator is reasonably common seems to be crypto algorithms.


5 and 7? If I remember the behavior of the comma operator correctly.

Anyway, I am inclined to agree that these are misfeatures. I’d almost certainly ask for it to be changed in code review.
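For the record, working it through (assignment binds tighter than the comma operator, so the second statement parses as (i = (a += 2)), (a + b)):

  int a=1, b=2, i, j;
  i = a += 2, a + b;    // a = 3, i = 3; the "a + b" is evaluated and discarded
  j = (a += 2, a + b);  // comma expression: a = 5, then j = 5 + 2 = 7
  // final state: a = 5, b = 2, i = 3, j = 7

So j is indeed 7, but i ends up 3: the comma expression's value never reaches it.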


This can't be right.

Granted, I've not been running OpenVPN lately, but with WireGuard I'm saturating the internal USB-ethernet adapter of the pi long before I'm running out of CPU power.


You're correct. I confused MBit with MByte - the correct number is 20 MBit/s.


And yet, scdaemon will still hang every time you suspend on Linux[0]. I've written a udev rule to somewhat mitigate this[1], but it's still really annoying that seemingly nobody cares enough to fix this issue.

[0] https://wiki.gnupg.org/SmartCard#Known_problem_of_Yubikey

[1] https://github.com/Tharre/pkgbuilds/blob/master/arch-system/...


How could that possibly be improved? Append-only means no deleting or freeing up space by definition.


Not necessarily, because the backup server could squash backups itself, even when the client is not allowed to do so.


On the contrary, it has triggered development of wg-dynamic[0], which should eventually fix those issues.

[0] https://git.zx2c4.com/wg-dynamic/


wg-dynamic was proposed well before that email. Actually, that email came after a discussion the two of us had shortly before it was sent.


You should actually read the comment that was linked in that issue as well:

> [...] if using hybrid graphics where the primary is Intel and the secondary is nvidia, sway would continue to work (just without nvidia monitors) and the user would never see the log message. Then they're likely to create issues related to outputs not working, causing us to invest time in troubleshooting it, only to find that they are using nvidia and never saw the warning."

So yes, you can run sway with the nvidia module loaded; just please unload it and test again before you report bugs.


  > So yes, you can run sway with the nvidia module loaded, just please unload it and test again before you report bugs.
If that were the language used in the issue tracker, then fine. It's sensible to request that users reproduce the bug with the proprietary driver blacklisted before reporting. But the attitude they're projecting is hostile and unconstructive. The bug tracker says:

  > If you are using the nvidia proprietary driver for any reason, you have two choices:
  >
  > 1. Uninstall it and use nouveau instead
  > 2. Use X11+i3 and close your browser tab
  >
  > If `lsmod | grep nvidia | wc -l` shows anything other than zero, your bug report is not welcome here.
Furthermore, sway's behavior is to check whether the proprietary driver is loaded and, if so, exit unless sway is started with the flag --my-next-gpu-wont-be-nvidia. It's hostile, childish, and very off-putting to a potential user and contributor.


Technically, I think that they can do that. That’s what the fork button is for. However, if they are willing to alienate graphics enthusiasts, they won't be getting any respect.

I don’t want to settle for the Intel GPU, because I have a clearly superior GPU available (Nvidia). I don’t want to settle for X, because I have a clearly superior API available (Wayland). I have a 4K OLED display, because if it is there, why settle for something worse? I realize that enthusiasts are rare, but it’s us who help iron out the bugs for the newest hardware, and it is sad to see such an alienating attitude from fellow cutting-edge developers.

