Go 1.20 Cryptography (filippo.io)
204 points by mfrw on Jan 4, 2023 | 66 comments



> What it does not support is other curves, which I am ok with.

Ed25519 is relatively nice, but I do wish it also supported Ed448, or at least future-proofed the API so that Ed448 support would be possible as a drop-in replacement. Ed25519 offers "128-bit" security, while Ed448 offers "224-bit" security. Both have ECDH counterparts (X25519 and X448).

128-bit "ought to be enough for anybody", but for the many applications that asymmetric crypto isn't the bottleneck, it's nice to have the option to pick a stronger curve. Why place this arbitrary limit on the standard library API for the (probably) next decade? It feels counterproductive.


The API can easily accommodate adding a curve to crypto/ecdh; it just can't be done outside the standard library. The reason not to implement X448 is simply that ~no one uses it, and it would be a lot of maintainer time to make it safe and make it fast.

FWIW, no one uses X448 because if X25519 falls it's unlikely to be due to something that leaves us with any confidence in the similar-but-bigger Ed448 curve. It will have to be a cryptanalysis breakthrough or progress in quantum computers, both of which are more likely to break both than only one.

The original Ed448 paper is a good read for understanding the "paranoia" motivation behind the curve. https://eprint.iacr.org/2015/625.pdf


I'm not exactly a crypto expert, just someone who pays attention to this kind of stuff, so I might be incorrect or have some misconceptions, but even that paper's justification agrees that there are possible outcomes where a stronger curve could prevail in the face of a weakness of Ed25519.

There are certainly scenarios where "all crypto becomes invalid" or "ECC becomes invalid" or "curves similar to Ed25519 become invalid", but with Defense In Depth... if all other things are equal, a stronger curve is better, and that is effectively the case here. The only downside of Ed448 that I'm aware of is a slight performance penalty, which is irrelevant in most applications, so there's no reason to actively choose a weaker curve. The "overkill" option is the sensible option in most cases.

AFAIK, no one is actually encrypting large sums of data using an asymmetric algorithm like we're discussing. It's typically just used to encrypt a symmetric key, which is then lightning fast to use on the bulk of the data. The performance of the asymmetric algorithm is only important in very specific scenarios.
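To make that concrete, here's a rough sketch (mine, not from the post) of that hybrid pattern using Go 1.20's crypto/ecdh plus AES-GCM; a real design would derive the key with a proper KDF such as HKDF rather than a bare hash:

    // Hybrid sketch: one fixed-cost X25519 exchange to agree on a key, then a
    // symmetric AEAD (AES-GCM) for the bulk data. Error handling is elided.
    package main

    import (
        "crypto/aes"
        "crypto/cipher"
        "crypto/ecdh"
        "crypto/rand"
        "crypto/sha256"
        "fmt"
    )

    func main() {
        curve := ecdh.X25519()
        alice, _ := curve.GenerateKey(rand.Reader) // sender's ephemeral key
        bob, _ := curve.GenerateKey(rand.Reader)   // recipient's key pair

        shared, _ := alice.ECDH(bob.PublicKey()) // the only asymmetric operation
        key := sha256.Sum256(shared)             // stand-in for a real KDF (HKDF)

        block, _ := aes.NewCipher(key[:])
        aead, _ := cipher.NewGCM(block)
        nonce := make([]byte, aead.NonceSize())
        rand.Read(nonce)

        // The symmetric cipher does the heavy lifting on the actual payload.
        ct := aead.Seal(nil, nonce, []byte("the bulk of the data goes here"), nil)
        fmt.Printf("%x\n", ct)
    }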

I also believe that "no one" uses Ed448 in much the same way that "no one" uses JPEG XL; a lack of ecosystem support drastically hinders adoption, and it has nothing to do with people believing that Ed448 has no advantage over Ed25519. But, that's just like, my opinion. JPEG XL would be drastically better than the other image format options, but ecosystem support is a chicken-and-egg problem. Measuring the existing adoption of a poorly supported option doesn't give much insight. If people appreciate Ed25519 (and they really seem to), then offering the stronger version of Ed25519 seems like an obvious next step, even though it is a lot of work (which is why my original comment mentioned future-proofing the API; you seem to indicate that it already is future-proofed, so that's good). If the option existed and were exactly as easy to use, then why wouldn't people pick it for projects where the curve is open for selection?


> If the option existed and were exactly as easy to use, then why wouldn't people pick it for projects where the curve is open for selection?

It’s a good question with a nuanced answer. First, performance. The amount of data encrypted has no effect on key exchanges like X25519 and X448. They happen once per connection/exchange/encryption/decryption and have a fixed cost. In synchronous settings like TLS that time shows up directly as first byte latency. Second, a hard to quantify ecosystem risk: what are the chances you get broken by one of the few attacks that get X25519 but not X448, versus the chances you get broken by an implementation bug in the lesser used X448 implementation?

Cryptography engineering doesn’t happen in a vacuum. When selecting primitives you have to weigh the risks you are mitigating against the cost in engineering and hardware resources, and compare them against other risks you could be mitigating with those resources. It’s very unlikely the risk gap between X25519 and X448 is where anyone should invest next.


To add to what Filippo said: Your Ed448 implementation will also need SHAKE256. SHAKE256 is a SHA3 variant.

There was a recent flurry of buffer overflows in SHA3 implementations. I'm aware of at least PHP being affected.

Wanting Ed448 for political reasons, or purely for psychological comfort reasons, is a perfectly understandable stance to take as a non-expert.

Unfortunately, the details that experts are privy to matter a ton, and severely outweigh any notions of having eggs in multiple baskets.

When cryptography nerds say "don't bother with Ed448", we're open to having our risk calculus checked, but we're nearly unanimous on this one.

The only real reason to prefer 448 over 25519 is "our legal/compliance folks say we need 192-bit or greater security for our asymmetric keys, or we void our [contract, license, certification, etc.]".

If anyone does fall into that trap, please speak up.


> There was a recent flurry of buffer overflows in SHA3 implementations. I'm aware of at least PHP being affected.

Both your comment here and some stuff FiloSottile implied in the comment above seem like they would be (largely) mitigated by what the "Go 1.20 Cryptography" post mentions about using formally verified primitives that are generated by "fiat-crypto".

Beyond the curve primitive, wouldn't the majority of the code involved be shared/identical? These are closely related curves, not some oddball algorithm that requires a bespoke implementation. If things were more bespoke, I completely agree that would drastically alter the calculus, but if Ed448 is closely related, that seems like it would limit the surface area for mistakes quite a bit?


> Both your comment here and some stuff FiloSottile implied in the comment above seem like they would be (largely) mitigated by what the "Go 1.20 Cryptography" post mentions about using formally verified primitives that are generated by "fiat-crypto".

> Beyond the curve primitive, wouldn't the majority of the code involved be shared/identical? These are closely related curves, not some oddball algorithm that requires a bespoke implementation.

Well, fiat-crypto only provides the curve implementations.

Each language, library, etc. that wants to support ed448 will need a SHAKE256 implementation too. Given the memory bugs in SHA3 implementations, that has historically not been a safe addition, in practice.
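For reference, here's roughly what that extra dependency looks like in Go today, via golang.org/x/crypto/sha3 outside the standard library (just the XOF usage pattern, not an Ed448 implementation):

    // SHAKE256 is an extendable-output function (XOF): you Write the input,
    // then Read as many output bytes as you need. Ed448 internally reads
    // 114-byte (2*57) outputs.
    package main

    import (
        "fmt"

        "golang.org/x/crypto/sha3"
    )

    func main() {
        shake := sha3.NewShake256()
        shake.Write([]byte("message to be hashed for Ed448"))

        out := make([]byte, 114)
        shake.Read(out)
        fmt.Printf("%x\n", out)
    }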

Also, I don't see Ed448 on here (but I do see P448? Not sure if that's interoperable): https://github.com/mit-plv/fiat-crypto/tree/6e6809be8290a7d7...


I think Filippo has the right side of this debate, but "SHA3 means likely memory corruption flaws" is not a great argument.


Yeah that's not my argument at all.

My argument is specifically: "SHA3 implementations are necessary for Ed448, which increases code size and therefore also increases attack surface". The memory corruption bug was just a specific recent data point about the attack surface increase.

The more meta point is "just add one more algorithm" isn't a free decision. It involves non-obvious trade-offs.

I hope that makes it clearer.


Oh, I follow the argument, I just don't think it's persuasive. You wouldn't look at a system that deliberately built on SHA3 and say it was worse than a Blake2 scheme because of SHA3 code quality concerns, so I don't think it's a particularly compelling reason to pick a curve.


It's difficult to determine whether you follow an argument when you summarize it in a way that sounds incorrect.

If that's an attempt at highlighting that what you summarized is not persuasive, your tactic also isn't very persuasive. Instead it comes across as snarky, dismissive, or that you misunderstood. It might be worth reconsidering this tactic.

The main reason I wouldn't pick Ed448 is simpler: Nothing else really uses it.


For what it's worth, I'm responding to this:

    There was a recent flurry of buffer overflows in SHA3
    implementations. I'm aware of at least PHP being affected.
     
    Wanting Ed448 for political reasons, or purely for psychological
    comfort reasons, is a perfectly understandable stance to take as a
    non-expert.
     
    Unfortunately, the details that experts are privy to matter a ton, and
    severely outweigh any notions of having eggs in multiple
    baskets. We're open to having our risk calculus checked, but we're
    nearly unanimous on this one.
Who's "we" here? I think Filippo has a mainstream take on the 448 curves, but I don't know that your take on SHA3 is widely shared.


"We" here is people who don't advise using Ed448. It's a tautology.

You don't have to agree with the specific reason I cited. That isn't encapsulated by "we". I was providing an additional argument in case the mainstream take isn't sufficient.

You cut the "To add to what Filippo said" part out of the excerpt you quoted, which was the necessary context to understand I was making an additional, supplementary argument.


I don't agree with the specific reason you cited, which is what I wrote about.


You're advocating for cryptographic agility, not defense in depth -- with multiple curves comes multiple vectors, and increased attack surface.


Perhaps that is a clearer choice of words, but I find it debatable to say this categorically isn't defense-in-depth. Choosing a stronger curve is similar to an additional level of defense, since it mitigates additional theoretical weaknesses that the lesser curve would not. The point of "defense-in-depth" is to add additional layers that mitigate different weaknesses.

Maybe making a stronger layer isn't the same as adding an additional layer, but it has the same outcome, so it feels like a distinction without a difference.


> The point of "defense-in-depth" is to add additional layers that mitigate different weaknesses. Maybe making a stronger layer isn't the same as adding an additional layer, but it has the same outcome, so it feels like a distinction without difference.

Not to speak for Filippo, but adding extra layers comes with a cost. Presumably they've weighed up the cost of adding this functionality (in terms of maintainer time etc.) and decided the cost of implementing it outweighed the benefit.


I agree. I'm mostly happy to hear that the new API is able to support Ed448 in the future; that it isn't locked out by a compatibility choice being made today.


> Breaking Curve25519 would not necessarily break Curve448, like a thicker wall which can withstand certain attacks that a thinner wall cannot.

This sounds great to a non-cryptographer, but it's really not all that true. We've crossed a threshold where—to continue the metaphor—the wall is so thick that the only way it's likely to be breached is either by going over/around it or by something that exploits a material weakness in a way where it doesn't matter in practice how thick the wall is.

While it's of course possible that someone finds a way to break Curve25519 in a way that leaves Curve448 standing, I suspect that many cryptographers believe it's far more likely that an API handling multiple curves will create opportunities for vulnerabilities that otherwise wouldn't exist. Particularly when one of those code paths is used extremely infrequently.


It is entirely plausible that an attack is found which halves the number of bits of all EC algorithms (this is about the amount of progress on integer factorization during the 20th century). With such a breach, 128-bit security is no longer enough, but 224 bits is. When the performance difference is marginal, I think it makes a lot of sense to build in some wiggle room for a little breakage.
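Rough arithmetic behind that wiggle room, assuming a hypothetical attack that halves effective security bits:

    \[
      128/2 = 64 \text{ bits (Curve25519: within practical reach)}, \qquad
      224/2 = 112 \text{ bits (Curve448: still out of reach)}
    \]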


That’s true of quantum attacks on symmetric systems, but I’m not aware of any work that hints toward such a possible future for elliptic curves.

If you know of something I’d love to hear it.


I don't know of any possible future for such attacks. My point is just that it's really hard to estimate the probability of unknown mathematical advancement, so especially for secrets that you want to keep for a while, it's sensible to build in some buffer. I'm not an expert, but I would have a hard time being 99% sure that no one will come up with a sub-exponential algorithm (e.g. some sieving technique) that speeds up solving discrete logs (but doesn't make it trivial). If the cost of the improvement were high it would be a hard tradeoff, but I really don't see a strong argument that the extra couple nanoseconds are that important here.


I removed that from that comment several minutes before your reply to just focus in on the terminology issue. I had already made the point about Ed448 being stronger than Ed25519 in the previous comment and the implications that has, so it seemed redundant to repeat here.

Regardless, whether it is "far less likely" or not, it appears to be a very real possibility that is being ignored in favor of what, exactly? An extremely marginal performance gain?

> I suspect that many cryptographers believe it's far less likely than an API handling multiple curves creates opportunities for vulnerabilities that otherwise wouldn't exist. Particularly when one of those code paths is used extremely infrequently.

Honestly, that's just as much an argument in favor of entirely deprecating support for Ed25519, in my view, but that would be an unpopular opinion. In reality, the ECDSA curves support multiple security levels. Has that additional curve support ever been the direct cause of a vulnerability?


It's not defense in depth because if the curve you're using fails, you're compromised, and the fact that the other one exists in the library changes nothing.

This is not an extra layer but a side entrance, especially if the protocol in use allows one side to pick which primitives to use, so an attacker can make it use the vulnerable one.


>> if X25519 falls it's unlikely to be due to something that leaves us with any confidence in the similar-but-bigger Ed448 curve. It will have to be a cryptanalysis breakthrough or progress in quantum computers, both of which are more likely to break both than only one.

This argument is often cited in this context, and it is a bit generic. A bigger curve requires more qubits and a more expensive attack to break, which could buy a number of additional years, and that could be important.

Still I won’t use 448 (unless I need a prime-order group), because the support for it is limited, and you have to make sure it has been safely implemented.


Also, secp256k1 is a big missing curve. It makes this package useless to me :(


Isn't it used only in Bitcoin? That's definitely the kind of thing that should stay out of the core stdlib.


There is a reason why it's often used in crypto. From Wikipedia "now gaining in popularity due to its several nice properties. Most commonly-used curves have a random structure, but secp256k1 was constructed in a special non-random way which allows for especially efficient computation. As a result, it is often more than 30% faster than other curves if the implementation is sufficiently optimized. Also, unlike the popular NIST curves, secp256k1's constants were selected in a predictable way, which significantly reduces the possibility that the curve's creator inserted any sort of backdoor into the curve."


And Ethereum. These are both non-trivial use cases. There is various work being done to make it a standardized curve for non-blockchain use as well.


Most of these cryptography changes are a bit over my head, but I'm grateful that Go has such diligent contributors. Every new version improves the language in significant ways, and 1.20 is shaping up to be a big milestone.

Filippo is an inspiration as an open source developer. <3


I love the style and detail of this post. There is so much high quality engineering going into the Go ecosystem.


Go is great. But if you look closely enough, Go also has some dark corners.

The reason the biggest parts of go/crypto are well-designed, readable code is mostly the work of one person (the author).


Any particular dark corners you know of?


One example: the started (and then stalled) net.IP to net/netip migration?

The Tailscale netip package is great (as an external pkg), but without committing to a full migration plan, sprinkling some (half-done/slow/weird/bridge) interfaces here and there leaves anyone who tries to migrate stuck as well. Some have already started to migrate back ...


[flagged]


"Great" is far fetch especially "you shouldn't use Go for production services", which does not make any sense, also the same author works for a compagny that uses Go in places, which is kind of funny.


It also helps that Filippo is very diligent and observant. He is one of the most frequent people to send in corrections or extra background information to my Go newsletter :-)


<3 to both :)

Love the newsletter. https://golangweekly.com


Thanks Filippo!

I'm excited for Ed25519ph support. Ed25519ph is designed to work with digests, just like the ECDSA algorithms (ES256, ES384, etc.).

The [latest FIPS draft](https://csrc.nist.gov/publications/detail/fips/186/5/draft) is requiring Ed25519ph, so I'm happy to see Go support it out of the box. [For reference, see section 7.8](https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.186-5-draft...).
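Concretely, this is roughly what the new Go 1.20 flow looks like, if I'm reading the release notes right (an ed25519.Options with Hash: crypto.SHA512 selects Ed25519ph):

    // Ed25519ph sketch: the message is pre-hashed with SHA-512 and the digest
    // is what gets signed and verified. Error handling is elided.
    package main

    import (
        "crypto"
        "crypto/ed25519"
        "crypto/rand"
        "crypto/sha512"
        "fmt"
    )

    func main() {
        pub, priv, _ := ed25519.GenerateKey(rand.Reader)

        digest := sha512.Sum512([]byte("large message, hashed up front"))

        // Options with Hash: crypto.SHA512 selects the pre-hashed variant.
        sig, _ := priv.Sign(rand.Reader, digest[:], &ed25519.Options{Hash: crypto.SHA512})

        err := ed25519.VerifyWithOptions(pub, digest[:], sig, &ed25519.Options{Hash: crypto.SHA512})
        fmt.Println("valid:", err == nil)
    }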

I've also been looking for Ed25519ph support in other languages. [Paul Miller](https://github.com/paulmillr), the author of the noble libraries for JavaScript, has just added support in his newly released [curves](https://github.com/paulmillr/noble-ed25519/issues/63) library. Paul has suggested on Twitter holding off on using "curves" until an audit, but most of his other work has already been audited and all of it is highly polished.

Also, for all readers, we wrote an [online Ed25519 tool](https://cyphr.me/ed25519_applet/ed.html), which is useful for testing and verifying. Previously the top result on Google, which has now been taken down, was sending the keys off to a server, which motivated us to write a tool that didn't openly phone home.


> TLS handshakes now return a CertificateVerificationError if they fail because of, well, certificate verification.

... as long as the crypto/x509 CheckSignatureFrom ignores the pathlen constraint (the /ONLY/ way for a CA owner to pin down a delegated SubCA's usage/raw-key abuse!), I'm not sure that CertificateVerificationError does what a high-level API user expects!?


CheckSignatureFrom is a low-level API that can't check anything about the path (including more important constraints such as nested EKUs and Name Constraints) because it can only see the immediate parent.

The high-level certificate verification API is Verify, which does check all of the above, see https://pkg.go.dev/crypto/x509#TooManyIntermediates.
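Roughly, the high-level path looks like this (a sketch, not from the post; leaf, intermediate, and root are assumed to be already-parsed *x509.Certificate values):

    // Verify builds and checks full chains, which is where path-level
    // constraints (path length, EKUs, name constraints) are enforced, unlike
    // CheckSignatureFrom, which only sees the immediate parent.
    package certcheck

    import (
        "crypto/x509"
        "log"
    )

    func verifyChain(leaf, intermediate, root *x509.Certificate) [][]*x509.Certificate {
        roots := x509.NewCertPool()
        roots.AddCert(root)

        intermediates := x509.NewCertPool()
        intermediates.AddCert(intermediate)

        chains, err := leaf.Verify(x509.VerifyOptions{
            Roots:         roots,
            Intermediates: intermediates,
            KeyUsages:     []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        })
        if err != nil {
            log.Fatal(err) // path-level violations (e.g. TooManyIntermediates) surface here
        }
        return chains
    }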

We should probably add a line to the docs, to avoid users getting confused like this, but I haven't seen it misused in the wild.

(I also disagree that maxPathLen does anything about raw key abuse, since once you have the key of an intermediate you can issue leaves arbitrarily, without needing to issue another intermediate, but that's besides the point.)



Thank you! That is a quick (helpful) fix!

Aside from checks of the integrity of self-signed certificates, I do not use this function directly.

My assumption about the TLS impact was based on the fact that [Certificate.Verify] internally uses [Certificate.buildChains], which calls [Certificate.CheckSignatureFrom], which has only the direct parent's IsCA status, but not the picture of the full path (and ignores it at the end, when it is missing on the root anchor)!?

And yes, in general, the max pathLen helps only to restrict the impact of a specific (delegated) SubCA compromise, not a full root CA key leak. But I expect to see a well-prepared narrative about a (revocable) SubCA as the most likely response, to protect the root anchor itself. (What a happy coincidence that most root anchors do not declare the planned SubCA strategy via maxPathLen upfront.)


I'm pretty sure these changes are great for e.g. standard TLS connections.

But I don't quite understand what this means for other/custom curves.

For example, I often use Brainpool curves. Currently I just set the CurveParams and I'm done.

They are part of a lot of official standards (especially in Europe/Germany but also for e.g. travel documents in the ICAO standard) so I can't get around them.

Do I have to implement everything for those curves myself? That would probably be more insecure than just using crypto/elliptic.


> Finally, reviewing the BoringCrypto integration by Russ.

Do you think there is a rollback/cleanup possible of current boring/fips <enter most diplomatic/nice phrase here> situation?

(e.g. push it back into a separate stash, guarded by a fixed build-time compiler directive?)


Heh. Look, no one likes FIPS 140. We don’t like it, those that need it don’t like it, sometimes I wonder if NIST likes it. But it is what it is, and the current situation is marginally better than having to fix the merge every time we touch anything.

All Go+BoringCrypto code is behind the compile time GOEXPERIMENT, and mostly in its own files or in its own blocks. It could be worse.


Will Go ever be able to encrypt/decrypt with ECDH, similar to the RSA package?


Where is secp256k1, the most important curve in the history of cryptography and the one most used by real-world money applications in the world today?


I honestly can't tell if this is sarcasm or not.

In case it isn't, I'm not a cryptography engineer but those who are generally recommend against using secp256k1 in greenfield deployments, and it doesn't seem (to me at least) to be commonly used in existing software either (with one glaring exception of course).


Two glaring exceptions. Bitcoin and Ethereum both use secp256k1. As well as a host of smaller copycat cryptocurrencies of course.

I don’t believe your statement about secp256k1 being depreciated is correct. There are tradeoffs between the two curve families, approximately equal software and hardware support, and legitimate reasons for preferring one over the other. They are approximately equally performant in practice, and both have high-quality, high-security implementations.


Two very closely-related exceptions, and a host of very closely-related copycats. Essentially, one problem space which makes up a small (I would say) percentage of the use of cryptography. In terms of daily impact on the lives of random folks, anyway.

I never said secp256k1 was depreciated (nor deprecated), I said that I have seen recommendations from people far more knowledgeable than I who say, effectively, don't use it unless forced to interoperate with Bitcoin. For example: https://soatok.blog/2022/05/19/guidance-for-choosing-an-elli...

The above link, in case you don't want to read through, basically says "use ed25519. If you can't do that, use secp256r1." This tracks with what I have read from others, and with the direction Filo is taking, so I consider it pretty good advice.


Yea I'm not going to dox myself on this site, but having actually submitted papers and presented at various cryptography conferences and worked as an applied cryptographer in this field for 10 years, I think I have more cryptographic domain experience than some random furry-avatar security blog writer. Nobody recommends secp256*r1*. Nobody. The Koblitz curve is strictly better.

And libsecp256k1 as an implementation has received as much or more review by professional cryptographers (including many members of the well-respected Stanford cryptography group) than any other curve implementation, including many of the curve25519 libraries. It is battle-tested and very secure. There is, essentially, a giant billion-dollar bounty on the security of this curve and the correctness of its implementation. And in the history of cryptocurrency, there has never, not once, been a theft of money due to a cryptographic weakness or implementation error in this library. Unlike curve25519, whose non-unity cofactor for example was a source of inflation bugs in CryptoNote-based projects (e.g. Monero), something which secp256k1 mathematically prevents from being possible in the first place.

If you want to start a new project and not have to worry about potential cryptographic vulnerabilities with your choice of curve or algorithm, there are very few options to pick from. Both curve25519 and secp256k1 are pretty much the only two options, and they are pretty equal in terms of trustworthiness. If you are doing anything novel with publicly-visible digital signatures, I would absolutely recommend secp256k1 over curve25519 due to the cofactor situation.


Eh, secp256r1 is fine. These days we even have complete formulas for it. secp256k1 is also fine, but in practice only ever used in somewhat legacy blockchain applications. (Recent ones tend to use pairing-friendly curves and more esoteric constructions on top.) Curve25519 is definitely a bad choice for anything but EdDSA and ECDH (and even there it's annoying) due to the cofactor, but ristretto255 reuses most of Curve25519's code and implements a prime-order group with a similar performance profile, so that's what I would pick if I needed a prime-order group in 2023.

Still, point is that there are pretty much two applications using secp256k1, who already have pretty good implementations for themselves, and the broader Go ecosystem doesn't get much from having us spend resources on maintaining secp256k1 alongside secp256r1 (which is definitely staying because of TLS and FIPS, amongst other reasons).

> I think I have more cryptographic domain experience than some random furry-avatar security blog writer.

oof


> Yea I'm not going to dox myself on this site, but having actually submitted papers and presented at various cryptography conferences and worked as an applied cryptographer in this field for 10 years, I think I have more cryptographic domain experience than some random furry-avatar security blog writer.

I believe the random furry-avatar blog writer works in the field as well, and writes under a pseudonym to "avoid dox[ing himself]". But I dunno, I'm not a regular reader.

> Nobody recommends secp256r1. Nobody. The Koblitz curve is strictly better.

Well again, you have to see that from my perspective, your credentials are the same as the other guy. To be clear, I believe you; I just also believe the Soatok guy.

> Unlike curve25519, whose non-unity cofactor for example was a source of inflation bugs in CryptoNote-based projects (e.g. Monero).

Ah, I hadn't heard of that issue/those issues. I'll search for them tomorrow.

To be clear, I don't recall anyone ever saying secp256k1 is insecure, hence why I didn't want you to think I was calling it deprecated. I appreciate your insight.


> Ah, I hadn't heard of that issue/those issues. I'll search for them tomorrow.

It's pretty easy to summarize: basically for every curve25519 secret, there are 8 (cofactor=8) corresponding public keys, whereas for secp256k1 there is a 1:1 mapping between secret values and public keys. Normally this isn't an issue because when doing things like ECDH key agreement, or digital signatures, honestly who tf cares? Hence why djb opted for a cofactor curve for a small performance gain.

However as a broad generalization, zero-knowledge proofs basically operate by doing manipulation of public keys instead of the underlying secrets/committed values. So for every committed value there are 8 different possible commitments on the curve. Any time you have a zero-knowledge proof using curve25519, anyone can come by and (without knowing the secret value), manipulate it to create an entirely different, but valid proof.

In the case of Monero, spending authority derived from knowing a secret corresponding to an output, but you only ever revealed a "key image" which (simplifying greatly) was essentially a public key / elliptic curve point derived from that secret in a particular way. Double spends were prevented by keeping track of which key images had been seen already. But of course there are 8 possible key images, so every output was spendable up to 8 times. It was a number of years before this flaw was caught!
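In symbols, a simplified sketch of the flaw (not Monero's actual verification equations):

    % Curve25519's full group has composite order 8*l, with l a large prime.
    \[
      \#E(\mathbb{F}_p) = 8\ell, \qquad
      I = x \cdot H_p(P), \qquad
      I' = I + T \quad \text{with } \operatorname{ord}(T) \mid 8.
    \]
    % The 8 small-order points T (including the identity) give 8 distinct,
    % accepted key images for the same output unless the verifier requires
    % \ell \cdot I = \mathcal{O}, which was the eventual fix.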

The cofactor is a huge foot-gun opportunity for anybody doing anything more complicated than simple ECDH, signing a message, or login authentication. It is the main reason why I don't use curve25519 in new protocols, nor recommend its use to others. Better to just not take the risk of getting burned.

There is a sub-curve (Ristretto) which is of prime order which can be used on top of curve25519. But doing zero-knowledge proofs of Ristretto operations is significantly more complicated due to the fact that it is a sub-curve, and the security arguments are subtly different in nontrivial ways. Easier to just use secp256k1 and know you are safe.


> There is a sub-curve (Ristretto) which is of prime order which can be used on top of curve25519.

I'm sure you are aware of this and are just abbreviating, but I think it's important to clarify this for other readers: Ristretto255 is not a sub-curve of Curve25519. It is not a curve at all. Ristretto is abstractly a prime-order group, and concretely a scheme for encoding group elements such that round-trip decode-and-encode produces the same element encoding.

It happens to have been designed to have the same order as the prime-order subgroup of Curve25519, for convenience of implementation, but it is possible to implement it using a different elliptic curve under the hood (though in general you're better to use Curve25519 under the hood for the same reason as avoiding Ed448: there are many more eyes working on solid implementations of that curve's arithmetic).

> But doing zero-knowledge proofs of Ristretto operations is significantly more complicated due to the fact that it is a sub-curve

Using Ristretto255 in zero-knowledge proofs is not much more complex than using an elliptic curve. It just isn't as _simple_ as using a plain elliptic curve, because you cannot witness a Ristretto element as a coordinate pair (x, y), again because Ristretto255 is not a curve. You instead have to witness the encoding of the group element, and then implement the decoding operation inside the circuit to constrain the encoding to its arithmetic representation (e.g. a Curve25519 curve point). This would actually be pretty straightforward to implement in a PLONK-style arithmetisation, but it's also only 12 field element multiplications and an inversion constraint, so it's perfectly feasible in R1CS as well. And thanks to the decoding constraints providing a firewall between the group element encoding and its arithmetic representation, once you have the latter you can safely just use it, and don't need to propagate subgroup checks or cofactor elimination throughout your protocol (like you would using Curve25519 directly).

In practice though, I agree that if you can use a prime-order elliptic curve, then you should definitely do so. Ristretto255 was designed at a time when people wanted to use cofactor curves for performance and curve-safety at the cost of group-safety; Ristretto255 provides that group-safety. Nowadays we have complete and performant formulae for all of the prime-order elliptic curves people want to use, so if you have the option then both secp256r1 and secp256k1 are decent options. But if you're in an ecosystem where Curve25519 is a necessary primitive, then Ristretto255 is a great option.

(Disclosure, since I'm not using a pseudonym here: Filippo and I are authors on the Ristretto IETF draft [0].)

[0] https://datatracker.ietf.org/doc/draft-irtf-cfrg-ristretto25...


I was abbreviating, but I see how it could cause confusion. Thank you for writing a more detailed clarification.


Why not just use Ristretto with Curve25519 to avoid cofactor issues?


Ristretto adds complexity over just using secp256k1, which also has very good Rust support, is quite fast and well-optimized, compiles to WASM, and supports ECDSA, ECDH, and Schnorr key aggregation, so it's quite full-featured.


Well, the scamcoin guys can just use a separate library for it. I doubt any of the crypto uses would care about timing attacks either.


The libsecp256k1 library has been ahead of standard curve25519 implementations in terms of protection against timing attacks, implementation foot-guns, and runtime efficiency.


I wouldn't call it the most important curve in the history of cryptography lol, but there's an implementation in geth: https://github.com/ethereum/go-ethereum/tree/master/crypto/s...


It has no mainstream use outside of cryptocurrency projects, which already require major dependencies and will never be implemented in the stdlib. Prima facie, it doesn't belong in the stdlib.


I'm under the impression that Go's original crypto libs were written by none other than djb himself. Is there a rationale for the deficiencies laid out in the OP, or am I putting djb on too much of a pedestal?


The libraries were originally written by Adam Langley, who did a great job compared to the state of the art at the time. They have been serving us well for over 13 years.

Bernstein was never involved.


Interesting, could have sworn I heard this back in 2011 or so, but searching for a source now I have no clue how I got that impression. :) Thank you!


> Is there a rationale for the deficiencies laid out in the OP

Yes. The previous package had a dependency on the general-purpose math/big package, which was not constant-time and had a large surface area (that was not required for crypto). This has led to security bugs in the crypto package.



