Hacker News new | past | comments | ask | show | jobs | submit | briansmith's comments login

Actions have special integration with GitHub (e.g. they can annotate the pull request review UI) using an API. If you forgo that integration, then you can absolutely use GitHub Actions like "a container you run scripts in." This is the advice that is usually given in every thread about GitHub Actions.


That helps a bit but doesn't solve everything.

If you want to make CI performant, you'll need to use some of its features like caches, parallel workers, etc. And GHA usability really falls short there.

The only reason I put up with it is that it's free for open-source projects and integrated into GitHub, which is how it displaced Travis CI a few years ago.


Devil's advocate: They could make the github CLI capable of doing all of those things (if it's not already), and then the only thing the container needs is a token.
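To make that concrete, here's a hypothetical minimal workflow along those lines: the runner is treated as "a container you run scripts in," and the only GitHub-specific part is handing the built-in token to the preinstalled `gh` CLI so a script can do the integration bits (PR comments, etc.). The `./ci/build.sh` script name is a placeholder.

```yaml
# Hypothetical workflow sketch: all real work lives in a script; the
# only GitHub integration is via the gh CLI and the built-in token.
name: ci
on: [pull_request]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: run everything from one script
        env:
          # gh reads GH_TOKEN automatically; no extra secrets needed.
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          ./ci/build.sh
          gh pr comment "${{ github.event.pull_request.number }}" \
            --body "build finished"
```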


There are multiple ways you can do this already from within a script


Ah, the Dropbox comment.

> For a Linux user, you can already build such a system yourself quite trivially by getting an FTP account, mounting it locally with curlftpfs, and then using SVN or CVS on the mounted filesystem. From Windows or Mac, this FTP account could be accessed through built-in software.


I've had good luck using https://github.com/actions/github-script

when the CLI didn't have support for what I needed.


I just told the OP that there are multiple ways to achieve that from within a script without using API keys.

One of them being echoing text to a file. To me, your comparison makes no sense.
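For context, "echoing text to a file" refers to the file-based integration points GitHub Actions exposes: a step can set an output for later steps just by appending `name=value` to the file named in `$GITHUB_OUTPUT`, with no API key involved. A runnable sketch (the fallback path and the truncation are only so the demo runs outside a real runner):

```shell
# In a real job the runner sets $GITHUB_OUTPUT; default to a temp file
# here so the sketch runs anywhere.
GITHUB_OUTPUT="${GITHUB_OUTPUT:-/tmp/gh_output.txt}"
: > "$GITHUB_OUTPUT"   # start clean (demo only)

# Setting a step output is just appending "name=value" to that file.
echo "artifact_url=https://example.com/build/123" >> "$GITHUB_OUTPUT"

cat "$GITHUB_OUTPUT"
```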


At https://rwc.iacr.org/2025/program.php you can see there is a talk scheduled to be given in a couple weeks titled "Testing Side-channel Security of Cryptographic Implementations against Future Microarchitectures" with the following bits in the abstract: "Using this framework, we conduct an empirical study of 18 proposed microarchitectural optimizations on 25 implementations of eight cryptographic primitives in five popular libraries. We find that every implementation would contain secret-dependent leaks, sometimes sufficient to recover a victim’s secret key, if these optimizations were realized. Ironically, some leaks are possible only because of coding idioms used to prevent leaks under the standard constant-time model."
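The "coding idioms used to prevent leaks" the abstract alludes to are things like branchless byte comparison. A minimal Rust sketch of one such idiom (this is a generic illustration, not code from any of the five libraries studied):

```rust
// Branchless comparison: OR together the XOR of every byte pair, so the
// loop does the same work regardless of where (or whether) the inputs
// differ. Constant-time under today's model; the talk's point is that a
// future microarchitectural optimization could still leak through it.
pub fn ct_eq(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        return false;
    }
    let mut diff: u8 = 0;
    for (x, y) in a.iter().zip(b.iter()) {
        diff |= x ^ y;
    }
    diff == 0
}
```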


Which CAs will issue short-lived certificates without negotiating a custom ($$$) contract with them?


Google Trust Services lets you go down to 1 day using ACME. No payment required.


Let's Encrypt; they said a while back that they want to allow issuing certs valid for fewer than 90 days.


That's true, although it's still not possible to actually get one yet.


Again, this is just a temporary situation, and a matter of burning down a list of small tasks. Not that the OpenSSL license issue is a big deal for most anyway. Feel free to help; see this issue filed by Josh Triplett: https://github.com/briansmith/ring/issues/1318#issuecomment-...


> see this issue filed by Josh Triplett

Check who you are replying to ;)


Yeah, I realize, and I am looking forward to having multiple options to choose from that all have the same license compatibility. It's nice that there's a short-term solution that's available for people who need to ship things soon, and it's nice that there's a longstanding library (ring) that long-term will be capable of providing a compatible solution as well.


> Maybe so, but pretty much all cryptographic primitives have to be written in assembly anyway to achieve constant time operation.

This really oversimplifies the situation. Even at my most pessimistic, I believe only a few very small parts need to be written in assembly language to maintain the "constant time" properties, and that's just until we can work together better with the Rust language team to eliminate these small gaps. Even before then, the Rust language team is doing a good job of independently improving and expanding the building blocks we need to get to entirely-safe (in the Rust `unsafe` sense) and high-performance crypto libraries in Rust.

> evidently faster than ring itself[1].

If you're running on an AVX-512 system, there is a notable performance gap, but it's temporary; most likely it will persist for a few months at most. It's inevitable that we (all the OpenSSL forks, and even non-OpenSSL forks like rust-crypto) converge on more-or-less the same implementations and/or different implementations of the same optimizations.


What kind of improvement are you looking for? A `blackbox` intrinsic with stronger guarantees?
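For context, today's `std::hint::black_box` is only an optimizer hint with no hard guarantee, which is why stronger intrinsics keep coming up in these discussions. A sketch of how it's typically used in constant-time code (illustrative only):

```rust
use std::hint::black_box;

// Branchless select: mask must be all-zeros or all-ones. black_box asks
// the optimizer not to "see through" the mask and reintroduce a branch,
// but it is only a hint, not a guarantee of constant-time codegen.
pub fn ct_select(mask: u8, a: u8, b: u8) -> u8 {
    (black_box(mask) & a) | (black_box(!mask) & b)
}
```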


> But it took until 2023 for someone to do the legwork to figure out how broken it was.

It took until 2023 for somebody to publicly disclose the problem.

The first fix for it was described in RFC 5647, which was published in August 2009 (the first draft was submitted on June 27, 2008).


So how did it happen that GCM modes for SSH contained a fix, but ChaCha20-Poly1305 did not?


I think that's a really good question. The way this worked out is worth studying in detail. What was the process with which the AES-GCM cipher suites for SSH were developed? What was the process with which the ChaCha20-Poly1305 cipher suites were developed? How did the difference in processes lead to the difference in results? Will anybody change their process based on these results?


There are multiple reasons for a user to want "dark mode":

* I just want everything to be dark on my screen because I like it.

* I am trying to use this device in a dark place.

* I want a dark, low-contrast background that doesn't compete with image colors.

The solutions to these three problems are all some kind of "dark mode" but each one needs a different solution. Sometimes one "dark mode" might work in conjunction with the user manually changing their brightness settings depending on the lighting in the environment, but not many people seem to design for that.
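Part of the difficulty is that the platform collapses all three motivations into a single boolean signal, so a site can't tell which problem it's being asked to solve. A CSS sketch of the only hook currently available:

```css
/* prefers-color-scheme only says "the user asked for dark", not why. */
@media (prefers-color-scheme: dark) {
  body {
    background: #121212; /* dark; but is low contrast wanted? unknowable */
    color: #dddddd;
  }
}
```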


The old yanking policy was extra work I did with the intent to help people. It was unfortunate that Cargo had that bug, but also I should have been much more diplomatic in how I dealt with it.

I've just returned from a long break and I do have a concrete plan to catch up on the backlog. I have concrete plans for making it easier for people to get their PRs merged, making ring portable to all platforms, and eliminating all the remaining bits of C code in the next two quarters.

Feel free to reach out privately if you want to talk: brian@briansmith.org.


That's awesome. Thanks heaps, Brian. I really appreciate your re-commitment. Apropos, re: making ring portable to all platforms: IBM has been graciously maintaining an up-to-date patchset for ring for years now, and there's an outstanding PR here you may not have seen, since they filed it in 2020: https://github.com/briansmith/ring/pull/1057


> The situation described above just generated an "Invalid certificate" message. More use of anyhow::Context would be helpful. I don't disagree with Rustls disallowing decade-obsolete crypto. It's the "silently ignores" part that's a problem.

Because of how X.509 certificate validation works, in general it's not possible to tell you why an issuer couldn't be found, because there are many possible reasons.

Regardless https://github.com/briansmith/webpki/issues/206 tracks improving the situation.


Brian surely knows this, but for other people's context: Certificates in the Web PKI [and likely in any non-trivial PKI] form a Directed Graph with cycles in it. So "is this end entity certificate trustworthy?" is already a pretty tricky graph problem that you're going to have to solve unless you either out-source the problem entirely to the application (lots of older or toy TLS implementations) or just rely on your peer showing you exactly the right set of certificates matching your requirements (the "chain" you may have seen in tools like Certbot) without you ever saying what those actually are.

In a sense it's going to have a big indigestible list of reasons the certificate wasn't found trustworthy, like if you asked grep to tell you why all the non-matching lines in a file don't match a regular expression: "The first letter on this line wasn't a match, and then the second letter wasn't a match, and then the third..."

However, as that ticket says, one relatively easy thing the code could do is notice if there was only one consistent reason and if so tell you what that was.
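That "one consistent reason" heuristic could look something like this hypothetical sketch (not webpki's actual API; the error variants are made up for illustration):

```rust
#[derive(Debug, Clone, PartialEq)]
enum PathError {
    UnknownIssuer,
    Expired,
    UnsupportedSignatureAlgorithm,
    // Deliberately vague fallback when the failed paths disagree.
    Unspecified,
}

// If every candidate path failed for the same reason, surface it;
// otherwise fall back to a generic error rather than guessing.
fn summarize(path_errors: &[PathError]) -> PathError {
    match path_errors.first() {
        Some(first) if path_errors.iter().all(|e| e == first) => first.clone(),
        _ => PathError::Unspecified,
    }
}
```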

Also, I agree with several commenters that webpki's current behaviour, in which it says "Unknown issuer" even when that's not the problem at all, is undesirable, and an even vaguer error might actually be better for these cases. See also: languages whose parsers too easily get confused and report "Syntax error" when your syntax was correct but something else is wrong. "Parse failed" is vaguer but at least doesn't gaslight me.


Are you assuming that the set of certificates that is given as input is the complete search space? It isn't; the server might have failed to send some certificate that, if present, would have fixed the "unknown issuer" problem. Also, we fail fast on errors; as soon as we find a problem, we stop the search on that path. Because of these reasons, identifying a single error would be, potentially, misleading.

In some cases, e.g. when the end-entity certificate is signed with an algorithm that isn't supported, or with an RSA key that is too small, we could add special logic to diagnose that problem. However, all this special diagnostic logic would probably approach the size of the rest of the path-building logic itself. That doesn't seem appropriate for something in the core of the TLS stack of the system. Perhaps at some point in the not-too-distant future we can provide a mode with more diagnostic logic, and find a way to clearly separate it from the actual validation logic to ensure that the diagnostic logic doesn't influence the result.


No, I'm pretty sure I wasn't assuming that, and I'm not sure what gave you that impression from what I wrote.

There are a variety of interesting strategies to decide that an end entity certificate you were shown is trustworthy, and you're presumably aware that Mozilla (in Firefox) eventually chose to go with a strategy of

* Requiring all root CAs to disclose intermediate CAs as part of their root programme

* Bundling this set with Firefox

Whereupon Firefox gets to decide whether the end entity certificate it was shown was issued by one of these trustworthy intermediates and short cut to a "Yes" answer regardless of which certificates, if any, the server included in the supplied "chain".

In the general case this isn't very applicable, but I note that webpki is named "webpki" and not "General purpose certificate validator" so actually it could go the same route (with the caveat that this needs frequent updates to avoid surprises with very new CAs)

Mostly though if you're quite determined not to introduce more complicated logic into webpki (which is understandable) I specifically don't like the gaslighting of saying "unknown issuer" as I said, when the reality is that you don't know why you don't trust the certificate, so say that.

If std::fs::File::open() gives me a Result with an io::Error that claims "File not found" but the underlying OS file open actually failed due to a permission error, you can see why that's a problem, right? Even if this hypothetical OS doesn't expose any specific errors, "File not found" is misleading.


> Bundling this set with Firefox

I love that they did that; it was actually my idea (https://bugzilla.mozilla.org/show_bug.cgi?id=657228). I believe the list is pretty large and changes frequently and so they download it dynamically.

> short cut to a "Yes"

Do they really do that? That's awesome if so. Then they don't even need to ship the roots.

> I specifically don't like [...] saying "unknown issuer"

https://github.com/briansmith/webpki/issues/221

> If std::fs::File::open() gives me a Result with an io::Error that claims "File not found" but the underlying OS file open actually failed due to a permission error, you can see why that's a problem, right? Even if this hypothetical OS doesn't expose any specific errors, "File not found" is misleading.

A more accurate analogy: You ask to open "example.txt" without supplying the path, and there is no "example.txt" in the current working directory. You will get "file not found."

Regardless, I agree we could have a better name than UnknownIssuer for this error.


> it was actually my idea

I didn't know that. Congratulations, I see this survived an early WONTFIX that tried to downplay the privacy implications of AIA chasing, which is even more impressive in today's "Who cares about privacy" world.

> Do they really do that? That's awesome if so. Then they don't even need to ship the roots.

I don't in fact know if they do that. They will always conclude that a typical Let's Encrypt certificate from R3 is trustworthy via ISRG Root X1, even when the "chain" provided by the server leads to the (still trusted) DST Root CA X3 but they could actually be choosing that path on the fly rather than just short cutting.

They do need to ship the roots still because

1. The UX does actually show roots, I still have Certainly Something installed here, but the built-in viewer also shows them.

2. Users can manually distrust a root. It would be weird if either: we expected users to go in and manually distrust dozens of weirdly named intermediates that chain back to that root, or, disabling the root was possible but just silently didn't work in most cases.

3. Some trustworthy intermediate CAs can exist that aren't captured. Imagine Let's Encrypt spins up R5 tomorrow because of some disaster that makes both R3 and R4 unusable. They can sign it with the ISRG root, and it'll work for lots of people; it's not great, but it's workable. However, even if they tell Mozilla immediately, there's just no way everybody's Firefox learns about it instantly. So it's good that if your server shows leaf -> R5 -> ISRG X1 (or indeed leaf -> R5 -> DST Root CA X3), Firefox can still conclude that's trustworthy even though it didn't know about R5.

I look forward to seeing issue 221 resolved, thanks.


> The EverCrypt primitives are formally proven, whereas ring has no such formal proofs.

We do use some of the Fiat Crypto stuff for elliptic curve computations. I am not opposed to switching some stuff to use EverCrypt or other things that might be better, as long as the performance is the same or better, and as long as there's a clear path towards that code being in Rust.

> ring has several primitives that are assembly only and have no portable fallback.

Either in the latest release (0.16.20) or the upcoming release, there is a portable non-assembly implementation of everything in ring.


Thanks for the correction!

