> Aside: I really wish people would link to court documents whenever they talk about an ongoing lawsuit.
I just want to second that and thank you for the link. Most reporting is horribly bad at covering legal matters, because the stuff that makes clickable headlines is mostly nonsense.
And a big thank you to the wonderful people at the Free Law Project for giving us the ability to find and link to this stuff. They're a non-profit and they accept donations. (hint hint)
It's just a vanilla FOIA lawsuit, of the kind hundreds of people file every month when public bodies fuck up FOIA.
If NIST puts up any kind of fight (I don't know why they would), it'll be fun to watch Matt and Wayne, you know, win a FOIA case. There's a lot of nerd utility in knowing more about how FOIA works!
But you're not going to get the secrets of the Kennedy assassination by reading this thing.
I will draw to your attention two interesting facts.
First, OpenSSH has disregarded the winning (CRYSTALS) variants and implemented a hybrid NTRU Prime scheme instead. The Bernstein blog post discusses hybrid designs.
"Use the hybrid Streamlined NTRU Prime + x25519 key
exchange method by default ("sntrup761x25519-sha512@openssh.com").
The NTRU algorithm is believed to resist attacks enabled by future
quantum computers and is paired with the X25519 ECDH key exchange
(the previous default) as a backstop against any weaknesses in
NTRU Prime that may be discovered in the future. The combination
ensures that the hybrid exchange offers at least as good security
as the status quo."
Second, Daniel Bernstein has filed a public complaint against the NIST process, and the FOIA stonewalling adds more concern and doubt that the current results are fair.
We (OpenSSH) haven't "disregarded" the winning variants, we added NTRU before the standardisation process was finished and we'll almost certainly add the NIST finalists fairly soon.
What are the aims of the lawsuit? NIST fucked up a FOIA response. The thing you do when a public body gives you an unsatisfactory FOIA response is that you sue them. I've been involved in similar suits. I'd be surprised if NIST doesn't just cough up the documents to make this go away.
"Can NIST's decisions on crystals be overturned by the court?" Let me help you out with that: no, you can't use a FOIA suit to "overturn" a NIST contest.
OpenSSH implemented NTRU-Prime? What's your point? That we should just do whatever the OpenSSH team decides to do? I almost agree! But then, if that's the case, none of this matters.
I assume that the point was that NSA is against using hybrid algorithms like the one used by OpenSSH, which combine a traditional algorithm with a post-quantum algorithm, arguing that using both algorithms is an unnecessary complication.
The position of D. J. Bernstein and also of the OpenSSH team is that the prudent approach is to use only hybrid algorithms until enough experience is gained with the post-quantum algorithms, to be reasonably certain that they are secure against the possible attacks.
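To make the hybrid idea concrete, here is a toy Python sketch of the combination step, under stated assumptions: the X25519 half uses the real `cryptography` package API, but the post-quantum half is a placeholder (there is no sntrup761 KEM in the standard library), and this is not OpenSSH's actual transcript or KDF:

    # pip install cryptography
    import hashlib, os
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

    # Classical half: a real X25519 exchange.
    a = X25519PrivateKey.generate()
    b = X25519PrivateKey.generate()
    classical_ss = a.exchange(b.public_key())

    # Post-quantum half: placeholder standing in for "some 32-byte shared
    # secret produced by an sntrup761 encapsulation".
    pq_ss = os.urandom(32)

    # Combine both secrets, in the spirit of sntrup761x25519-sha512: an
    # attacker must break BOTH halves to recover the session key.
    session_key = hashlib.sha512(pq_ss + classical_ss).digest()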
If they obtain the documents requested through FOIA, the expectation is that those documents will support the opinion that the NSA recommendations should be ignored. The NSA has a very long history of trying to convince the public that certain cryptographic algorithms are secure enough even when it was aware of weaknesses in them that it could exploit; it was in the NSA's interest that everybody else use those algorithms, to make the NSA's tasks easier.
As explained at the linked Web page, in the past the NSA has forced the standardization of algorithms with keys that were too short (DES and DSA), and has made partially successful attempts to standardize back-doored algorithms like Clipper and their infamous random bit generator.
Similarly now, they want to enforce the use of only the post-quantum winning algorithm, without the additional protection of combining it with a traditional algorithm.
Fucking everybody's position is to combine classical key exchanges with PQC KEMs. It wasn't NIST's job to standardize a classical+PQC construction. The point of the contest is to figure out which PQC constructions to use. NIST also didn't recommend that everyone implement their cryptographic handshakes in a memory-safe language. But not doing that is going to get a bunch of people owned by NSA too. Do you see how silly this argument is?
Ostensibly, nistpqc is about finding safe crypto, first for TLS, second for ssh. You will argue differently, but we all expect the same end product.
NIST has specifically asked for guidance on hybrid crypto (as well you know), as I documented elsewhere on this page.
You assert that NIST only accepts pure post-quantum crypto. They ask for hybrid.
Color me jaded.
EDIT: Just for you, my fine fellow!
'For example, in email to pqc-forum dated 30 Oct 2019 15:38:10 +0000 (2019), NIST posted technical comments regarding hybrid encryption modes and asked for feedback “either here on the pqc-forum or by contacting us at pqc-comments@nist.gov” (emphasis added).'
It's not the first time either and it won't be the last. NIST chose Rijndael over Serpent for the AES standard even though Serpent won. I vaguely recall they gave some smarmy answer. I don't think anyone submitted a FOIA, not that it would matter. I've been through that bloated semi-pseudo process and saw how easy it was to stall people and not answer a simple question.
I remember them saying that in a follow-on email on one of the mailing lists. That was not their original statement, but I can't remember exactly what they said. I just remember it was quite smarmy and did not sit well with me, coming from such an organization. Regardless, Serpent won the challenge by their criteria, but then they moved the goalposts after the fact.
Either Rijndael or Serpent could equally have become more performant with dedicated CPU instructions like AES-NI, and I am also not ok with how that evolved. Cipher fixation is a security vulnerability. The AES-NI CPU instructions should have covered a few ciphers for performance: probably Rijndael, Serpent, and Twofish. There are folks in the cryptography community who are very much against using more than one cipher, and that makes it clear to me they have been compromised or manipulated by something.
Please cite for me the most credible cryptographic researcher you can find who advocates cascades of ciphers. I'm not certain, but if I had to bet, I'd bet that you can't even find one.
You can believe whatever you want to believe, but the threshold you've just claimed to have for believing someone is compromised suggests that essentially every academic cryptographic researcher in the world is compromised.
>What are the aims of the lawsuit? Can the NIST decision on crystals be overturned by the court, and is that the goal?
It sounds to me like the goal is to find out if there's any evidence of the NSA adding weaknesses into any of the algorithms. That information would allow people to avoid using those algorithms.
The town I live in just outside of Chicago refused to disclose their police General Orders to me; I had to engage the same attorneys Bernstein did to get them. What can I infer from their refusal? That the General Orders include their instructions from the Lizard People overlords?
> The town I live in just outside of Chicago refused to disclose their police General Orders to me; I had to engage the same attorneys Bernstein did to get them. What can I infer from their refusal? That the General Orders include their instructions from the Lizard People overlords?
Naah, probably just that they include some pretty shitty stuff in general.
The fact (AFAWK) that the town came up with this shitty stuff even without any Lizard People Overlords having ordered them to do so of course makes it even worse for the powers that be of the town; now they can't even put the blame for the shitty stuff on the LPO.
I may believe almost all of this is overblown and silly, as, like, a matter of cryptographic research, but I'll say that Matt Topic and Merrick Wayne are the real deal, legit the lawyers you want working on something like this, and if they're involved, presumably some good will come out of the whole thing.
Matt Topic is probably best known as the FOIA attorney who got the Laquan McDonald videos released in Chicago; I've been peripherally involved in some work he and Merrick Wayne did for a friend, in a pretty technical case that got fierce resistance from CPD, and those two were on point. Whatever else you'd say about Bernstein here, he knows how to pick a FOIA lawyer.
A maybe more useful way to say the same thing is: if Matt Topic and Merrick Wayne are filing this complaint, you should probably put your money on them having NIST dead-to-rights with the FOIA process stuff.
> "I may believe almost all of this is overblown and silly, as like a matter of cryptographic research ..."
Am I misunderstanding you, or are you saying that you believe almost all of DJB's statements claiming that NIST/NSA is doctoring cryptography is overblown and silly? If that's the case, would you mind elaborating?
> I believe the implication that NIST or NSA somehow bribed one of the PQC researchers to weaken a submission is risible.
Is that even a claim here? I'm on mobile right now so it's a bit hard for me to trawl through the DJB/NIST dialogue, but I thought his main complaint is that NIST didn't appear to have a proper and clear process for choosing the algorithms they did, when arguably better algorithms were available.
So the suggestion wouldn't necessarily be that one of the respected contestants was bribed or otherwise compromised, but rather that NIST may have been tapped on the shoulder by NSA (again) with the suggestion that they should pick a specific algorithm, and that NSA would make the suggestion they have because their own cryptographers ("true believers" on NSA payroll) have discovered flaws in those suggested algorithms that they believe NSA can exploit but hopefully not adversaries can exploit.
There's no need for any novel conspiracies or corruption; merely an exact repeat of previous NSA/NIST behaviour consistent with NSA policy positions.
It's simultaneously about as banal as it gets, and deeply troubling because of that.
I guess I'm not reading it that way. In fact, a FOIA request is going after official records, which I wouldn't expect would contain outright bribery.
Yes, DJB brings up their known bribing of RSA wrt the whole Dual-EC thing. But my read of that bit of info was the more general 'here's evidence that the NSA actively commits funding towards infecting standards' rather than 'the NSA's playbook just contains outright bribery and that's what we expect to find in the FOIA requests given to NIST'.
Clearly, you don't get it. They're playing dirty. At best the FOIA will receive a document made on the fly with nothing of value. The rules don't apply to the NSA; you can do exactly nothing about them. But NIST you can do something about: reject any standard they approve. It's your choice what algorithm you use, and we know NIST will select a broken algorithm for the NSA, so just ignore their 'standard'.
The best solution is using layers of crypto, trusting no single algorithm.
"You shouldn't fight because the baddies are strong!" is a horrible argument in my book. Discouraging and disparaging other people's attempts is even worse.
The actual claim is that NSA may have already spent a lot of time and effort to analyse PQC algorithm underlying problems without making their findings public.
DJB seems to suspect that they may influence NIST to select algorithms and parameters within the range of what they already know how to break.
Huh? Of course NSA spent a lot of time and effort analyzing algorithms without making their findings public. That is their literal job. The peer review NIST is refereeing happened in the open. When people broke SIDH, they didn't whisper it anyone's ear: they published a paper. That's how this stuff works. Bernstein doesn't have a paper to show you; all he has is innuendo. How you know his argument is as limp as a cooked spaghetti noodle is that he actually stoops to suggesting that NSA might have bribed one of the members of the PQC teams.
If he had something real to say, he wouldn't have embarrassed himself like that. How I think I know that is, I think any reasonable person would go way out of their way to avoid such an embarrassing claim, absent extraordinary evidence, of which he's presented none.
It is very hard to not take this comment as being made in bad faith. You either are willfully ignorant or have ulterior motives.
It is a matter of public record (as also detailed again in the article) that the NSA colluded with NIST to get weakened crypto standardised. This happened not just once but multiple times, and when weaknesses became known they repeatedly (and against better knowledge) downplayed the impact. This is undisputed. After the Dual EC scandal they promised that they would be more transparent in the future. DJB alleges that there is important information missing on the decision-making processes in the most recent PQC discussion (I am willing to trust him on that, but if you are an expert in the field I'm sure you can elaborate here on why it is incorrect). That's why he made a FOIA request, which has not been answered, and which he is now filing a lawsuit over.
I would argue that based on past behaviour we should trust DJB much more than either the NSA or NIST, but it seems you are more occupied with attacking his person, without substance, than with getting to the truth.
> he actually stoops to suggesting that NSA might have bribed one of the members of the PQC teams
I don't know anyone in the teams to judge their moral fiber, but I'm 100% sure the NSA is not above what is suggested and your weird outrage at the suggestion seems surprising knowing what is public knowledge about how the NSA operates.
There are arguments here about NSA pressure on NIST. You miss the point because apparently you're offended that someone suggested your friends can be bribed. I mean, maybe they can't, but this is about the NSA being corrupt, not the researchers.
It can be everybody involved. It should include NIST based on the history alone.
Some of the commentary on this topic is by people who also denied DUAL_EC until (correctly) conceding that it was actually a backdoor, actually deployed, and that it is embarrassing for both NSA and NIST.
This sometimes looks like reactionary denialism. It's a safe position that forces others to do a lot of work; it seems good faith with some people and not so much with others.
I'm people who denied that Dual EC was a backdoor (my position wasn't an unusual one; it was that Dual EC was too stupid to actually use, which made it an unlikely backdoor). Dan Bernstein didn't educate me about that; like anybody else who held that position, the moment I learned that real products in the industry were built with libraries that defaulted to Dual EC, the jig was up.
I'm honest about what I'm saying and what I've said. You are not meeting the same bar. For instance, here you're insinuating that my problem on this thread is that I think NIST is good, or trustworthy, or that NSA would never have the audacity to try to bribe anybody. Of course, none of that is true.
I don't know how seriously you expect anybody to take you. You wrote a 13-paragraph comment on this thread based on Filippo's use of an "It's Always Sunny In Philadelphia" meme, saying that it was a parody of "A Beautiful Mind", which is about John Nash, who was mentally ill, and also an anti-semite, ergo Filippo Valsorda is an anti-semite who punches down at the mentally ill. It's right there for everybody to read.
How could any serious security researcher have been in doubt about Dual EC? The design did not make any sense at all. Not until you consider that it was designed with a back door; then it is a sleek, minimal design that does exactly what it needs to do and not a whole lot more.
If you couldn't see that from a mile away, then you might be too naive to work in security.
> I believe the implication that NIST or NSA somehow bribed one of the PQC researchers to weaken a submission is risible.
Could you elaborate on this? I didn't get this from the article at all. There's no researcher(s) being implicated as far as I can tell.
What I read is the accusation of NIST's decision-making process possibly being influenced by the NSA, something that we know has happened before.
Say N teams of stellar researchers submit proposals, and they review their peers. For the sake of argument, let's say that no flaw is found in any proposal; every single one is considered perfect.
NIST then picks algorithm X.
It is critical to understand the decision making process behind the picking of X, crucially so when the decision-making body has a history of collusion.
Because even if all N proposals are considered perfect by all possible researchers, if the NSA did influence NIST in the process, history would suggest that X would be the least trustable of all proposals.
And that's the main argument I got from the article.
Yes, stone-walling a FOIA request may be common, but in the case of NIST, there is ample precedent for malfeasance.
I don't even support NIST's mission; even if you assembled a trustworthy NIST, I would oppose it.
The logical problem with the argument Bernstein makes about NSA picking the least trustworthy scheme is that it applies to literally any scheme NIST picks. It's unfalsifiable. If he believes it, his FOIA effort is a waste of time (he cannot FOIA NSA's secret PQC attack knowledge).
The funny thing here is, I actually do accept his logic, perhaps even more than he does. I don't think there's any reason to place more trust in NIST's PQC selections than other well-reviewed competing proposals. I trust the peer review of the competitors, but not NIST's process at all.
> The logical problem with the argument Bernstein makes about NSA picking the least trustworthy scheme is that it applies to literally any scheme NIST picks. It's unfalsifiable.
That may be true in the strict sense, but in practice, I think there would be a material distinction between a NIST process of "we defer our decision to the majority opinion of a set of three researchers with unimpeachable reputations" (a characterization from another comment) and a process of "NSA said we should pick X."
In the strict sense, I can't trust either process, but in practice [edit: as an absolute layperson who has to trust someone], I'd trust the first process infinitely more (as I would absolutely distrust the second process).
> The funny thing here is, I actually do accept his logic, perhaps even more than he does.
That's actually what I got from your other comments to this story. But that confused me, because it was also what I got from the article. The first two thirds of the article are spent entirely on presenting NIST as an untrustworthy body based on decades of history. Apart from the title, PQC isn't even mentioned until the last third, and that part, to me, was basically "NIST's claims of reform are invalidated if it turns out that NSA influenced the decision-making process again".
My vibe was that both of your positions are more or less in agreement, though I have to say I didn't pick up on any accusations of corruption of a PQC researcher in the article (I attribute that to me being a layperson in the matter).
I believe you have a very naive and trusting view of these US governmental bodies. I don't intend that as an insult, but by now I think the verdict is in: these agencies cannot be trusted (the NSA even less than NIST).
I'm not sure about corrupting NIST nor corrupting individual officials of NIST, but I can easily imagine NIST committees not understanding something, being tricked, not looking closely, protecting big orgs by default (without maliciousness), and overall being sloppy.
Running standards without full transparency, in my experience with web security standards and web GPU standards, is almost always about hiding weaknesses, incompetence, security gaps of big players, and the internal politics of powerful incumbents. Think some hardware vendor not playing ball without a guarantee of privacy, or some Google/Apple committee member dragging their feet because of internal politics and monopoly plays. Separately, mistakes may come from a standards committee member glossing over stuff in emails because they're busy: senior folks are the most technically qualified yet also the most busy. Generally not because some NSA/CIA employee is telling them to do something sneaky or lying. Still FOIA-worthy (and why I'd rather have public lists for standards), but for much lamer reasons.
> ...but I can easily imagine NIST committees not understanding something, being tricked, not looking closely, protecting big orgs by default (without maliciousness), and overall being sloppy.
I agree with this. And I think that this is more likely to be the case. But I really think with all that we now know about US governmental organisations the possibility of backdoors or coercion should not be ruled out.
Even when you're trying to be charitable, you're wildly missing the point. I don't give a fuck about NIST or NSA. I don't trust either of them and I don't even buy into the premise of what NIST is supposed to be doing: I think formal cryptographic standards are a force for evil. The point isn't that NIST is trustworthy. The point is that the PQC finalist teams are comprised of academic cryptographers from around the world with unimpeachable reputations, and it's ludicrous to suggest that NSA could have compromised them.
The whole point of the competition structure is that you don't simply have to trust NIST; the competitors (and cryptographers who aren't even entrants in the contest) are peer reviewing each other, and NIST is refereeing.
What Bernstein is counting on here is that his cheering section doesn't know the names of any cryptographers besides "djb", Bruce Schneier, and maybe, just maybe, Joan Daemen. If they knew anything about who the PQC team members were, they'd shoot milk out their nose at the suggestion that NSA had suborned backdoors from them. What's upsetting is that he knows this, and he knows you don't know this, and he's exploiting that.
My reading wasn't that he thinks they built backdoors into them, but that the NSA might be aware of weaknesses in some of them, and be trying to promote the algorithms they know how to break.
Peer review and "informal standards". Good examples of things that were, until long after their widespread adoption, informal standards include Curve25519, Salsa20 and ChaCha20, and Poly1305. A great example of an informal standard that remains an informal standard despite near-universal adoption is WireGuard. More things like WireGuard. Less things like X.509.
Both formal and informal peer review are why I like the FOIA, and why I like standards/competition discussions to be open in general. I actually dislike closed peer review, or at least without some sort of time-gated release.
Likely scenarios that closed review hides:
- Peer review happened... But was lame. Surprisingly common, and often the typical case.
- If some discussion did come up on a likely attack... what then? Was the rebuttal and final discussion satisfactory?
It's interesting if some gov team found additional things... But I'm less worried about that, they're effectively just an 'extra' review committee. Though as djb fears, a no-no if they ask to weaken something... And hence another reason it's good for the history of the alg to be public.
Edit: Now that storage and video are cheap, I can easily imagine a shift to requiring all emails + meetings to be fully published.
Edit: I can't reply for some reason, but having been an academic reviewer, including for security, and having won awards for best-of-year/decade academic papers, I can say academic peer review may not be doing what most people think: e.g., it is often more about novelty, trends, and increments, from a 1-hour skim. Or catching only the super obvious things outsiders and fresh researchers mess up on. Very different from, say, a yearlong $1M dedicated pentest. Which I doubt happened. It's easy to tell which kind of review happened when reading a report... hence me liking a call for openness here.
You get that the most important "peer review" in the PQC contest took the form of published academic research, right? NIST doesn't even have the technical capability to do the work we're talking about. My understanding is that they refereed; they weren't the peer reviewers.
Replying to your edit: I've been an academic peer reviewer too. For all of its weaknesses, that kind of peer review is the premise of the PQC contest --- indeed, it's the premise of pretty much all of modern cryptography.
As much as I like the design of WireGuard, the original paper made stronger claims of security than were achieved with respect to key exchange models. Peer review and informal standards failed to catch this. From my perspective, the true benefit of a formal standardisation process like this one is that it dangles a publishable target in front of researchers, so that we formally verify/disprove these claims out in the open.
WireGuard's design is superior to that of its competitors, and one of its distinctive features is that it lacks formal standardization. It's not as if we don't have decades of experiences with attempts to standardize our way into strong cryptography; see IPSEC for a particularly notorious example of how badly standards processes handle this stuff.
For sure, if a standardization process had been called to design a VPN protocol, I'd agree that the resulting design would almost certainly be worse than WireGuard. I think that the competitive nature of the PQC process, as well as its soliciting completed submissions as opposed to building a design from the ground up, helps in this regard. I don't think that engages with the point I was making, however: the original submission of WireGuard made claims that were incorrect, which would arguably have been caught sooner if it were part of a formal standardization process, since researchers would have been incentivized to analyse it sooner.
Having come from a community that often does cleanup duty for unfounded claims (PL), and that has had to spend decade-plus, $100M+ efforts to do so... I didn't realize that about WireGuard. That's pretty strange to read in 2022.
To be clear, WireGuard is a good VPN protocol, and definitely a secure design. I wouldn't recommend another over it. It's just the initial claims of security in the NDSS paper were incompatible with its design.
I'm sure it's a pretty good one, but it's quite hard to trust more than that, on both the design and impl side, if you have ever tried to verify (vs. just test) such a system. Think of the years of pain for something much more trivial, like Paxos plus an impl of it.
In this case, looks like the community does value backing up its claims, and the protocol is verified: https://www.wireguard.com/formal-verification/ . Pretty awesome! The implementation itself seems to be written unsafely, so TBD there.
You're probably right about my original comment, and I apologize. These threads are full of very impassioned, very poorly-informed comments --- I'm not saying I'm well-informed about NIST PQC, because I'm not, but, I mean, just, wow --- and in circumstances like that I tend to play my cards very close to my chest; it's just a deeply ingrained message board habit of mine. I can see how it'd be annoying.
I spent almost 2 decades as a Daniel Bernstein ultra-fan --- he's a hometown hero, and also someone whose work was extremely important to me professionally in the 1990s, and, to me at least, he has always been kind and cheerful; he even tried to give us some ideas for ECC challenges for Cryptopals. I know what it's like to be in the situation of (a) deeply admiring Bernstein and (b) only really paying attention to one cryptographer in the world (Bernstein).
But talk to a bunch of other cryptographers --- and, also, learn about the work a lot of other cryptographers are doing --- and you're going to hear stories. I'm not going to say Bernstein has a bad reputation; for one thing, I'm not qualified to say that, and for another I don't think "bad" is the right word. So I'll put it this way: Bernstein has a fucked up reputation in his field. I am not at all happy to say that, but it's true.
Based only on random conversations and no serious interrogation of what happened, so take it for the very little this pure statement of opinion is worth, I'd say he has, chiefly, and in my own words, a reputation for being a prickly drama queen.
He has never been that to me; I've had just a few personal interactions with him, and they've been uniformly positive. My feeling is that he was generous with his time and expertise when I had questions, and pleasant and welcoming in person.
He has, in the intervening years, done several things that grossed me the fuck out, though. There are certainly people who revel in hating the guy. I'm not one of them.
> If they knew anything about who the PQC team members were, they'd shoot milk out their nose at the suggestion that NSA had suborned backdoors from them.
> the motivation behind those requests is risible.
It is quite hilarious that NIST suckered the industry into actually using Dual-EC, despite its being worse than the other possible choices in nearly every respect. And this is ignoring the fact that the backdoor was publicly known for years. This actually happened; it's not a joke.
The motivation behind the FOIA requests is to attempt to see whether any funny business is going on with PQ crypto.
If the NSA actually suckers any major commercial player into using a broken PQ scheme without a well-established classical scheme as a backup, that will be risible too.
Dual_EC keeps getting brought up, but I have to ask: does anybody have any real evidence that it was widely deployed? My recollection is that it basically didn't appear anywhere outside of a handful of not-widely-used FIPS-certified libraries, and wasn't even the default in any of them except RSA's BSAFE.
The closest thing we have to evidence that Dual_EC was exploited in the wild seems to be a bunch of circumstantial evidence around its role in the OPM hack which, if true, is much more of a "self own" than anything else.
It was widely deployed. NSA got it into BSAFE, which I would have said "nobody uses BSAFE, it's not 1996 anymore", but it turned out a bunch of closed-source old-school hardware products were using BSAFE. The most notable BSAFE victims were Juniper/Netscreen.
Everybody who claimed Dual EC was a backdoor was right, and that backdoor was materially relevant to our industry. I couldn't believe something as dumb as Dual EC was a real backdoor; it seemed like such idiotic tradecraft. But the belief that Dual EC was so bad as tradecraft that it couldn't be real was, apparently, part of the tradecraft! Bernstein is right about that (even if he came to the conclusion at basically the same time as everyone else --- like, the instant you find out Juniper/Netscreen is using Dual EC, the jig is up).
I don't think Juniper used BSAFE in ScreenOS -- they seem to have put together their own Dual EC implementation on top of OpenSSL, sometime around 2008. (This doesn't change your point, of course.)
Yeah, I think you're right; the Juniper revelation also happened months after the BULLRUN stuff --- I remember being upset about how Greenwald and his crew had hidden all the Snowden docs in a SCIF to "carefully review them", with the net result that we went many months without knowing that one of the most popular VPN appliances was backdoored.
ECDSA is almost universally used. It's deeply suboptimal in a variety of ways. But that's because it was designed in the 1990s, not because it's backdoored. This isn't a new line of argumentation for Bernstein; he has also implied that AES is Rijndael specifically because it was so commonly implemented with secret-dependent lookups (S-boxes, in the parlance); he's counting on a lay audience not knowing the distinction between an engineering principle mostly unknown at the time something was designed, and a literal backdoor.
What's annoying is that he's usually right, and sometimes even right in important new ways. But he runs the ball way past the end zone. Almost everybody in the field agrees with the core things he's saying, but almost nobody wants to get on board with his wild-eyed theories of how the suboptimal status quo is actually a product of the Lizard People.
Is he claiming that it is a literal backdoor, though? Couldn't Bernstein have a point that NIST picked Rijndael as the winner of the AES competition because the way it was usually implemented was susceptible to timing attacks? Even if the engineering principle was mostly unknown at the time, one might guess that e.g. the NSA was aware of it and may have provided some helpful feedback.
> he's counting on a lay audience not knowing the distinction between an engineering principle mostly unknown at the time something was designed, and a literal backdoor.
When you discount his theories with that argument, your own reductio ad Lizardum (?) doesn’t help. There’s a world of distinction between NSA inserting backdoors, for which there’s good evidence but maybe not every time, and whatever you’re trying to paint his theory as by invoking the Lizard People.
You haven't explained how my argument discounts his theories. You're just unhappy that I used the term "Lizard People". Ok: I retract "Lizard People". Where does that leave your argument?
I don't care about his theories. What matters is that US export controls on encryption were reduced due to his previous lawsuit, and that he has offered alternative encryption in the public domain.
> I believe the implication that NIST or NSA somehow bribed one of the PQC researchers to weaken a submission is risible.
maybe you don't know what risible means, but it reads like you're saying that the NSA "somehow" coercing someone is unlikely, which i'm sure you can agree is a "very naive and trusting view"
Nowhere does the comment say that the NSA "somehow" coercing someone is unlikely. Hence, it's a fair question whether the comment had been comprehended, because it seems it hasn't in this thread. If comprehension begets intelligence, then conclusions born from misunderstanding exude stupidity.
And, dropping the pedantry, it's quite frustrating to be deliberately or casually or in whatever way misrepresented by drive-by commenters in an otherwise apt discussion thread. Your comment and the one tptacek responded to are patronizing and dismissive and really don't contribute to any interesting discourse on the topic. I think it's fair to dismiss stupid drive-by low-effort quips, personally.
You used obscure language to make yourself look smart and deal with the resulting confusion by calling people stupid instead of clarifying what was said. Please get your ego in order.
The person is saying one thing then denying saying that thing and being a jerk about it. Either a bot or someone with a broken thesaurus. Glad you pointed it out because it’s ridiculous/risible.
That person is very well known in this community, and in other communities as well.
They are also known for making very specific arguments that people misinterpret and fight over, but the actual intent and literal meaning of the statements is most often correct (IMO).
Whether this is a byproduct of trying to be exacting with language (which tends to cause people interpretive problems) or a specific tactic to expose those who are careless with their reading and willing to make assumptions rather than ask questions is unknown to me, but that doesn't change how it tends to play out, from my perspective.
In this case, I'll throw you a bone and restate his position as I understand it.
NIST ran the competition in question in a way such that the entrants, all very well known in the cryptographic field, peer-reviewed each other. The suggestion that they could be bribed in this manner is extremely unlikely (note: not that the NSA would not attempt it, but the implication that it would succeed with the people in question), and that DJB would suggest as much, knowing his fame may matter to people more than the facts of who these people are, is problematic.
I'm not sure I'd use the same words, but yeah, the argument I'm refusing to dignify is that NSA could have been successful at bribing a member of one of the PQC teams. Like, what is that bribed person going to do? Look at the teams; they're ridiculously big. It doesn't even make sense. Again: part of my dismissiveness comes from how clear it is that Bernstein is counting on his cheering section not knowing any of this, even though it's a couple of Google searches away.
One trivial example implied by the blog post: Such corruption could be involved in the non-transparent decision making process at NIST.
Regarding Dual_EC: we still lack a lot of information about how this decision was made internally at NIST. That’s a core point: transparency was promised in the wake of discovered sabotage and it hasn’t arrived.
What do you mean, "how" the decision about Dual EC was made? It's an NSA-designed backdoor. NIST standardized it because NSA told them to. I'm sure NSA told NIST a story about why it was important to standardize it. The Kremlinology isn't interesting: it is NSA's chartered job to break cryptography, and nobody should ever trust them; the only thing NSA can do to improve cryptography is to literally publish secret attacks, and they're not going to do that.
What do I mean? Iran-Contra, Watergate, or 9/11 Commission report levels of investigation. Given how widely read the BULLRUN stories were, it's not credible to suggest the details aren't important.
The American people deserve to know who picked up the phone or held a meeting to make this happen. Who was present, who at NIST knew what, and so on. Who internally had objections and indeed who set the policy in the first place. What whistleblower protections were in place and why didn’t the IG have involvement in public? Why did we have to learn about this from Snowden?
NSA has a dual mandate, on that I hope we can agree. It’s my understanding that part of their job is to secure things and that part of their job is to break stuff.
NIST has no such dual mandate, heads should roll at NIST. We probably agree that NSA probably won’t be accountable in any meaningful sense, but NIST must be - we are stuck with them. Not trusting them isn’t an option for anyone who files their taxes or banks or does any number of other regulated activities that require using NIST standards.
If that is the case, then what is the explanation for NIST (according to DJB) 1. not communicating their decision process to anywhere near the degree that they vowed to, and 2. stone-walling a FOIA request on the matter?
> Whether this is a byproduct of trying to be exacting with language (which tends to cause people interpretive problems) or a specific tactic to expose those who are careless with their reading and willing to make assumptions rather than ask questions is unknown to me
Communicating badly and then acting smug when misunderstood is not cleverness (https://xkcd.com/169/).
If many people do not understand the argument being made, it doesn't matter how "exacting" the language is - the writer failed at communicating. I don't have a stake in this, but from afar this thread looks like tptacek making statements so terse as to be vague, and then going "Gotcha! That's not the right interpretation!" when somebody attempts to find some meaning in them.
In short: If standard advice is "you should ask questions to understand my point", you're doing it wrong. This isn't "HN gathers to tease wisdom out of tptacek" - it's on him to be understood by the readers (almost all of which are lurkers!). Unless he doesn't care about that, but only about shouting (what he thinks are) logically consistent statements into the void.
The explanation for the FOIA process is that public bodies routinely get intransigent about FOIA requests and violate the statutes. Read upthread: I have worked with Bernstein's FOIA attorneys before. Like everyone else, I support the suit, even as I think it's deeply silly for Bernstein to equate it to Bernstein v US.
If you made me guess about why NIST denied his FOIA requests, I'd say that Bernstein probably royally pissed everyone at NIST off before he made those requests, and they denied them because they decided the requests were being made in bad faith.
But they don't get to do that, so they're going to be forced to give up the documents. I'm sure when that happens Bernstein will paint it as an enormous legal victory, but the fact is that these outcomes are absolutely routine.
When we were FOIA'ing the Police General Orders for all the suburbs of Chicago, my own municipality declined to release theirs. I'd already been working with Topic on a (much more important) FOIA case from a friend of mine, so I reached out asking for him to write a nastygram for me. The nastygram cost me money --- but he told me having him sue would not! It was literally cheaper for me to have him sue my town than to have him write a letter, because FOIA suits have fee recovery terms.
I really can't emphasize enough how much suing a public body to force compliance with FOIA is just a normal part of the process. It sucks! But it's utterly routine.
> If that is the case, then what is the explanation for NIST (according to DJB) 1. not communicating their decision process to anywhere near the degree that they vowed to, and 2. stone-walling a FOIA request on the matter?
Why are you asking me, when I was clear I was just stating my interpretation of his position, and he had already replied to me with even more clarification to his position?
> Communicating badly and then acting smug when misunderstood is not cleverness
I don't disagree. My observations should not be taken as endorsement for a specific type of behavior, if that's indeed what is being done.
That said, while I may dislike how the conversation plays out, I can't ignore that very often he has an intricate and well thought out position that is expressed succinctly, and in the few cases where someone treats the conversation with respect and asks clarifying questions rather than making assumptions, the conversation is clear and understanding is quickly reached between most parties.
I'm hesitant to lay the blame all on one side when the other side is the one jumping to conclusions and then refusing to accept their mistake when it's pointed out.
At the risk of belaboring the obvious: An attacker won't have to say "Oops, researcher X is working in public and has just found an attack; can we suppress this somehow?" if the attacker had the common sense to hire X years earlier, meaning that X isn't working in public. People arguing that there can't be sabotage because submission teams can't be bribed are completely missing the point.
He goes on to say:
I coined the phrase "post-quantum cryptography" in 2003. It's not hard to imagine that the NSA/IDA post-quantum attack team was already hard at work before that, that they're years ahead of the public in finding attacks, and that NSA has been pushing NISTPQC to select algorithms that NSA secretly knows how to break.
Does this seem unreasonable, and if so, why?
He also remarks:
Could such a weakness also be exploited by other large-scale attackers? Best bet is that the answer is yes. Would this possibility stop NSA from pushing for the weakness? Of course not.
Doesn’t sound to me like he only has concerns about bribery. Corruption of the standards to NSA’s benefit is one overarching issue. It’s not the only one, he has concerns about non-American capabilities as well.
There are many methods for the NSA to achieve a win.
Ridiculing people for worrying about this is totally lame and is harmful to the community.
To suggest a few dozen humans are beyond the reach of the most powerful adversaries ever to exist is extremely naive at best. However, that literally isn't even a core point, as Bernstein notes clearly.
FFS, nobody is saying that the general idea of being skeptical is unreasonable. And nobody is being ridiculed for being skeptical. This subthread is about the contents of tptacek's comment, which doesn't do what you are saying. Saying DJB's claims are inconceivable is the mischaracterization. People are very eager to paint a picture nobody intended so they can say something and be right.
I use djb’s crypto. Everybody knows his speculation. Everybody knows why he’s pursuing more information. Nobody disagrees more information would be a public good. Some people are more skeptical than others that he’ll find anything substantial.
> If you RTFA you'd know it pertains to bribery, not coercion
Quoting the article, the text seems to directly contradict your summary as being too narrow. General coercion is also included among the concerns raised by TFA. He isn't just talking about the NSA giving a person a sack of money.
Meanwhile in this thread and on Twitter, many people are indeed doing the things you say that nobody is doing.
We almost all use Bernstein’s crypto — some as mere users, others as developers, etc. I’m not sure what that brings to the discussion.
I’m glad we agree that his work to gather more information is a public good.
The article discusses it generally but uses bribery as the example. Perhaps that’s the confusion. Someone said the idea that we’re gonna find bribes is silly. Someone else said that’s insane, how could you not imagine the govt doing something coercive. Reply was that’s not what I said. Another challenge follows asserting that the gov’t is generally shady and coercive. I tried to clarify what I see as the confusion (bribery vs coercion as an example used in the article). Sorry if my statement was overly broad, my intention was to say we’re probably mostly on the same side and arguing over semantics. Maybe not all of the world is (e.g. Twitter), but it seemed like the case here. Maybe not and tptacek believes the gov’t is infallible. IDK. I like DJB and appreciate what he’s doing.
Maybe he does know what risible means and is in fact extremely well informed, much better informed than you are, to the point where offering sarcasm on the apparent basis of absolutely nothing but what you've learnt from the internet is actually not a valuable contribution to the conversation but instead embarrassing. Have you considered this possibility as well?
I think it's naive and trusting only on the surface, but with some clear intent and goal underneath. In the past he has held a different stance, but it suddenly changed some time after Matasano.
Can I ask that, if you're going to accuse me of shilling in an HN thread, you at least come up with something that I'm shilling? I don't care what it is; you can say that I'm shilling for Infowars Life ProstaGuard Prostate Health Supplement with Saw Palmetto and Anti-Oxidant, for all I care, just identify something.
It's very disconcerting, for the sake of open and honest discourse, that you or someone else decided to flag (and thus censor) my reply to this request.
I don't think it's a bad thing to push back and demand transparency. At the very least the pressure helps keep NIST honest. Keep reminding them over and over and over again about dual-EC and they're less likely to try stupid stuff like that again.
Speaking of dual-EC -- two questions do seem to be debated often, though it can't be neglected that some of the vocal debaters may be NSA shills:
1. does the use of standards actually help people, or make it easier for the NSA to determine which encryption method was used?
2. are there encryption methods that actually do not suffer from reductions in randomness or entropy etc when just simply running the algorithm on the encrypted output multiple times?
It seems these questions always have piles of people ready to jump in saying "oh, don't roll your own encryption, ooh scary... fear, uncertainty, doubt... and oh, whatever you do, don't encrypt something 3X, that will probably make it easier to decrypt!!" It would be great if some neutral 3rd party could basically say: ok, here is an algorithm that is ridiculously hard to break, you can crank the number of bits up to a super crazy number, and you can also run the encryption N times, so that just not knowing the number of times it was encrypted would dramatically increase the complexity of decryption. But how many minutes before somebody jumps in saying: yea, don't do that, make sure you encrypt with a well-known algorithm exactly once... "trust me"?
1. Formal, centralized crypto standards, be they NIST or IETF, are a force for evil.
2. All else equal, fewer dependencies on randomness are better. But all else is not equal, and you can easily lose security by adding determinism to designs willy-nilly in an effort to minimize randomness dependencies.
Nothing is, any time in the conceivable future, going to change to make a broken RNG not game-over. So the important thing remains ensuring that there's a sound design for your RNG.
None of our problems have anything to do with how "much" you encrypt something, or with "cranking up the number of bits". That should be good news for you; generally, you can run ChaPoly or AES-CTR and trust that a direct attack on the cipher isn't going to be an issue for you. Most of our problems are in the joinery, not the beams themselves.
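As a concrete illustration of "the joinery, not the beams": a minimal sketch using the `cryptography` package's real ChaCha20-Poly1305 API. The cipher call itself is the easy part; the nonce handling around it is where deployments actually get hurt.

    # pip install cryptography
    import os
    from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

    key = ChaCha20Poly1305.generate_key()
    aead = ChaCha20Poly1305(key)

    # The beam: one authenticated-encryption call, safe against direct attack.
    # The joinery: the nonce, which must never repeat under the same key.
    nonce = os.urandom(12)
    ct = aead.encrypt(nonce, b"attack at dawn", b"header")
    assert aead.decrypt(nonce, ct, b"header") == b"attack at dawn"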
The problem with formal centralized standards is that they tend to become ceilings rather than floors for quality, and it's hard to write them otherwise. They do however serve a function in keeping total snake oil crypto out of government and industry. Having some rubber stamp from people who at least know something keeps people with no knowledge of cryptography from buying the latest absolutely uncrackable post-quantum military grade AES-4096 cryptography product.
I'm also not sold on the idea that informal popularity contests or academic processes (which are often themselves opaque) are always superior to formalized cryptography standards. It's absolutely possible for modern intelligence agencies to infiltrate, steer, and subvert decentralized communities and private sector institutions. We see it all the time.
IMHO Internet culture is unbelievably naive about this. Everyone of course believes that they are hip and smart enough to spot astroturf and could never be conned. Everyone thinks only other people who are obviously less savvy and smart than them could be conned. "Wake up sheeple!" is never spoken to the mirror.
For all we know the NIST curves and AES are stronger than the other stuff and there's an astroturf effort to get non-government entities not to use them! Get the hipsters using vulnerable stuff while NIST/NSA keep recommending the good stuff for classified government use. How do we know DJB doesn't work for the NSA? (I do not believe any of this!)
This way is madness. So I stick with the rule of "solid evidence or go home" when it comes to allegations and with general consensus of people who seem to know more than myself when it comes to algorithms and constructions.
And is astroturfing the most likely attack vector? That might work on big social media where it's easy to feel like you've got a finger on the pulse of public opinion by scrolling down a long list of anonymous content, but it presumably wouldn't work in crypto (or crypto adjacent) communities which are much smaller and where individual reputations are quite important.
>2. are there encryption methods that actually do not suffer from reductions in randomness or entropy etc when just simply running the algorithm on the encrypted output multiple times?
I think all block ciphers (e.g. AES) meet that definition. For AES, for a specific key, there's a 1-to-1 mapping of plaintexts to ciphertexts. It's impossible that running a plaintext through AES produces a ciphertext with less entropy, because if the ciphertext had less entropy, it would be impossible to decrypt to get back the plaintext, but AES always allows decryption.
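A toy check of that point, assuming the `cryptography` package (single-block ECB used only to expose the raw permutation, not as a way to encrypt real data): double encryption stays fully invertible, so no entropy is lost.

    # pip install cryptography
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(32)
    enc = lambda blk: Cipher(algorithms.AES(key), modes.ECB()).encryptor().update(blk)
    dec = lambda blk: Cipher(algorithms.AES(key), modes.ECB()).decryptor().update(blk)

    block = os.urandom(16)                     # one 128-bit block
    assert dec(dec(enc(enc(block)))) == block  # double encryption round-trips exactly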
> are there encryption methods that actually do not suffer from reductions in randomness or entropy etc when just simply running the algorithm on the encrypted output multiple times?
Unless you can prove that, e.g., all 2^256 possible 256-bit inputs map to 2^256 different 256-bit outputs (for every key, in the case of encryption), chances are you lose strength with every application, because multiple inputs map to the same output (and consequently some outputs are not reachable).
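A small simulation of that worry on a toy domain: iterating a random non-bijective function keeps shrinking the reachable output set (after one round only about 63% of values survive), while iterating a permutation, which is what a block cipher under a fixed key is, never loses anything.

    import random

    n = 2**16
    f = [random.randrange(n) for _ in range(n)]  # random function: almost surely not 1-to-1
    p = list(range(n)); random.shuffle(p)        # random permutation: exactly 1-to-1

    for name, g in (("function", f), ("permutation", p)):
        reachable = set(range(n))
        for _ in range(5):
            reachable = {g[x] for x in reachable}
        # the function's image shrinks every round; the permutation's stays n
        print(name, len(reachable))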
I have no doubt that they are great at their job, but when it comes to lawsuits the judge(s) are equally important. You could get everything right, but a judge has extreme power to interpret the law or even ignore it in select cases.
I wouldn't say they ignore the law, but legislation like FOIA leaves a lot of discretion to balance competing interests, and that's where a judge would make the most difference, despite all the great articulations of the most brilliant lawyers.
There are very few public bodies that do a solid, to-the-letter job of complying with their open records requirements. Almost all FOIA failings are due to the fact that it isn't staffed adequately; FOIA officers, clerks, and records attorneys are all overworked. When you do a bunch of FOIA stuff, you get a feel for what's going on with the other side, and you build a lot of empathy (which is helpful in getting your data over the long run).
And then other times you run into bloody-mindedness, or worse.
I don't think NIST has many excuses here. It looks like they botched this straightforwardly.
It's a straightforward case. My bet is that they'll lose it. The documents will get delivered. That'll be the end of it.
Near the end of the post – after 50 years of axe grinding – djb does eventually get to the point wrt pqcrypto. I find the below excerpt particularly damning. Why not wrap nascent pqcrypto in classical crypto? Suspect!
--
The general view today is that of course post-quantum cryptography should be an extra layer on top of well-established pre-quantum cryptography. As the French government cybersecurity agency (Agence nationale de la sécurité des systèmes d'information, ANSSI) put it at the end of 2021:
Acknowledging the immaturity of PQC is important: ANSSI will not endorse any direct drop-in replacement of currently used algorithms in the short/medium term. However, this immaturity should not serve as an argument for postponing the first deployments. ANSSI encourages all industries to progress towards an initiation of a gradual overlap transition in order to progressively increase trust on the post-quantum algorithms and their implementations while ensuring no security regression as far as classical (pre-quantum) security is concerned. ...
Given that most post-quantum algorithms involve message sizes much larger than the current pre-quantum schemes, the extra performance cost of an hybrid scheme remains low in comparison with the cost of the underlying post-quantum scheme. ANSSI believes that this is a reasonable price to pay for guaranteeing an additional pre-quantum security at least equivalent to the one provided by current pre-quantum standardized algorithms.
But NSA has a different position: it says that it "does not expect to approve" hybrids. Publicly, NSA justifies this by
- pointing to a fringe case where a careless effort to add an extra security layer damaged security, and
- expressing "confidence in the NIST PQC process".
Does that mean the original NISTPQC process, or the current NISTPQC process in which NIST, evidently surprised by attacks, announced plans to call for new submissions?
Of course, if NSA/IDA have secretly developed an attack that works for a particular type of post-quantum cryptosystem, then it makes sense that they'd want people to start using that type of cryptosystem and turn off the existing pre-quantum cryptosystem.
This is the least compelling argument Bernstein makes in the whole post, because it's simply not the job of the NIST PQC program to design or recommend hybrid classical/PQC schemes. Is it fucky and weird if NSA later decides to recommend against people using hybrid key establishment? Yes. Nobody should listen to NSA about that, or anything else. But NIST ran a PQC KEM and signature contest, not a secure transport standardization. Sir, this is a Wendy's.
It’s compelling in context. If the NSA influenced NIST standards 3x in the past — DES, DSA, Dual EC — then shouldn’t we be on high alert this 4th time around?
That NSA is already recommending against hybrid, instead of waiting for the contest results, might signal they’ve once again managed to game the standardization process itself.
At the very least — given the exhaustive history in this post — you’d like to know what interactions NSA and NIST have had this time around. Thus, djb’s FOIA. And thus the lawsuit when the FOIA went unanswered. It all seems very reasonable to me.
It's pretty obvious, right? What Bernstein is saying here can be (and probably is) a load of horseshit, and it's still a terrible idea to trust NIST. Seems like a simple argument.
Uh, would you take your simple argument and give me even a single sentence of proof, or a clear indication that it's horseshit? So far I've only seen you arguing over words, mostly.
I'm comfortable that, in the zillion words I've self-indulgently written on this thread, I've established both my bona fides and where I'm coming from with respect to the issues at play here, so in the interests of not repeating myself, I'm not going to repeat myself.
An interesting thing happening on the Bitcoin mailing list: although it would be quite easy to add Lamport signatures as an extra safety feature for high-value transactions, they would be quite expensive and easy to misuse (they can be used only once, which is a problem if money is sent to the same address twice), so the current consensus among developers is to "just wait for NSA/NIST to be ready with the algorithm". I haven't seen any discussion of the possibility of them never being ready on purpose, because of sabotage.
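For anyone who hasn't seen one, a Lamport signature is simple enough to sketch in a few lines. This is a minimal illustrative sketch of my own (Python, SHA-256; not anything from the Bitcoin thread, and certainly not production code). It also shows why the key is strictly one-time: signing reveals half of the private key, chosen by the message bits.

    import hashlib, secrets

    def H(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def keygen():
        # Private key: 256 pairs of random 32-byte values, one pair per digest bit.
        sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
        # Public key: the hash of every private value.
        pk = [(H(a), H(b)) for a, b in sk]
        return sk, pk

    def bits_of(message: bytes):
        d = H(message)
        return [(d[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

    def sign(sk, message: bytes):
        # Reveal one value from each pair; reusing the key leaks the other halves.
        return [sk[i][b] for i, b in enumerate(bits_of(message))]

    def verify(pk, message: bytes, sig) -> bool:
        return all(H(sig[i]) == pk[i][b] for i, b in enumerate(bits_of(message)))

    sk, pk = keygen()
    sig = sign(sk, b"high-value transaction")
    assert verify(pk, b"high-value transaction", sig)

Two signatures with the same key would reveal both halves of many pairs, which is exactly the misuse hazard the developers are worried about.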
An expert, a prominent one, someone the whole cryptography community listens to, and he calls out the lies, crimes, and blatant hypocrisy of his own government.
I genuinely fear that he will be suicided one of these days.
I think the United States is more about charging people with crimes and ruining their lives that way rather than disappearing people. Russia might kill you with Polonium and make sure everyone knows it, but America will straight up “legally“ torture you in prison via several means and then argue successfully that those methods were legal and convince the world you weren’t tortured. Anyone who’s a target for that treatment, though, knows that’s a lie.
The FBI will just interview you over whatever and then charge you for lying to a federal agent or dig up some other unrelated dirt. While the original investigation gets mysteriously dropped a year later.
It seems silly to me how so many people immediately dismiss anyone even suggesting that something fishy was going on with those cases, when we already know about MKUltra, the Tuskegee experiment, etc.
I just want to say, the problem here is that worldwide standards bodies for encryption need to be trustworthy. It is incredibly hard to know what encryption is actually real without a deep mathematics background, and even then a choir of peers must be able to present algorithms, and audits of those algorithms, with a straight face.
Presenting broken-by-design encryption undermines public confidence in what should be one of our most sacrosanct institutions: the National Institute of Standards and Technology (NIST). Many enterprises do not possess the capability to audit these standards and will simply use whatever NIST recommends. The danger is that we could be engineering embedded systems which will be in use for decades which are not only viewable by the NSA (which you might be ok with depending on your political allegiance) but also likely viewable by any capable organization on earth (which you are probably not ok with irrespective of your political allegiance).
In short, we must have trustworthy cryptography standards. If we do not, bedlam will follow.
There's an easier problem here, which is that our reliance on formal standards bodies for the selection of cryptography constructions is bad, and, not just at NIST, has been over the last 20 years mostly a force for evil. One of the most important "standards" in cryptography, the Noise Protocol Framework, will probably never be a formal standard. But on the flip side, no formal standards body is going to crud it up with nonsense.
So, no, I'd say that bedlam will not follow from a lack of trustworthy cryptography standards. We've trusted standards too much as it is.
No. I don't think we should rely on formal standards, like FIPS, NIST, and the IETF. Like Bernstein himself, I do think we should rely on peer-reviewed expert cryptography. I use Chapoly, not a stream cipher I concocted myself, or some bizarro cipher cascade posted to HN. This is what I'm talking about when I mentioned the Noise Protocol Framework.
If IETF standards happen to end up with good cryptography because they too adopt things like Noise or Ed25519, that's great. I don't distrust the IETF's ability to standardize something like HTTP/3. I do deeply distrust the process they use to arrive at cryptographic architectures. It's gotten markedly better, but there's every reason to believe it'll backslide a generation from now.
(There are very excellent people who contribute to things like CFRG and I wouldn't want to be read as disparaging any of them. It's the process I have an issue with, not anything happening there currently.)
Standards are for people who are not experts in the field, or who don't have the time and energy to research the existing crypto and actually sift through it to decide what to trust and what not to trust.
A lack of standardization might just make it harder for Joe to filter through the Google results and figure out which algorithm to use. He may just pick the first result on Google, which is an ad for the highest bidder on some keywords, and which may or may not be good.
Maybe not "essential" but I'm willing to guess that a huge percentage of the modern web communicates over HTTP or HTTPS.
"Essential" is interesting because you could definitely argue that HTTP isn't essential, but I don't think there is any feasible way of denying that the formalization and acceptance of early internet protocols (UDP, TCP, HTTP, FTP, etc) have played a significant role in shaping our modern technology world.
In a similar way, having reasonable standards makes it easier for everyone that isn't an expert in a particular field to just use something that is likely to work reasonably well while they worry about some other special part of their idea.
> No. I don't think we should rely on formal standards, like FIPS, NIST, and the IETF.
I assume your concerns are with the process of standardization, and not the idea of standards themselves. After all, there are plenty of expert peer-reviews going on in NIST and in the IRTF.
Noise is useful for building your own bespoke kit, but there does need to be an agreement to use it in the same manner if you hope for interoperability. Things like public key crypto are precisely useful because the other side can read the information back out at the end of the process, even if they aren't running e.g. the same email client version.
NIST is procedurally the least objectionable of all of these standards bodies. Contests are better than collaborations. But NIST itself is a force for evil, not for the lurid message board reason of a shadowy cabal of lizard people trying to weaken PQC, but because "NIST standardization" keeps a lot of 1990s-era crypto in use and prevents a lot of modern crypto from being deployed in the industry.
I guess this is my point: If you have strong mathematicians and cryptographers, you don't end up using NIST.
There are lots of companies who have need for cryptography who don't know who to trust. What should they do in a world where the standards bodies are adversarial?
Maybe this is just the future, if you don't know crypto you're doomed to either do the research or accept that you're probably backdoored? Seems like a rough place to be...
So use whatever crypto Signal uses, or that WireGuard uses. You're not working in a vacuum. You don't even trust NIST to begin with, and yet we still encrypt things, so I'm a little confuddled by the argument that NIST's role as a trusted arbiter of cryptography is vital to our industry. NIST is mostly a force for evil!
Signal’s crypto doesn’t solve all problems (neither does wireguard).
For example, we built private information recovery using the first production-grade open source implementation of oblivious RAM (https://mobilecoin.com/overview/explain-like-i'm-five/fog; you’ll want to skip to the software engineer section) so that organizations could obliviously store and recover customer transactions without being able to observe them. The Signal protocol’s techniques might be part of a cryptographic solution, but it is not a silver bullet.
I guess, notably, we never looked at NIST when designing it so maybe that’s the end of the discussion there.
I didn't say Signal and WireGuard "solved all problems", and neither does any given NIST standard! The track record of cryptosystems built to, say, FIPS standards is extremely bad.
Look, my point is that there are lots of companies around the world who can’t afford highly skilled mathematicians and cryptographers on staff. These institutions rely on NIST to help them determine what encryption systems may make sense. If NIST is truly adversarial, the public has a right to know and determine how to engage going forward.
They don't have to (and shouldn't) retain highly skilled mathematicians. Nobody is suggesting that everyone design their own ciphers, authenticated key exchanges, signature schemes, and secure transports. Peer review is good; vital; an absolute requirement. Committee-based selection processes are what's problematic.
I'm just saying, you're speaking as an expert in the field. Let's say you don't want to design any of that stuff, but you need some parts of those systems for the thing you're building. How do you decide what you can or can't trust without having deep knowledge of the subject matter?
How do you know that Noise is a good design and that a cipher cascade isn't? Whatever (correctly) told you that, apply it to other cryptographic problems.
I see. So maybe what you’re really saying is “why are you writing a system that has cryptographic primitives if you’re not a cryptographer/mathematician?”
Let me ask this another way. I know how we determined Noise was a good standard: by talking to a lot of people who had built sophisticated crypto systems, and then doing the research ourselves. But that’s only because we had the people on staff who had the capacity to evaluate such systems.
If we didn’t have those people, how would you suggest figuring out which system to implement?
Peer review is a good start. Noise, and systems derived from it like WireGuard, are peer reviewed (check scholar.google.com for starters), and NIST had nothing at all to do with it.
It is incredibly hard to get a good grasp of the consensus in a literature as a non-expert just by searching Google Scholar. People spend years in graduate school to learn to do that.
Are there reputable journals or conference proceedings that you specifically recommend reading for high-quality literature reviews?
There's nothing you're going to read, with or without trustworthy standards, that is going to enable you to design safe novel applications of cryptography. Encrypting a file, setting up a secure transport, and (if you're extraordinarily careful and do a lot of reading) exchanging secure messages are all within reach without anything resembling postgraduate education.
I got a lot of mileage out of attending IACR in person. Lots of amazing content there every time. A lot of it is addressable even if you aren't going to do the math.
I think this is a sloppy take. If you read the full back-and-forth on the FOI request between D.J. Bernstein and NIST, it becomes readily apparent that there is _something_ rotten in the state of NIST.
Now of course that doesn't necessarily mean that NIST's work is completely compromised by the NSA (even though it has been in the past), but there are other problems that are similarly serious. For example, if NIST is unable to explain how certain key decisions were made along the way to standardisation, and those decisions appear to go against what would be considered by prominent experts in the field as "good practice", then NIST has a serious process problem. This is important work. It affects everyone in the world. And certain key parts of NIST's decision making process seem to be explained with not much more than a shrug. That's a problem.
All you're saying here is that NIST failed to comply with FOIA. That's not unusual. No public body does a reliably good job of complying with FOIA, and many public bodies seem to have a bad habit of pre-judging the "merits" of FOIA requests, when no merit threshold exists for their open records requirements.
NIST failing to comply with FOIA makes them an intransigent public body, like all the rest of them, from your local water reclamation board to the Department of Energy.
It emphatically does not lend support to any of this litigant's concerns about the PQC process. I don't know enough (really, anything) about the PQC "contest" to judge claims about its validity, but I do know enough --- like, the small amount of background information needed --- to say that it's risible to suggest that any of the participating teams were compromised by intelligence agencies; that claim having been made in this post saps its credibility.
So, two things I think a reasonable person would want to establish here: first, that NIST's behavior with respect to the FOIA request is hardly any kind of smoking gun, and second that the narrative being presented in this post about the PQC contest seems somewhere between "hand-wavy" and "embarrassing".
> It emphatically does not lend support to any of this litigant's concerns about the PQC process.
I agree with most of what you're saying except for this. In my view, unlike some of the other organisations you mentioned, the _only value_ of NIST is in the quality and transparency of its processes. My reading of the DJB/NIST FOI dialogue is that there is reason to believe NIST has serious process problems that go far beyond simply handling an FOI well. From their own responses, it reads as if they aren't able to articulate themselves why they would choose one contestant's algorithm over another's. That kind of undermines the entire point of having an open contest.
The peer review NIST is refereeing happened in the open. Thus far, Bernstein is the only person making these claims. For all the words he burns on NIST's sordid history, he chose to participate in this NIST-run process, and imploded publicly only after the results were announced. There are dozens of cryptographers with reputations in the field comparable to Bernstein's who also participated. Bernstein is the only one suggesting that NSA bribed the contest winners.
From what I can tell, nobody who actually works in this field is taking any of this seriously; what I see is a whole lot of eye rolling and "there he goes again". But you don't get any of that on HN, because HN isn't a forum for cryptography researchers. All you get is Bernstein's cheering section.
I was part of Bernstein's cheering section! I understand the feeling. And, like, I'm still using ChaPoly and 25519 in preference to any of the alternatives! He's done hugely important work. But he has, not to put too fine a point on it, a fucked up reputation among his peers in cryptography research, and he's counting on you not to know that, and to confuse a routine, workaday FOIA lawsuit with some monumental new bit of litigation.
It's a deeply cynical thing for him to be doing.
He could have just announced, in his lovably Bernsteinian† way, that NIST had failed in its FOIA obligations, and he was holding them to account. I'd be cheering too. But he wrote a screed that culminated in an allegation that NSA had bribed members of PQC teams to weaken their submissions. Simply risible; it's embarrassing to be part of a community that dignifies that argument, even if I absolutely get why it's happening. I have contempt for him for exploiting all of you.
None of this is to take anything away from his FOIA suit. I stan his FOIA attorneys. The suit, boring as it is, is a good thing. He should win, and he almost certainly will; L&L wouldn't have taken the case if he wasn't going to. Just keep in mind, people sue and win over FOIA mistakes all the time. In Illinois, you even get fee recovery when you win. This isn't Bernstein v United States!
† I'm not being snarky; I was a multiple-decades-long admirer of that style.
The main concern that I have is the NIST refusal to consider a hybrid design as described in the blog, coupled with the fact that OpenSSH has disregarded NIST and standardized on hybrid NTRU-Prime.
There had to be substance to accomplish this, and it moves all of UNIX plus Microsoft away from crystals. It would seem hugely damaging to crystals as the winner of the latest round.
I don't think you understand what's going on here. The point of the PQC "contest" is to figure out which PQC constructions to use. It's not to design hybrid classical/PQC schemes: everybody already knows how to do that. The idea that NIST should have recommended CRYSTALS-Kyber+Curve25519 is a little like suggesting that they should have recommended Rijndael+DES-EDE.
It's simply not NIST's job to tell tls-wg how to fit PQC into HTTPS, or the OpenSSH team how to fit it into SSH.
If you trust the OpenSSH team more than NIST, that's fine. I think that's a reasonable thing to do. Just do whatever OpenSSH does, and you don't have to worry about how corrupt NIST's process is. I don't even think NIST is corrupt, and I still think you'd be better off just paying attention to whatever OpenSSH does.
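To be concrete about "everybody already knows how to do that": the hybrid part is mechanically trivial. Here's a rough sketch of my own in the spirit of OpenSSH's sntrup761x25519-sha512 (the two input secrets below are random stand-ins, not a real KEM or ECDH API): you feed both shared secrets into one hash/KDF, so an attacker has to break both components to recover the session key.

    import hashlib, secrets

    def hybrid_shared_secret(kem_secret: bytes, ecdh_secret: bytes) -> bytes:
        # The session key depends on BOTH inputs; breaking one component alone is useless.
        return hashlib.sha512(kem_secret + ecdh_secret).digest()

    # Stand-ins for an NTRU Prime encapsulation output and an X25519 shared secret:
    session_key = hybrid_shared_secret(secrets.token_bytes(32), secrets.token_bytes(32))

Designing that wrapper was never the PQC contest's job; it's the job of protocols like SSH and TLS.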
That would make it seem that the lengthy hybrid discussion in the blog is a misdirection.
I will grant you that this does support your argument.
EDIT: Actually, what you have said does not seem at all correct.
In DJB's Apon complaint, we find this text:
'For example, in email to pqc-forum dated 30 Oct 2019 15:38:10 +0000 (2019), NIST posted technical comments regarding hybrid encryption modes and asked for feedback "either here on the pqc-forum or by contacting us at pqc-comments@nist.gov" (emphasis added).'
If hybrid encryption is entirely beyond the purview of the NIST PQC competition, then why did this discussion and feedback request ever take place?
Look, I'm just not going to dignify the argument that there is somehow some controversy over the NIST PQC contest not recommending higher-level constructions to plug PQC KEMs into Curve25519 key exchanges. I get that this seems like a super interesting controversy to you, because Bernstein's blog post is misleading you, but this simply isn't a real controversy.
Repeating this here. We (OpenSSH) have not disregarded NIST, we just added a PQ algorithm before NIST finished their competition and we'll almost certainly add support for the finalist fairly soon.
What's with the infighting here? Nothing about the post comes across as conspiracy theory level or reputation ruining. It makes me question the motives of those implying he's crazy, to be honest.
Post-quantum cryptography is essentially a full-employment program for elite academic public key cryptographers, which is largely what the "winning" PQC teams consist of. So, yeah, suggesting that one of those teams was compromised by an intelligence agency is "conspiracy theory level".
Nobody is denying the legitimacy of the suit itself. NIST is obligated to follow public records law, and public records law is important. Filippo's message, which we're all commenting on here, says that directly.
Has the general notion of "conspiracy theory" ever carried any positive value? It only seems to exist to discredit "doubters against the majority consensus" without substance. But I guess words like "crank" wouldn't even exist if there weren't many people like that, so it carries some "definitional" value.
Because they show total disregard for someone's opinion (put more formally: "unlike you/them, I completely agree with the (apparent) majority consensus", which they also imply), these words probably don't belong in a serious discussion.
Our notion of "crank"/conspiracy theory is a logical consequence of "extraordinary claims require extraordinary evidence." When that evidence isn't provided all that remains is an exceptionally convoluted explanation, generally involving more parties than necessary, hence "conspiracy."
These words probably also tend to describe people who hold a belief in these theories, rather than treating them as statements without evidence (which both sides probably should, because there is no evidence against them either).
On the other hand, discussing statements without evidence (even if they are not presented as beliefs) has some (opportunity?) cost, which the "theorists" are willing to pay.
One says he’s doing it wrong. The other says he hopes that he wins, of course!
Meanwhile they go on to attack Bernstein, mischaracterize his writing, completely dismiss his historical analysis, mock him with memes as a conspiracy theorist, and, to top it off, question his internal motivations (which they somehow know) as some kind of sore loser, which is demonstrably false.
The plot twist for the last point: he is still in the running for round four and his former PhD students did win major parts of round three.
Two things can easily be true: that NIST mishandled a FOIA request, and that there isn't especially good reason to accept on faith Bernstein's concerns about the PQC process, which is unrelated to how they handle FOIA.
Meanwhile: you haven't actually added any light to this subthread: the tweets we're talking about do not dismiss the suit. Cryptographic researchers that aren't stans of Daniel Bernstein (there are a lot of those) are also unhappy about NIST clowning up FOIA.
You are in a deeply weird and broken place if you think you can divide the world into "people who take what Daniel Bernstein says on faith" and "people who trust NIST". I don't know if you're in that place! But some people on this thread clearly are.
You wrote a large number of comments on this so I am asking this here since it's fresh.
Can you comment on why you think djb thinks it is worth investigating whether the NSA is attempting to destroy cryptography with weak PQC standards? I read through some of the entries NIST just announced, and there are indeed attacks, grave attacks, against Kyber and Falcon. I have no reason to believe the authors of those specs work with the NSA. Wouldn't a more reasonable conclusion be that we need to do more work on PQC? Maybe I have it wrong and he is just trying to rule out that possibility, but his long rant, which was 80% about NIST and their history with the Dual EC backdoor, really points at djb concluding that the NSA is deliberately trying to weaken crypto by colluding with a bunch of people who probably don't care that much about money or the NSA's goals.
You'd have to ask Bernstein. I think it's helpful to take a bit of time (I know this is a big ask) to go see how Bernstein has comported himself in other standards groups; the CFRG curve standardization discussion is a good example. The reason I said there's a lot of eye-rolling about this post among cryptographers is that I think this is pretty normal behavior for Bernstein.
I used to find it inspiring; he got himself crosswise against the IETF DNS working group, which actively ostracized him, and I thought the stance he took there was almost heroic (also, I hate DNSSEC, and so does he). But when you see that same person get in weird random fights with other people, over and over again, well: there's a common thread there.
Is it worth investigating whether NSA is trying weaken PQC? Sure. Nobody should trust NSA. Nobody should trust NIST! There's value in NIST catalyzing all the academic asymmetric cryptography researchers into competing against each other, so the PQC event probably did everybody a service. But no part of that value comes from NIST blessing the result.
It's probably helpful for you to know that I think PQC writ large is just a little bit silly. Quantum computers of unusual size? I don't believe they exist. I think an under-appreciated reason government QC spending happens is because government spending is a goal in and of itself; one of NSA's top 3 missions is to secure more budget for NSA --- it might even be the #1 goal. Meanwhile, PQC is a full-employment program for academic cryptographers working on "Fun" asymmetric schemes that would otherwise be totally ignored in an industry that has more or less standardized on the P-curves and Curve25519.
Be that as it may: whether or not NSA is working to "weaken" CRYSTALS-Kyber is beside the point. NSA didn't invent CRYSTALS. A team of cryptographers, including some huge names in modern public key crypto research, did. Does NSA have secret attacks against popular academic crypto schemes? Probably. You almost hope so, because we pay them a fuckload of money to develop those attacks. But you can say that about literally every academic cryptosystem.
You probably also don't need me to tell you again how much I think formal cryptographic standards are a force for evil in the industry.
ok, thanks. I didn't know that about djb's history as far as picking fights with standards groups. I don't know much about him outside of the primitives he designed. That makes some sense in context now because the implication just seemed like a stretch. Cryptosystems break and have flaws in them, that's nothing new. It's just strange to leap to "The NSA did it", but again, I didn't know he just tends to accuse people of that.
I agree about the PQC stuff and committees. Anyways, thanks for clarifying this.
Just bear in mind that this is just opinions and hearsay on my part. Like, I think there's value in relaying what I think I know and what I've heard, but I'm not a cryptographer, I paid almost no attention to the PQC stuff (in fact, I pretty much only ever swapped PQC into my resident set when Bernstein managed to start drama with other cryptographers whose names I knew), and there are possibly other sides to these stories. I've seen Bernstein drama where it's pretty clear he's deeply in the wrong, and I've seen Bernstein drama where it's pretty clear he wasn't.
The suit is good. NIST isn't allowed to clown up FOIA; they have to do it right.
I am definitely not in that place. We clearly disagree on a few points.
The issues raised in the blog post aren’t just about NIST mishandling the FOIA. By reducing it to the lawsuit, this is already a bad faith engagement.
The blog post is primarily about the history of NSA sabotage as well as contemporary efforts, including (NIST’s) failures to stop this sabotage. Finally it finishes the recent history by raising that there are mishandling issues in the pq-crypto competition. The lawsuit is at the end of a long chronological text with the goal of finding more information to extend the facts that we know. This is a noble goal, and it’s hard to accept any argument that the past in this area hasn’t been troubled.
Weirdly, there is an assumption made immediately by Filippo, without basis in fact: he supposes Bernstein somehow lost the contest and that this is his motivation for action. Bernstein hasn’t lost, though some structured lattices have won. He still has submitted material in the running, as far as I understand things. Nonetheless, we see that Filippo tells us the deepest internal motivations of Bernstein, though we don’t learn how he learned these personal secrets. This is simply not reasonable. Maybe it could be phrased as a question, but then the rhetorical tool of denying questions as a valid form of engagement would start to fade away.
Back to the core of the tweets: one of the two says he hopes he wins the suit, the other says he’s doing it wrong. We could read that as both of them hoping he wins, and yet… that's hard to believe when their rhetoric centers on Bernstein’s blog post and lawsuit being supposedly harmful to the community at large.
Bernstein isn’t attacking a singular person as Filippo is attacking Bernstein. Filippo even includes a meme to drive home the personal nature of the attacks.
For me personally, I used to find this meme funny, until I learned its history. This strikes me as a blind spot, once my very own. The context and history of that meme and that scene is dark.
So then, here is some light for you: This meme is a parody from a comedy. In turn it is a parody of a famous scene from a film portraying John Nash. It’s about a very famous mentally ill mathematician. Nash in this scene is the iconic, quintessential conspiracy theorist insane person once considered a genius. Nash is drawing connections that aren’t there and that aren’t reasonable. He was deeply mentally ill at that point in his life. That is a brutal thing to say in itself about anyone, but… it gets worse.
Nash was also famously virulently antisemitic in some of his psychological breaks and outbursts. I don’t hold him responsible for his ravings, as he was a paranoid schizophrenic, but wow, I would not throw up that specific meme at a (Jewish) mathematician while implying he’s a crazy conspiracy theorist. It’s some really gross mental-health hate mixed with ambiguity about the rest. It could be funny in some contexts, I suppose, but not this one.
So in summary: that is a gross meme to post in a series of ad-hominem tweet attacks calling (obviously Jewish family name) Bernstein a conspiracy theorist, saying he is making obviously crazy, baseless connections. The root of his concern is not insane and ignoring the history of sabotage in this area by NSA is unreasonable.
I assume this meme subtext is a mistake and it wasn’t intended as antisemitic. Still after processing the mental health punching down part of the meme, I had trouble assuming good faith about any of it. Talk about harmful rhetoric in the community.
I also note that they attack him in a number of other bad-faith ways, which makes me lose my assumption of good faith generally about their well-wishing on his lawsuit being successful.
Meanwhile, I don’t take Bernstein on faith. I find his arguments and points in the blog post convincing. I find his history of work in the public interest convincing. I don’t care about popularity contests or personal competition. Meanwhile you say you’re not following the contest.
Corruption of NIST and other related parties isn’t just possible; we know it has happened. We should be extra vigilant that it doesn’t repeat. FOIA is a weak mechanism, but it’s something. Has any corruption or sabotage happened here? We don’t know yet, and more importantly, NIST have promised transparency that they haven’t delivered. A promise is a good start, but it’s not sufficient.
NIST have slipped their own deadlines, they have been silent in concerning ways, and they’re still failing to provide critical details about the last round of NSA sabotage that directly involved NIST standardization.
I just want to jump back here for a second, because when I responded to this comment last night, I hadn't really read it (for I think obvious reasons, but also I was watching Sandman). So I wrote some replies last night that I'm not super proud of --- not because I said anything wrong, but because I didn't acknowledge the majesty of the argument I had been confronted with.
This right here is a comment that makes the following argument, which I will helpfully outline:
* Filippo Valsorda wrote a tweet that included a meme from "It's Always Sunny In Philadelphia"
* That meme is a parody of "A Beautiful Mind"
* "A Beautiful Mind" is about John Nash --- hold on to that fact, because the argument is about to bifurcate
* John Nash was mentally ill
* John Nash was virulently anti-semitic (hold on to your butts...)
* Ergo, Filippo Valsorda is both bigoted against the mentally ill, and also an anti-semite.
Can we do other memes like this? I'd like your exegesis of the "Homer Simpson dissolves backwards into the hedges" meme next!
> … I didn't acknowledge the majesty of the argument I had been confronted with.
Gee, thanks, I think. Sorry to say we don’t agree on your summary of my comment.
> Filippo Valsorda wrote a tweet that included a meme from "It's Always Sunny In Philadelphia"
From this, we already have serious disagreements. It’s part of a series of tweets amplified by others. It isn’t a single tweet in isolation even when we only look at the direct author. We do agree on the source of the clip, though I think you weren’t familiar with the background of the subject parodied in the clip as I raised it. Perhaps you do not believe it or perhaps you think that the parody somehow erases what was parodied originally. Reasonable people can read it many ways.
Yep. The implication of using such a meme to punch down is mirrored in the words of the related tweets calling him a conspiracy theorist. This wasn’t, as you tried to say, a single tweet; it’s presented in a context that is harsh and condemning.
> John Nash was virulently anti-semitic
Maybe, it’s unclear if it was a byproduct of his mental illness or a sincerely held belief. It’s a third rail, regardless. I won’t hold a mentally ill person accountable for stuff they say during an episode, and I also won’t use it as a joke.
> Ergo, Filippo Valsorda is both bigoted against the mentally ill, and also an anti-semite.
This isn’t my claim. My claim is that it’s completely inappropriate on many levels to post not only that meme but to use it in tandem with direct personal attacks on Bernstein. This seems especially relevant in a thread supposedly about damaging behavior of other people in the community.
I would prefer you don’t cover for mental health stigmatization or antisemitic dog whistling even a tiny bit, especially if it was not intended. Painting me as crazy for my analysis is shitty. You asked me to bring some light and then attack me for sharing my actual thoughts. You didn’t acknowledge my insight about Jewish names, either. Was that news to you? Dismissively omitting anything about that insight is weird.
Please leave no room for ambiguity here, it is a very dangerous time in the world, and in America, especially after the Tree of Life murders. There are many many other examples of terrible stuff like that - and anything that even remotely smells like that must be immediately challenged in my view. No doubt this personal context makes myself and others extra sensitive. That is exactly why I explained my understanding of the meaning.
I am happy to provide an analysis of Homer Simpson memes in context if it can help us break the ice and not end this thread on hard or hateful terms.
"Crazy" is not the word I would use about what you've written here. It is unlikely you and I are going to have any productive dialog after this, which is totally fine; I'm happy to disengage here.
There's nothing "bad faith" about it. The tweet is supportive of the lawsuit, and not supportive of Bernstein's weird, heavily-telegraphed, long-predicted claims that a NIST contest he opted to participate in was corrupted by dint of not prioritizing his own designs.
Your bit about the "obviously Jewish family name" thing is itself risible, and you should be embarrassed for trying to make it a thing.
Your argument that the selection doesn’t pick his designs doesn’t square with SPHINCS+ winning, and with others remaining in the running. His former PhD student won with Kyber. Bernstein did very well here, and you’re misleading people by suggesting he had his ass handed to him.
He has published (and it is linked from the blog) his views on how to run cryptographic contests before their recent selection finished (late). His comments are not simply the result of the round three announcement.
As to the offensive meme, I note that you don’t even dispute the punching down about mental health. Gross.
Bernstein is a German-Jewish name. These names were given and in some cases forced on people in history to give a signal to others, usually negative. This is a hint, not a fact of his beliefs. My understanding is that he does come from a Jewish family. I won’t presume to speak for Bernstein’s beliefs, just that I see something obviously tense and probably wrong.
It’s your choice to not care to comment about the antisemitic connotations that I raised. My point was that for some people this is impossible to not see. It is highly offensive given the context. Now I understand that you refuse to do so when shown. Also extremely gross.
I didn't even notice a "punching down about mental health" thing. You wrote a long comment, I skimmed it. Your allegation that Filippo and Matt Green are antisemitic is ludicrous.
I didn't say Bernstein had his ass handed to him. I said that he wrote thousands and thousands of words about his reasons to mistrust NIST (not just here but elsewhere, and often), but still participated in the PQC contest, raising these concerns only at its conclusion.
> I didn't even notice a "punching down about mental health" thing. You wrote a long comment, I skimmed it.
That tracks, okay. It’s the weekend and I’m a nobody on the internet. Thank you for taking the time to continue to engage with me.
> Your allegation that Filippo and Matt Green are antisemitic is ludicrous.
That isn’t an allegation that I am making, you are misunderstanding and misrepresenting my statements. My comment even disclaimed that this probably isn’t intentional, merely that it is one read of that meme. My core point is this: posting that meme is unhelpful in a thread about Bernstein’s supposedly harmful behavior. Maybe you think it’s a funny joke, I don’t.
Either way - funny joke or not - it certainly isn’t a healthy discourse for “the community” to call someone names and to dismiss them as some kind of unhinged conspiracy theorist.
> I didn't say Bernstein had his ass handed to him.
Indeed, I did not claim to quote you there. I am characterizing your words into what I understand as your point. Let’s call this “the sore loser discourse” - it is repeated in this thread by others. It seems to be implied by my read when you say: “…he opted to participate in was corrupted by dint of not prioritizing his own designs.” I preemptively acknowledge that I may have misunderstood you.
What do you mean to convey by “dint of not”, roughly? Don’t SPHINCS+ (standardized in round three) and Classic McEliece (still in the running) count as prioritizing his designs? Also, what is wrong with participating in this standardization process? He seems to have been unhappy with NIST before and during the process, and with ample cause. By participating, it’s clear he has learned more, and by winning parts of the competition, he’s not a sore loser.
If he wasn’t a part of this competition, people would probably dismiss his criticism as simply being outside. It’s harder to dismiss him if he is part of it, and even harder when his submissions win. It isn’t a clean sweep, but it’s lifetime achievement levels for some people to have a hand in just one such algorithm, selected in such a process. He has a hand in several remaining submissions as far as I understand the process and the submissions.
> I said that he wrote thousands and thousands of words about his reasons to mistrust NIST (not just here but elsewhere, and often),
So you note he has been saying these things for a long time. On that we agree.
> but still participated in the PQC contest,
You go on to note that he then participated in the process. He is documented in his attempts to use the process tools to raise specific issues and to try to have them settled by NIST as promised, with transparency. NIST has failed to bring that transparency.
Confusingly (to me anyway) your next statement continues with a contradiction:
> raising these concerns only at its conclusion
Which is it? Was he constantly raising these issues or only raising them at the end (of round three)?
Alternatively I could read this as “at its (the blog post) conclusion” which would be extremely confusing. I presume this isn’t what you meant but if so, okay, I am really missing the point.
Yes, he appears to be unreasonably dismissive of the blindingly obvious history and the current situation.
As an aside, this tracks with his choice of employers - at least one of which was a known and documented NSA collaborator (as well as a victim, irony of ironies) before he took the job with them.
As Upton Sinclair remarked: “It is difficult to get a man to understand something when his salary depends upon his not understanding it.”
Joining Google after Snowden revealed PRISM and BULLRUN, as well as MUSCULAR, is almost too rich to believe. Meanwhile, he dismisses Bernstein as a conspiracy theorist. It’s a classic bad-faith ad-hominem coincidence theory.
First, last I checked, Filippo does not in fact work at Google.
Second: the guidelines on this site forbid you to write comments like this; in fact, this pattern of comments is literally the most frequent source of moderator admonitions on HN.
Filippo hardly needs me to defend his reputation, but, as a service to HN and to you in particular, I'd want to raise your awareness of the risk of beclowning yourself by suggesting that he, of all people, is somehow compromised.
> The same people tend to have trouble grasping that most of the vulnerabilities exploited and encouraged by NSA are also exploitable by the Chinese government. These people start with the assumption that Americans are the best at everything; ergo, we're also the best at espionage. If the Chinese government stole millions of personnel records from the U.S. government, records easily usable as a springboard for further attacks, this can't possibly be because the U.S. government made a policy decision to keep our computer systems "weak enough to still permit an attack of some nature using very sophisticated (and expensive) techniques".
I'm not sure if I understand this part. I was under the impression that the OPM hack was a result of poor authn and authz controls, unrelated to cryptography. Was there a cryptography component sourced somewhere?
If, rather than hoarding offensive tools & spying, the NSA had interpreted its mission as being to harden the security of government infrastructure (surely even more firmly within the remit of national security) and spent its considerable budget in that direction, would authn and authz controls have been used at the OPM?
This is my understanding as well. I asked this very same question less than a week ago[1], and now it's the first Google result when you search "OPM Dual_EC_DRBG."
The response to my comment covers some circumstantial evidence. But I'm not personally convinced; human factors are a much more parsimonious explanation.
Why don’t we require that all internal communications and records be public, available within 24 hours on the web, and provide a very painful mechanism involving significant personal effort of high level employees for every single communication or document that is to be redacted in some way? The key is requiring manual, personal (non-delegatable) effort on the part of senior bureaucrats, and to allow a private cause of action for citizens and waiver of immunity for bureaucrats.
We could carve out (or maybe not) specific things like allowing automatic redaction of employee PII and PII of citizens receiving government benefits.
After many decades, it’s clear that the current approach to FOIA and sunshine laws just isn’t working.
The carve-out you mention is a decent idea on paper, but in practice is a difficult process. There's really no way to do it in any significant degree without basically putting all gov to a complete halt. Consider that government is not staffed with technical people, nor necessarily critically minded people to implement these systems.
There are ways to push for FOIA improvements that don't require this sort of drastic approach. Problem is, it takes a lot of effort on the parts of FOIA requesters, through litigation and change in the laws. Things get surprisingly nuanced when you really get down into what a "record" is, specifically for digital information. I definitely wouldn't want to have "data" open by default in this manner, because it would lead to privacy hell.
Another component of this all is to consider contractors and subcontractors. Would they fall under this? If so, to what degree? If not, how do we prevent laundering of information through contractors/subcontractors?
To a large degree, a lot of "positive" transparency movements like the one you suggest can ironically lead to reduced transparency in some of the more critical sides of transparency. A good example of that is "open data", which gives an appearance of providing complete data, but without the legal requirements to enforce it. Makes gov look good but it de-incentivizes transparency pushback and there's little way to identify whether all relevant information is truly exposed. I would imagine similar would happen here.
A private right of action and waiver of immunity solves most of the “bad actor” problems.
The big issue is how to preserve what actually needs to be secret (in the interest of the USA, not the interests of the bureaucracy) while forcing everything else to be public.
A lot of things are secret that don’t need to be secret; that’s a side effect of mandatory data classification and normal bureaucratic incentives- you won’t get in trouble for over-classifying, and classified information is a source of bureaucratic power. So you have to introduce a really strong personal incentive to offset that or nothing will ever change.
Personally, I don’t think that information should be classified if it came from public sources. Or maybe only allow such information to be classified for a short period of time, eg one year.
The longer and/or higher the classification level, the more effort should be involved, to create disincentives to over-classification.
I'm sorry, but very little of what you're saying makes sense in practice. I suggest submitting some FOIA requests to your local government to get some context and understanding of the difficulties.
The old Abe rhetoric was powerful but it always felt like it was only hitting home on two of the three points. Obviously government, by definition really, is of the people. The much better parts were for the people and by the people.
They are erring on the side of caution, because people have derived secret information from public information - like the energy of a nuclear bomb (secret) from the blast radius (public).
Another example: they want to protect their means and methods. But those means and methods are how they know most of their information. Oftentimes it's easy to work backwards from "they know x" to "therefore y is compromised".
It's a hard problem similar to how to release anonymized data. See K-anonymity attacks and caveats.
Not sure the US, with its torture base (aka Guantanamo) and torture safe houses around the world, really has the right to call someone else "evil". I don't mean that as whataboutism, but human lives are not worth more in the US than in Mainland China.
I've only recently started to dig a bit deeper into crypto algorithms (looking into various types of curves, etc.), and it gave me the uneasy feeling that the whole industry is relying on the expertise of only a handful of guys to actually ensure that the crypto schemes used today are really working.
Am I wrong? Are there actually thousands and thousands of people with the expertise to actually prove that the algorithms used today are really safe?
I don’t know if that’s easily quantifiable, but I had a cryptography professor (fairly well-known nowadays) several years ago tell us that she only trusted 7 people (or some other absurdly low number), one of them being djb, to be able to evaluate the security of cryptographic schemes.
Perhaps thousands of people in the world can show you proofs of security, but very few of them may be able to take into account all practical considerations like side channels and the like.
There may be thousands of people in the entire world who understand cryptanalysis well enough to accurately judge the security of modern ciphers. Most aren't living or working in the U.S.
It's very difficult to do better. The mathematics is complex and computer science hasn't achieved proofs of the hypotheses underlying cryptography. The best we can achieve is heuristic judgements about what the best possible attacks are, and P?=NP is an open question.
> The mathematics is complex and computer science hasn't achieved proofs of the hypotheses underlying cryptography.
No unconditional proofs (except for the OTP ofc), but there are quite a few conditional proofs. For example, it's possible to show that CBC is secure if the underlying block cipher is.
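To make the CBC example concrete, here's a toy sketch of the construction in question: CBC built directly on top of a raw block cipher (AES in ECB mode standing in for the abstract E_k). It's the shape of the thing the reduction talks about, not production code; it assumes pycryptodome is installed and skips padding entirely.

    import secrets
    from Crypto.Cipher import AES  # pycryptodome

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def cbc_encrypt(key: bytes, plaintext: bytes) -> bytes:
        assert len(plaintext) % 16 == 0     # assume pre-padded input
        E = AES.new(key, AES.MODE_ECB)      # the underlying block cipher E_k
        iv = secrets.token_bytes(16)        # fresh random IV per message
        out, prev = [iv], iv
        for i in range(0, len(plaintext), 16):
            prev = E.encrypt(xor(plaintext[i:i+16], prev))  # C_i = E_k(P_i XOR C_{i-1})
            out.append(prev)
        return b"".join(out)

    ct = cbc_encrypt(secrets.token_bytes(16), b"sixteen byte blk" * 2)

The proof says: if E_k is a pseudorandom permutation, this mode leaks nothing beyond message length. Conditional, as stated: nobody has proven AES is a PRP.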
Proof! The entire field of cryptography can prove absolutely nothing other than that a single use of a one-time pad is secure. The rest is all hand waving that boils down to: no one I know knows how to break this, and I can't do it myself, so I believe it's secure.
So the best we have in cryptography is trusting "human instincts/judgements" about various algorithms. Which then further reduces to trusting humans.
Most programmers don't need to prove crypto algorithms. There are many situations where you can just use TLS 1.3 and let it choose the ciphers. If you really need to build a custom protocol or file format, you can still use libsodium's secretbox, crypto_box, and crypto_kx functions which use the right algorithms.
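For instance, with the Python bindings (PyNaCl; assumes pip install pynacl), secretbox is a few lines and there are no algorithm choices to get wrong --- it's XSalsa20-Poly1305 under the hood:

    import nacl.secret
    import nacl.utils

    key = nacl.utils.random(nacl.secret.SecretBox.KEY_SIZE)  # 32 random bytes
    box = nacl.secret.SecretBox(key)

    ciphertext = box.encrypt(b"attack at dawn")  # nonce generated and prepended for you
    assert box.decrypt(ciphertext) == b"attack at dawn"

That's the level most programmers should be working at.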
This is completely unrelated to the question being asked by the parent. They aren't asking about the average programmer. They are asking how many people in the world can truly 'prove' (to some reasonable degree) that the cryptography in use and the algorithms that are implementing that cryptography are 'secure' (to some reasonable degree).
Put another way, they are asking how many people in the world could verify that the algorithms used by libsodium, crypto_box, etc. are secure.
My point was that you don't need "thousands and thousands of people with the expertise to actually proove that the algorithms used today are really safe".
If the demand existed, there would be a lot more of those people.
Again, the parent poster didn't say there was a need for thousands. They were asking how many of them there actually are. One? Ten? A hundred? That's the question being asked.
Tangential question: while some FOIA requests do get stonewalled, I continue to be fascinated that they're honored in other cases. What exactly prevents the government from stonewalling practically every request that it doesn't like, until and unless it's ordered by a court to comply? Is there any sort of penalty for their noncompliance?
Tangential to the tangent: is there any reason to believe FOIA won't be on the chopping block in a future Congress? Do the majority of voters even know (let alone care enough) about it to hold their representatives accountable if they try to repeal it?
I know someone who works in gov (Australia, not US) who told me all about an FOI request that he was stonewalling. From memory, the request was open-ended and would have revealed more than it was probably intended to, including some proprietary trade secrets from a third-party contractor. That said, it was probably a case that would attract some public interest.
The biggest factors preventing governments from stonewalling every FOI case are generally time and money. Fighting FOI cases is time consuming and expensive and it's simply easier to hand over the information.
At least in Australia, I gather it is somewhat common for FOI offices to work with an FOI applicant to narrow the request if it is so broad as to cost too much or take too long to process, or is likely just to be returned as hundreds of blacked-out pages.
Previous FOI responses show more savvy FOI applicants in the past have also (when they don't get the outcome they desired):
1. Formally requested review of decisions to withhold information from release. This almost always led to more information being released.
2. Waited and tried requesting the same or similar information again in a later year when different people are involved.
3. Sent a follow up FOIA request for correspondence relating to how a previous (or unanswered) request was or is being processed by the FOI office and other parties responding to the request. This has previously shown somewhat humorous interactions with FOI offices such as "We're not going to provide that information because {lame excuse}" vs FOI office "You have to. CC:Executives" vs "No" vs Executives "It's not your information" etc etc.
4. Sent a follow up FOIA request for documentation, policies, training material and the likes for how FOI requests are assessed as well as how and by whom decisions are made to release or withhold information.
5. Sent a follow up FOIA request for documentation, policies, staffing levels, budgets, training material and the likes for how a typical event that the original FOIA request referred to would be handled (if details of a specific event are not being provided).
Responses to (2), (3) and (4) are probably more interesting to applicants than responses to (1), (2) and original requests, particularly when it is clear the applicant currently or previously has knowledge of what they're requesting.
> The biggest factors preventing governments from stonewalling every FOI case are generally time and money.
Is there any backpressure in the system to make the employee(s) responsible for responding/signing off on the disclosure actually care about how expensive it is to fight a case? I would've thought they would think, "Well, the litigation cost doesn't affect me, I just approve/deny requests based on their merits."
If there's the suspicion that NIST's interests aren't aligned with the public's (at least wrt cryptography; I hope they're at least honest with the physical constants), why do we still allow them to dictate the standards?
I mean, there's plenty of standards bodies and experts in the cryptography community around the world that could probably do a better job. At this point NIST should be treated as a compromised certificate authority: just ignore them and move along.
Good god, this guy is a bad communicator. Bottom line up front:
> NIST has produced zero records in response to this [March 2022] FOIA request [to determine whether/how NSA may have influenced NIST's Post-Quantum Cryptography Standardization Project]. Civil-rights firm Loevy & Loevy has now filed suit on my behalf in federal court, the United States District Court for the District of Columbia, to force NIST to comply with the law.
So... Common pattern: NSA, its representatives, or affiliates make claims that longer key lengths are unnecessary or have too much of a performance cost.
So... I make the claim again. Let's multiply all key lengths by 10. I.e., 2048-bit RSA becomes 20480-bit RSA.
Who here thinks that's a bad idea? Previously on HN such ideas have been downvoted and comments have been made against them. I wonder who has been doing that, and what their motives were.
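For scale, here's a rough way to feel out the performance-cost side of that debate (a sketch of my own, not anyone's benchmark): an RSA private-key operation is a modular exponentiation whose cost grows roughly cubically with modulus size, so 10x the key length means on the order of 1000x the CPU per operation. Plain Python pow() shows the trend:

    import secrets, time

    def modexp_time(bits: int) -> float:
        # Random full-size modulus, base, and exponent as stand-ins for RSA values.
        n = secrets.randbits(bits) | (1 << (bits - 1)) | 1
        base = secrets.randbits(bits) % n
        exp = secrets.randbits(bits)
        t0 = time.perf_counter()
        pow(base, exp, n)
        return time.perf_counter() - t0

    for bits in (2048, 4096, 8192):
        print(bits, f"{modexp_time(bits):.3f}s")

Whether that cost is "too much" is the argument; the arithmetic itself isn't in dispute. (Key generation at 20480 bits gets far worse than 1000x, since finding huge primes is its own expensive search.)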
Here's an interesting question. Even if post-quantum cryptography is securely implemented, doesn't the advent of neurotechnology (BCIs, etc.) make that method of security obsolete?
With read and write capability to the brain, assuming this comes to fruition at some point, encryption as we know it won't work anymore. But I don't know, maybe this isn't something we have to worry about just quite yet.
The thing you're missing is that BCIs and friends are, themselves, computers, and thus securable with post-quantum cryptography, or any cryptography for that matter, or any means of securing a computer. And thus, for somebody to read-write to your computers, they need to read-write to your brain(s), but to read-write to your brain(s), they need to read-write to the computers implanted in your brain(s). It's a security cycle whose overall power is determined by the least-secure element in the chain.
Any sane person will also not touch BCIs and similar technology with a 100-lightyear pole unless the designing company reveals every single fucking silicon atom in the hardware design and every single fucking bit in the software stack at every level of abstraction, and ships the device with several redundant watchdogs and dead-man timers around it that can safely kill or Faraday-cage the implant on user-defined events or manually.
Alas, humans are very rarely sane, and I come to the era of bio hacking (in all senses of the word) with low expectations.
Cryptographic secrets stored in human brains are already vulnerable to an attack mechanism that requires $5 worth of interface hardware that can be procured and operated with very little training. Physical security controls do a decent job of preventing malicious actors from connecting said hardware to vulnerable brains. I assume the same would be true with the invention of BCIs more sophisticated than a crescent wrench.
I think we are a long way away from being able to wirelessly read a few specific bytes of data from the brain of an unknowing person. Far enough away that I'm not sure it's productive to begin thinking of how to design encryption systems around it.
Memory and experience aren't encoded in the brain like traditional computers. There's no concept of a "byte" when thinking about the human computational model.
There is the concept of "byte" when talking about a string of characters which make up a password, though, which is why I said bytes. But yes, I am aware, and your statement just further supports my point.
Not necessarily. A person could remember a password that contains the name of a loved one differently than some arbitrary string of letters and numbers. Those letters and numbers can each be "encoded" differently in their brain; e.g. maybe the letter 'S' is linked in their brain to snakes because it kind of looks like one. Or any kind of weird connection between certain parts of the password and a smell they smelled twenty years ago. This would all deeply affect how the actual string of characters is "stored" in the brain.
Yes, after you'd extracted the password from their brain, you would then convert it to a string of bytes and store it on your digital storage device, but you were talking about accessing data in a human brain.
The point is, the human brain is weird when looked at from a data-storage point of view. :)
>[...] but you were talking about accessing data in a human brain.
No, I wasn't. I used bytes as a unit of measurement of data. I guess if I said "characters" instead of "bytes" people would stop trying to explain this to me. Although I sort of doubt that, because I said "yes, I know" and then get another paragraph explaining the same thing to me.
No, you're moving the goalposts. You were specifically saying "to wirelessly read a few specific bytes of data from the brain of an unknowing person".
You do not read bytes of data from the brain, because there are no bytes in the brain. You read information (in whatever weird form and format the brain has it), and in order to store it in whatever digital storage device you have, only then convert it into bytes and store those bytes.
It's like if you were saying "I read three words of English text from this book written in Chinese".
But yeah, at this point, we're arguing pure semantics. :)
You can beat me to a pulp, doesn't make me suddenly remember a specific N byte string any faster.
Passwords are to be remembered, private keys are to be stored. I suppose I'll tell you where it's stored, but often even that doesn't help. (E.g. It's on a USB key I didn't label and lost, or this is totally the admin pin to my smartcard, ok you got me these 3 are the real pins, uh oh it's physically wiped itself? Sad face for you)
Yeah, I've even had very personal dreams where my Linux root password was spoken in the dream. I'm glad I don't talk in my sleep. There are also truth serums that can be weaponized in war scenarios to extract secrets from the enemy without resorting to torture.
I have the feeling that governments around the world are getting sued more and more over serious digital matters.
Here, once the heat wave is finally over, I will see my lawyer again about the interoperability of government sites with NoScript/basic (X)HTML browsers.
So the TLDR is… you do roll your own crypto? I mean, you probably need to know how to create an RNG that passes PractRand and SMHasher first, and also a hash function that does the same, but cool.
If you care about preventing those kinds of leaks, do not use mainstream browsers (they are likely to leak even your https URLs to the browser company), and do not access those pages directly using your home connection (there may be mitms between you and the page).
Weirdly, any time I've suggested that maaaybe being too trusting of a known bad actor which has repeatedly published intentionally weak cryptography is a bad idea, I've received a whole lot of push-back and downvotes here on this site.
The related “just ignore NIST” crowd is intentionally or unintentionally dismissing serious issues of governance. Anyone who deploys this argument is questionable in my mind, essentially bad faith actors, especially when the topic is about the problems brought to the table by NIST and NSA.
It is a telling sign that those people are actively ignoring the areas where you have no choice and must have your data processed by a party required to deploy FIPS-certified software or hardware.
I'm working on a project that involves a customized version of some unclassified, non-intelligence software for a defense customer at my job (not my ideal choice of market, but it isn't weapons, so I'm okay with it). Some of the people on the project come from the deeper end of that industry, with several TS/SCI contract and IC jobs on their resumes.
We were looking over some errors in the sshd log, and it was saying it couldn't find the id_ed25519 server key. I remarked that that line must have stayed even though the system was put in FIPS mode (which probably only allows the NIST-approved ECC curves), and related this story: how everyone else has moved over to Ed25519 and the government is the only one left using its broken algorithm.
One of the IC-background guys (who is a very nice person, nothing against them) basically said, yeah, the NSA used to do all sorts of stuff that was a bad idea, mentioning the Clipper chip, etc. What blew my mind is that they seemed to have totally reasonable beliefs about government surveillance and powers, but then when it comes to someone like Snowden, they think he is a traitor and should have used the internal channels instead of leaking. I just don't understand how they think those same people who run the NSA would have cared one bit, or didn't know about it already. I guess I'd always assumed the people who worked in the IC would just think all this stuff was OK to begin with.
I don't know what the takeaway is from that, it just seems like a huge cognitive dissonance.
It’s not doublethink to say the programs should have been exposed and that Snowden was a traitor for exposing them in a manner that otherwise hurt our country.
He could have done things properly, instead he dumped thousands of files unrelated to illegal surveillance to the media.
Regarding trying internal channels, Snowden says he tried this
> despite the fact that I could not legally go to the official channels that direct NSA employees have available to them, I still made tremendous efforts to report these programs to co-workers, supervisors, and anyone with the proper clearance who would listen. The reactions of those I told about the scale of the constitutional violations ranged from deeply concerned to appalled, but no one was willing to risk their jobs, families, and possibly even freedom
The fleeing to a foreign adversary part would have been completely avoidable if the US had stronger whistleblower protections. It's perfectly reasonable to see what happened to Chelsea Manning and Julian Assange and not want to suffer a similar fate.
There is no record that he attempted to use internal channels. He would have been afforded whistleblower protection had he gone to Congress with his findings.
> There is no record that he attempted to use internal channels
From the beginning of the Snowden quote:
> I could not legally go to the official channels that direct NSA employees have available to them
In addition, I find it difficult to take any congressional report on this matter, including the one you cited, seriously given that their primary source is a group of people who have repeatedly lied to Congress without consequence.
Why do you take Snowden's word as gospel but dismiss a bipartisan Congressional Committee's findings? I think that you are biased and nothing will change your mind. Let's agree to disagree.
Many government or government affiliated organizations are required to comply with NIST approved algorithms by regulation or for interoperability. If NIST cannot be trusted as a reputable source it leaves those organizations in limbo. They are not equipped to roll their own crypto and even if they did, it would be a disaster.
"Other people have no choice but to trust NIST" is not a good argument for trusting NIST. Somehow I don't imagine the NSA is concerned about -- and is probably actively in favor of -- those organizations having backdoors.
One wonders if NIST can be fixed or if it should simply be abolished with all archives opened in the interest of restoring faith in the government. The damage done by NSA and NIST is much larger than either of those organizations.
Would you really want every random corporation having some random person pick from the list of open-source cipher packages? Which, last I checked, still included things like 3DES, MD5, etc.
You might as well hand a drunk monkey a loaded submachine gun.
Every random corporation having some random person pick from a list of open-source cipher packages isn't the only alternative to strictly requiring NIST-approved algorithms. It may be the worst possible alternative one could conceive, though, and one that would probably take more work than something more reasonable anyway.
Surely I'm misunderstanding: are you really advocating that people should roll their own encryption algorithms from scratch? As in, they should invent novel and secure algorithms in isolation? And this should happen... at every major enterprise or software company in the world?
I'm saying some standards body is appropriate for validating/vetting algorithms, and having a standards body advocate for known reasonable ones is... reasonable and desirable.
That NIST has a history of being compromised by the NSA (and other standards bodies would likely similarly be a target), is a problem. But having everyone 'figure it out' on their own is even worse. 'hand a drunk monkey a loaded submachine gun' worse.
> That NIST has a history of being compromised by the NSA is a problem.
It's a disqualifying problem. If you go to a standards body to prevent yourself from making unintentional mistakes, and they have introduced intentional mistakes, any other reasonable option is better.
Personally I'm of the opinion that everyone is expecting the NSA to try now, so the odds of them pulling it off at NIST are essentially zero (same with other actors).
If you specialize as a cat burglar after all, hitting the ONE PLACE everyone expects you to hit while they're watching goes against the grain.
More likely they're suborning us somewhere else. But hard to say for sure.
Is it your view that the only way a group of humans can come together to make intelligent decisions as a group is as part of a national government? Why can't an organization of private individuals do so?
Another upvote from someone with many friends and colleagues in NIST. I hope transparency prevails and NISTers side with that urge as well (I suspect many do).
They could and should leak more documents if they have evidence of malfeasance.
There are both legally safe avenues via the IG process and legally risky ones via the many journalists who are willing to work for major change. Sadly, legal doesn't mean safe in modern America, and some whistleblowers have suffered massive retribution even when they play by "the rules" laid out in public law.
The history in this blog post is excellently researched on the topic of NSA and NIST cryptographic sabotage. It presents some hard won truths that many are uncomfortable to discuss, let alone to actively resist.
The author of the blog post is also well known for designing and releasing many cryptographic systems as free software. There is a good chance that your TLS connections are secured by some of these designs.
Given his track record, and the actual meat of this suit, I think he has a good chance.
- He is an expert in the domain
- He made a lawful request
- He believes he's experiencing an obstruction of his rights
I don't see anything egregious here. Being critical of your government is a protected right in the USA. Everyone gets a moment to state their case if they'd like to make an accusation.
Suing sounds offensive, but that is the official process for submitting an issue in a way that a government can understand and address. I'm seeing some comments here that seem aghast at the audacity of accusing the government at one's own peril, and that shows an ignorance of history.
There was the Clipper Chip [2] and the super-weak 40-bit 'export strength' cryptography [3] and the investigation of PGP author Phil Zimmermann for 'munitions export without a license' [4].
So there was a substantial effort to weaken cryptography, decades before 9/11.
On the dragnet surveillance front, there have long been rumours of things like ECHELON [1] being used for mass surveillance and industrial espionage. And the simple fact US spies were interested in weakening export SSL rather implied, to a lot of people, they had easy access to the ciphertext.
Of course, this was before so much stuff had moved online, so it was a different world.
Every piece of mail that passes through a high-speed sorting machine is scanned, front and back, OCR'd, and stored - as far as we know, indefinitely. That's how they deliver the "what's coming in your mailbox" images you can sign up to receive via email.
Those images very often show the contents of the envelope clearly enough to recognize and even read the contents, which I'm quite positive isn't an accident.
The USPS is literally reading and storing at least part of nearly every letter mailed in the United States.
The USPS inspectors have a long history of being used as a morality enforcement agency, so yes, this should be of concern.
Agreed. It’s even worse: they also have the capability with the “mail covers” program to divert and tamper with mail. This happens to Americans on U.S. soil and I’m not just talking about suspects of terrorism.
I've heard rumors that this was going on for a long time before it was publicly acknowledged, before OCR should have been able to handle that sort of variety of handwriting (reliably), let alone at scale. Like a snail-mail version of the NSA metadata-collection program.
Wikipedia gives the impression the modern incarnation of photographing the US mail at scale began in 2001: “created in the aftermath of the 2001 anthrax attacks that killed five people, including two postal workers” [0]
However research on photographs of mail was already taking place as far back as 1986 [1]
TFA says: «The European Parliament already issued a 194-page "Report on the existence of a global system for the interception of private and commercial communications (ECHELON interception system)" in 2001» (July 2001, that is)
I don't like the collateral damage of many policies. But it's not fair to say that the policies "have not prevented anything", because we simply don't know. The policies could have stopped in-progress evil acts (which were never revealed to the public for intel reasons) or prevented attempted evil acts (well, nothing happened, nothing to report).
It also could have stopped the Gods from smiting us all, but there's no evidence that it has.
This article[1] is a good start at realizing the costs outweigh the benefits. There's little or no evidence of good caused, but plenty of evidence of harms caused.
There is evidence of that, in fact. There were many serious terrorist attacks in Europe in the aftermath of 9/11 and other... uh, how am I gonna say this... other stuff, like in Spain's subway (300 dead) and Frankfurt. The Spanish terrorist attacks were done by Basque nationalists or such, not Muslims.
I find it rather funny that we know about the parallel construction which they attempt to keep hidden, yet don't know about any successful preventions. I would assume they would at least want people to know if a program was a success. To me, the lack of information speaks volumes
This is on top of all the entrapment that we also know about, performed by the FBI and associated informants on Islamic/Muslim communities.
Considering that they do not obey the law, if they had actually stopped any terrorists we would be hearing all about it from "anonymous leakers" by now.
Because citizens want to see that their tax money is being used successfully. The same would likely be done by the surveillance authorities if they saw significant success in their mission.
One cannot prove a negative, but given how much public recording of everything there is these days (and in the last decade+), I'd say it's safe to err on the side of them not having prevented much of consequence. ("Absence of evidence..." doesn't really apply when evidence should be ample for the phenomenon to be explained.)
>Being critical of your government is a protected right for USA. Everyone gets a moment to state their case if they'd like to make an accusation.
Unless a kangaroo “FISA court” says you can’t - in which case you’re screwed, and can’t even tell anyone about the “sentence” if it included a gag order. Still better than getting droned I suppose.
The author was also part of the Linux kernel Speck cipher talks that broke down due to the NSA's stonewalling and hand-waving when asked for technical data and explanations.
I remember reading about this in Steven Levy's Crypto and elsewhere; there was a lot of internal arguing about lots of this stuff at the time, and people had different opinions. I remember that some of the suggested changes from the NSA shared with IBM were actually stronger against a cryptanalysis attack on DES that was not yet publicly known (though at the time people suspected they were suggesting this because it was weaker; the attack only became publicly known later). I tried to find the specific info about this, but can't remember the details well enough. Edit: I think it was this: https://en.wikipedia.org/wiki/Differential_cryptanalysis
They also did intentionally weaken a standard separately from that, and there was all the arguing about 'munitions export' intentionally requiring weak keys, etc., all the 90s cryptowar stuff that mostly ended after the Clipper chip failure. They also worked with IBM on DES, but some people internally at the NSA were upset that they shared this after the fact. The history is a lot more mixed, with a lot of people arguing about what the right thing to do is and no general consensus on much of this stuff.
You are not accurately reflecting the history that is presented in the very blog post we are discussing.
NSA made DES weaker for everyone by reducing the key size. IBM happily went along. The history of IBM is dark. NSA credited tweaks to DES can be understood as ensuring that a weakened DES stayed deployed longer which was to their advantage. They clearly explain this in the history quoted by the author:
“Narrowing the encryption problem to a single, influential algorithm might drive out competitors, and that would reduce the field that NSA had to be concerned about. Could a public encryption standard be made secure enough to protect against everything but a massive brute force attack, but weak enough to still permit an attack of some nature using very sophisticated (and expensive) techniques?”
They’re not internally conflicted. They’re strategic saboteurs.
>IBM happily went along. The history of IBM is dark.
Then, as now, I'm confused why people expect these kinds of problems to be solved by corporations "doing the right thing" rather than by demanding some kind of real legislative reform.
Agreed. It can be both but historically companies generally do the sabotage upon request, if not preemptively. This hasn’t changed much at all in favor of protecting regular users, except maybe with the expansion of HTTPS, and a few other exceptions.
Libertarian and capitalist propaganda. The answer is always a variation of “if you don’t like it, don’t buy it/let the market decide.” Even if the “market” heads towards apocalypse.
> "NSA credited tweaks to DES can be understood as ensuring that a weakened DES stayed deployed longer which was to their advantage. They clearly explain this in the history quoted by the author"
I'm not sure I buy that this follows; wouldn't the weakened key size also make people not want to deploy it, given that known weakness? To me it reads more like some people wanted a weak key so the NSA could still break it, but other people wanted it to be stronger against differential cryptanalysis attacks, and that the two aren't really related. It also came across that way in Levy's book, where they were arguing about whether they should or should not engage with IBM at all.
It follows: entire industries were required to deploy DES and the goal was to create one thing that was “strong enough” to narrow the field.
Read the blog post carefully about the role of NBS, IBM, and NSA in the development of DES.
It’s hard to accept because the implications are upsetting and profound. The evidence is clear and convincing. Lots of people try to muddy the waters, don’t help them please.
They had a privately known way to weaken DES that effectively shortens the key length. They could have pretended to allow a longer key length while secretly retaining their privately known attack that lets them shorten it (without also acting to strengthen DES against it). They knew this in the '70s, 20 years before it would become publicly known. They actively strengthened DES against this while not revealing the exploit. Doing this secretly doesn't narrow the field (doing it publicly might have), and it's also inconsistent with their argument for short keys.
I read the blog post and I've read a lot about the history of this; what you're saying isn't really convincing. Often people I mostly agree with, maybe 90%, just take it to the extreme where everything must fit their worldview 100%. Rarely, imo, is that the case; often reality is more mixed.
If they're related, maybe they wanted DES to be strong so they could use it, but wanted the public to only have access to short keys so they could also break the public's use of it. Still, it's interesting they didn't leave in a weakness they could exploit secretly despite a longer key size.
You're making a lot of assumptions and guesses to imply they helped overall, when we know they weakened DES by reducing the key size such that it was practically breakable as a hobby project. At the time of DES's creation, Hellman remarked that this was a bad enough problem that the key size should be raised. NSA and IBM and others ignored the cryptographers who were not compromised. Any benefit against DC attacks seems clearly like a hedge against DES being replaced sooner and against known adversary capabilities. When did the Russians learn that technique? Probably before the public did, I would wager.
The longer DES stays, the longer NSA retain their capabilities. Any design changes made by NSA are for their benefit first. That’s the primary lesson from my perspective.
I don't think they helped overall; I'd agree that on net they acted to make things less secure by arguing for the small key sizes. We mostly agree. I just think strengthening public DES based on a security issue that was not public at the time is an interesting example of a time they did the opposite of inserting a backdoor: people were afraid their suggestions were weakening DES, but they were strengthening it. That, paired with the history, suggests some internal arguing about priorities.
> I remember that some of the suggested changes from NSA shared with IBM were actually stronger against a cryptanalysis attack on DES that was not yet publicly known
So we have that and other examples of NSA apparently strengthening crypto, then we have the dual-EC debacle and some of the info in the Snowden leaks showing that they've tried to weaken it.
I feel like any talk about NSA influence on NIST PQ or other current algorithm development is just speculation unless someone can turn up actual evidence one way or another. I can think of reasons the NSA would try to strengthen it and reasons they might try to weaken it, and they've done both in the past. You can drive yourself nuts constructing infinitely recursive what-if theories.
The NSA wants "NOBUS" (NObody-But-US) backdoors. It is in their interest to make a good show of fixing easily-detected vulnerabilities while keeping their own intentional ones a secret. The fantasy they are trying to sell to politicians is that people can keep secrets from other people but not from the government; that they can make uncrackable safes that still open when presented with a court warrant.
This isn't speculation either; Dual_EC_DRBG and its role as a NOBUS backdoor was part of the Snowden document dump.
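To make the trapdoor concrete, here's a toy sketch in pure Python (my own illustration: the tiny curve, the generator point, and the secret d are all stand-ins, and the real generator truncates output bits, which this omits). The structure is the published attack: whoever knows the scalar relating P and Q can recover the generator's internal state from a single output and predict everything after it.

    # Toy Dual_EC_DRBG over a tiny curve: y^2 = x^3 + 497x + 1768 mod 9739.
    # Output: r_i = x(s*Q); state update: s' = x(s*P). If Q = d*P and you
    # know d, one output gives you x(s*P) = x(d^-1 * (s*Q)): the next state.
    import random
    from math import gcd

    p, A, B = 9739, 497, 1768
    O = None  # point at infinity

    def add(P1, P2):
        if P1 is O: return P2
        if P2 is O: return P1
        (x1, y1), (x2, y2) = P1, P2
        if x1 == x2 and (y1 + y2) % p == 0:
            return O
        if P1 == P2:
            lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, p) % p
        else:
            lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
        x3 = (lam * lam - x1 - x2) % p
        return (x3, (lam * (x1 - x3) - y1) % p)

    def mul(k, P1):
        R = O
        while k:
            if k & 1: R = add(R, P1)
            P1 = add(P1, P1)
            k >>= 1
        return R

    def lift(x):  # recover a point with this x; the other choice is its negation, same x later
        for y in range(p):
            if (y * y - (x ** 3 + A * x + B)) % p == 0:
                return (x, y)

    G = (1804, 5368)               # a point on the toy curve
    n, R = 1, G
    while R is not O:              # brute-force the order of G (toy sizes only)
        R = add(R, G)
        n += 1

    d = 7                          # the designer's secret backdoor
    assert gcd(d, n) == 1
    P, Q = G, mul(d, G)            # the published "random" constants

    s = random.randrange(2, n)
    while mul(s, P)[0] % n == 0:   # dodge a rare degenerate state in the toy
        s = random.randrange(2, n)

    r1 = mul(s, Q)[0]              # the one output the attacker observes
    # Attacker: lift r1 back to a point, undo d, and step the state forward.
    s_next = mul(pow(d, -1, n), lift(r1))[0]
    prediction = mul(s_next, Q)[0]
    # Honest generator produces its next output...
    r2 = mul(mul(s, P)[0], Q)[0]
    assert prediction == r2        # ...which the attacker already knew
    print("toy Dual_EC output predicted from one observation:", r2)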
Here's the counter-argument that I've seen in cryptography circles:
Dual EC, a PRNG built on an asymmetric-crypto template, was kind of a ham-fisted and obvious NOBUS back door. The math behind it made such a backdoor entirely plausible.
That's less obvious in other cases.
Take the NIST ECC curves. If they're backdoored, it means the NSA knows something about ECC we don't know and haven't discovered in the 20+ years since those curves were developed. It also means the NSA was able to search the space of ECC curves for vulnerable ones using 1990s technology. Multiple cryptographers have argued that if this is true we should really consider leaving ECC altogether. It means a significant proportion of ECC curves may be problematic. It means for all we know Curve25519 is a vulnerable curve, given that this hypothetical vulnerability is based on math we don't understand.
Speck is incredibly simple, with very few places a "mystery constant" or other back door could be hidden. If Speck is backdoored, it means the NSA knows something about ARX constructions that we don't know, and we have no idea whether this mystery math also applies to ChaCha or BLAKE or any of the other popular ARX constructions gaining so much usage right now. That means if we (hypothetically) knew for a fact that Speck was backdoored, but not how, it might make sense to move away from ARX ciphers entirely. It might mean many or all of them are not as secure as we think.
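For a sense of how little room an ARX design leaves for hidden constants, here's ChaCha's quarter-round transcribed into Python (the rotation distances 16/12/8/7 are from the published cipher; the check values in the comment are RFC 7539's worked example):

    # ChaCha's quarter-round: the whole ARX toolbox is 32-bit addition,
    # XOR, and rotation by the fixed distances 16, 12, 8, 7.
    MASK32 = 0xFFFFFFFF

    def rotl32(v: int, n: int) -> int:
        return ((v << n) | (v >> (32 - n))) & MASK32

    def quarter_round(a: int, b: int, c: int, d: int):
        a = (a + b) & MASK32; d = rotl32(d ^ a, 16)
        c = (c + d) & MASK32; b = rotl32(b ^ c, 12)
        a = (a + b) & MASK32; d = rotl32(d ^ a, 8)
        c = (c + d) & MASK32; b = rotl32(b ^ c, 7)
        return a, b, c, d

    print([f"{w:08x}" for w in
           quarter_round(0x11111111, 0x01020304, 0x9b8d6f43, 0x01234567)])
    # RFC 7539's worked example gives ea2a92f4 cb1cf8ce 4581472e 5881c4bb here.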
SM2 (Chinese), GOST (Russian) and NIST P (American) parameters are "you'll just have to straight up assume these are something up our sleeve numbers".
ECGDSA/brainpool (German) and ECKCDSA (Korean) standards make an attempt to explain how they chose recommended parameters but at least for brainpool parameters, the justifications fall short.
The DiSSECT[1] project, published this year, is an excellent approach to estimating whether selected parameters (often given without justification) are suspicious. GOST parameters were found to be particularly suspicious.
I wonder if a similar project could be viable for assessing parameters of other types of cryptographic algorithms e.g. Rijndael S-box vs. SM4 S-box selection?
Interesting link, and yes it does look like the GOST curves are really suspect. I didn't see a graph for the NIST curves and they do not appear to have called them out.
There's a big difference though with the GOST curves. They were generated in what seems to be a 100% opaque manner, meaning they could have been back-calculated from something.
The NIST curves were generated in a way that was verifiably pseudorandom (generation involved a hash of a constant) but the constant was not explained. This makes it effectively impossible to straight-up back-calculate these curves from something else. NIST/NSA would have had to brute force search for parameters giving rise to breakable curves, which is the basis of the reasoning I've seen by cryptographers I quoted above.
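To illustrate the distinction, here's a toy sketch (everything in it, the field prime, the derivation, and the "weakness" predicate, is an invented stand-in, not NIST's actual procedure): deriving a parameter from a hash of a seed rules out back-calculating the seed from a target curve, but it doesn't rule out grinding seeds until the derived parameter lands in a secretly known bad class.

    # Toy model: derive a curve coefficient "verifiably pseudorandomly" from
    # a seed, then show that a designer with a private notion of "weak" can
    # still grind random seeds until one hits it.
    import hashlib
    import os

    P = 2**255 - 19  # stand-in field prime

    def coeff_from_seed(seed: bytes) -> int:
        return int.from_bytes(hashlib.sha256(seed).digest(), "big") % P

    def secretly_weak(b: int) -> bool:
        return b % 100_000 == 0  # pretend 1 curve in 100,000 is breakable

    tries = 0
    while True:
        tries += 1
        seed = os.urandom(20)
        if secretly_weak(coeff_from_seed(seed)):
            break
    print(f"found an innocent-looking seed after {tries:,} tries: {seed.hex()}")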
Note that the cryptographers I've seen make this argument aren't arguing that the NIST curves could not be suspect. What they're arguing is that if they are in fact vulnerable and were found by brute force search using 90s computers, all of elliptic curve cryptography may be suspect. If we (hypothetically) knew for a fact they were vulnerable but did not know the vulnerability, we'd know that some troubling percentage of ECC curves are vulnerable to something we don't know and would have no way of checking other curves. We'd also have no way of knowing if other ECC constructions like Edwards curves or Koblitz curves are more or less vulnerable.
So the argument is: either the NIST curves are likely okay, or maybe don't use ECC at all.
Bruce Schneier was for a time a proponent of going back to RSA and classical DH but with large (4096+ bit) keys for this reason. RSA has some implementation gotchas but the math is better understood than ECC. Not sure if he still advocates this.
Personally I think the most likely origin of the NIST constants was /dev/urandom. Remember that these were generated back in the 1990s, before things like curve rigidity were a popular topic of discussion in cryptography circles. The goal was to get working curves with some desirable properties, and that's about it.
Regarding Simon and Speck: one simple answer is that the complicated attacks may exist and simple attacks certainly exist for smaller block and smaller key sizes.
However, it’s really not necessary to have a backdoor in ARX designs directly when they’re using key sizes such as 64, 72, 96, 128, 144, 192 or 256 bits with block sizes of 32, 48, 64, 96 or 128 bits. Especially so if quantum computers arrive while these ciphers are still deployed. Their largest block sizes are the smallest available for other block ciphers. The three smallest block sizes listed are laughable.
They have larger key sizes specified on the upper end. Consider that if the smaller keys are "good enough for NSA", they will be used and exploited in practice. Not all bits are equal, either: Simon's or Speck's 128 bits are doubtfully as strong as AES's 128 bits, certainly with half the bits for the block size. It also doesn't inspire confidence that AES had rounds removed, and that the AES-256 block size is… 128 bits. Suite A cryptography probably doesn't include a lot of 32-bit block sizes. Indeed, BATON supposedly bottoms out at 96 bits. One block size for me, another for thee?
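To put rough numbers on the block-size complaint (my own back-of-envelope using the standard birthday bound, not figures from the post): in common modes like CBC, ciphertext-block collisions start leaking plaintext relationships after roughly 2^(n/2) blocks for an n-bit block.

    # Birthday-bound back-of-envelope: data processed under one key before
    # block collisions become likely, at the block sizes Simon/Speck offer.
    for n in (32, 48, 64, 96, 128):
        blocks = 2 ** (n // 2)      # ~where collision odds reach ~50%
        data = blocks * (n // 8)    # bytes
        print(f"{n:>3}-bit block: ~2^{n // 2} blocks = {data:,} bytes")

The 32-bit row works out to about 256 KB, and the 64-bit row (around 32 GB) is exactly the regime the Sweet32 attack on 64-bit block ciphers like 3DES exploited.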
In a conversation with an author of Speck at FSE 2015, he stated that for some systems only a few minutes of confidentiality was really required. This was said openly!
This is consistent, in my view, with NSA again intentionally pushing crypto that can be broken under certain conditions, to their benefit. This can probably be practically exploited through brute force with their computational resources.
Many symmetric cryptographers literally laugh at the NSA designs and at their attempts at papers justifying their designs.
Regarding NIST curves, the safe curves project shows that implementing them safely is difficult. That doesn’t seem like an accident to me, but perhaps I am too cynical? Side channels are probably enough for targeted breaks. NIST standardization of ECC designs don’t need to be exploited in ways that cryptographers respect - it just needs to work for NSA’s needs.
NSA leadership has policies to propose and promote the NOBUS dream. Even with Dual_EC_DRBG, the claims of NOBUS were incredibly arrogant. Just ask Juniper and OPM how that NOBUS business worked out. The NSA leadership wants privileged access and data at nearly any cost. The leadership additionally want you to believe that they want NOBUS for special, even exceptional cases. In reality they want bulk data, and they want it even if the NOBUS promises can fail open.
Don’t believe the hype, security is hard enough, NOBUS relies on so many assumptions that it’s a comedy. We know about Snowden because he went public, does anyone think we, the public, would learn if important keys were compromised to their backdoors? It seems extremely doubtful that even the IG would learn, even if NSA themselves could discover it in all cases.
I think it's just both. It's a giant organization of people arguing in favor of different things at different times over its history; I'd guess there's disagreement internally, some arguing it's critical to secure encryption (I agree with this camp), others wanting to be able to break it for offensive reasons despite the problems that causes.
Since we only see the occasional stuff that's unclassified we don't really know the details and those who do can't share them.
There are plenty of leaked classified documents from NSA (and others) that have been verified as legitimate. Many people working in public know stuff that hasn’t been published in full.
Right, came here to make the same point. The first lawsuit alluded to in the blog post title resulted in an important holding that source code can be protected free expression.
Why? HTTP is simpler, less fragile, and not dependent on the good will of third parties; the content is public, and proving the authenticity of text on the Internet is always hard, even when it's served via the https scheme. I bet Bernstein thinks there is little point in forcing people to use HTTPS to read his page.
Troy Hunt points out that HTTP traffic is sometimes MITMed in a way that clients and servers do not like, and HTTPS sometimes prevents that. I never said otherwise. I am saying that for certain kinds of pages it's not a major concern. Like for djb's website.
Why not use HTTPS for everything? Because it also has costs, not just benefits.
That's not really true. Certificates have been free for a long time, and every CPU made within the last 10 years has AES acceleration. You can google white papers from companies like Cloudflare and Google, which actually show speedups with HTTP/2 or HTTP/3.
There are other costs, with deployment and maintenance. A well-built HTTP site works on its own until browsers intentionally stop accepting it. An HTTPS site works for a few months, and then a new certificate needs to be obtained and deployed. This has a real cost in additional risk of outages, support requests, and, not least, becoming dependent on the goodwill of the certificate issuer.
Yes. But if you worry about being a target for MITM attacks, https alone does not fix that problem. You need some reliable verification mechanism that is hard to fool. The current CA system or "trust on first use" are only partial, imperfect mechanisms.
If you think I went around looking to dig up dirt, I didn't. I just searched djb's name on Twitter to find more discussions about the subject, as post-quantum cryptography is an area I'm curious about.
Regarding asking for a disclosure, I thought that was widely accepted around here. If the CEO of some company criticised a competitor's product, we would generally expect them to disclose that fact upfront. I thought that was appropriate here given the dismissive tone of GP.
It doesn't matter; you can't toss stuff like that at people here, never mind characterize it the way you did, as a form of argument. It's in the guidelines, there are lots of moderator comments about it, don't be doing it.
> I thought that was appropriate here given the dismissive tone of GP.
It's not, no matter how 'dismissive' you think a comment is.
He won a case against the government representing himself, so I think he would be on good footing. He is a professor where I graduated, and even the faculty told me he was interesting to deal with. Post-quantum crypto is his main focus right now, and he also published Curve25519.
Personally my favorite part of the history is on the “Dishonest behavior by government lawyers” page:
https://cr.yp.to/export/dishonesty.html - the disclaimer at the top is hilarious: “This is, sad to say, not a complete list.” Indeed!
Are you implying that he didn’t contribute to the first win before or during EFF involvement?
Are you further implying that a stalemate against the U.S. government is somehow bad for self representation after the EFF wasn’t involved?
In my view it's a little disingenuous to call it a stalemate, implying everything was equal save EFF involvement, when the government changed the rules.
He challenged the new rules alone because the EFF apparently decided one win was enough.
When the judge dismissed the case, the judge said that he should come back when the government had made a "concrete threat"; his self-representation wasn't the issue. Do you have reason to believe otherwise?
To quote his press release at the time:
``If and when there is a concrete threat of enforcement against
Bernstein for a specific activity, Bernstein may return for judicial
resolution of that dispute,'' Patel wrote, after citing Coppolino's
``repeated assurances that Bernstein is not prohibited from engaging in
his activities.'' - https://cr.yp.to/export/2003/10.15-bernstein.txt
Sure, EFF played a major role in that case as did Bernstein. It made several lawyers into superstars in legal circles and they all clearly acknowledge his contributions to the case.
Still, you imply that he shouldn't have credit for that first win and that he somehow failed in the second case.
EFF shouldn’t have stopped fighting for the users when the government changed the rules to something that was also unacceptable.
The original poster said "he won a case against the government representing himself" and I felt that statement was incomplete, if not inaccurate, and wanted to correct the record. I'm pretty sure Dan, if he were here, would do the same.
You appear to be throwing shade on his contributions. Do I misunderstand you?
A stalemate, if you already want to diminish his efforts, isn’t a loss by definition - the classic example is in chess. He brought the government to heel even after EFF bailed. You’re also minimizing his contributions to the first case.
His web page clearly credits the right people at the EFF, and he holds back on criticism for their lack of continuing on the case.
Sorry, I didn't know that part. I have only seen Professor Bernstein once (he had a post-QC t-shirt on, so that's the only way I knew who he was). I have never really interacted with him. He is also the only faculty member allowed to have a non-UIC domain. Thank you for correcting me.
Yeah, terrible idea, except this is Daniel Bernstein, who already had an equally terrible idea years ago, and won. That victory was hugely important, it pretty much enabled much of what we use today (to be developed, exported, used without restrictions, etc etc etc)
Seems like they just need a judge to force NIST to comply with a Freedom of Information Act request; it's just part of the process.
I'm stonewalled on an equivalent Public Records Act request with a state, and am kind of annoyed that I have to use the state's own court system.
It doesn't feel super impartial, and a couple of law journals have written about how it's not impartial at all in this state and should be improved by the legislature.
This is part of a class division where we cannot practically exercise our rights which are clearly enumerated in public law. Only people with money or connections can even attempt to get many kinds of records.
It’s wrong and government employees involved should be fired, and perhaps seriously punished. If people at NIST had faced real public scrutiny and sanction for their last round of sabotage, perhaps we wouldn’t see delay and dismissal by NIST.
Delay of responding to these requests is yet another kind of sabotage of the public NIST standardization processes. Delay in standardization is delay in deployment. Delay means mass surveillance adversaries have more ciphertext that they can attack with a quantum computer. This isn’t a coincidence, though I am sure the coincidence theorists will come out in full force.
NIST should be responsive in a timely manner and they should be trustworthy, we rely on their standards for all kinds of mandatory data processing. It’s pathetic that Americans don’t have several IG investigations in parallel covering NIST and NSA behavior. Rather we have to rely on a professor to file lawsuits for the public (and cryptographers involved in the standardization process) to have even a glimpse of what is happening. Unbelievable but good that someone is doing it. He deserves our support.
> This is part of a class division where we cannot practically exercise our rights which are clearly enumerated in public law. Only people with money or connections can even attempt to get many kinds of records.
As someone with those resources, I'm still kind of annoyed because I think this state agency is playing chess accurately too. My request was anonymous through my lawyer and nobody would know that I have these documents, while if I went through the court - even if it was anonymous with the ACLU being the filer - there would still be a public record in the court system that someone was looking for those specific documents, so that's annoying
Even though I broadly agree with what you've written here ... the situation in question isn't really about NIST/NSA response to FOIA requests at all.
It's about whether the US government has deliberately acted to foist weak encryption on the public (US and otherwise), presumably out of desire/belief that it has the right/need to always decrypt.
Whether and how those agencies respond to FOIA requests is a bit of a side-show, or maybe we could call it a prequel.
> the situation in question isn't really about NIST/NSA response to FOIA requests at all.
I disagree. To my mind, the issue is that a national standards agency with form for certifying standards they knew were broken still isn't being transparent about its processes. NIST's reputation has been mud since the Dual_EC_DRBG debacle.
People are not at liberty to ignore NIST recommendations, and use schemes that are attested by the likes of DJB, because NIST recommendations get built into operating systems and hardware. It damages everyone (including the part of NSA that is concerned with national security) that (a) NIST has a reputation for untrustworthiness, and (b) they aren't showing the commitment to transparency that would be needed to make them trustworthy again.
We are probably pretty much in agreement. It looks like they’ve got something to hide and they’re hiding it with delay tactics, among others.
They aren’t alone in failing to uphold FOIA laws, but they’re important in a key way: once the standard is forged, hardware will be built, certified, deployed, and required for certain activities. Delay is an attack that is especially pernicious in this exact FOIA case given the NIST standardization process timeline.
As a side note, the NIST FOIA people seem incompetent for reasons other than delay.
Though 99% of the time I would agree with you, the public has to have faith in people who claim to be fighting (with previously noted successes in Bernstein v. US) in our best interests.
> It could even generate an algorithm so complicated it would be close to impossible for a human mind to comprehend the depth of it.
Okay... then some nefarious actor's above-human-intelligence neural network instantly decodes the algorithm deemed too complicated for human understanding?
I don't see how opaque neural nets are suddenly going to make security-through-obscurity work.
There are many, many papers that show how you can make innocuous changes to inputs to make neural nets produce the wrong result. You might be overestimating the difficulty of this process.
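The canonical construction is the fast gradient sign method (Goodfellow et al.). Here's a minimal sketch, assuming PyTorch is available; the model, data, and epsilon are arbitrary stand-ins, and on a toy model the flip isn't guaranteed, but the whole attack is visible: nudge each input coordinate a small step in the direction that increases the loss.

    # Minimal FGSM sketch on a toy classifier; everything here is a stand-in.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    torch.manual_seed(0)
    model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
    x = torch.randn(1, 8, requires_grad=True)   # the "innocuous" input
    label = torch.tensor([0])

    loss = F.cross_entropy(model(x), label)
    loss.backward()
    x_adv = x + 0.25 * x.grad.sign()            # small per-coordinate nudge

    print("before:", model(x).argmax().item(),
          "after:", model(x_adv).argmax().item())  # often flips the prediction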
So, question then: isn't one of the differences between this selection and previous ones that some of the algorithms are open source, with their code available?
Not really. For the same reason that "here's your github login" doesn't equate to you suddenly being able to be effective in a new company. You might be able to look things up in the code and understand how things are being done, but you don't know -why- things are being done that way.
A lot of the instances in the post even show the NSA giving a why. It's not a particularly convincing why, but it was enough to sow doubt. The reason to make all discussions public is so that there isn't an after-the-fact "wait, why is that obviously odd choice being made?" but instead a before-the-fact "I think we should make a change". The burden of evidence is different for those. "I think we should reduce the key length for performance" is a much harder sell when the spec already prescribes a longer key length than an after-the-fact "the spec's key length seems too short" "Nah, it's good enough, and we need it that way for performance". The status quo always has inertia.
Thanks for the response, that's making sense. I've also tried following the PQC Google Groups but a lot of the language is beyond my grasp.
Also... I don't understand why I've been downvoted for asking a question. I'm trying to learn, but HN can certainly be unwelcoming to the 'curious' (which I thought is why we are here).
Who cares about a particular piece of source code? Cryptanalysis is about the mathematical structure of the ciphers. When we say the NSA backdoored an algorithm, we don't mean that they included hidden printf statements in "the source code". It means that mathematicians at the NSA have knowledge of weaknesses in the construction, that are not known publicly.
Worth noting that DJB (the article author) was on two teams that competed with, and lost to, Kyber[0] in Round 3, and has an open submission in Round 4 (still in progress). That's going to slightly complicate any FOIA until after the fact, or it should. Not that there's no merit in the request.
It is wrong to imply he is unreasonable here. NIST has been dismissive and unprofessional towards him and others in this process. They look terrible because they’re not doing their jobs.
Several of his students' proposals won in the most recent round. He still has work in the next round. NIST should have answered in a timely manner.
On what basis do you think any of these matters can or may complicate the FOIA process?
This definitely has the sting of bitterness in it; I doubt djb would have filed this suit if NTRU Prime had won the PQC NIST contest. It's hard to evaluate this objectively when there are strong emotions involved.
When it comes to the number of times DJB has been right versus the number of times DJB has been wrong, I'll fully back DJB. Simply put, the NSA/NIST cannot and should not be trusted in this case.
They’re not in question for many people carefully tracking this process. He filed his FOIA before the round three results were announced.
The lawsuit is because they refused to answer his reasonable and important FOIA in a timely manner. This is not unlike how they also delayed the round three announcement.
If NTRU Prime had been declared the winner, would this suit have been filed? It's the same contest, same people, same suspicious behavior from NIST. I don't think this suit would have come up. djb is filing this suit because of alleged bad behavior, but I have doubts that it's the real reason.
Yes, I think so. His former PhD students were among the winners in round three and he has other work that has also made it to round four. I believe he would have sued if he won every single area in every round. This is the Bernstein way.
Does that seem like reasonable behavior by NIST to you?
To my eyes, it is completely unacceptable behavior by NIST, especially given the timely nature of the standardization process. They don’t even understand the fee structure correctly, it’s a comedy of incompetence with NIST.
His FOIA predates the round three announcement. His lawsuit was filed in a timely manner, and it appears that he filed it fairly quickly. Many requesters wait much longer before filing suit.
Perhaps the old advice ("never roll your own crypto") should be reevaluated? If you're creative enough, you could combine and apply existing algorithms in ways that would make the result very difficult to decrypt. Think 500 programmatic combinations (steps) of encryption applying different algorithms. Content encrypted in this way would require knowledge of the encryption sequence in order to execute the required steps in reverse. No amount of brute force could help here…
> Would require knowledge of the encryption sequence...
This is security by obscurity. Reputable encryptions work under the assumption that you have full knowledge about the encryption/decryption process.
You could, however, argue that the sequence then becomes part of the key. However, this key [i.e. the sequence of encryptions] would then be at most as strong as the strongest encryption in the sequence, which kind of defeats the purpose.
No, an important property of a secure cryptographic cipher is that it should be as close to a random permutation of the input as possible.
A "randomly assembled" cipher that just chains together different primitives without much thought is very unlikely to have that, which will mean that it will probably have "interesting" statistical properties that can be observed given enough plaintext/ciphertext pairs, and those can then be exploited in order to break it.
No, not at all; that advice is still good. It's even more important if you are talking about modifying algorithms. You're gonna want proofs of resistance or immunity to certain classes of attacks. A subtle change can easily make a strong primitive useless.
(And somebody has already kindly uploaded the documents to RECAP, so it costs you nothing to access.)
Aside: I really wish people would link to court documents whenever they talk about an ongoing lawsuit.