I admit that, reading the code, I found it challenging to find a severe crypto bug simply because everything is wrong. Even their way of packing the data into base64 is nonsense. The parser is nonsense. Almost every line of code is wrong. Without more context, I’m not quite convinced that the key is reliably anything other than a constant, given the use of PHP and the failure to validate that the variables in question are initialized.
It’s like the most severe bugs are buried in code that is 100% bug!
No, no, the key is not a constant. See, they already changed it once! /s
(Honestly, that was the biggest red flag for me. They had to change the key and that STILL didn't give them the hint that hardcoding it is not a sane option.)
Any company that was involved in this disaster and either implemented or gave their seal of approval to hardcoded crypto keys needs to be permanently excluded from government contracts.
"Proper" PHP with a modern framework such as Laravel is decent and as far as I know would forbid this (there's a global exception handler that returns a 500 page with a stack trace).
Bare PHP with no framework or anything to patch PHP's shortcomings is IMO unacceptable in today's day and age.
The entire project is an unsalvageable mess and so is their response - clearly they've outsourced the entire thing to idiots and are trying to cover their ass now that it's been publicized.
Wow. Worrying about the cryptography on this thing is like worrying about the quality of the padlock you're using to secure the flaps of a cardboard box.
I thought this might be an exaggeration, but... wow, just... wow.
IMO the NPCC should sue Pervade and take them to the cleaners. Pervade are clearly incapable of developing a secure system, and have completely misrepresented their abilities. They also appear to have blatantly lied to the NPCC numerous times.
> NPCC should sue Pervade and take them to the cleaners
You're assuming that the point was to actually deliver something of value that would help track cybercrime and that the contract was awarded fairly. I'm not sure this was the case.
Most likely, the brokenness of the system is a feature to justify endless busywork for various people at all levels of the stack, whereas a working implementation would not only require little to no maintenance but would actually deliver actionable evidence, forcing them to do real police work.
I can understand why you'd think that, but it's too cynical a take even for me ;)
For an infosec company to behave like this, there is a huge risk of completely and utterly destroying their reputation, possibly forever.
My guess is that it’s either (1) alone, or both (2) and (3):
1. There was skulduggery involved during the bidding process
2. Those at the NPCC responsible for awarding the contract did not have the required competence to do so
3. Those at the NPCC tasked with overseeing operations do not have the competence, or even the mindset, to do so
Regardless, to call it an "absolute clusterfuck" would be generous. I'm genuinely disgusted that the NPCC and Pervade have put so many organisations in jeopardy, and furthermore that they continue to do so, even when in possession of the facts. Astonishing.
It's just blatantly obvious that the people building this are simply amateurs. My first guess would be that this is being built 100% by outsourcing it to a bunch of cheap labor overseas that has no horse in this race and doesn't care about much beyond getting it working. If it's not that, then someone hired a couple of interns who are woefully under-equipped (and probably underpaid).
I'd say it's most likely that they paid a fortune to a friend, family member, business partner, or ex-coworker of the people responsible for awarding the contract. The tell will be if they react viciously to criticism, because they're afraid of investigation.
Being incompetent will get you mildly reprimanded. Self-dealing might get you jailed.
According to any MBA, the possibility of scapegoating is the most important part of any outsourcing deal. It doesn't solve your problem in the first place, but hey, at least you can blame someone - psychological safety not for the faint of heart. ;)
It's ironic that you have opted to castigate an entire cadre of people (MBA holders) rather than to focus solely on the bad behaviour (scapegoating) done by whomsoever.
We need to stop saying don’t roll your own crypto and start teaching it with good examples and good explanations.
The only people who will listen to “don’t roll your own crypto” are people who maybe should be learning crypto since they can approach it with the needed level of humility.
The people who should not be writing crypto are precisely the ones who will ignore this advice.
It’s also sort of elitist sounding and that puts people off and further reduces the odds that they will listen.
Everyone knows that someone “rolls their own” crypto or none of it would exist. How does one get into this mysterious club? As near as I can tell you… roll your own crypto… and manage not to fuck it up. So the way you get to become someone who gives this sage advice is by disobeying it.
(Of course there are degrees and Ph.Ds and publications but that’s the realm of real cryptographers of the sort that design ciphers. I’m talking about programmers using crypto.)
People shouldn't "roll their own" cryptography any more than they should "roll their own" padlocks or door locks.
> The people who should not be writing crypto are precisely the ones who will ignore this advice.
Why would they pay attention to "good examples and good explanations" if they disregard the simplest and most basic advice?
> It’s also sort of elitist sounding and that puts people off and further reduces the odds that they will listen.
Modern cryptography is intrinsically elite. It requires math and computer knowledge far beyond the ken of mere mortals, eh?
> Everyone knows that someone “rolls their own” crypto or none of it would exist. How does one get into this mysterious club? As near as I can tell you… roll your own crypto… and manage not to fuck it up. So the way you get to become someone who gives this sage advice is by disobeying it.
That sounds like, "Medical students dissect corpses to learn anatomy, so non-doctors should be allowed to perform their own surgery."
If you ask a surgeon how to become one, they'll tell you to go to pre-med and then medical school and then become a resident and so on. There's a path to get there. They won't say "don't become a surgeon, go away."
If you ask a crypto developer how to become one, they tell you to go away. "Don't roll your own crypto." Not "here's where you go to learn how to do crypto correctly, and here's a test you can take to determine if you're ready to do it." There's no generally accepted path to learn or to test your knowledge. There's just this informal guild of people we think know what they're doing.
So no, it's nothing like medicine except in the most superficial sense of them both being hard things.
But "roll your own crypto" isn't the same as "become a crypto developer", just like "do your own surgery" isn't the same as "become a surgeon".
"You can't roll your own crypto without becoming a crypto developer first" is just as valid as "You can't do your own surgery without becoming a surgeon first".
Those crypto developers didn't become crypto developers the first time they fucked up rolling their own crypto; it took lots of failures and probably quite a bit of formal study. Exactly like surgeons do lots of formal studying (and do their first fuck-ups on people who are already dead).
So yes, it's pretty much exactly like medicine, down to the most fundamental sense of them both being hard things that one needs to learn by formal studying and lots of preferably harmless fuck-ups.
Keeping with the metaphor, the solution to the situation is to start a medical school, not to encourage people to attempt surgery without any training, eh?
I agree. My criticism is that the majority of the cryptography community just turns up their nose and does neither. They just say "go away," and people are obviously not listening to that and are instead attempting amateur surgery.
The solution is to provide better learning materials.
BTW, people often say "just use libraries." The problem is that even the best-designed cryptographic libraries are easy to misuse if you don't at least understand the basics of what those building blocks are doing. Imagine using a networking library without knowing what a TCP connection is or how it behaves. You don't need to understand the deep internals of TCP, but you need to at least know the semantics and what the protocol does.
> So the way you get to become someone who gives this sage advice is by disobeying it.
Lawyers have a similar thing, from what I've been told.
"Don't go to law school", "Don't become a lawyer", etc. are what many lawyer parents have told their children (some of whom went on to become lawyers themselves).
Crypto really isn't the sort of thing you should be doing by yourself, especially implementing crypto algorithms. To correctly implement a crypto algorithm from scratch without existing primitives, you need 1) A basic foundation in discrete mathematics, the type from a CS program and not a web dev bootcamp 2) Strong knowledge of computer architecture 3) Security knowledge of common exploits and vulnerabilities.
It is like authoring compilers: most engineers have some knowledge of what's needed, but the few who are lucky enough to get paid to do it full time professionally are not very common. One thing I am thankful for with the hype around cryptocurrencies is that people with an understanding of cryptography at the algorithmic level are now much more common, since the markets are aligned to reward technologies like symbolic execution and probably correct algorithms.
As someone who has "rolled his own crypto" for various reasons, I was just wondering why you're saying this:
> 2) Strong knowledge of computer architecture
Is it so we can do performant things? Avoid side-channel attacks? Computer architecture isn't focal in my thinking when I'm working on cryptography. Very rarely it becomes important (I could leak information because of ... or boy, that is branch heavy and isn't going to be fast ...)
It's both, and your parenthetical examples actually illustrate this well, as branches can both slow you down and provide a side channel (though in crypto I think that, in the particular case of branches, only the second occurs).
If you're looking for an example that is more rooted into computer architecture, you can look at how implementing AES with lookup tables can provide a cache timing side channel.
Well, then I'd say the knowledge fits more in "3". It's not general purpose architectural knowledge, but rather the wealth of things we've learned from people thinking about side channels.
I think it's hard to pick one between 2 and 3, as after knowing about a possible side channel from 3, you need 2 to prevent it on the specific architecture you'll run your code on.
Simply speaking, the vast majority of cryptography work has nothing to do with tomfoolery to prevent side channels-- even if we're doing low-level fundamental algorithm stuff, which most of the time cryptographers are not doing. (When's the last time you thought about AES in detail?)
The way people stumble is in protocol errors and fundamental errors of composition.
Man that typo in the last few words cracked me up. "Probably correct algorithm" sounds like "I didn't test the code but it'll probably work fine. Let's deploy it to production!"
Would it not be better to have some accredited auditing service that can verify and certify crypto code?
After all, the whole infosec field moves fast, and what was 100% professional practice at the time of training does not necessarily hold true today.
Heck, I'm certified to recognise drugs and bombs, yet I'd have no idea about many modern drugs or explosives. So in some eyes I qualify as a professional, while in my own eyes I'm far from it, and I guarantee many people with no such accreditation would be far better than I am.
Okay, then we need to tell people where to get this professional training. What should one do before one rolls their own crypto?
But nobody does that. Nobody says "don't roll your own crypto until you learn these things and pass these tests." They just say don't, and the people saying don't are often the very ones who do.
So it's ignored, and with no good guidance people write crappy crypto.
> Okay, then we need to tell people where to get this professional training.
I posted a link to a Getting Started guide in my comment above.
Once you've exhausted those resources, your best bet is to join a cryptography team at a tech company to round off your education with some real-world experience.
Are you suggesting that writing explanatory blog posts about technical topics is dangerous because someone will eventually come after you for being elite?
>Are you suggesting that writing explanatory blog posts about technical topics is dangerous because someone will eventually come after you for being elite?
Yes, based on my personal experience and that of others. But I wouldn't call myself elite... I just know a little bit ;-)
I am saying giving them any sort of heads up is not good praxis, since the police have so thoroughly abused the access they already have.
To use crypto right requires knowledge of crypto systems. You're not going to know how to use crypto schemes correctly without knowing about common flaws and a range of side channel attacks. The crypto code itself is usually relatively easy to use (OpenSSL may have a clunky interface, but there are tons of examples out there). The problem here, and I'd argue actually in most cases, isn't that the libraries don't exist or are hard to use. The problem is that the people using said libraries don't know what they're doing.
There are great books out there that will tell you everything you need to know to understand and write safe crypto code. It's not some kind of closely guarded secret kept by an exclusive elite, it just requires reading. Pirate the books if you want to stop information hoarding among the rich, it's trivially easy to do with libgen.
You can roll your own crypto if you're capable of doing so, and have someone as competent as you check your work.
Some of these flaws aren't that bad (== vs === shouldn't matter too much in this case, though it's a sign of bad PHP code when the types are supposed to be well known). Others are either bad code practice (allowing a variable to be uninitialized) or well-known cryptographic mistakes that you'll learn to catch after reading a book or two (timing attack, shared keys). I won't pretend I would've spotted all of those at a glance, but I can admit that my crypto course has been too long ago for me to write or review such code.
I've got a basic grasp of electrical circuits, but that doesn't mean I should be allowed to design highly secure electronic security circuits. I will miss important problems and dangerous flaws because I lack in-depth knowledge and experience. That's not because there's no easy 1-2-3 guide on how to design safe circuits, or because a cabal of academic elites is keeping this knowledge to itself; pretending it is just shifts the blame for my lack of effort onto others.
Feel free to do your own crypto. Tons of companies do. You'd better put in the work to make sure you're doing it right, though! If you're working on some kind of new cryptographic structure and an academic writes up a theoretical attack in a paper somewhere, you're doing it right, assuming that you extend your scheme to solve the problems discovered. If you make textbook mistakes in important code, you're clearly doing it wrong.
If you're interested in learning cryptography so you can try to roll your own, I recommend starting with books like these:
- Cryptography Made Simple by Nigel Smart
- Applied Cryptography by Bruce Schneier
- Cryptography Engineering: Design Principles and Practical Applications by Niels Ferguson, Bruce Schneier, and Tadayoshi Kohno
These books require some prerequisite mathematical knowledge (like all crypto does) which you can probably pick up on Khan Academy. They go in depth into how common cryptography works, why these schemes are secure and how they can be broken (in theory and in practice). Some of them are getting old, though, so you may need to find some newer material if you're doing stuff like modern elliptic curve or post-quantum cryptography.
> The problem is that the people using said libraries don't know what they're doing.
I would like to suggest that the problem is that the skilled people that understand how to write crypto systems are providing libraries that are easy to misuse. Instead of providing a library that is likely to be used incorrectly without a lot of specialized knowledge, provide infrastructure that manages the crypto so the average developer doesn't need to become an expert on crypto systems.
HTTPS is a useful example. Instead of providing webapp authors a library of cypher/hash functions and warning that they shouldn't roll their own transport layer security and then acting shocked when those authors try to use that library and make a lot of mistakes, we instead separate the crypto step into a separate layer of infrastructure so the average webapp author can easily use crypto without having to learn a lot of specialized knowledge. Someone writing a Ruby on Rails app shouldn't have to write functions like pervade_encrypt($data)/pervade_decrypt($data) to move out of the plaintext world of HTTP and utilize encrypted transport. They only need to buy/LetsEncrypt a cert they can install in their webserver. They can even delegate that to their hosting provider.
"If it's possible for a human to hit the wrong button and [cause a catastrophic failure] by accident [or inexperience], then maybe the problem isn't with the human - it's with that button. [...] People make mistakes, and all of our systems need to be designed to be ready for that."
> I would like to suggest that the problem is that the skilled people that understand how to write crypto systems are providing libraries that are easy to misuse. Instead of providing a library that is likely to be used incorrectly without a lot of specialized knowledge, provide infrastructure that manages the crypto so the average developer doesn't need to become an expert on crypto systems.
I agree, but for the encryption basics, those libraries are available. Higher level languages like Python/Ruby/etc. all have libraries that do TLS transparently. Lower level languages have libraries like BoringSSL/GnuTLS/OpenSSL/(Windows|macOS) APIs available which are relatively straight forward; you need to do error handling yourself so there's a whole bunch of API calls, but that's because an application needs to deal with errors and thrown exceptions aren't always available/desirable.
In the example, the code authors wanted to encrypt a file and then sign it using their own algorithm. There are open source utilities for this, of course, but they chose to implement it their own way. They didn't build a bad implementation of AES, they didn't write a bad hashing algorithm, they didn't mess up RSA.
They could've done this perfectly safely by switching the order around, adding a second key, and picking a better file format, they were very close! They could've prevented all these problems by just using GCM instead of CBC+HMAC inside their own format.
Personally, I would've used GPG and skipped the custom implementation altogether. This is a software product that exposes an endpoint that will execute the command in a GET query, so why not use a bit of exec() to do all the weird crypto for you.
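To make the "switch the order, add a second key" point concrete, here's a minimal encrypt-then-MAC sketch in Python. All names are illustrative (not their code), and a SHA-256 counter-mode toy keystream stands in for AES, since the point is the construction, not the cipher:

```python
import hashlib
import hmac
import secrets

def _toy_keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy keystream (SHA-256 in counter mode) for illustration only; NOT a vetted cipher.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal(enc_key: bytes, mac_key: bytes, plaintext: bytes) -> bytes:
    nonce = secrets.token_bytes(16)
    stream = _toy_keystream(enc_key, nonce, len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, stream))
    # Encrypt-then-MAC: the tag covers the nonce AND the ciphertext, with its own key.
    tag = hmac.new(mac_key, nonce + ciphertext, hashlib.sha256).digest()
    return nonce + ciphertext + tag

def unseal(enc_key: bytes, mac_key: bytes, blob: bytes) -> bytes:
    nonce, ciphertext, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(mac_key, nonce + ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):  # constant-time check, BEFORE decrypting
        raise ValueError("bad MAC: message rejected without touching the plaintext")
    stream = _toy_keystream(enc_key, nonce, len(ciphertext))
    return bytes(c ^ k for c, k in zip(ciphertext, stream))
```

The properties that matter: the MAC covers nonce plus ciphertext, uses a key separate from the encryption key, and is verified in constant time before any decryption happens. AES-GCM gives you all of that in a single call.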
> HTTPS is a useful example. Instead of providing webapp authors a library of cypher/hash functions and warning that they shouldn't roll their own transport layer security and then acting shocked when those authors try to use that library and make a lot of mistakes, we instead separate the crypto step into a separate layer of infrastructure so the average webapp author can easily use crypto without having to learn a lot of specialized knowledge. Someone writing a Ruby on Rails app shouldn't have to write functions like pervade_encrypt($data)/pervade_decrypt($data) to move out of the plaintext world of HTTP and utilize encrypted transport. They only need to buy/LetsEncrypt a cert they can install in their webserver. They can even delegate that to their hosting provider.
For TLS you still need to configure a proxy. You need to make sure not to paste the private key in the cert field or every browser will receive your private key; you need to have _some_ crypto knowledge and this is the easiest process you can probably think of. You also need to check your auth/blacklisting/security code to accept the proxy and make sure you use the right headers for checking the remote user IP against blacklists etc. Enabling TLS is easy, but it's still not a transparent process, simply because it's not an easy task.
IMO people shouldn’t roll their own anything. Use simple off the shelf solutions and services that are proven and ready to go, and leave the rolling to professionals doing true research and development.
You're really not wrong. There are so many ways a novice can screw up writing public-facing software. Even so-called experienced professional software "engineers" and billion-dollar tech companies do it regularly.
I can anecdotally cite many instances that support your purported sarcasm.
Only recently I had a disagreement with two officers who thought something was a civil law issue and not their domain, when it was actually a common law issue - fraud related, for context. Of note, I escalated to their management, who confirmed they were wrong and I was right.
The police are fine in the UK in that respect. Now, the crackhead junkies below me, who are protected police informers - well, they get away with murder... literally :(
I'm in the process of a big outing of Met Police issues, failings, negligence and sheer incompetence. But they are on notice already.
It's astounding how long the Met have apparently kept a continuity of corruption over generations. From what I've read I doubt that this can be seriously uprooted.
What do you mean ‘a civil law issue and not a common law issue’? The UK’s civil laws[0] are very largely composed of common law; the two concepts are orthogonal, not opposites, and both or one or neither can easily be true at once.
A common law issue could be either civil or criminal and says nothing about whether it’s the police’s domain (which obviously would entail its being criminal law).
[0] That is, in one of the only two ways you could possibly be using the word ‘civil’ in this context, given we don’t have a civil law legal tradition.
And the management was police as well: chain of command, perhaps a sheriff. There you go, a third police officer said you were right, without you...
...going to the ACLU or others to sue the police department, or going to Internal Affairs, or invoking any of the new legislation they have to follow, or other emasculations.
Simple stuff: two said you were wrong, you escalated, and you were found to be right. It doesn't work that badly in real life.
Yes, but 99% of the populace would trust the police to know the law and as such would have accepted their mistake as a rule. Luckily I'm a bit more knowledgeable in the law, which is why I pushed the point.
So for most people, it wouldn't have worked out well.
This is par for the course for law enforcement software, and demonstrates tidily why every line of code used in a LE or legal context must either be open source or available for legal and public review.
> This is par for the course for law enforcement software
That is also true for the software running on Breathalyzers. Long story short, the few times they were independently tested, the software quality was found to be about the same as that of the crap described here, and test results were wildly inaccurate. People are literally jailed every day based on bad law enforcement software.
“ Not that it matters much, since you can just bypass the security control entirely, but == is not the correct way to compare hash function outputs.”
I read an excellent whitepaper on brute forcing this over the internet in ~2012 but I cannot find it. Anyone happen to know the paper I’m referring to, or if it’s still relevant with modern cpu and compiler optimisations?
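For anyone wondering what the quoted remark means in practice: a naive string comparison can return as soon as the first differing character is found, so the response time leaks how long a matching prefix the attacker has guessed. A quick Python illustration (hypothetical function names, not the code under review):

```python
import hashlib
import hmac

def verify_mac_naive(expected_hex: str, provided_hex: str) -> bool:
    # == may return as soon as the first character differs, so comparison
    # time depends on how long a correct prefix the attacker supplied
    return expected_hex == provided_hex

def verify_mac_constant_time(expected_hex: str, provided_hex: str) -> bool:
    # compare_digest takes the same time regardless of where the mismatch is
    return hmac.compare_digest(expected_hex, provided_hex)

digest = hashlib.sha256(b"message").hexdigest()
```

PHP has hash_equals() for exactly this purpose; the == in the reviewed code should have been that.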
>Now you’ve successfully downgraded the message to the legacy format, which didn’t provide any authentication over the ciphertext.
... and the system accepts the old format? Wouldn't that be the actual problem here? Is there something wrong with the old format that caused a security weakness?
>Using either of the two methods, with the HMAC check completely bypassed, you’ve reduced the security of this construction to unauthenticated AES-CBC mode, which is vulnerable to a Padding Oracle Attack.
Well was this particular system actually vulnerable to a padding oracle attack?
I hate these security writeups that prime the reader but never actually get to the punch line. If something has a security weakness you should explicitly state exactly what it is. Don't leave it to the reader to figure it out. If it turns out that there was no actual issue then you are being misleading by the implication that there was.
> > Using either of the two methods, with the HMAC check completely bypassed, you’ve reduced the security of this construction to unauthenticated AES-CBC mode, which is vulnerable to a Padding Oracle Attack.
> Well was this particular system actually vulnerable to a padding oracle attack?
Yes. That's what the thing you just quoted says.
This isn't a theoretical leap, either. It's unauth CBC with PKCS#7 padding with OpenSSL's API.
> I hate these security writeups that prime the reader but never actually get to the punch line.
But it did. "X uses Y which is vulnerable to Z" does, emphatically, state that X is vulnerable to Z. Your rant is unnecessary.
I asked for an explicit statement that it was vulnerable, not if it could be vulnerable. To have a padding oracle attack against CBC, the attacker needs:
* The ability to submit a lot of messages for decryption.
* An error specific to a padding oracle issue.
* Some sort of provided reverse channel to get that error information back to the attacker.
Does the system in question have these attributes?
I note that you have not explicitly stated that the system is susceptible to a padding oracle attack either. That's because you have read the same article and can not.
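For what it's worth, the padding check at the core of those three requirements is tiny, and any code path that makes its failure observable (an error page, a different response body, even timing) completes the oracle. A hedged Python sketch, not their code:

```python
def pkcs7_unpad(padded: bytes, block_size: int = 16) -> bytes:
    # PKCS#7: the last byte says how many padding bytes there are,
    # and all of them must equal that value.
    n = padded[-1]
    if not (1 <= n <= block_size) or padded[-n:] != bytes([n]) * n:
        # If the caller lets this failure reach the attacker in ANY
        # observable form, this line IS the padding oracle.
        raise ValueError("invalid padding")
    return padded[:-n]
```

Requirement one (submitting many messages) is satisfied by any endpoint that decrypts attacker-supplied ciphertext; requirements two and three reduce to whether this failure is distinguishable from the "bad plaintext" failure mode.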
The blog post is written in conversational English. It is not meant to be a technical, formal paper about their protocol. If you're going to get hung up on that, that's entirely your problem, not mine.
Yes, their fucking thing is vulnerable, because of how PHP + ext/openssl handles padding errors by default. They can mitigate this behavior by suppressing error reporting, but the underlying behavior will still be observable later in the application when `false` is provided instead of a string.
To trigger the vulnerability, you would need the ability to modify a ciphertext. This requires privileged access to where the encrypted data is stored.
Exploit procedure:
1. Modify ciphertext
2. Access PHP script that processes said ciphertext under-the-hood
3. Did it spit out a PHP error?
* Yes -> Invalid padding
* No -> Valid padding
Am I going to laboriously describe one of the most well-tested padding oracle exploit paths in the web programming ecosystem in an informal blog post whose purpose is to describe coding/design flaws? No, because that's a waste of everyone's time.
I don't generally write articles with the premise that my audience needs every logical conclusion spoon-fed to them.
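Since the procedure above is deliberately short on detail, here is a self-contained toy in Python showing how a yes/no padding oracle leaks a plaintext byte. A keyed XOR stands in for the block cipher so the demo is stdlib-only and obviously not real crypto; the oracle logic itself is the same as against AES-CBC:

```python
import secrets

BLOCK = 16
KEY = secrets.token_bytes(BLOCK)

def _cipher(block: bytes) -> bytes:
    # Keyed XOR standing in for the block cipher; it is its own inverse
    # and NOT secure, but it lets CBC and the oracle work end to end.
    return bytes(b ^ k for b, k in zip(block, KEY))

def cbc_encrypt_block(iv: bytes, padded_plaintext: bytes) -> bytes:
    return _cipher(bytes(p ^ i for p, i in zip(padded_plaintext, iv)))

def cbc_decrypt_block(iv: bytes, ciphertext: bytes) -> bytes:
    return bytes(d ^ i for d, i in zip(_cipher(ciphertext), iv))

def padding_ok(iv: bytes, ciphertext: bytes) -> bool:
    # The oracle: all the attacker ever learns is whether padding validated.
    pt = cbc_decrypt_block(iv, ciphertext)
    n = pt[-1]
    return 1 <= n <= BLOCK and pt[-n:] == bytes([n]) * n

# The victim's message: 14 bytes plus \x02\x02 PKCS#7 padding.
plaintext = b"ATTACK AT DAWN\x02\x02"
iv = secrets.token_bytes(BLOCK)
ciphertext = cbc_encrypt_block(iv, plaintext)

# The attacker, holding only iv, ciphertext, and the oracle, recovers byte 15.
recovered = None
for guess in range(256):
    forged = bytearray(iv)
    forged[14] ^= 0xFF                   # scramble byte 14 so only \x01 padding can validate
    forged[15] = iv[15] ^ guess ^ 0x01   # if guess is right, byte 15 decrypts to \x01
    if padding_ok(bytes(forged), ciphertext):
        recovered = guess
        break

assert recovered == plaintext[15]  # recovered 0x02 without ever seeing the key
```

Repeating the trick for each byte position, and then for each block, recovers the full plaintext with at most 256 oracle queries per byte.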
> I note that you have not explicitly stated that the system is susceptible to a padding oracle attack either. That's because you have read the same article and can not.
>They can mitigate this behavior by suppressing error reporting,
Did they?
I am not trying to annoy. I have seen very misleading articles where a weakness is strongly implied but after digging into things did not and could not exist.
I don't know what their PHP configuration is set to in production.
As a rule, I don't test systems that require me to send packets. I only look at code. This keeps me from having to care about the Computer Fraud and Abuse Act.
What I know: The particular bit of code that Paul tweeted is vulnerable, and the mitigation you're asking about only reduces it from a "grep the HTTP response body" oracle to a timing oracle, so it doesn't actually eliminate the issue.
As I understand it, the CSPRNG check thing isn't a security feature, it's there in case the CSPRNG spits out an IV with the separator bytes in it. (That's also very silly, I'm just saying).
The strpos() check is what you're describing, but openssl_random_pseudo_bytes() accepts an optional second by-reference argument and sets it to true or false depending on the behavior of RAND_pseudo_bytes().
If I read that part right, it would generally have the same return value throughout the lifetime of a process. So on the off chance that it does trigger, it gets stuck in an infinite loop.
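For readers following along, the retry-until-no-separator pattern being described looks roughly like this in Python. The separator value is hypothetical, and secrets.token_bytes() either returns OS randomness or raises, so the weak-CSPRNG infinite loop described above can't happen in this version:

```python
import secrets

SEPARATOR = b"::"  # hypothetical: whatever byte sequence the packed format splits on

def make_iv(size: int = 16) -> bytes:
    # Regenerate until the IV can't be confused with the format's separator.
    # This mirrors the strpos() retry in the reviewed code, minus the failure mode.
    while True:
        iv = secrets.token_bytes(size)
        if SEPARATOR not in iv:
            return iv
```

A length-prefixed format would avoid the loop entirely, since the IV could then contain any bytes at all.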
If, as I assume, the data is being sent back to their servers over HTTPS... wouldn't that make this process of encrypting the "data" superfluous, with no impact on the overall security? Or did I miss something?
The vulnerability here is that their application-layer cryptography is vulnerable to adaptive chosen-ciphertext attacks, not that a passive observer could sniff packets and see plaintext.
I mean any sophisticated adversary will use the traditional methods of bribery and coercion with violence to obtain intelligence and influence the course of events.
They're cheaper, more widely available, and more reliable than monkeying around with code, e.g. https://xkcd.com/538/
Yes it doesn't handle an edge case in the same way body armor doesn't handle a 20mm depleted uranium chain gun, but it probably handles the likely threats well enough.
And it is available at a price the city council is willing to pay when facing a we-must-do-something-this-is-something-therefore-we-must-do-it situation in an area where it lacks expertise and the money to buy expertise... good cybersecurity has a diminishing value-per-dollar curve.
> I mean any sophisticated adversary will use the traditional methods of bribery and coercion with violence to obtain intelligence and influence the course of events.
Okay, but we're not talking about someone who failed to meet an ultra paranoid threat model. We're discussing an encryption technique that can be bypassed by anyone who bothered to do the first 2 sets of the cryptopals challenges.
> Yes it doesn't handle an edge case in the same way body armor doesn't handle a 20mm depleted uranium chain gun, but it probably handles the likely threats well enough.
The edge case is "someone with any knowledge about applied cryptography decided to kick the tires, even if only for the laughs". The code is that bad.
The city council chose from the alternatives to an RFP and probably made the choice in part based on who showed up and pitched at the Tuesday night meeting where the purchase was on the agenda.
And those people probably met with staff beforehand on a sales call.
Whatever better alternative you are imagining would have had to have done those things.
Things which are expensive and more important than code quality.
I could set my bar higher, but I live in the real world, and a higher bar would conflict with the equally high bar of assuming people are of good will and doing the best they can.
No. This is security theater, and it stands as an obstacle to security. They were better off with plaintext.
> I could set my bar higher, but I live in the real world, and a higher bar would conflict with the equally high bar of assuming people are of good will and doing the best they can.
Or you could just point PHP developers to my open source libraries, which have a cost of $0.00 to use and are actually secure?
It is probably not better than nothing, because it gives the illusion of security, while patently failing at providing it when challenged by pretty much anyone who might be interested in an attack. While charging full price.
If they’d built it for an RFP asking for a honeypot, the situation would be different.
So think of it as a tough looking cover over a deep pit that a contractor made for a mining company. The mining company paid full price for it to keep people from falling in.
But it turns out that when someone stands on it, it collapses and dumps them into the pit, because it was built wrong.
Oops.
Everyone would have been better off with nothing, because at least you’d know to be careful around the giant hole if it didn’t have the cover.
The protagonist makes something poorly in act one.
Act two concludes with a loud Greek chorus…yet the whole thing is inconclusive as the problem is never solved.
Maybe because, between FAANG salaries, the plush life of academia, and the patriotic satisfaction of state organs, the field is left to the world’s McAfees… and I don’t mean the McDonald’s coffee.
I mean, despite the Lock Picking Lawyer, padlocks mostly work.
Unlike others, I think terrible crypto is often better than plaintext.
But they would have gotten an actually secure system implemented more cheaply with less code by just copying some example code from libsodium[1]. The author(s) went to a whole lot of unnecessary effort and the result is an insecure, embarrassing mess.
I would respect the argument of "oh, they're just picking a different point along the Pareto frontier between secure / high effort and insecure / low effort". But this is both insecure and high effort. People are piling on because it has the trademark smell of bad judgement.
It's also a great learning opportunity. It's interesting hearing about all the security problems people who are knowledgeable in the field can spot! I noticed barely any of these problems!
Something that is known to be insecure is often better than something that is believed incorrectly to be secure, because people can (not always, but sometimes) treat it appropriately.
edit: the key is a constant. I’m speechless. See https://paul.reviews/cyberalarm-an-independent-security-revi...