CiPHPerCoder's comments

> There is not any reason to NOT run hybrid cryptography schemes right now, when the use case allows for it.

This is reasonable, but runs contrary to the stance taken by CNSA 2.0.


Gee, I wonder why.


> All the criticism of cryptographic agility that I have seen has involved an attacker negotiating a downgrade to a broken protocol.

Consider this an additional data point, then: https://paragonie.com/blog/2019/10/against-agility-in-crypto...

In the years since I wrote that, several people have pointed out that "versioned protocols" are just a safe form of "crypto agility". However, when people say "crypto agility", they usually mean something like what JWT does.

What JWT does is stupid, and has caused a lot of issues: https://www.howmanydayssinceajwtalgnonevuln.com/

If you want to use JWT securely, you have to go out of your way to do so: https://scottarc.blog/2023/09/06/how-to-write-a-secure-jwt-l...
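For a concrete picture, here's what "going out of your way" minimally looks like with Python's PyJWT (a sketch; the key and claims are placeholders): pin the acceptable algorithm list server-side instead of trusting the token's "alg" header.

    import jwt  # PyJWT

    SECRET_KEY = b"replace-with-a-real-256-bit-secret"  # placeholder

    token = jwt.encode({"sub": "user-123"}, SECRET_KEY, algorithm="HS256")

    # Verification pins the algorithm list server-side; a token whose
    # header claims "alg": "none" (or anything unexpected) is rejected.
    claims = jwt.decode(token, SECRET_KEY, algorithms=["HS256"])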

> But if the protocol is not yet broken, then being agile isn't a concern, and if/when the protocol does become broken, then you can remove support for the broken protocol, which is what you'd be forced to do anyway, so a flexible approach just seems like a more gradual way to achieve that future transition.

This makes sense in situations where you have versioned protocols :)

This doesn't work if you're required to support RSA with PKCS1v1.5 padding until the heat death of the universe.
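By contrast, a versioned protocol gives every ciphertext a version prefix that pins exactly one suite, so retiring a broken version is a deletion rather than an eternal compatibility burden. A minimal sketch in Python (the "v2." prefix and the suite choice are invented for illustration):

    import os
    from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

    VERSION = b"v2."  # hypothetical version ID; "v1." was deleted when it broke

    def encrypt(key: bytes, plaintext: bytes) -> bytes:
        # The version prefix is authenticated as associated data, so a
        # ciphertext can't be presented under a different protocol version.
        nonce = os.urandom(12)
        ct = ChaCha20Poly1305(key).encrypt(nonce, plaintext, VERSION)
        return VERSION + nonce + ct

    def decrypt(key: bytes, message: bytes) -> bytes:
        version, nonce, ct = message[:3], message[3:15], message[15:]
        if version != VERSION:
            raise ValueError("unsupported or retired protocol version")
        return ChaCha20Poly1305(key).decrypt(nonce, ct, version)

Nothing is negotiated at runtime: the only way to "downgrade" is for the server to still ship code for the old version, which is exactly the thing you delete.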


Hmmm, some recent protocols (thinking of MLS[1] here) have moved into a middle territory where they have a lot of options for piecing together a cryptographic suite, but then version that whole suite within the protocol. You can still change suites without changing the protocol, but it's not nearly as 'agile' as earlier suite and primitive negotiations.

Maybe something more like "cryptographic mobility" instead of "agility"? You can carefully decamp and move from one suite (versioned protocol) to another without changing all your software, but you're not negotiating algorithms and semantics on the fly.

[1] https://datatracker.ietf.org/doc/rfc9420
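To sketch the "mobility" idea (illustrative Python, not MLS's actual wire format, and these registry entries are examples rather than the real registry's contents): the whole suite hides behind one identifier, so you can add or retire suites without renegotiating individual primitives.

    from typing import NamedTuple

    class Suite(NamedTuple):
        kem: str
        kdf: str
        aead: str

    # Example registry in the spirit of MLS cipher suites (RFC 9420).
    SUITES = {
        0x0001: Suite("DHKEM-X25519", "HKDF-SHA256", "AES-128-GCM"),
        0x0002: Suite("DHKEM-P256", "HKDF-SHA256", "AES-128-GCM"),
    }

    def suite_for(suite_id: int) -> Suite:
        # One identifier selects the entire suite: you can migrate between
        # suites, but you can't mix and match primitives on the fly.
        return SUITES[suite_id]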


> So I'm not so sure what's the point of encryption at rest in AWS except just to tick off a compliance and regulatory checklist.

> The private key is with them anyway; just don't encrypt and save a few milliwatts of power.

"Them" is Amazon, a company with over 1 million employees, last I checked.

It's perfectly reasonable to trust the KMS team to keep your keys secure, even if you don't trust the RDS team to never try to look at your data.

I know it's tempting to think of all of AWS as a sort of "Dave" who wears multiple hats, but we're talking about a large company. Protecting against other parts of the same company is still a worthwhile and meaningful security control.


> It's perfectly reasonable to trust the KMS team to keep your keys secure, even if you don't trust the RDS team to never try to look at your data.

If the database is live, then the data can be decrypted, and who knows where it ends up. Encryption at rest only addresses the threat scenario where the RDS team has access to the database storage layer; it does nothing to mitigate threats once the data has been read from storage.


As a customer, I neither know nor care how they've organized themselves internally. Not my problem.

From my perspective, I don't hold the secret keys. Only AWS does, and they can decrypt whatever they want, whenever they want, perhaps because they have a warrant or because some three-letter agency makes them do it.


> The author is confusing "it costs us nothing (now that encryption can be done in hardware and is integrated into most desktop operating systems) and protects in some scenarios, so yeah, we just decided to mandate it always be done" with "PEOPLE THINK ENCRYPTION AT REST IS A MAGIC BULLET LOOK AT ME I'M INSIGHTFUL, POST LINKS TO MY BLOG ON LINKEDIN!"

What in the article gave you that impression?

I do not hold this confusion in my mind, nor did I deliberately encode such a statement in my blog. I'm curious why you think this is what I was saying.

> The whole post is insulting to the intelligence of even a fairly junior desktop support technician.

If that were true, every time someone posted "Show HN: My Hot New Database Encryption Library in Haskell", they would be mitigating the confused deputy attack by design, rather than what we see today: namely, failing to even protect against padding oracle attacks.

That's what the article was actually talking about.


If you're using a secure disk encryption technology, and you manage to clear the keys from the TPM or overwrite the header containing the KDF salts and other metadata, that should render the device data unrecoverable.


> I find it very hard to believe Amazon's or Google's servers do not already have full disk encryption.

I am confident that they do. Even better, they can be configured to use your KMS key rather than the service key, and you can configure KMS to use external key stores (i.e., an HSM in your datacenter outside of AWS's control, that you could theoretically pull the plug on at any time).
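For instance, with boto3 you can point RDS storage encryption at a key you control rather than the AWS-managed service key (a sketch; the identifiers, credentials, and instance parameters are placeholders):

    import boto3

    rds = boto3.client("rds")

    # Placeholder ARN for a customer-managed KMS key, which could itself
    # be backed by an external key store under your physical control.
    MY_KEY_ARN = "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE"

    rds.create_db_instance(
        DBInstanceIdentifier="example-db",
        DBInstanceClass="db.t3.micro",
        Engine="postgres",
        AllocatedStorage=20,
        MasterUsername="example",
        MasterUserPassword="use-a-secrets-manager-instead",  # placeholder
        StorageEncrypted=True,
        KmsKeyId=MY_KEY_ARN,  # your key, not the default service key
    )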


This is an interesting and important point that you raised.

> if you're storing the data on AWS and generating your keys on AWS and managing your keys with AWS ... you're doing it wrong.

This is a reasonable thing to do if you've decided that you trust AWS and expect any violations of this trust to be dealt with by the legal department.

It's less reasonable if you're concerned about AWS employees going rogue and somehow breaking the security of KMS without anyone else knowing.

It's even less reasonable to do this if you're concerned about AWS credentials being leaked or compromised, which in turn grants an attacker access to KMS (i.e., a government would be more successful compelling IAM to grant access than it would be trying to subpoena KMS for raw keys).

(Sure, you can audit access via CloudTrail, but that's a forensics tool, not a prevention tool.)

But that's kind of the point I made in the article, no? You need to know your threat model. You've stated yours succinctly, and I think it's a commendable one, but many enterprises are a bit more relaxed.


> It's less reasonable if you're concerned about AWS employees going rogue and somehow breaking the security of KMS without anyone else knowing.

That's the least of the concerns. Remember that AWS is subject to court orders of all types (legitimate ones and NSLs). Even if nobody goes rogue, any data that AWS (or any cloud/SaaS provider) could access must be assumed to be compromised.


Sure, that's why I addressed the government in the immediate statement that followed the thing you quoted.


> What's the best practice in managing user data keys so that data is available only when there's an authenticated user around?

What does it mean for an authenticated user to be "around"?

If you want a human to manually approve the decryption operations of the machine, but it can still store/encrypt new records, you can use HPKE so that only the person possessing the corresponding secret key can decipher the data.

At least, you can until a quantum computer is built.
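The shape of that, as a runnable sketch: this hand-rolls the hybrid pattern (ephemeral X25519 + HKDF + ChaCha20-Poly1305) purely to show the idea; in practice, use an actual RFC 9180 HPKE library rather than this demo.

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import (
        X25519PrivateKey, X25519PublicKey,
    )
    from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    def _derive_key(shared: bytes) -> bytes:
        return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                    info=b"demo-hybrid-v1").derive(shared)

    def seal(recipient_pk: X25519PublicKey, plaintext: bytes) -> bytes:
        # Server side: only the recipient's PUBLIC key is needed to encrypt
        # new records, so the machine never holds a decryption capability.
        eph = X25519PrivateKey.generate()
        key = _derive_key(eph.exchange(recipient_pk))
        nonce = os.urandom(12)
        ct = ChaCha20Poly1305(key).encrypt(nonce, plaintext, None)
        return eph.public_key().public_bytes_raw() + nonce + ct

    def open_sealed(recipient_sk: X25519PrivateKey, sealed: bytes) -> bytes:
        # Human side: only the holder of the secret key can decrypt.
        eph_pk = X25519PublicKey.from_public_bytes(sealed[:32])
        key = _derive_key(recipient_sk.exchange(eph_pk))
        nonce, ct = sealed[32:44], sealed[44:]
        return ChaCha20Poly1305(key).decrypt(nonce, ct, None)

(A cryptographically relevant quantum computer breaks the X25519 exchange here, hence the caveat above.)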


A working definition for some apps could be: the user's data should not be available to the system when there isn't an active user session, such that the user's privacy interests are cryptographically protected in the event of a breach or data leak occurring while the user is not actively using the system.

I wasn't thinking of manual approval of any cryptographic steps. Just that when you log in to work on your data stored in the system, the system can only then decrypt the data, and when you log out, the system forgets the keys until next time.

It all depends on the type of app of course.


Okay, this sounds vaguely like a problem that may be solved by "HPKE where the secret key is reconstructed from a threshold secret sharing scheme" (>=2 of N shares needed, 1 held by the service and 1 held by the employee's hardware device, where 1 additional share is held in cold storage for break-glass reasons).

I would need to actually sit down and walk through the architecture, threat model, etc. to recommend anything specific. I'm not going to do that on a message board comment, because I probably am missing something.


“Around” usually means proving possession of a signing key on a connection.


> An obvious limitation is that you can't use the fields for sorting in queries (ORDER BY), and, unless deterministic encryption is enabled, you can't use them in filters (WHERE) either. The same applies to any T-SQL logic on the data fields: because the encrypted blob is opaque to SQL Server, it is decrypted client-side. There is no workaround, except pulling the data locally and sorting client-side.

This is a reasonable limitation when you're aware of the attacks on Order Revealing Encryption: https://blog.cryptographyengineering.com/2019/02/11/attack-o...


The thing that's security theater isn't encrypting at rest in general.

The thing that's security theater is encrypting insecurely, or failing to authenticate your access patterns, such that an attacker with privileged access to your database hardware can realistically get all the plaintext records they want.
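Concretely, "authenticating your access patterns" means binding each ciphertext to its context with AEAD associated data, so a privileged attacker can't shuffle ciphertexts between rows. A sketch using Python's cryptography package (the table/row identifiers are illustrative):

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def encrypt_field(key: bytes, table: str, row_id: str, plaintext: bytes) -> bytes:
        # The AAD binds the ciphertext to this exact table/row, so a
        # privileged attacker can't swap ciphertexts between rows (the
        # classic confused-deputy move) without failing the tag check.
        aad = f"{table}/{row_id}".encode()
        nonce = os.urandom(12)
        return nonce + AESGCM(key).encrypt(nonce, plaintext, aad)

    def decrypt_field(key: bytes, table: str, row_id: str, blob: bytes) -> bytes:
        aad = f"{table}/{row_id}".encode()
        nonce, ct = blob[:12], blob[12:]
        return AESGCM(key).decrypt(nonce, ct, aad)  # InvalidTag if moved/tampered

    key = AESGCM.generate_key(bit_length=256)
    blob = encrypt_field(key, "users", "42", b"ssn: 000-00-0000")
    assert decrypt_field(key, "users", "42", blob) == b"ssn: 000-00-0000"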


>> The thing that's security theater isn't encrypting at rest in general

> The thing that's security theater is encrypting insecurely

Security theater should be defined as:

Doing things that outwardly appear to improve security but have a de minimis or smaller effect on actual security.

The 93-section questionnaire from bigco's IT department is security theater. Filling it out does zero to improve security for bigco, myco, or my users.


IDK, I have seen significant practical security improvements multiple times as a direct consequence of some "93-section questionnaire", because the very first section asked a few questions like "Are you doing this simple, well-known best-practice thing?", which they were not, because it took some time, effort, and/or money and they just didn't care.

But once the questionnaire mattered, they started doing it just so they could legally answer "yes" to that question. Things like finally changing the default admin passwords on that service they installed a year ago, and testing backup recovery to find out that it actually can't be done due to a bug in the backup script skipping some key data.


I do agree. My boss would most likely never have allowed me to spend time on security until he got hit by a "93-section questionnaire" from a big co.

Once a contract with the big co was on the line, I got permission to do security and do it well.

Even though 80% of the questionnaire was not applicable, it still did a good job.


> Doing things that outwardly appear to improve security but have a de minimis or smaller effect on actual security.

Right. And that's exactly the situation the article describes.

The accusation of "security theater" was only levied when IT departments reached for the "full disk encryption" potion to mitigate the ailment of "attacker has active, online access to our database via SQL injection", when that's not at all what it's designed to prevent.

They can insist that they're "encrypting their database", but does it actually matter for the threats they're worried about? No. Thus, security theater.

The same is true of insecure client-side encryption.

