> The age backend is an experimental crypto backend [for gopass] based on age. It adds an encrypted keyring on top (using age in scrypt password mode). It also has (largely untested) support for specifying recipients as github users. This will use their ssh public keys for age encryption. It is well positioned to eventually replace gpg as the default crypto backend.
I don't see how age solves any of the problems of GPG that really matter, i.e. how to write a simple secrets manager that can scale from 1 to enterprise.
We used gopass at one of the earliest startups I worked for until we realized that all secrets needed to be manually rotated every time an employee left. I still use gopass for personal use, but the idea of using it in a team environment is just untenable unless you have nothing more than a handful of secrets to worry about.
Age doesn't fix this. It just tries to make things simpler, but in return you lose out on a bunch of the standard interop that has grown up around GPG.
I was legitimately excited about a new secrets backend until I realized nobody is actually writing it to solve real-world problems. It largely appears to be a case of "writing it to do fewer things because that's what Unix hackers like".
> We used gopass at one of the earliest startups I worked for until we realized that all secrets needed to be manually rotated every time an employee left.
No password manager that I’m aware of solves this or even claims to. When an employee leaves, you need to assume that they retain control of all passwords they previously had access to. They could have made a copy that’s beyond the control of your password manager (for example, saved the password in the browser). What helps is reducing shared passwords and relying less on passwords in general (SSO, IAM, …), but in some cases you just have to bite the bullet and rotate the credentials.
Could it work by using 2FA with a hardware key that the employee has to return when leaving the company? (In the sense that you wouldn't have to rotate the passwords then.)
No. You have to assume that the employee copied the password to a place that you have no control over (for example, a browser-based password manager attached to a private sync account), or possibly enrolled a second token that you have no control over. You don’t even have to assume malice for that to happen.
Using hardware-based keys for accounts that require shared access is a pain, and sometimes effectively impossible (AWS allows only a single U2F token on its root account, making it effectively impossible to grant two or more people access to it if a hardware token is used).
And then, not all services provide 2FA, let alone with a physical key, and for some, 2FA is comparatively easy to circumvent. But all of that holds true for every password management solution that manages long-term credentials (that is, including API tokens, access keys, certificates, …).
The only thing that saves you is personalized accounts that you can deprovision - from a management perspective I love SCIM and SAML, even with all their technical flaws.
Most services don't have a good way to restrict 2FA to certain keys, which would be required for that.
Also that would still mean rotating any tokens/m2m keys that have been issued.
The "proper way" would be to try to minimize how many tokens/keys are readable by employees (they should probably be only read by deployment jobs etc.) and use SSO for interactive logins. When offboarding a user it should be enough to remove the user from the SSO provider and rotate any tokens/m2m keys that the employee actually had access to (as long as the employee did not get access to issue new tokens).
Unfortunately a lot of services treat SSO as a "premium" feature and require "call us" enterprise plans for it.
How is an encryption backend supposed to fix this?
It’s not really clear to me how it could even be expected to do so, or what gives the impression that the goal of a 650-line bash script would be to scale “from 1 to enterprise”.
Still, asserting that only your problems are real world problems and worth solving is pretty reductive.
With Mozilla's SOPS, one can simply swap out GPG or age for AWS KMS or Vault as its encryption backend as one matures. SOPS is more or less ideal, but it doesn't have the ecosystem built around it that pass does (Password Store for Android is just one example of easily using your pass secrets everywhere).
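For a rough idea of what that swap looks like from the command line (the recipient string and KMS ARN below are placeholders, and the flags should be double-checked against your sops version):

```
# encrypt with age...
sops --encrypt --age age1examplerecipient... secrets.yaml > secrets.enc.yaml
# ...or with AWS KMS, changing only the key flag (or the .sops.yaml creation rules)
sops --encrypt --kms arn:aws:kms:us-east-1:111122223333:key/example-key-id secrets.yaml > secrets.enc.yaml
# decryption looks the same either way
sops --decrypt secrets.enc.yaml
```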
So, yes, it is possible, and we don't need yet another encryption backend that just does what the existing backend already does, merely because it's "lighter/sexier".
What we should be doing with GPG is building abstractions (Keybase) and contributing to modernization efforts like Sequoia. Age is disruptive without showing the value of that disruption.
> how to write a simple secrets manager that can scale from 1 to enterprise.
It's not clear that this is a problem for a non-enterprise use case, so I wouldn't make strong claims like "age is not solving any problems that really matter".
I only recently added a second key to my password store, and ran pass init to "rotate" the encryption and add a 2nd key.
It's my impression that you push these changes to the git repo, so any other employee would pull and the files would already be re-encrypted to exclude the former employee.
So it's not a case of each employee having to run pass init, they just pull from git and get the updated files.
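For what it's worth, a minimal sketch of that workflow (the key IDs are placeholders; pass init with the remaining recipients re-encrypts the store to them):

```
# re-encrypt the whole store to the remaining keys, dropping the departed employee's ID
pass init 0xAAAA1111BBBB2222 0xCCCC3333DDDD4444
pass git push
# everyone else just pulls; the files are already re-encrypted
pass git pull
```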
I believe pass for teams requires a key manager who does this and pushes to git for all employees. Optimal security would be for employees to only have read access, but that's not very practical.
Yes, but this doesn't lock people out of decrypting old copies. It's git: they can roll back to before their key was removed and decrypt every old secret they had access to which hasn't been rotated yet.
The key-rotation on every employee-change is the hard part.
I don't think age, pass, etc. have ever claimed to be an enterprise solution. It sounds like you want Hashicorp's Vault or even a managed secret offering like what AWS, GCE, Azure, etc. all sell.
One thing that wasn't mentioned is whether or not private keys can be stored in HSMs with age. I'm guessing since the authors recommend against password-protecting private keys that the answer is "no" but that's one reason that I pick GPG for things.
We designed the plugin protocol (https://hackmd.io/@str4d/age-plugin-spec) and generally the age recipient/identity structure specifically to enable the use of hardware or remote keys!
For example, https://github.com/str4d/age-plugin-yubikey makes it very easy to use PIV tokens, including YubiKeys, with age. (Well, for now with rage, since plugin support is coming in age v1.1.0.)
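Roughly, the flow looks like this (flags from memory of the plugin and rage usage, so treat them as approximate; the recipient string and identity filename are placeholders):

```
# run the plugin once with no arguments for interactive setup; it provisions a key
# in a PIV slot on the YubiKey and prints an age1yubikey1... recipient plus an
# identity to save to a file
age-plugin-yubikey
# encrypt to the token
rage -r age1yubikey1examplerecipient... -o note.txt.age note.txt
# decrypt with the identity file; the YubiKey prompts for touch/PIN
rage -d -i yubikey-identity.txt -o note.txt note.txt.age
```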
I argue against password-protecting keys by default because, unlike using hardware tokens, it doesn't protect against many threat models.
If I run GUI applications, let's say, as my user -- as is the default in most operating systems -- they have general access to my files, including my keys-as-files, no? (Putting aside some minor restrictions macOS and others are slowly adding.)
Yes, and they can also replace the age binary with one that uploads the password as soon as you type it. There is no meaningful security boundary to defend.
We implemented support for password-encrypted keys for the cases where you store the key file in, say, Dropbox.
But in the "age binary replaced" threat scenario, isn't just gameover even with hardware keys? Eg. the same exact age code with an extra call after the print password to stdout that uploads it somewhere?
The difference with hardware keys is that the primary key can’t be exfiltrated, and only one secret can be decrypted per physical touch, so rotation and recovery are possible without invalidating all secrets.
I mean, most users don't root-install, but anyway the GUI application can drop a different age binary earlier in the user's PATH. Or change their shell. Or a million other things.
There really isn't a point to defending against code running unsandboxed on a single-user machine.
I password protect my key for the sole threat model of me physically losing my device. I am aware that all other threat models that involve someone taking remote control of my device are not fully protected against, but it at least requires significantly more effort on their part versus just doing a scan for private keys on the file system.
age-plugin-yubikey supports all PIV tokens. There are other 3rd party plugins in development for other hardware tokens. The v1.1.0 Go API will be able to drive arbitrary plugins, so you should be able to integrate with all of them!
What benefits do I get if I use Passage instead of Pass?
I get that it uses only one algorithm and does one thing. But that’s not a benefit that impacts the end user in practice. Actually, my Pass also uses CV25519. It’s just that one is written in Go and the other in C/bash.
Looking at the list of CVEs for SSH, OpenVPN, OpenSSL and GPG, the latter has stood up pretty well. Plus, OpenPGP is a heavily audited standard, which is important for interoperability (and arguably security).
Many useful patches are posted to the upstream pass mailing list. They're rarely responded to. I don't know if that specifically drove the creation of this fork, but it's absolutely a frustration as a pass user.
We need good GUI apps on multiple platforms for tools like this (and for tools like age/rage). It’s one thing to use such tools for oneself, but impossible to have many others adopt it if there’s no GUI.
One issue I remember with pass was that it leaks the length of the plaintext. I was kinda hoping that maybe age offered a padding mode to avoid this, but I guess not.
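You can approximate a padding mode by hand before encrypting. A rough sketch, assuming the goal is only to hide the exact length and you strip the padding yourself after decryption (the 4 KiB bucket and the 0x80 marker are arbitrary choices, not anything age defines):

```
# append a 0x80 marker byte, then zero-pad up to the next 4 KiB bucket, so the
# ciphertext length only reveals the bucket, not the exact secret size
bucket=4096
{ cat secret.txt; printf '\200'; } > padded.tmp
size=$(wc -c < padded.tmp)
target=$(( (size + bucket - 1) / bucket * bucket ))
head -c $(( target - size )) /dev/zero >> padded.tmp
age -r age1examplerecipient... -o secret.age padded.tmp
# after decrypting, drop the trailing zero bytes and the 0x80 marker to recover the original
```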
The main argument in favour of age is that it only does one thing, encrypting files, and does it 'well' in the sense that there is no need to edit your gnupg config file to exclude all the crypto from the 90s that your version of gnupg might decide to default to.
This makes things significantly simpler in terms of the code for age, which reduces the possible bugs and possible misuse.
GnuPG is more versatile and tries to solve a number of (arguably, orthogonal) problems, including signing and authentication (by users and also of keys in the web of trust). This leads to more complexity, and some parts, e.g. the web of trust, can't really be described as being a success. Others, such as sign and encrypt, likely don't achieve semantic security (also the case in SMIME / CMS, and I'm not sure if this has ever been fixed) and definitely don't achieve the forward/future secrecy guarantees of say Signal. Age isn't trying to be a messenger, so it doesn't need to worry about this.
It comes down to this: if you use AES-Something with GnuPG or you use ChaCha20-Poly1305 with Age, it is unlikely ever to be meaningfully broken in our lifetimes (including if you are currently 1 day old and including by quantum computers) and everything will be fine. But age will only use ChaCha20-Poly1305, and the format is pretty simple, while GnuPG can be convinced to decrypt CAST5 encrypted messages and has a much more complex format that introduces a greater risk of binary parser vulnerabilities.
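To make that concrete, this is the sort of thing people end up putting in their gnupg config to steer it away from the legacy defaults (option names are from the gpg(1) man page; the exact set here is a judgment call, and age simply has nothing equivalent to configure):

```
# illustrative hardening of ~/.gnupg/gpg.conf, not an official recommendation
cat >> ~/.gnupg/gpg.conf <<'EOF'
personal-cipher-preferences AES256 AES192 AES
personal-digest-preferences SHA512 SHA384 SHA256
cert-digest-algo SHA512
disable-cipher-algo CAST5
EOF
```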
Age is written in Go (there's also Rage, written in Rust), whereas GnuPG is a big old ball of C, so if there are parser bugs, GnuPG is bad news and Go/Rust are hopefully sound.
> Age is written in Go (there's also Rage, written in Rust), whereas GnuPG is a big old ball of C, so if there are parser bugs, GnuPG is bad news and Go/Rust are hopefully sound.
Well, on "written in Go" part, I remember looking at the crypto code 2-3 years ago (??), and it was full of unnecessary copies. You know the practice of using explicit_bzero, avoiding copying, and so forth? In this case neither I remember was being the followed practice (it uses GC anyways), in fact, I remember copying like no tomorrow. Correct me if I am wrong though. Regardless, languages with GC are usually a no-no. There is a chapter on it in "Cryptography Engineering: Design Principles and Practical Applications by by Bruce Schneier, Niels Ferguson, and Tadayoshi Kohno", the name of the chapter is: "Implementation Issues". It mentions Java (as not being a good language for crypto due to GC), and I think C as well.
In any case, it has been a while, and I am currently tired. Feel free to correct me, of course.
Knowing who the author is and knowing what he knows and what projects he has completed I would expect that he is quite familiar with this citation and the body of literature surrounding it.
>...while GnuPG can be convinced to decrypt CAST5 encrypted messages...
That's not actually a problem. The problem would be if it could be convinced to encrypt CAST5 messages. Assuming that is actually a problem. Is there anything particularly wrong with CAST5 other than the block size?
> That's not actually a problem. The problem would be if it could be convinced to encrypt CAST5 messages.
I don't disagree, but I would add that if the encryption used is believed to be insufficiently secure these days, all tools should warn loudly about that when asked to decrypt such messages.
I use pass, and have no desire to move away from gpg.
I think gpg still provides "pretty good" privacy, and I don't see any benefit that age would afford me with pass.
(There are sore points in the OpenPGP user experience: integration with mail clients, among other things, along with the WoT, has in practice not been great. But these are not things you would encounter with pass.)
https://github.com/gopasspw/gopass/blob/master/docs/backends...