If I run GUI applications, let's say, as my user -- as is the default in most operating systems -- they have general access to my files, including my keys-as-files, no? (Putting aside some minor restrictions macOS and others are slowly adding.)
Yes, and they can also replace the age binary with one that uploads the password as soon as you type it. There is no meaningful security boundary to defend.
We implemented support for password-encrypted keys for the cases where you store the key file in, say, Dropbox.
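For anyone curious what that feature amounts to, here's a minimal sketch using the filippo.io/age Go module (the file name and passphrase are just placeholders): the identity is wrapped in an age file encrypted to an scrypt (passphrase) recipient, so the key file you sync to Dropbox is useless on its own without the passphrase.

```go
package main

import (
	"os"

	"filippo.io/age"
)

func main() {
	// Generate a new X25519 identity (the same kind of key age-keygen produces).
	id, err := age.GenerateX25519Identity()
	if err != nil {
		panic(err)
	}

	// Wrap the secret key in an age file encrypted to an scrypt (passphrase)
	// recipient; this is the file you could store in Dropbox.
	out, err := os.Create("key.age")
	if err != nil {
		panic(err)
	}
	defer out.Close()

	recipient, err := age.NewScryptRecipient("correct horse battery staple")
	if err != nil {
		panic(err)
	}

	w, err := age.Encrypt(out, recipient)
	if err != nil {
		panic(err)
	}
	if _, err := w.Write([]byte(id.String() + "\n")); err != nil {
		panic(err)
	}
	if err := w.Close(); err != nil {
		panic(err)
	}

	// The public key still has to live somewhere in the clear, since it's
	// what you encrypt *to*.
	os.Stdout.WriteString("public key: " + id.Recipient().String() + "\n")
}
```

If I remember right, the CLI equivalent is roughly `age-keygen | age -p > key.age`, and age prompts for the passphrase when that file is later passed to -i.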
But in the "age binary replaced" threat scenario, isn't just gameover even with hardware keys? Eg. the same exact age code with an extra call after the print password to stdout that uploads it somewhere?
The difference with hardware keys is that the primary key can’t be exfiltrated, and only one secret can be decrypted per physical touch, so rotation and recovery are possible without invalidating all secrets.
I mean, most users don't install it as root anyway, but even then a GUI application can drop a different age binary earlier in the user's PATH. Or change their shell. Or a million other things.
There really isn't a point to defending against code running unsandboxed on a single-user machine.
I password-protect my key for the sole threat model of me physically losing my device. I'm aware this doesn't fully protect against threat models where someone takes remote control of my device, but it at least requires significantly more effort on their part than just scanning the filesystem for private keys.