Someone at my company generated the keys. They then put them on a network share without any security restrictions. They've been there for 5 years with no rotation. At least 2 are checked into source control.
We have a very similar issue.
All our databases have the password Qwerty1234.
The Android keystore is checked into the repository, with the access key in scripts.
Security keys for external services are also checked into the repository.
Some production external services are still managed by devs who left the company long ago.
Hehe. Less than 8 years ago I asked for help to add a column to a database at a company I was helping. This was a few days after they met me for the first time.
The company solved this by giving me a root username and password that worked on every single important database in the company, at least every customer database.
I had to beg them to create a somewhat restricted account.
The same company was, however, deeply sceptical of all kinds of remote work. The security equivalent of penny wise, pound foolish, I guess :-]
At a previous job they refused to give me database access and instead insisted I ask them whenever I needed any columns added or altered. I did, however, have access to the code to do my job, and... the MySQL root credentials were committed to the repo.
To keep the charade up I sent one of every 20 requests to them to do for me.
I end up doing something like that whenever I’m granted root. I create a limited account, then delete the root or, if it’s shared, ask that the password be changed.
It's more than an absurdity: you have to do that to protect yourself from embarrassing mistakes. It prevents you from accidentally deleting or modifying the system in a way that's difficult to restore. If that happens, any security issue that existed becomes purely abstract and academic.
At one of my past jobs there was a fingerprint reader system at the office entrance. Almost 6 years later, I was still able to enter the office with my fingerprint.
At one of my past jobs there was a fingerprint reader system to enter the office. It didn't reliably recognise employees' fingerprints, so after a while people settled on the solution of keeping a large brick next to the door, which was used to wedge the door open during the daytime once the first person managed to get in each morning.
Skepticism regarding remote work often comes from the fact that a company is not sure whether the employees work like they should (especially for larger companies).
If they slack off, at least they do it in the office and not freely at home (imagine the possibilities!)
In a boneheaded move I accidentally committed my SendGrid creds to GitHub. GitHub alerted me pretty quickly, but by then my SG account was already sending thousands of automated spam messages. Those automated scammer systems are FAST.
Not particularly germane to the discussion, but I was really disappointed in how SendGrid handled things. I notified them immediately and rotated all API tokens, but they could not turn it off, so the spammer sent messages for days and eventually my SG account got suspended.
So this has happened at more than one company that I've been at, only difference is that these were AWS keys and used to mine bitcoin. AWS was actually pretty good about it, we rotated the keys as quickly as we could and they dropped all the charges.
I once worked for a co-founder who, despite all my warnings about not committing infra credentials to source control, still went ahead and explicitly committed credentials to public source control because "the developer experience was better".
The CEO was not pleased with the 30K (or maybe it was 60K) bill... and I just pointed at the CTO and was like "I fought this battle and was overruled"
The most frustrating part of this is that the developer experience isn't better when you check credentials into source control.
It seems convenient until the credentials change (which they ought to now and then). Then when you check out an old revision of the project, it is broken. You end up having to copy and paste the new creds back in time and it's finicky as hell.
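The usual low-friction alternative is to read credentials from the environment at startup, so the repo never contains them and old revisions keep working. A minimal sketch in Python; the variable names (`DB_USER`, `DB_PASSWORD`) are made up for illustration:

```python
import os

def get_db_credentials():
    """Read DB credentials from the environment instead of source control.

    Failing loudly on a missing variable beats silently falling back to a
    hard-coded default (which is how creds end up committed in the first place).
    """
    creds = {}
    for var in ("DB_USER", "DB_PASSWORD"):
        value = os.environ.get(var)
        if value is None:
            raise RuntimeError(f"Missing required environment variable: {var}")
        creds[var] = value
    return creds

# Example values, standing in for what a .env file or secret manager would set.
os.environ.setdefault("DB_USER", "app")
os.environ.setdefault("DB_PASSWORD", "example-only")
print(get_db_credentials()["DB_USER"])
```

Rotating a credential then means updating one environment variable (or one entry in a secret store), with no commits and no broken old revisions.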
GitHub monitors public commits for service secrets. Not an excuse to commit secrets, but it is a bit of a safety net.
> When you push to a public repository, GitHub scans the content of the commits for secrets. If you switch a private repository to public, GitHub scans the entire repository for secrets.
> When secret scanning detects a set of credentials, we notify the service provider who issued the secret. The service provider validates the credential and then decides whether they should revoke the secret, issue a new secret, or reach out to you directly, which will depend on the associated risks to you or the service provider.
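Conceptually, that kind of scanning is pattern matching against known token shapes. A toy sketch in Python, using two publicly documented formats (AWS access key IDs and GitHub personal access tokens); real scanners ship hundreds of provider-specific patterns plus validity checks:

```python
import re

# Two well-known token shapes, purely illustrative of the technique.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_pat": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
}

def scan_text(text):
    """Return (pattern_name, matched_string) pairs for likely secrets."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits

sample = "aws_key = 'AKIAABCDEFGHIJKLMNOP'  # oops"
print(scan_text(sample))
```

Running something like this in a pre-commit hook catches the mistake before the push, rather than relying on the provider's after-the-fact notification.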
Luckily I was in charge of setting up our remote repositories and made sure that by default they are all private repos, otherwise this would have happened without a doubt.
This would be hilarious if it were not so telling about the state of security in general (not at this company in particular; I'm certain many, if not most, companies do the same...).
We make no distinction between dev keys and production. Consider them production.
Since it's of interest to HN, I am working on educating our very small team on how keys should be protected and used. I am the youngest developer by about 15 years. It's a very rural company and it often feels like all learning and passion for development stalled around 2005. It's a company that gave me a chance to grow into a development role with no previous experience so I feel indebted to try my best to keep the lights on.
I don't judge too harshly; anyone with black-and-white principles on these matters most likely has never worked in any other industry... all you can do is your best to steer the ship and convey the downsides.
I think it's important too because it helps us understand how much friction people will tolerate.
In many cases, even a small amount of friction will cause people to stop functioning completely; I recently tried setting up Vault and it was a nightmare, so I understand why people avoid picking it up.
That doesn't mean we should not try; we have to become the advocates, arbiters and helpers for those systems.
One thing I have noticed is that “security conscious” people are very good at criticizing things and pointing out flaws, but they are not as good at proposing clear and workable solutions that don’t add a huge burden to users.
It should be no surprise that people do insecure stuff under deadline pressure.
> are very good at criticizing things [but] not as good at proposing clear and workable solutions
This is definitely true, but not actually surprising. It's much easier to notice that, say, a violin performance or plumbing repair is done very badly, than it is to actually do it correctly yourself.
Which also leads to a great deal of exasperation when people (either apparently or actually) don't even notice that what they're doing is insecure. There's a big difference between "yeah, it's broken, but it'd be a huge pain to fix and we'd probably get it wrong anyway, so we'd rather take our chances" versus "there is no problem".
Using different keys for dev and prod might still be a good idea. If either one is compromised, there’s a chance the other is safe. You can still rotate them regularly, and immediately if either one is compromised.
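In code, that separation can be as simple as selecting a per-environment key at startup, so a leaked dev key never grants prod access. A hypothetical Python sketch; the names (`APP_ENV`, `DEV_API_KEY`, `PROD_API_KEY`) are invented for the example:

```python
import os

def select_api_key(env=None):
    """Pick the API key for the current environment.

    Each environment gets its own key, so compromising (or rotating) one
    never affects the other.
    """
    env = env or os.environ.get("APP_ENV", "dev")
    if env not in ("dev", "prod"):
        raise ValueError(f"Unknown environment: {env}")
    var = "PROD_API_KEY" if env == "prod" else "DEV_API_KEY"
    key = os.environ.get(var)
    if key is None:
        raise RuntimeError(f"{var} is not set; each environment needs its own key")
    return key

# Example values only; real keys would come from a secret store.
os.environ["DEV_API_KEY"] = "dev-example-key"
os.environ["PROD_API_KEY"] = "prod-example-key"
print(select_api_key("dev"))
```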