Trust but verify. AWS has plenty of logging capabilities. I'm not saying that all developers should have unfettered access. But someone has to be trusted.
Logging doesn't help you when your business has to shut down because someone took over your account and deleted everything.
Separation of access is important and _required_. Developers don't need access to prod, admins maintaining the infrastructure don't need access to the directory, IDM doesn't need access to either QA or prod.
Developers do need full access in an environment to properly test - but that environment should be basically hermetically sealed from the rest of the company's infrastructure. So even if they do screw up, the whole business won't be affected.
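To be concrete, that kind of separation is cheap to express in IAM. A minimal sketch with boto3, assuming resources are tagged by environment (the policy name and tag key are made up, and the `ResourceTag` condition doesn't cover every AWS service, so treat this as an illustration rather than a drop-in guardrail):

```python
import json
import boto3

iam = boto3.client("iam")

# Hypothetical guardrail: developers keep full access in dev, but any
# action against resources tagged env=prod is explicitly denied.
deny_prod = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyProdResources",
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                "StringEquals": {"aws:ResourceTag/env": "prod"}
            },
        }
    ],
}

iam.create_policy(
    PolicyName="deny-prod-for-developers",  # made-up name
    PolicyDocument=json.dumps(deny_prod),
)
```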
If someone took over your account and deleted everything, and you couldn't get any of it back, you weren't taking care of the "availability" third of security. I agree that developers don't need access to everything, but I completely disagree that they should have no access to prod.
The games of phone tag and "try typing this" that happen during prod issues are a waste of everybody's time, and I fully believe that the people who write the code should be the ones with both the responsibility of the pagers and the ability to fix the code they've deployed. Everybody is happier, and the job gets done more quickly, when the job gets done by the people most qualified to do it (because they wrote it), and when they bear the consequences of writing bad code.
The environment needs to be set up to be forgiving of mistakes, yes, but that's easily done these days and should never result in loss of data if the infrastructure is properly automated. If giving production access means your developers can screw something up, then your admins can just as easily screw something up. Create environments that forgive these failures because they'll happen one way or another.
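"Forgiving" doesn't have to mean elaborate, either. Versioning and automated backups are basically one-liners; a sketch with boto3 (the bucket name is a placeholder):

```python
import boto3

s3 = boto3.client("s3")

# With versioning on, an accidental (or malicious) delete leaves the old
# object versions behind, so "they deleted everything" is recoverable.
s3.put_bucket_versioning(
    Bucket="example-prod-data",  # placeholder bucket name
    VersioningConfiguration={"Status": "Enabled"},
)
```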
There are already examples of companies that have folded overnight after losing creds and having everything deleted.
Removing root is not a trust issue - it’s a security surface area issue. You increase the number of audit points and attack options by at least an order of magnitude (1 admin : 10 devs).
In a small shop this might be acceptable, however in a large org it’s plain old insane.
If you believe that devs require root then that’s an indicator that your build/test/deploy/monitor pipeline is not operating correctly.
> If you believe that devs require root then that’s an indicator that your build/test/deploy/monitor pipeline is not operating correctly.
For one, I never said anything about root. I'm not sure anybody should have root in production, depending on the threat model. What I am saying is that the people who wrote the proprietary software being operated should be the ones on the hook for supporting it, and should be given the tools to do so, since they're the most aware of its quirks, design trade-offs, etc.
That means not just CI/CD and monitoring output, but machine access, network access, anything that would be necessary to diagnose and rapidly respond to incidents. That almost never requires root.
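As an example of "tools to diagnose without root": most of what an on-call dev needs during an incident is read access to logs and metrics, which can be granted without a shell on the box at all. A rough sketch with boto3 (the log group name is hypothetical):

```python
import time
import boto3

logs = boto3.client("logs")

# Pull the last 15 minutes of errors from the service's log group --
# no SSH, no root, just scoped read access for the on-call developer.
resp = logs.filter_log_events(
    logGroupName="/myapp/prod/api",            # hypothetical log group
    startTime=int((time.time() - 900) * 1000),  # ms since epoch
    filterPattern="ERROR",
)
for event in resp["events"]:
    print(event["message"])
```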
> Not getting root on your own machine as a developer?
was the origin of this thread, and there are tons of places where developers are not permitted root access to their own dev machines. We are not all talking about prod instances.
I have this conversation with my own counterparts in network / platform / infosec / application teams (I am an app dev), and in some cases the issue is conflated because dev environments are based on a copy of prod, and the compromise of such prod-like data sources would be almost as catastrophic as an actual prod compromise.
If this is your environment, then don't be that guy and make it worse by changing the subject from dev to prod. Don't conflate the issue. Dev is not prod and it should not have a copy of sensitive prod data in it. If your environment won't permit you to have a (structural-only) copy of prod that you can use to do your development work unfettered, with full access, then you should complain about it, or tell your devs to complain if it affects their work and isn't such a big deal for yours.
Developers write factories, mocks, and stubs all the time to isolate tests from confounding variables such as a shared dev instance that is temporarily out of commission for some reason, and so they don't have to put prod data samples into their test cases, and in general for portability of the build. Then someone comes along and says "it would be too expensive to make a proper dev environment with realistic fake data in it, just give them a copy of Prod" and they're all stuck with it forever henceforth.
It's absolute madness, sure, but it's not misrepresented. This is a real problem for plenty of folks.
You're assuming that a small company has a separate person solely dedicated to infrastructure.
Yes I have an AWS certification and on paper I am qualified to be an "AWS Architect". But I would be twiddling my thumbs all day with not enough work to do and would die a thousand deaths if I didn't do hands-on coding.
Yes, that sounds like someone who doesn't want to have to wait two weeks to get approvals to create resources in a dev environment.
But as the team lead, I already had the final say over what code went into production and could have done all kinds of nefarious acts if I desired. Yes we had a CI/CD process in place with sign-offs. But there was nothing stopping me from only doing certain actions based on which environment the program was running in.
I've seen what happens to people who are "just developers" that spend all their life working in large companies where they never learn anything about database administration, Dev ops, Net ops, or in the modern era - cloud administration. They aren't as highly valued as someone who really is full stack - from the browser all the way down to architecting the infrastructure.
Given that option, why wouldn't I choose the company that lets me increase my marketability and gives me hands-on experience in an enterprise environment, instead of being a "paper tiger" who has certifications but no experience at scale?
That's what made things more infuriating at the company I left. I came in as the lead developer knowing that if I wanted to get things done, I would have to ingratiate myself to the net ops people. I could fire off a Skype, ask for what I needed on prem (VMs and hard drive space mostly) and by the time I sent the ticket request as a formality, it was already done.
But then they decided to "go to the cloud" and instead of training their internal network ops people and having them work with the vendor who was creating the AWS infrastructure, the vendor took everything over and even our internal folks couldn't get anything done without layers of approvals.
So I ended up setting up my own AWS VPC at home, doing proof of concepts just so I could learn how to talk the talk, studied for the system administrator cert (even though I was a developer) and then got so frustrated it was easier to change my environment than to try to change my environment.
So now they are spending more money on AWS than they would have in their colo because no developer wants to go through the hassle of going through the red tape of trying to get AWS services and are just throwing things on EC2 instances.
In today's world, an EC2 instance for custom-developed code is almost always suboptimal when you have things like AWS Lambda for serverless functions, Fargate for serverless Docker containers, and dozens of other services that let you use AWS to do the "undifferentiated heavy lifting".
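The gap in effort is real: for a small piece of custom code, the entire "server" can be a handler like the sketch below (the function name and event shape are illustrative, assuming an API Gateway proxy integration), versus an EC2 instance you have to size, patch, and babysit.

```python
import json

def handler(event, context):
    """Minimal AWS Lambda handler, e.g. invoked behind API Gateway."""
    # Read an optional ?name=... query parameter; default to "world".
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```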