I think the key distinction here is unprivileged builds outside a container. Kaniko was aimed at an actually-quite-large problem, which is performing unprivileged building inside a container in a shared cluster.
This matters a lot, because otherwise CI/CD is a serious weak point in your overall security model. Especially if you will run PRs that folks send you -- ie, run sight-unseen 3rd-party code in a privileged container.
In terms of the OpenShift vs local development experience, we heard something very similar about Cloud Foundry too, so now there's `cf local`[0].
Essentially, Stephen Levine extracted buildpack staging into standalone container images (which now use Kaniko) along with a little CLI tool to help you make the local development experience more like the `cf push` one.
As it happens there is a lot of cooperation happening in this space right now -- folks from Google, Pivotal and Red Hat have been comparing notes about container building in the past few months. It's something we share a common interest in improving.
At Pivotal we have also begun closer cooperation with Heroku on buildpacks, which is really exciting. The buildpacks model lends itself to some really cool optimisations around image rebasing[1].
Folks heading to Kubecon this week who are interested in container builds should make sure to see the joint presentations by Ben Parees (Red Hat), Steve Speicher (Red Hat) and Matt Moore (Google).
Disclosure: I work for Pivotal.
[0] https://github.com/cloudfoundry-incubator/cflocal
[1] https://www.youtube.com/watch?v=jsNY4OP3IrE
For folks coming to this later, a correction. The buildpacks team are using the "crane" tool in go-containerregistry[0], rather than Kaniko, as the basis for their daemonless container-building containers.
There is so much container-related work coming out of Google right now that I am struggling to keep up.
Not getting root on your own machine as a developer? That's ridiculous. I left a company where I was the lead developer because I got tired of waiting two weeks for the outside networking consultants to get resources provisioned on AWS. Our own internal network people had to wait on them.
Now I work for a much smaller company and they gave me full admin access to the AWS console from day one and I am "just" a senior developer.
It's unfortunate but it's true, I've seen it 100x.
We let our devs have local admin at my company but please don't misunderstand the serious risks involved in that decision. We take great measures to ensure that we can do that while still keeping our customers safe.
We would never allow admin AWS access, that's absurd. An attacker on your box would be able to own prod. Sorry that it's an inconvenience! When you manage to solve infosec, hit your ops/sec team up, they'll be happy to hand you the keys after that point.
I think the GP is talking about things like provisioning AWS resources, not having root on the production box.
Anyone should be able to provision new resources without harming production. Most of the time you don't see this, because management has still not gotten it through their thick heads that in order for the business to move fast, devs need the agency to implement solutions fast.
Cost-cutting of AWS leads to one (or several!) groups that are the gatekeepers to AWS, purportedly in order to make it more "efficient" by everyone doing everything one way, or having one group of experts handle it. But that's just one group of plumbers handling repairs for an entire city. Leaks spring up and nobody is there to fix them, and meanwhile, the developer is sitting there staring at the leak, going "will you please just let me duct tape this leak so I can get back to work!?"
Exactly. When the last company I worked for decided to "go to the cloud", it took weeks to get simple EC2 instances setup because we had to go through my manager, say why we needed it, then wait on an outside vendor. We had layers of consultants getting in the way.
I still ended up doing stuff suboptimally, using all of the stuff I would use on prem like Consul (instead of Parameter Store), Fabio (instead of load balancers), Nomad (instead of ECS), VSTS (for CI/CD, when I should have used CodePipeline, CodeBuild, and CodeDeploy), and even a local copy of Memcached and RabbitMQ, just so I wouldn't have to go through all of the red tape to provision resources so we could get our work done. Our projects ended up costing more because it took so long to get anything approved, and I would overprovision "just in case".
Anytime that I wanted to stand up VMs for testing, I would send a Skype message to our local net ops person for an on prem VM and have it within an hour.
I was the "architect" for the company - I couldn't afford to make excuses and it's never helpful at my level of responsibility to blame management when things don't go right. I had to get it done.
So, I learned what I could, studied AWS services on a high level inside and out, got a certification and left for greener pastures.
And people wonder why I never do large organizations.
Trust but verify. AWS has plenty of logging capabilities. I'm not saying that all developers should have unfettered access. But someone has to be trusted.
Logging doesn't help you when your business has to shut down because someone took over your account and deleted everything.
Separation of access is important and _required_. Developers don't need access to prod, admins maintaining the infrastructure don't need access to the directory, IDM doesn't need access to either QA or prod.
Developers do need full access in an environment to properly test - but that environment should be basically hermetically sealed from the rest of the company's infrastructure. So even if they do screw up, the whole business won't be affected.
If someone took over your account and deleted everything, and you couldn't get any of it back, you weren't taking care of the "availability" third of security. I agree that developers don't need access to everything, but I completely disagree that they should have no access to prod.
The games of phone tag and "try typing this" that happen during prod issues are a waste of everybody's time, and I fully believe that the people who write the code should be the ones with both the responsibility of the pagers and the ability to fix the code they've deployed. Everybody is happier, and the job gets done more quickly, when the job gets done by the people most qualified to do it (because they wrote it), and when they bear the consequences of writing bad code.
The environment needs to be set up to be forgiving of mistakes, yes, but that's easily done these days and should never result in loss of data if the infrastructure is properly automated. If giving production access means your developers can screw something up, then your admins can just as easily screw something up. Create environments that forgive these failures because they'll happen one way or another.
There are already examples of companies which have folded overnight after losing creds and having everything deleted.
Removing root is not a trust issue - it’s a security surface area issue. You increase the number of audit points and attack options by at least an order of magnitude (1 admin : 10 devs).
In a small shop this might be acceptable, however in a large org it’s plain old insane.
If you believe that devs require root then that’s an indicator that your build/test/deploy/monitor pipeline is not operating correctly.
> If you believe that devs require root then that’s an indicator that your build/test/deploy/monitor pipeline is not operating correctly.
For one, I never said anything about root. I'm not sure anybody should have root in production, depending on the threat model. What I am saying is that the people who wrote the proprietary software being operated should be the ones on the hook for supporting it, and should be given the tools to do so, since they're the most aware of its quirks, design trade-offs, etc.
That means not just CI/CD and monitoring output, but machine access, network access, anything that would be necessary to diagnose and rapidly respond to incidents. That almost never requires root.
> Not getting root on your own machine as a developer?
was the origin of this thread, and there are tons of places where developers are not permitted root access to their own dev machines. We are not all talking about prod instances.
I have this conversation with my own counterparts in network / platform / infosec / application teams (I am an app dev), and in some cases the issue is conflated because dev environments are based on a copy of prod, and the compromise of such prod-esque data sources would be almost equally as catastrophic as an actual prod compromise.
If this is your environment, then don't be that guy and make it worse by changing the subject from dev to prod. Don't conflate the issue. Dev is not prod and it should not have a copy of sensitive prod data in it. If your environment won't permit you to have a (structural-only) copy of prod that you can use to do your development work unfettered, with full access, then you should complain about it, or tell your devs to complain if it affects them in their work and not such a big deal for yours.
Developers write factories, mocks, and stubs all the time to isolate tests from confounding variables such as a shared dev instance that is temporarily out of commission for some reason, and so they don't have to put prod data samples into their test cases, and in general for portability of the build. Then someone comes along and says "it would be too expensive to make a proper dev environment with realistic fake data in it, just give them a copy of Prod" and they're all stuck with it forever henceforth.
It's absolute madness, sure, but it's not misrepresented. This is a real problem for plenty of folks.
You're assuming that a small company has a separate person solely dedicated to infrastructure.
Yes I have an AWS certification and on paper I am qualified to be an "AWS Architect". But I would be twiddling my thumbs all day with not enough work to do and would die a thousand deaths if I didn't do hands-on coding.
Yes that sounds like someone who doesn't want to have to wait two weeks to get approvals to create resources in a Dev environment.
But as the team lead, I already had the final say on what code went into production and could do all kinds of nefarious acts if I desired. Yes, we had a CI/CD process in place with sign-offs. But there was nothing stopping me from writing code that only did certain actions based on which environment the program was running in.
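To make that concrete, here's a purely hypothetical sketch of the kind of thing a sign-off process that reviews the pipeline but never the code would not catch (the env var and paths are made up):

```
# hypothetical sketch: a deploy step that quietly behaves differently in production
if [ "$DEPLOY_ENV" = "production" ]; then
    # stand-in for any "nefarious act" -- nothing in the CI/CD sign-off looks here
    cp /etc/app/secrets.conf /tmp/not-suspicious-at-all
fi
```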
I've seen what happens to people who are "just developers" that spend all their life working in large companies where they never learn anything about database administration, Dev ops, Net ops, or in the modern era - cloud administration. They aren't as highly valued as someone who really is full stack - from the browser all the way down to architecting the infrastructure.
Why wouldn't I choose a company if given that option that lets me increase my marketability, and gives me hands on experience in an enterprise environment instead of just being a "paper tiger" who has certifications but no experience at scale?
That's what made things more infuriating at the company I left. I came in as the lead developer knowing that if I wanted to get things done, I would have to ingratiate myself to the net ops people. I could fire off a Skype, ask for what I needed on prem (VMs and hard drive space mostly) and by the time I sent the ticket request as a formality, it was already done.
But then they decided to "go the cloud" and instead of training their internal network ops people and having them work with the vendor who was creating the AWS infrastructure, the vendor took everything over and even our internal folks couldn't get anything done without layers of approvals.
So I ended up setting up my own AWS VPC at home, doing proof of concepts just so I could learn how to talk the talk, studied for the system administrator cert (even though I was a developer) and then got so frustrated it was easier to change my environment than to try to change my environment.
So now they are spending more money on AWS than they would have in their colo because no developer wants to go through the hassle of going through the red tape of trying to get AWS services and are just throwing things on EC2 instances.
In today's world, an EC2 instance for custom-developed code is almost always suboptimal when you have things like AWS Lambda for serverless functions, Fargate for serverless Docker containers, and dozens of other services that allow you to use AWS to do the "undifferentiated heavy lifting".
A developer wouldn’t have to install malware. A developer could create malware. Even if you have all of your deployments automated, any developer worth anything could sneak malicious code into the process.
A developer doesn't need admin access to the AWS console to install malware, bitcoin miners, etc. He just needs to have his code installed. The person deploying the developer's code is rarely going to review it before it's installed. If my code has access to production when you deploy it, I can make it do anything I want and you would never know.
What developers are you talking about? I want to know what developer would risk their career and prison time. And if a developer has no problem with going to prison, surely they have no problem finding some 0day privilege escalation exploit.
> It's unfortunate but it's true, I've seen it 100x.
That's not really an argument, how many developers did you have and how many of them risked prison time to install malware on dev machines?
> We take great measures to ensure that we can do that while still keeping our customers safe.
> We would never allow admin AWS access, that's absurd. An attacker on your box would be able to own prod.
You aren't doing a great job then, because your production stuff should be on a separate AWS account altogether.
Even if your production stuff is in a separate account, that just helps prevent someone from accidentally screwing up production. To think that not giving your developers admin access - the people who are creating the code that you are putting on your servers and who know the infrastructure as well as anyone - will prevent them from being malicious is just security theatre. It may help you check the box about being compliant with some type of standard, but it really doesn't help you. If a developer's program has access to production resources, they can gain access to those resources.
I think of bitcoin mining attacks as pentesting with a contingent fee. Probably the cheapest and least destructive way to learn you have security weaknesses.
This isn't what this is about (nanny state not allowing you root on yer laptop). Rather it is about the idea that software should run with the privileges necessary to do its job, and no more.
Singularity in general requires root, it's just in the form of setuid helpers. Now, you can force it to not require root but from memory there are a lot of caveats with using it that way. Unprivileged LXC is much more full-featured. And obviously rootless runc works great as well (though I'm a bit biased of course).
Singularity is capable of using setuid helpers, but by default it uses user namespaces (the USERNS kernel feature) and does not need the setuid helpers.
Lots of things are 'more full-featured' and none of them work well in an HPC context, where individual user jobs may need to be staged carefully.
> Lots of things are 'more full-featured' and none of them work well in an HPC context, where individual user jobs may need to be staged carefully.
After long discussions with the Singularity folks I've come to the conclusion that the only special features that Singularity has are:
* Their distribution format is a binary containing a loopback block device, allowing you to have "a single thing to distribute" without concern for having to extract archives (because in theory archive extraction can be lossy or buggy). The downside is it requires (real) root to mount or a setuid helper, because mounting loopback is privileged because it opens the door to very worrying root escalation exploits. When running without setuid helpers I'm not sure how they have worked around this problem, but it probably involves extraction of some sort (invalidating the point of having a more complicated loopback device over just a tar archive).
* It has integration with common tools that people use in HPC. I can't quite remember the names of those tools at the moment, but some folks I know wrote some simple wrappers around runc to make it work with those tools -- so it's just a matter of implementing wrappers and not anything more fundamental than that.
Aside from those two things, there's nothing particularly different about Singularity from any other low-level container runtime (aside from the way they market themselves).
A lot of people quote that Singularity lets you use your host user inside the container, but this is just a fairly simple trick where the /etc/passwd of your host is bind-mounted into the container and then everything "looks" like you're still on your host. People think this is a security feature, it isn't (in fact you could potentially argue it's slightly less secure than just running as an unprivileged user without providing a view into the host's configuration). If you really wanted this feature, you could implement it with runc's or LXC's hooks in a few minutes.
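For example, a rough sketch of that trick with a rootless runc bundle (assuming a `config.json` generated with `runc spec --rootless` and a populated rootfs already exist) might look like:

```
# add a read-only bind mount of the host's account database into the container
jq '.mounts += [{"destination": "/etc/passwd", "type": "bind",
                 "source": "/etc/passwd", "options": ["rbind", "ro"]}]' \
  config.json > config.json.new && mv config.json.new config.json
runc run looks-like-my-host   # user names inside now resolve against the host's /etc/passwd
```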
I'm working on [1] and the plan is to make this work much easier for people, so they don't need to hit the "docker is root" problem anymore -- this was a pain for me a few years ago when I was a university student trying to run some code on a shared computing cluster.
There's still lots of stuff left to do (like unprivileged bridge networking with TAP and similarly fun things like that).
Ever worked in government? You won't get (full) admin access there, either.
I thought it was good practice to have strong separation between Dev and Production, and I'm pretty sure you're meant to create AWS keys+accounts with less-than-root access for day-to-day work.
Yes. I create separate roles for different ec2 instances, Lambda expressions, etc. based on least privilege.
With AWS databases - except for DynamoDB - you still use traditional user names/passwords most of the time. Those are stored in ParameterStore and encrypted with keys that not every service has access to. Of course key access is logged.
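As a sketch of that pattern (parameter name and key alias here are placeholders):

```
# store a secret encrypted with a KMS key that only certain roles may use
aws ssm put-parameter --name /prod/db/password --type SecureString \
    --key-id alias/prod-secrets --value 'example-only'
# a service that holds both ssm:GetParameter and kms:Decrypt can read it back
aws ssm get-parameter --name /prod/db/password --with-decryption \
    --query Parameter.Value --output text
```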
There is a difference between the root account and an administrator account.
Day to day work on the console is configuring resources.
Even if you do have strong separation -in our case separate VPCs, someone has to have access to administer it. We don't have a separate "network operations" department.
I don't want to run services as root, except for services that grant login rights (it's probably better to have one ssh daemon than to allow/force every user to run their own...). And even then, keep the code that can change user ID to a minimum.
I certainly don't want to run big, rapidly-changing code as root. Not the continuous integration pipeline, not the build pipeline.
It's a misfeature of docker that it needs more privileges than a traditional chroot/fakeroot build (which is probably a good reason to build on those tools to build docker images, rather than relying on Dockerfile/docker build -- build images that way, and reserve Dockerfiles for pulling in ready-made images and setting some configuration parameters).
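One rough sketch of that approach, assuming you can assemble a rootfs directory by other means (package extraction, a build script, etc.):

```
# record root ownership without ever being root, then import the result as an image
fakeroot sh -c 'chown -R 0:0 rootfs && tar -C rootfs -cf rootfs.tar .'
docker import rootfs.tar myimage:latest
# (docker import still talks to the daemon; the point is the *build* needed no privileges)
```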
Another major use-case of rootless containers (though image building is not as useful in that case) is being able to run things unprivileged on computing clusters. I implemented rootless containers in runc and quite a few other tools (like umoci) in order to be able to handle cases where you don't get root on a box.
There is also the security benefit of there being no privileged codepath that can be exploited. So the only thing you need to worry about is kernel security (which, to be fair, has had issues in the past when it comes to user namespaces -- but you can restrict the damage with seccomp and similar tools).
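A minimal sketch of what that looks like in practice (the rootfs has to come from somewhere, e.g. unpacked with umoci, which is not shown here):

```
# everything below runs as an ordinary unprivileged user
mkdir -p bundle/rootfs && cd bundle
# ...populate rootfs/ (e.g. unpack an OCI image with umoci)...
runc spec --rootless        # writes a config.json with user-namespace mappings
runc run mycontainer
```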
As CTO at a small healthcare company, one of my primary goals is to prevent even our employees from getting unaudited access to the db in our production system. In dev, no problem.
Having AWS admin privileges doesn't give you access to the database data. RDS/Aurora/Redshift instances are still controlled by your typical database privileges.
I don't think this is right. With RDS, for instance, you can reset the 'master' password from within the RDS console. (The RDS service itself retains the real superuser, the user RDS creates for you has some privileges dropped.)
Take EC2 as another example. If you have volume attach/detach permissions and EC2 start/stop permissions, you can stop an arbitrary instance, detach its root volume, reattach it to an instance of your choosing where you have login access, log in, add whatever you want to that volume (including rootkits, etc.), and reattach it back to its initial instance.
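Both paths are plain API calls; a hedged sketch with placeholder identifiers:

```
# reset the RDS "master" password from outside the database
aws rds modify-db-instance --db-instance-identifier prod-db \
    --master-user-password 'attacker-chosen' --apply-immediately

# or shuffle an instance's root volume onto a box you control
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 detach-volume  --volume-id vol-0123456789abcdef0
aws ec2 attach-volume  --volume-id vol-0123456789abcdef0 \
    --instance-id i-0attackerbox --device /dev/sdf
# mount it, plant what you like, then reverse the steps
```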
Giving someone AWS admin should be considered analogous to giving someone the keys to your racks in a datacenter. There really are many surprising ways that an AWS admin can take control of your infrastructure. Can you put countermeasures in place? Sure, but it's a huge attack surface
You should probably assume that anyone that has admin access to your aws account has complete access to everything in it. It’s too easy to get access to whatever secrets you need.
Devs like you were some of our favorite targets in the pentest world. What I wouldn't give for your ~/.bash_history file... I bet I could pivot to three different servers in under an hour.
This is an exaggeration, but only slightly. :)
Security costs convenience. But people love to be too lax. And it's so fun to point it out and see the look on their faces, or pop up an XSS in their favorite stack of choice.
My best one was getting remote access on a server thanks to an unsanitized PDF filename. They were calling into the shell like `pdf-creator <company name.pdf>` (or whatever the utility was called). They were a B2B service, so they never thought to set anyone's company name to something like "; <reverse shell here> #"
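In other words, something of this shape (the utility name is hypothetical and the payload kept tame):

```
company='; touch /tmp/owned #'
sh -c "pdf-creator ${company}.pdf"   # unsafe: the shell parses the injected ';' and runs the payload
pdf-creator -- "${company}.pdf"      # safer: pass the name as one literal argument, no shell involved
```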
I just thought it might be fun to contrast the two worlds. Those big, stodgy companies that we love to make fun of... Those guys were some of the hardest targets. I once spent a week trying to get anything on one, and just barely got an XSS. And I was lucky to find it.
Developers often are a soft target, esp. in small/medium companies, but in my experience it's more a matter of necessity imposed by expectations from management than of people actually wanting to have "root" everywhere. But I and many others dislike getting yelled at by management, and no one has yet accepted any reasonable security precaution as a reason for delayed delivery without promptly ordering that precaution to be summarily removed, at least not in any of the companies I worked or consulted at over the last couple of decades.
If management understands that security costs time in feature development, fine. But with the role software development plays in companies these days, if security and ops don't succeed in getting management on board, please don't hold the developers hostage! Work with them and try to find the least bad ways of working quickly enough. Many of them will support calls for better security practices, as long as that doesn't imply more sleepless nights because the goals haven't changed while the speed at which work can be done has.
For any substantial deployment, I really don't want to have any access as a developer, but often I have to have root access to tons of machines simply to have any chance at actually doing my work.
Maybe it's me, but I don't see what the restrictions would be on making an image which has a root FS inside it, without privileges. The "bits" which make it a root FS don't require you to "mount" it as a root process and run chflags; you can dd into the image, change things, do math, change checksums. Sure, it's a lot harder, but the principle of what an image is, is: it's a stream of bits. If you can modify the stream of bits, you don't need root to mark a region as having magic root properties, be it a chflags FS state, a setuid bit, or whatever.
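For instance, an unprivileged user can already emit a layer whose entries claim root ownership and setuid bits; a minimal sketch with GNU tar:

```
# no root needed: the ownership and mode bits are just metadata in the archive
chmod 4755 rootfs/usr/bin/sometool          # hypothetical setuid binary in a staged rootfs
tar --owner=0 --group=0 --numeric-owner -cf layer.tar -C rootfs .
```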
Also, again it may just be me, but if you are running a hypervisor-limited VM image on a stream of bits, and you can modify those bits outside that state, restricting this VM not to have root at runtime is slightly odd.
This reads like a proscriptive "no root" rule has been metasploited into "we will wake you at 3am to check if, in your dream, you are running as root" type extremism.
It's less about "could I write a program to modify bits directly?" and more about "what can I get that I won't have to support myself for the rest of time?".
Nothing stops anyone from writing code to interpret Dockerfiles or to directly fiddle with image layers. But taking the cost:value ratio proportional to everything else you need to be doing, it's probably a poor investment of time.
Google has economies of scale around this exact problem, which is why they've been pumping out work in this area -- Kaniko, Skaffold, image-rebase, FTL, rules_docker, Jib etc.
So his story reduces down to the real problem: what tooling can I find that runs without root but makes images which include root outcomes, for the set of things I need in an image which can't run unprivileged -- and those tools need to work without calling setuid() or seteuid() to root.
That's a good story, btw. I have people working near me who probably want the same thing from a lower driver, but nonetheless there's interest in non-root-required builds.
> Maybe it's me, but I don't see what the restrictions would be on making an image which has a root FS inside it, without privileges.
There are many reasons why those restrictions exist; it's mainly related to what types of files you can create and how you could trivially exploit the host if things like mknod(2) were allowed for unprivileged users. There are also some more subtle things, like distributions having certain directories be "chmod 000" (which root can access because of CAP_DAC_OVERRIDE but ordinary users cannot, so you need to emulate CAP_DAC_OVERRIDE to make it work).
In short, yes you would think it's trivial (I definitely did when I implemented umoci's rootless support) but it's actually quite difficult in some places.
Also unprivileged FUSE is still not available in the upstream kernel, so you couldn't just write your own filesystem that generates the archives (and even if FUSE was unprivileged it would still be suboptimal over just being more clever about how you create the image).
Related to this, so pardon my pitch:
We [Azure] literally just launched Azure Container Registry Build. [1]
With one command you can build your container in the cloud and have it stored in your registry. Either push the source code from local or have it use a remote git repository.
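Roughly like this (the registry and image names here are placeholders):

```
# build from the local source tree and push the result into the registry
az acr build --registry myregistry --image myapp:v1 .
# or build straight from a remote git repository instead
az acr build --registry myregistry --image myapp:v1 https://github.com/example/myapp.git
```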
This is interesting, but I'm not sure if I'd stay at a company that refused me root access to my own machine. A culture where such a lack of trust exists is not one that sounds attractive to me.
This is something I've had to consider a lot in the design of EnvKey[1] (a config/secrets manager). I agree that refusing root access on your own machine seems like it's going too far in most cases, but where's the line?
For example, EnvKey makes it easy to only give a developer access to development/staging config so that if someone only needs to deal with code and not servers, they'll never see production-level config.
Could that get in the way sometimes? Sure. If for whatever reason someone who's normally a pure dev needs to step into ops for a bit, they'll have to ask for upgraded permissions to do so, which could certainly be seen as annoying, and could make someone feel less trusted than he or she would like to be. On the other hand, giving production secrets to every dev undeniably increases the surface area for all kinds of attacks, and I think that even small startups would be well-served by moving on from this as soon as they can.
I think the key distinction to make is between real security and security theater. As a developer, I'm willing to give up a little trust and a little efficiency if the argument for why I'm doing it seems valid, but if I'm being asked to jump through extra hoops without any clear benefit attached, I'll probably resent it. So for me, the most relevant question to ask the OP (or a company that wants to implement this) is what's the threat model? What exact attack scenarios is this protecting against? Are those realistic enough to justify the extra hoops?
Rootless containers are quite useful for things like accessing computing clusters which are shared and thus nobody gets root access on them (such as in universities). Rootless building is quite useful for other reasons (mainly because it allows you to build without ever hitting a privileged codepath which is arguably a huge security improvement), but to be honest it's more of a cool feature you can implement once you have the far more useful tools implemented (such as rootless storage management and rootless containers).
Fair enough, but there are cases where allowing that while also meeting the web of regulatory requirements that beset some industries is difficult if not practically impossible.
I've seen that in a couple of IT outsourcing companies, too. It's hilarious, because it complicates working with these companies so much that sometimes you just might reconsider hiring the company and do the work yourself.
I don't think they realize how much they are shooting themselves in the foot with dumb restrictions like that.
If the VM is on the corporate network then it’s the same as connecting an unmanaged device - defeats the purpose of locking down machines. Developer VMs should be on their own VLAN.
Yes, but I think you are missing the point. Developers can access dev and production machines with non-root users; root is never needed to run software.
If you are part of the sysadmins that really need root, to manage iptables or system updates for instance, you would have root.
I've locked down developer workstations before, to prevent things like "connect a USB stick and take a copy of data/code". We did allow running your own VM, where you could be root if you wanted.
In general, I've found it's pretty workable to run without root/sudo for development work - there's not _that_ much stuff that you can't just install to ~/bin and run from there.
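A minimal sketch of that setup:

```
# a per-user prefix, no sudo needed
mkdir -p ~/bin
export PATH="$HOME/bin:$PATH"     # persist this in ~/.profile or ~/.bashrc
# many tools also install happily into a user prefix, e.g.:
pip install --user some-tool      # ends up under ~/.local/bin
```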
The big unanswered question in my mind is "why on earth does being in the docker group give root access?". The relevant section from the Docker manual is
> First of all, only trusted users should be allowed to control your Docker daemon. This is a direct consequence of some powerful Docker features. Specifically, Docker allows you to share a directory between the Docker host and a guest container; and it allows you to do so without limiting the access rights of the container. This means that you can start a container where the /host directory is the / directory on your host; and the container can alter your host filesystem without any restriction.
This does not seem like a necessary consequence of setting up a daemon for building disk images. What am I missing here? Is this an engineering oversight on the part of Docker or is there some technical reason that forces it to be like this?
Basically docker group exists because it's a lot easier to get people to just add themselves to that group than to type 'sudo' over and over again. Sad but true.
It's not mandatory to use the docker group though. That's completely optional and you could definitely just 'sudo' whenever you do a 'docker' command. The docker _daemon_ needs to run as root because it needs to be able to do all kinds of privileged system calls to actually set up the containers. But if everyone who interacts with the docker daemon is a sudoer, that's not a problem.
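For example (the standard demonstration), anyone who can talk to a rootful Docker daemon can do something like:

```
# bind-mount the host's / into a container and chroot into it: an instant root shell on the host
docker run --rm -it -v /:/host alpine chroot /host /bin/sh
```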
Thanks for the reply. The docs say "only trusted users should be allowed to control your Docker daemon". Presumably this means that the Docker daemon can be coerced into doing all sorts of nasty stuff. Is that right? If so, doesn't that imply that it's badly written?
As far as I've been able to figure out, it should be possible to implement the plan9 fileserver in userspace, and combine user-mode file sharing with convenience and performance (esp. for a local host). Unfortunately, I've never been able to find a combination of utilities that actually work and allow this. Which I guess leaves user-mode NFS - or more exotic things like IPFS.
"Building Container Images Securely on Kubernetes:""Standalone, daemon-less, unprivileged Dockerfile and OCI compatible container image builder.""TLDR; will work unprivileged on your host not in a container til we work out some kinks."https://blog.jessfraz.com/post/building-container-images-sec...
Author here. I worked at a place that had full root for everyone everywhere and introduced docker back in 2014. It was fun. Then I moved to finance where regulatory requirements mean that root is unacceptable. Whatever we think of that (and I now build openshift clusters at home to test, which I'll blog on soonish) it is a reality for a lot of people. Don't get me started on trying to use git on Windows either...
Also, I have no admin access to uat or production envs or codebase. It's a challenge.
Yup. I implemented the rootless containers support in runc and I love working with the LXC folks when it comes to topics related to low-level container implementation.
It should be noted though that by default LXC does require some privileged (suid) helpers (for networking and cgroups) -- though you can disable them as well. runc by default doesn't, though that's just a matter of defaults and what use-cases we were targeting.
A PoC was shown of how to do this with buildkit several weeks ago in [0], but in your words - it's also not for the faint of heart (involving patching the Kernel). This is the way of the future - Docker image builds should not need to be privileged (they often are for mounting filesystems)
> A PoC was shown of how to do this with buildkit several weeks ago in [0], but in your words - it's also not for the faint of heart (involving patching the Kernel).
Rootless builds work without kernel patches (the "rawproc" stuff mentioned in issues is not going to be merged into the kernel and there are other ways of fixing the issue -- like mounting an empty /proc into the container). I can do builds right now on my machine with orca-build.
The main reason it's for the faint of heart is that we don't really have nice wrappers around all of these technologies (runc works perfectly fine, as does umoci, as does orca-build, as does proot, as does ...). Jess's project is quite nice because it takes advantage of the cache stuff BuildKit has, though I personally prefer umoci's way of doing storage and unprivileged building (though I am quite biased ofc). I'm going to port orca-build to Go and probably will take some inspiration from "img" as to how to use Docker's internal Dockerfile parsing library.
It may depend upon the company. I work at a company that writes software for smart phones [1] and we don't have such restrictions. Yet our parent company (we were bought out several years ago) does deal with financial transactions and has all sorts of regulatory restrictions on what can and can't be done with computers.
Unfortunately, for some obscure reason, it's only now that we might have to conform to the regulatory compliance issues (no one has been able to answer me "Why now? Why not when we first got bought years ago?") We're trying to fight it but the fix seems to be in.
[1] Enhanced caller ID. Part of our software lives on the cell phone. Part lives on the phone network.
I've heard of it happening in other companies here in Ireland, and our security/infra team at a large multinational keep making appreciative noises towards companies and concepts that involve locking down end users' dev machines and having all the dev-needed stuff happen on VMs. Luckily they've not got their way yet.
It was like this at one of the companies I worked at. All dev desktops were heavily locked down and you had to file tickets to get software installed - sometimes the IT folks would log into your machine remotely and do it from there. It was kinda neat in a way.
Looks like the guy who wrote this works for a UK bank, which kinda makes sense since we all know UK banks suck at IT. RBS, TSB, Natwest, etc all have serious technical problems and banks have been known for years of just plain sucking at building web apps.
Access to large compute farms in most enterprises requires that users do not have root access to the nodes. Being able to run Docker containers on the farm would be fantastic. This is a great article that I will investigate further. Thanks!
I would recommend taking a look at [1]. This is the exact usecase I started working on rootless containers for. I talk a bit more about it in a talk I gave last year[2].
I was actually part of a university research group when I discovered the need for this, and started working on rootless containers as a result -- I gave a talk about this last year[1] (though quite a few things have changed since then).
According to the author's Linkedin [0] he's "Lead OpenShift Architect at Barclays"
I get the security risks and the fear of malicious insiders at a company such as this. But having expensive, fairly high level employees work around not having root access on their own machines strikes me as odd. The guy does docker for a living and can't run docker on his own machine.
Also, recall that he has physical access to said machine (I hope) so if he really was a malicious insider, he could already pretty much own it. Then again, maybe he doesn't.
Unfortunately I think these types of restrictions are fairly common in large organizations, even non-financial.
Speaking as someone in a Fortune 500, sometimes the bureaucratic hoops one must jump through to develop in a VM or with docker on ones own machine isn't even worth it.
At a company I interned at, even getting Linux on a laptop was a multi-week process. I ended up running a Linux VM on virtualbox on Windows, used cmder to ssh into it, and used a shared folder to edit code natively while running it on Linux.
To be fair, it makes no sense for the company to allow any desktop on Linux. The desktop wouldn't have the centralized authentication, the system updates, office or the drivers for the printer that are well integrated only on their windows machines.
So take a major financial institution you probably have an account with - 300,000 employees, each of whom has at least one PC, ranging in age up to a decade. Including servers and routers, they probably have as many machines as AWS, but spread across several thousand "data centres", some of which get unplugged by the cleaner's vacuum.
The management headaches for this kind of distributed computing are off the scale, and banning Linux and locking down senior devs' workstations is just table stakes. Everyone is heading to a thin client running Citrix to a data centre for pretty much this reason.
None of it has to be complicated; it is complicated because people want to make it complicated and to make computers into some magical box.
Let's first talk about removing root privileges on personal workstations or laptops. This is pointless. Anything that might be bad for root to do on a single-user system is going to be just as bad running as a user. The second you allow any custom code to run as a user on any system, you should treat it as potentially compromised -- adding root into the mix really does not change things on a single-user system. Worried about root getting access to some customer data on the system? Too bad: if the data was on the system, it is more than likely the user-level account (Windows or Linux, for that matter) had access to it, therefore any intruder will also have access, just at the user level. The same goes for just about any other issue you can run into.
Am I suggesting running things as root? No, because there really is no need for most things -- but at the same time, if your developer needs root-level access to test or work with technologies that require it when deployed to production, then it really should be a non-issue for sysadmins. The problem is that sysadmins are mostly scared of being outed for doing nothing for most organizations these days. These sorts of power sweeps are often used to justify big budgets and teams of people who tell you to "reboot" when it does not work right. There is also a bit of a power-hungry attitude associated with it too.
You state that Linux makes it harder, but I can't see how, and you did not show me anything convincing. Bold statements without any supporting facts can just be tossed into the trash can as far as I am concerned.
Now let's talk about Citrix. How does that help? All that does is move any real or perceived problem to a different system. If any of these VMs get accessed by bad actors, they will still be able to own any of the information on them that the user had access to.
In any case I did not really come here to argue any of this, your comment is just sort of out of place with relation to what I said.
If you can't trust your employees don't hire them, or just pay them and tell them to sit in a dark room so they can't hurt anything.
Your first comment was on point. It's a massive hassle to manage the many environments that come with a hundred thousand computers.
The last poster has zero argument and is just ignoring the problem. Go set up a thousand printers for ten thousand employees in a hundred locations. They all have to work flawlessly and on every OS.
In the place I'm currently working at, we have Windows behind Cisco VDI, and we have to have a Cntlm proxy running (all traffic goes through it).
Last week I got an email from Security team because I installed Decentraleyes addon for Firefox. Apparently, it's not allowed and is a security breach.
Many banks use thin clients that just log you on to a VM running on a rack mounted box somewhere. There’s a very good chance this fella has never even seen the physical hardware their system is running on. There’s also a very good chance they’re sharing the hardware with other users as well.
The zoo of build tools and scripts on top of Docker and "orchestration" tools sure reeks of incidental complexity. Could someone explain to me what material problem Docker is solving that couldn't be solved using a statically linked binary run as a daemon?
Great, now deploy that daemon and its updates across an entire fleet of servers a few times a day. And make sure you have metrics for it, and that you're efficiently using CPU and memory resources across your cloud footprint.