So if you let people run arbitrary docker commands on a box, it's pretty much a given that they can affect the availability of the system. Heck, a low-tech version of this would just be to keep spamming docker run on an image and moving some files around to chew up disk space.
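A sketch of what that could look like (the image is just a placeholder; each exited container's writable layer sticks around on disk until someone removes it):

    # keep launching containers that each write a gigabyte of junk
    # into their writable layer, which persists after the container exits
    while true; do
        docker run -d ubuntu dd if=/dev/zero of=/junk bs=1M count=1024
    done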
If you're looking to restrict what users can do with Docker on a host, there are a variety of overlay packages you can look at for that, such as Docker Universal Control Plane.
Theoretically there's a Docker authorisation plugin framework which could be used to restrict what users who connect to the daemon can do, but AFAIK it's never really taken off.
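For reference, wiring one up looks roughly like this (the plugin name here is hypothetical, and the daemon needs a restart to pick it up):

    # register an authorisation plugin with the daemon (sketch only;
    # "my-authz-plugin" is a made-up name)
    cat <<'EOF' | sudo tee /etc/docker/daemon.json
    {
        "authorization-plugins": ["my-authz-plugin"]
    }
    EOF
    sudo systemctl restart docker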
Something similar to this happened to me recently -- my Ubuntu 16.04 machine's hard drive was unexpectedly full and the CPU was being eaten by docker processes. Killing the docker processes and deleting the data they had saved restored my system.
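If anyone hits the same thing, the cleanup can be done in one go on a reasonably recent Docker (1.13+):

    # kill anything still running, then reclaim disk from stopped
    # containers, unused images, unused networks and build cache
    docker kill $(docker ps -q)
    docker system prune -a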
Meanwhile, is there a way to restrict Dockerfiles, e.g. not allowing users to run as root inside the container?
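Not aware of anything at build time, but at run time you can at least force a non-root user - a sketch, with myimage as a placeholder:

    # run the container as the calling user rather than root
    docker run --user "$(id -u):$(id -g)" myimage

    # or, daemon-wide, remap container root to an unprivileged host user
    dockerd --userns-remap=default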
I had the impression that this technology was only usable for the single-user-machine use case, as too many bad things might happen in true multi-user environments. That's quite limiting in a Unix world where we've been used to multi-user reality for a long time - it was disturbing to see such a successful tech seemingly ignore that.
However, I'd be really happy for any updates on this issue. I haven't followed Docker development too closely, so correct me if I'm totally wrong!
Unfortunately it's pretty slow, as it has to rebuild the docker image in each child - this could be made better by dumping the image and passing it in via the docker context.
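Something like this, assuming a built image called myimage (the name is a placeholder): save it to a tarball once, then each child just loads it instead of rebuilding:

    # dump the built image to a tarball once...
    docker save -o myimage.tar myimage:latest

    # ...then each child loads it, skipping the rebuild
    docker load -i myimage.tar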
> Now wait patiently until your hd fills up.
This is obviously slower than a fork bomb, but I wonder which happens first: the hard drive actually filling up, or hitting a process limit.
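The process side is tunable, so it depends on how the host is configured - a sketch of the relevant knobs:

    # cap how many processes a single container can spawn,
    # so a fork bomb inside it hits the wall quickly
    docker run --pids-limit=100 ubuntu bash

    # check the per-user process ceiling on the host
    ulimit -u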