
> I’d think that a few megabytes of disk isn’t as valuable as the extra cpu cycles.

Depends on your workload, of course. Some people want to run a huge number of containers, none of which is compute-intensive.

Or maybe you don’t use libc at all in your fast path?

In lots of cases it makes sense.




If you reuse the base image, the libc files will be shared anyway.


This is the part most people don't realise.

They see Alpine at 5MB and Ubuntu at 80MB. They mentally multiply by the number of images, without realising that the base layer is pulled and stored only once, however many images are built on top of it.

For a large cluster it's a wash. You might as well use Ubuntu, CentOS -- anything where people are working full-time to fix CVEs quickly.
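
If you want to check the sharing on a host with Docker installed, something like this shows the common layer digests. Just a sketch -- the image names are placeholders for two images built on the same base:

  # Rough sketch: compare the layer digests of two images built on the
  # same base. The image names are placeholders -- substitute your own.
  import json
  import subprocess

  def layers(image):
      # docker image inspect exposes the uncompressed layer digests
      # under .RootFS.Layers
      out = subprocess.run(
          ["docker", "image", "inspect", image,
           "--format", "{{json .RootFS.Layers}}"],
          capture_output=True, text=True, check=True,
      ).stdout
      return json.loads(out)

  a = layers("my-app:latest")
  b = layers("my-other-app:latest")
  shared = set(a) & set(b)
  print(f"{len(shared)} layer(s) in common -- stored and pulled once, not per image.")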


As far as I'm aware you'd have to load that 80MB into memory for each Docker container you run, so that can add up if you want to run a bunch of containers on a cheap host with 1GB of RAM.

I do agree that people prematurely optimise, mostly by fixating on disk space, but I think there's a decent use case for tiny images.


Not quite. The 80MB is the uncompressed on-disk size. Those bits can appear in memory in two main ways. They can be mapped as executables, in which case large parts will be shared across containers (remember, containers are not like VMs). Or they can sit in the FS cache, in which case they'll be evicted as necessary to make room for executables.

There's a case for tiny images, but it's in severely constrained environments. Otherwise folks are fetishising the wrong thing based on a misunderstanding of how container images and runtimes work.
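
You can see the sharing from the host side on Linux by comparing Rss with Pss for a containerised process. Pss splits each shared page among everything mapping it, so a Pss well below Rss means much of that "per-container" memory is actually shared. Rough sketch, pass a host-side PID:

  # Sketch: compare Rss (resident) with Pss (resident, with shared pages
  # split among all mappers) for a process, e.g. one running in a container.
  import sys

  def rss_pss_kb(pid):
      rss = pss = 0
      with open(f"/proc/{pid}/smaps_rollup") as f:
          for line in f:
              if line.startswith("Rss:"):
                  rss = int(line.split()[1])
              elif line.startswith("Pss:"):
                  pss = int(line.split()[1])
      return rss, pss

  pid = sys.argv[1]  # host-side PID of a process inside a container
  rss, pss = rss_pss_kb(pid)
  print(f"PID {pid}: Rss={rss} kB, Pss={pss} kB; "
        f"the gap is memory shared with other processes.")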



