
Docker is a life saver if you're shipping software on Linux. Otherwise you're going to have to deal with endless bug reports from oddball distros. Docker lets you test with a stable set of dependencies and know that's what your users will see.
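For example, something like this (the image tag, packages, and "myapp" binary are only placeholders): pinning an exact base image pins the whole userland your users will run against.

    FROM debian:12.5                      # pin an exact tag, not "latest"
    RUN apt-get update \
        && apt-get install -y --no-install-recommends libssl3 ca-certificates \
        && rm -rf /var/lib/apt/lists/*
    COPY myapp /usr/local/bin/myapp       # "myapp" stands in for your real binary
    ENTRYPOINT ["/usr/local/bin/myapp"]

Everyone who pulls that image gets the same libc, the same OpenSSL, the same everything, regardless of what their host distro ships.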



Containerisation is a clever technology, but I can't help feeling it's a capitulation; an admission that the compatibility problem was just too hard for us to solve.

It's undeniably useful and pragmatic, but its existence should be a source of shame, a constant reminder that we failed.


I think of it exactly the opposite way. Containers show that an extremely high degree of compatibility has been achieved. Think about it: you can run new software on an unrelated distro, even a newer distro that wasn't planned when the host distro was created, because they all conform to the Linux ABI.
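You can see the split directly (distro and tag chosen arbitrarily for the example):

    # the container brings its own userland...
    $ docker run --rm ubuntu:24.04 grep PRETTY_NAME /etc/os-release
    # ...but uname reports the host's kernel, because the kernel and its
    # syscall ABI are the only things host and container actually share
    $ docker run --rm ubuntu:24.04 uname -r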


A thought that just popped up in my head: I wonder how a priest would react if someone came to the confessional to talk about Docker and the Open Container Initiative (:


You could also "just" ship a VM disk image and cut down those variables even more...

As a user/self-administrator, I really do not appreciate a developer throwing a ton of incidental complexity over the wall. Docker is basically a scaling up of the "works on my system" cop out.

I get that there are a lot of oddball distros, but it would seem that a policy of only digging into bug reports on specific well known distros would be more appropriate than basically forgoing the entire concept of a distribution in favor of what are essentially huge static binaries.

It's especially a problem when projects go nuts with this Dockerization anti-pattern, and become actively hostile to distributions shipping plainly administerable versions of their software (looking at you Home Assistant).


It's like people forget statically compiled binaries exist
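For example (the source file and flags are illustrative, and glibc's NSS/DNS machinery is a known caveat with -static):

    $ gcc -static -O2 -o myapp myapp.c    # myapp.c stands in for your real sources
    $ ldd myapp
            not a dynamic executable

Nothing to resolve at run time, so there's nothing for an oddball distro to break.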


It's a good solution, but there are popular programming languages where that doesn't work.


Why does it matter what distro you're running on? You shouldn't be relying on anything outside of the tarball you ship, besides maybe glibc. Is there anything which can't be made to run from a self contained directory?
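One common shape for that (paths and names made up for the example) is a tarball with the libraries alongside the binary and a small launcher that points the dynamic linker at them:

    myapp-1.2/
        bin/myapp
        lib/libfoo.so.1
        myapp.sh

    #!/bin/sh
    # myapp.sh: run the bundled binary against the bundled libraries
    HERE="$(cd "$(dirname "$0")" && pwd)"
    export LD_LIBRARY_PATH="$HERE/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
    exec "$HERE/bin/myapp" "$@"

Linking with an rpath of $ORIGIN/../lib gets you the same effect without the wrapper script.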

I'm sure Docker's great if you want to guarantee that the thing inside can't access the host system, using all kinds of kernel mechanisms, but that's generally the opposite of what you want for application software.

The main advantage of Docker for application distribution that I see is that it's lazy. You don't have to worry about keeping track of what's required, you don't have to fiddle with paths, you just hack until something works and then ship it. That's fine if you prioritize your time over your user's time.


How about Flatpak to fulfil this function?


And Snap, AppImage...


AppImage is just a handy single-file solution for the very standard "tarball of executable + dependencies" which is good enough for such complex projects as Firefox and Blender. You don't need to integrate with the OS, and you don't need to put everything in a container either. Just include all the binaries and libraries you need, and make sure everything looks in the correct path. That's it.
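The layout is about as simple as it sounds (names here are placeholders; real projects usually assemble this with appimagetool or linuxdeploy):

    MyApp.AppDir/
        AppRun              # launcher: sets library paths, execs the real binary
        myapp.desktop
        myapp.png
        usr/bin/myapp
        usr/lib/            # bundled shared libraries

    $ appimagetool MyApp.AppDir MyApp-x86_64.AppImage

and the result is a single file the user can chmod +x and run.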


There's no obligation to support every distro on the planet.


If you've run an open source project, you will get these bug reports. It's also often not obvious whether a bug is in your code or in a dependency, so you're going to want to debug it.

As an example, Alpine's libc (musl) has a much smaller default thread stack size than glibc. It's not obvious whether that stack overflow is expected or not.


Sure, you might get them. But you have the option to ignore them, preferably with a friendly message back to the reporter, stating that, say, Alpine Linux is currently not supported.



