
> The Windows problem has always been so so much worse

Hard, hard disagree. The problems are roughly comparable, but if either platform is more painful it's Linux; take glibc out of the picture and they're about even. At least in my personal experience writing lots of code that needs to run on win/mac/linux/android.

> pretty much all developers simply resorted to shipping the OS system runtime with every package

Meanwhile Linux developers have resorted to shipping an entire OS via Docker to run every program, because managing Linux environment dependencies is so painful you have to package the whole system.

Needing Docker just to launch a program is so embarrassing.

> except of course you can't even combine some modules compiled for debug with modules compiled without debug in the same executable

That's not any different on Linux. That has more to do with C++.
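
To make that concrete, here's a minimal sketch of the usual failure mode, assuming the mismatch comes from a debug-only struct member (MSVC's _ITERATOR_DEBUG_LEVEL mismatch is the same idea at the standard-library level):

    /* layout.c - compile this twice, once plain and once with -DNDEBUG,
     * and sizeof(struct Buffer) differs. Link objects from both builds
     * into one executable and they silently disagree about every field
     * offset after `magic`. */
    #include <stdio.h>

    struct Buffer {
    #ifndef NDEBUG
        unsigned magic;   /* debug-only canary shifts data[] by 4 bytes */
    #endif
        char data[64];
    };

    int main(void) {
        printf("sizeof(struct Buffer) = %zu\n", sizeof(struct Buffer));
        return 0;
    }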




> Meanwhile Linux developers have resorted to shipping an entire OS via Docker to run every program, because managing Linux environment dependencies is so painful you have to package the whole system.

> Needing Docker just to launch a program is so embarrassing.

I have never needed Docker "just to launch a program". Docker makes it easy to provide multiple containerised copies of an identical environment. Containers are a lightweight alternative to VM images.

I assume you find the existence of Windows containers just as embarrassing? https://learn.microsoft.com/en-us/virtualization/windowscont...


> Docker makes it easy to provide multiple containerised copies of an identical environment.

Correct. The Linux architecture around a global pool of shared dependencies is, imho, bad and wrong. The thesis is that it's good because you can deploy a security fix to libfoo.so just once for the whole system. However, we now live in a world where you actually need to deploy the updated libfoo.so to all your various hierarchical Docker images. sad trombone

> Containers are a lightweight alternative to VM images.

A lighter alternative to Docker is to simply deploy your dependencies alongside your program and not rely on a fragile, complicated global environment.

> I assume you find the existence of Windows containers just as embarrassing?

Yes.

I know my opinion is deeply unpopular. But I stand by it! Running a program should be as simple as downloading a zip, extracting, and running the executable. It's not hard!


I wish you all had experienced the golden age when running a program was as simple as typing "apt-get install program" and running it.

I find it hard to discuss the merits of Linux vs Windows with regard to deploying software without addressing the elephant in the room: the replacement of a collectively maintained system that ensures software can cohabit by the modern compartmentalised collection of independent programs.


Flatpak is fine. For serious reproducibility and unmatched software installations, Guix.


> Correct. The Linux architecture around a global pool of shared dependencies is, imho, bad and wrong. The thesis is that it's good because you can deploy a security fix to libfoo.so just once for the whole system. However, we now live in a world where you actually need to deploy the updated libfoo.so to all your various hierarchical Docker images. sad trombone

Only if you choose to use Docker. As other people have pointed out, most things on Linux can be deployed using a package manager.

> A lighter alternative to Docker is to simply deploy your dependencies alongside your program and not rely on a fragile, complicated global environment.

So like flatpak?

> I know my opinion is deeply unpopular. But I stand by it! Running a program should be as simple as downloading a zip, extracting, and running the executable. It's not hard!

How is that better than apt install? That (or equivalent on other distros) is how I install everything other than custom code on servers (that I git pull or similar).


I guess your software is not libre, right?

In that case, I think you should stick to snap and flatpak.


> Running a program should be as simple as downloading a zip, extracting, and running the executable. It's not hard!

You can do that; it's just that most choose not to and use packaging infrastructure instead. It's not mandatory.


Instead of Docker, static linking is a much more elegant solution if library/dep management is painful.

Switching to static linking using a sane libc (not glibc) can be a pain initially but you end up with way less overhead IMO.
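
As a sketch of what that buys you, assuming musl via its musl-gcc wrapper (the file names here are mine):

    /* hello.c - build with:  musl-gcc -static hello.c -o hello
     * The result has no runtime .so dependencies at all ("ldd ./hello"
     * reports it is not a dynamic executable), so it runs unchanged
     * across distros regardless of each system's glibc version. */
    #include <stdio.h>

    int main(void) {
        puts("no shared libraries needed");
        return 0;
    }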


Static linking is a good way to avoid the problem, but I’d hardly call it “elegant” to replicate the same runtime library in every executable. It’s very wasteful - we’re just fortunate to have enough storage these days to get away with it.


> I’d hardly call it “elegant” to replicate the same runtime library in every executable. It’s very wasteful - we’re just fortunate to have enough storage these days to get away with it.

I think you need to quantify "very wasteful". Quite frankly it's actually just fine. Totally fine. Especially when the alternative has turned out to be massive Docker images! So the alternative isn't actually any better. Womp womp.

An actually elegant solution would be a copy-on-write filesystem that can deduplicate. It'd be the best of both worlds.
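
That half-exists today: on a CoW filesystem like Btrfs (or XFS with reflink support) you can clone a file's extents so identical copies share blocks until one of them diverges. A minimal sketch using the Linux FICLONE ioctl; the file arguments are placeholders:

    /* clone.c - make dest share src's extents copy-on-write, so ten
     * "copies" of the same statically linked runtime cost the disk
     * space of one. Only works on reflink-capable filesystems. */
    #include <fcntl.h>
    #include <linux/fs.h>   /* FICLONE */
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main(int argc, char **argv) {
        if (argc != 3) { fprintf(stderr, "usage: %s src dest\n", argv[0]); return 1; }
        int src = open(argv[1], O_RDONLY);
        int dst = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (src < 0 || dst < 0) { perror("open"); return 1; }
        if (ioctl(dst, FICLONE, src) < 0) { perror("ioctl(FICLONE)"); return 1; }
        puts("cloned: blocks shared copy-on-write");
        return 0;
    }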


> Switching to static linking using a sane libc (not glibc) can be a pain initially but you end up with way less overhead IMO.

How does that work when your app needs access to the GPU drivers?


Containers are a relatively recent phenomenon. Outside of distro packages, distributing statically linked binaries, dynamic binaries wrapped in LD_LIBRARY_PATH scripts, or binaries using an $ORIGIN-based rpath are all options I have seen.
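
For the $ORIGIN route, the usual trick is linking with -Wl,-rpath,'$ORIGIN/lib' so the loader looks next to the executable first. Here's a sketch of the same idea done by hand with dlopen, where libfoo.so stands in for whatever dependency you ship alongside the binary:

    /* launcher.c - build with:  cc launcher.c -o launcher -ldl
     * Resolves a bundled lib relative to the running executable
     * instead of relying on the global library path. Linux-only
     * (uses /proc/self/exe). */
    #include <dlfcn.h>
    #include <limits.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void) {
        char path[PATH_MAX];
        ssize_t n = readlink("/proc/self/exe", path, sizeof(path) - 1);
        if (n < 0) { perror("readlink"); return 1; }
        path[n] = '\0';

        /* Swap the executable's name for a lib shipped next to it. */
        char *slash = strrchr(path, '/');
        if (slash) *slash = '\0';
        strncat(path, "/lib/libfoo.so", sizeof(path) - strlen(path) - 1);

        void *h = dlopen(path, RTLD_NOW);
        if (!h) { fprintf(stderr, "dlopen: %s\n", dlerror()); return 1; }
        puts("loaded bundled libfoo.so");
        dlclose(h);
        return 0;
    }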



