Hacker News
Reasons to Drop Docker for Podman (redhat.com)
60 points by richardpetersen on Aug 6, 2023 | hide | past | favorite | 97 comments



This reads like an AI-generated article. It covers some differences, like pulling images with a GUI vs a CLI. Ok… that's incredibly unimportant.


It might be because an intern wrote it? I read it myself and I definitely think it's not an easy read, bordering on not making any sense at all, but I don't think it's AI because, as crazy as it may be, I think an AI would have done a better job at this.


Ouch lol


What problem does Docker's daemon actually solve? To me it always felt over-engineered.


Having a daemon is inherently faster, period. I was disappointed by Podman because almost all operations including `podman ps`, `podman run` were slower than their Docker counterparts. After many releases Podman got faster, but still not on a par with Docker.


Why is a daemon inherently faster? Why would `podman ps` be slower at a human scale?


Problems having a daemon solves:

1. You need a watchdog process anyways to handle stdout/stderr processing (to not block those pipes), namespace setup and teardown, etc. Why not have 1 daemon, rather than 1 daemon per container?

2. You need root to set up networking and various security controls (seccomp etc). Having a daemon lets an unprivileged user delegate to a privileged daemon. This was especially important in the early days when rootless containers and slirp4netns didn't exist or were immature.

3. You need to avoid contention between two "docker run" commands, such as avoiding downloading the same image twice, or running the same name twice, and so on. In-memory locks are really easy to get right and cheap.

4. You need to be able to efficiently gather state for things like "docker ps" etc to work. A daemon lets you cache a lot of information very easily.

5. A daemon lets you provide a clean REST API to other components that want to "exec" or "list containers" or so on. Without a daemon, your API is just "exec this command, use stderr/stdout", which is worse, right? An API also helps immensely with "docker desktop" like stuff, where you want to run a client on the host, and run containers on a linux VM. I guess the daemonless answer there is "exec over ssh", which is slow, and also eww.

So, how does podman solve these? I'll number them the same:

1. Watchdog process per container (N daemons) plus centralized systemd daemon if you want to background a container, or for restarts to actually work.

2. slirp4netns, which is buggy and still less featureful than docker networking. It also just doesn't implement various security features that would require a privileged component to set up, and it makes it harder to run root-ful containers.

3. Contention is avoided with an incredibly over-engineered filesystem locking scheme that has had a long history of deadlocks and bugs.

4. Again, the filesystem locking scheme and API. It's slow, can't cache well, and also has a long history of deadlocks.

5. The API for podman is uh... "run a daemon" https://docs.podman.io/en/latest/markdown/podman-system-serv... lol. Except, since it's optional, how does another app know whether you're running the API? This means other applications that want to integrate with podman have to say "use podman and also run the podman daemon, and if you forget, we won't auto-detect podman, sorry".

In general, podman's answers to all the problems the daemon solves seem materially worse, and more over-engineered. Except for the API bit, where their answer is to post all over the place that "podman is daemonless", but then to package a daemon, which is just funny.
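For what it's worth, that optional daemon can be exercised like this (the socket path here is illustrative; on most distros `systemctl --user start podman.socket` is the supported way to get the same thing):

```shell
# Start podman's Docker-compatible REST API on a user socket.
# --time=0 keeps the service running instead of exiting after an idle timeout.
podman system service --time=0 unix:///tmp/podman.sock &

# Point any Docker API client at it; the docker CLI mostly works unchanged.
export DOCKER_HOST=unix:///tmp/podman.sock
docker ps
```

Which is exactly the "run a daemon" irony: the compatibility story only works once the daemon is back.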


> Why not have 1 daemon, rather than 1 daemon per container?

Having a daemon per container has this little advantage that if something manages to bring down one of the daemons, it won't bring down the whole shebang.


Also side-channel attacks.

E.g. if one user downloads a container, and then for another user it is already in the cache, this gives the other user information about the first user.


Isn't this pointless, since if the other user has access to docker, they basically have root access to the machine?


I don't think users need to have root access to use Docker.


No, but unless they run in rootless mode, they can mount any directory and write to it as root


They don't, but if you have given them access to Docker then it's just as if you had given them access to root.


Why is a watchdog needed at all? There's systemd or any other pid 1, and stdout/stderr should go to the journal without any middleware.
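That's roughly podman's position too: let systemd be the supervisor. A sketch of what that looks like (the container name "web" is made up; newer podman versions prefer Quadlet files over this generator):

```shell
# Generate a user-level systemd unit for an existing container named "web".
# --new makes the unit create a fresh container on each start.
podman generate systemd --new --name web \
  > ~/.config/systemd/user/container-web.service

systemctl --user daemon-reload
systemctl --user enable --now container-web.service

# stdout/stderr now land in the journal, no middleware daemon involved:
journalctl --user -u container-web.service
```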


If you want to use some of the more recent podman versions on Ubuntu or Debian, I have a kind of hacky PPA up here - https://github.com/notbobthebuilder/podman

GitHub Actions auto-builds new version releases for me, so major versions become available as soon as they are released and I click the button


This is good. I felt really annoyed that Ubuntu versions of podman lagged significantly and podman's official stance is basically just "deal with it" while they provide support for every other platform.


I don’t think I understand the author completely. First, she mentions that Podman is not a replacement for Docker but then mentions multiple reasons why Podman is better than Docker. I.e. why you should replace Docker with Podman.

What did the author mean by Podman is not a replacement of Docker?


She means that it's not a drop-in replacement that works exactly the same way.

It works differently, and (in her opinion) in some ways better.

But you will likely have to change your workflow to use it.


I tried Podman on 2 MacBook Pros: my personal one (Intel) and my work one (M1) and it basically doesn’t work well at all.

Podman Desktop simply doesn’t work, on first run it loops forever on initializing stuff (I guess it tries to create the Podman machine but fails? No idea because it doesn’t say what’s wrong, nor where to look). So I tried Podman bare without Podman desktop and it’s not a lot better, the machine starts fine and I can run containers, but every time my computer wakes up from hibernate, the containers and the machine are stuck. I have to recreate the whole Podman machine from scratch.

I loved the idea of rootless but it doesn’t work on Mac. And I won’t believe I’m the only person having the exact same set of issues on 2 different MacBooks


These types of papercuts (see other comments on how podman-compose and any sort of custom networking have issues, too) are why I have mostly avoided Podman so far. I find myself just using Docker rootless on Linux workstations and the Docker Engine on servers. On MacOS, I use Colima and it has worked well for me.

> I loved the idea of rootless but it doesn't work on Mac

One clarification I think is worth making, in case you weren't aware, is that the "rootless" approach isn't really a factor for any Linux container runtime on MacOS, since all the container solutions on MacOS run in a VM (Linux containers rely on Linux kernel features). I.e. Docker Desktop, Podman Desktop, etc. can't run as root on MacOS because they rely on a user-level Linux VM.


Thanks for the clarification!

Maybe I’m using the wrong term, but when installing Docker you need root access, and not for Podman. Maybe I’m wrong, but I don’t think it’s possible to install Docker if you’re not root on the machine?


https://docs.docker.com/desktop/mac/permission-requirements/

This link breaks down what permissions are used on MacOS.

> Maybe I’m using the wrong term

Typically, the meaningful piece with "rootless" Docker is that the daemon is not running as root.

When the Docker daemon is running as root on a Linux server, for example, anyone who can access the daemon (i.e. anyone in the "docker" group) has enough access to do catastrophic damage to the system. For example, the docker daemon can mount any file on the host's filesystem (i.e. "-v /etc/shadow:/tmp/shadow"). With Docker running as root, anyone with access to the Docker daemon has the power to do almost anything to the system.

With rootless Docker, that issue is mitigated heavily because the Docker context is restricted to an unprivileged user context.
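To make that concrete, this is the kind of thing anyone in the `docker` group can do against a rootful daemon (a sketch; don't run this on a shared box):

```shell
# With a rootful daemon, group membership is effectively root:
# bind-mount a root-only file into a container and read it.
docker run --rm -v /etc/shadow:/tmp/shadow alpine cat /tmp/shadow

# Or chroot into the entire host filesystem as root:
docker run --rm -it -v /:/host alpine chroot /host /bin/sh
```

Under rootless Docker, those same mounts go through the unprivileged user's UID mapping and grant nothing the user didn't already have.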

> but when installing Docker, you need root access, and not for Podman

According to Podman Desktop's docs, it asks for admin permission when installing on MacOS: https://podman-desktop.io/docs/Installation/macos-install

That being said, I don't personally see any security value added or removed by an installer process needing to elevate privileges. That's a one-time thing and likely should require admin privileges.


> That being said, I don't personally see any security value added or removed by an installer process needing to elevate privileges. That's a one-time thing and likely should require admin privileges

Where I worked before we didn’t have root access on our laptops, so we couldn’t install Docker.

I’ve switched company since, but my former coworkers were able to install Podman (not Podman Desktop) without root access.


Ah, I see. So not really container runtime security, more operational/principle of least privilege. Had not accounted for that, I can definitely see how that would be useful.

Although, I would say we have definitely strayed far away from the typical definition/security benefits of "rootless" container runtimes. Usually the rootless container threat model accounts for containers or access to the runtime being weaponized -- it's not usually IT preventing you from installing apps. :)

Still, thanks for indulging this conversation.

(Also, I thought the only way to run Podman containers locally on MacOS was Podman Desktop -- has that changed recently?)


You are not - it is basically non-functional on Mac. I try to stick with it but the reliability of Podman Desktop on Mac is awful - it can’t be doing RedHat’s brand any good releasing and promoting something so poor.


I've tried Podman for a few weeks, and I have to say the first experience is fine; I get that there are some differences in the setup. Aaaand then you come across all sorts of small and medium cross-incompatibilities (flags, parameter handling, different syntax parsing, drivers...), long-standing bugs, and Podman Compose, which is, at least subjectively, poorly supported and barely aims at compatibility at all; the list goes on. Docker itself isn't perfect, but it's nowhere near such a mess.


I came across a lot of such info from Red Hat. And I tried podman, which is not bad, because my scenario was not complicated. But days ago I tried podman-compose, which behaved so poorly. That's a pity. After years, podman's ecosystem is still so poor. I don't want to be trapped by all sorts of small/hidden pitfalls, so I have to switch back to docker. Quality prevails, not advertisement. PS: I did not read the article. The title is enough.


I started using docker a bit late, only about 3 years ago. My big question is whether it is common knowledge that Docker Desktop creates a separate VM in which one's Docker Engine runs: not your host running Docker, but that VM inside your host. That layer of indirection does not appear to be common knowledge, especially for 3rd-party developers, because utilities that are supposed to work with Docker only work without Docker Desktop, or are unaware of Docker Desktop's VM and, when installed, claim there is no Docker on the host. It's really messed up. As I tried to figure out what was going on, I found that nobody recognizes the gargantuan difference between these implementations: with and without the "desktop", Docker is internally significantly different, requiring completely different runtime configurations for 3rd-party integration.


> common knowledge that Docker Desktop creates a separate VM

I mean, I know that linux containers only run on linux (please ignore docker for windows, which briefly did not need a VM; that is a thing of the past now). I don't know if other people know that linux containers only run on linux.

> utilities that are supposed to work with Docker only work without Docker Desktop, or are unaware of Docker Desktop's VM and when installed claim there is no Docker on the host

> gargantuan difference between these implementations with and without the "desktop" Docker is internally significantly different, requiring completely different runtime configurations for 3rd party integration

It's not really that different though. Because docker has a daemon and API, anything that integrates with docker is supposed to talk to the "$DOCKER_HOST" environment variable to run docker containers, see if docker is running and what version, etc etc.

That should be identical whether "$DOCKER_HOST" is pointed at a unix socket on your host, or at a linux vm, or so on.

The implementation of those APIs should be identical.

Do you have an example of a specific utility? Did you not set "$DOCKER_HOST"?
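For anyone following along, this is the usual way to check where a client will actually connect (the context names shown by `docker context ls` depend on your install; Docker Desktop registers its VM there):

```shell
# List configured endpoints; Docker Desktop shows up as its own context.
docker context ls

# Or point a client at an explicit endpoint instead of the default:
export DOCKER_HOST=unix:///var/run/docker.sock   # typical native Linux path
# export DOCKER_HOST=ssh://user@some-build-host  # or a remote engine over ssh

# Confirms which engine (and engine version) you're actually talking to:
docker version
```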


I spent months spinning my wheels trying to get the Tailscale VPN / Traefik / Let's Encrypt automatic SSL cert generation working. The tech support teams at all three of these companies were unaware of that Docker Desktop separate VM. I spent months with support from Tailscale and Traefik, and after I realized the existence of that separate VM and discussed it with their support, that VM was news to them.

The transition to Docker development is very poorly documented. I've taken two formal classes in Docker, read a book, and have half a dozen Docker projects done and delivered and this is the first time I've even heard one needs to manually set "$DOCKER_HOST". This industry is just a bunch of overly paid amateurs, blindly groping in a dark cave, a cave carved out of money.


Long-time Docker user, am aware of the need for a VM on MacOS, Windows for Linux containers.

I think that one of the reasons many people might not be aware of the VM is because -- in my experience -- Docker Desktop works almost identically to Docker on a real Linux system. I feel like Docker has done a fantastic job at making you feel like it's running natively (i.e. despite running in a VM you can mount volumes close to the same way, you can use the docker CLI from the host, etc.). Additionally, I don't think people realize/care that Linux containers rely heavily on features the Linux kernel provides (interestingly, and less well-known, Microsoft has done a lot of work to provide Windows containers[0], too).

I am curious, though, why in your use-case of Tailscale and Traefik knowing that Docker Desktop runs in a VM would impact anything from a functional standpoint? I.E. why would the VM have even been an important factor to the support teams you reached out to?

> This industry is just a bunch of overly paid amateurs

I think, perhaps a more compassionate view is that everyone is learning and growing and it's difficult to be an expert at literally everything you use in your stack. :)

[0] https://learn.microsoft.com/en-us/virtualization/windowscont...


> I am curious, though, why in your use-case of Tailscale and Traefik knowing that Docker Desktop runs in a VM would impact anything from a functional standpoint?

When using Ubuntu/WSL2, the same daemons are not running as on the same Ubuntu running on its own. Tailscale expects one or two; I'd have to dig into my emails to find the specifics, something like no systemd under Ubuntu/WSL2 and Tailscale not checking, just failing. I seem to remember there was more than one expected daemon, which might be present on that other VM, but neither Tailscale nor Traefik knows to check or communicate with that other VM, so their integration fails. Support's recommendation was to just use a server with no desktop GUI, where everything just works.

Yeah, I get grumpy. I need to check myself better. I realize we're all trying our best.


> When using Ubuntu/WSL2, there are not the same daemons running as on the same Ubuntu running on it's own.

Ah, makes sense. I have encountered some funky stuff with Docker+WSL, especially because I often prefer to use distros other than Ubuntu. It feels extra fragile/added complexity how Docker Desktop on Windows relies on WSL for Linux containers.

Thanks for indulging my curiosity!


Small fyi, the site seems to break the back button on mobile.


It has an annoying table of contents system that seems to move you to new urls constantly as you scroll up and down, so back just takes you to different parts of the page for a while till you finally leave.


Not an awful feature if it rewrote the existing URL in place, rather than pushing a new one (idk if the API actually allows that) - so copying it to share you'd always get the right anchor to where you're currently reading (or could remove it still if you intended to share the whole thing).


aka "How else I can justify for the company to spend money on 'Web-Developer' for another year"


Yes, although the transition from Docker to Podman would be seamless, the transition from the blogpost to HN wouldn't be :p


I'm surprised to see RedHat thinking that programmers will prefer a desktop GUI interface to CLI for operations like starting containers. I thought that thinking had died out 20 years ago and that even Java and Windows people understood that it wasn't what programmers wanted.


I prefer a desktop GUI to a CLI for a lot of stuff. That's why I use the docker integration in IntelliJ.


You think all programmers are HN-level passionate?

Most of my colleagues don't know what an ssh public key is used for.


I was astonished by how few dev people in my company knew what git vs GitHub is, how to create an ssh key, why, etc.


OK fair enough (and I was being deliberately provocative so deserve any criticism I get).

You're of course right that there's a huge diversity of programmers (in terms of experience, background, and extent to which they give a shit even if they're experienced and had access to the most helpful family background and best education a wealthy developed country can provide). But don't you kind of think that we should still encourage all programmers to see that _programmatic_ interfaces (in particular shell programming interfaces) offer so much power that they should be embraced and looked upon with enthusiasm where possible? And that we should not create static non-programmable GUI interfaces for programmers?


I agree in the same manner I agree we should fix the world hunger problem. :)


Haha but no, they're not similar. We can absolutely influence the prevalence of inappropriate GUI interfaces. For starters, push back if someone tells you to make one inappropriately. I don't think there's a huge market demand from GUI-prone programmers; I believe those people skew more towards being less opinionated and just using what's suggested. It's more driven by product managers and people writing code using languages/frameworks that are very divorced from the command line perpetuating the same thing in the products they create.


After the license shxt show I'm a bit skeptical on using anything from Redhat now.


I'll bite. I'm assuming you're talking about RH removing source availability for RHEL[0]? If not, would love to be informed on the licensing "shxt show" you're referring to.

Some background: I avoid most of the Enterprise Linux ecosystem (CentOS, RHEL, AlmaLinux, etc.) as my mentality to sysadmin is often quite different than the average RHEL lover. I run NixOS everywhere and supplement with Arch Linux or Ubuntu when I can't use NixOS. So I am in no way a part of the target demographic of RHEL or derivatives.

But, I actually agree with Red Hat's view on the RHEL clones[1]: they should start using CentOS Stream as their upstream base. CentOS Stream is upstream of RHEL, so if the RHEL-compatible distros start snapshotting from CentOS Stream, then all of the Enterprise Linux ecosystem will benefit from bug-fixes, improvements, etc. that they all contribute into CentOS Stream. Unfortunately, for the RHEL-clones, if they adopt this approach they wouldn't have bug-for-bug compatibility with major RHEL releases, and they would have to do more work in making an LTS off of CentOS Stream. But, everyone benefits (theoretically) in contributing to the same upstream codebase -- and I can't think of many instances where an application that works on RHEL wouldn't work on CentOS Stream or a downstream derivative.

AlmaLinux has chosen to rebase off of CentOS Stream[2] and I think it's the right choice. Rocky Linux has chosen to try to workaround Red Hat's removal of source availability by relying on loopholes to obtain RHEL source code via UBI containers and cloud images[3]. I can't imagine this (seemingly fragile) approach being sustainable and it feels counterintuitive to continue building a distro based on the work of a company whose mentality that you fundamentally disagree with.

Long-winded response, but I guess I don't love the FUD surrounding Red Hat. Red Hat's work benefits me as a Linux user with tooling like Network Manager, systemd and GNOME. I don't think Linux would be as serious of a contender in certain spheres without Red Hat's open source work that the entire Linux ecosystem benefits from.

[0] https://www.redhat.com/en/blog/furthering-evolution-centos-s...

[1] https://www.redhat.com/en/blog/red-hats-commitment-open-sour...

[2] https://almalinux.org/blog/future-of-almalinux/

[3] https://rockylinux.org/news/keeping-open-source-open/

EDIT(s):

Some minor grammar fixes.

Also, I would recommend looking at the AlmaLinux link's footnote for ABI compatibility and what that means.


Recently switched back to Docker Desktop from Podman Desktop since Docker Desktop seems to integrate better with Windows.

Good progress though and I'll revisit it again soon.


Even Docker Desktop for Mac works pretty great and they improved the startup. Not really had issues to warrant switching.


Docker for Desktop got better.

I think VirtioFS fixed most unbearable fs slowness.

But Rancher Desktop also got better and I just cannot understand why I would use Docker for Desktop instead of the Open Sourced Rancher Desktop.


My experience on Mac:

- colima: Testcontainers cannot connect to the containers when the tests are run from the command line (I have env vars DOCKER_HOST=unix:///Users/dxxvi/.colima/docker.sock and TESTCONTAINERS_DOCKER_SOCKET_OVERRIDE=/var/run/docker.sock). However, the same tests run fine in IntelliJ.

- Docker Desktop: doesn't have the above issue.

Don't know why.


Maybe this was a typo in your comment, but you should double check to make sure that DOCKER_HOST is set to `unix:///Users/dxxvi/.colima/default/docker.sock` (missing `default` in your line)

I just verified running Testcontainers tests via CLI with those env vars you posted. YMMV
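I.e. something like this (assumes colima's default profile; the `default` path segment is the bit that's easy to miss):

```shell
# colima's docker socket lives under the profile directory, "default" by default:
export DOCKER_HOST="unix://${HOME}/.colima/default/docker.sock"

# Testcontainers resolves the container-side socket path via this override:
export TESTCONTAINERS_DOCKER_SOCKET_OVERRIDE=/var/run/docker.sock

# Sanity check before running the test suite:
docker ps
```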


Me and docker desktop for Mac do not get on. I've switched to OrbStack and it's a breath of fresh air.


For several months I've used colima. It seems more performant, and I really don't need the UI. That said, my use cases are pretty basic (mostly just web apps with Docker Compose)


I've been having an ok time with Rancher Desktop, I stopped using Docker Desktop on Windows after they forced me to update to a known broken build with no way to go back (unless I paid for their enterprise version)


Seemed great when I first tried it, but I ran into inconveniences I wasn't willing to put up with (can't remember exactly what they were, but I think all involved pulling from registries).

Colima, on the other hand, is install-and-forget (so much so that I had to look up what it was called again). It uses Lima, which means you can do more with it if you want.


> Podman is a cloud-native

What does cloud-native mean actually?


AWS' definition[0] I think is pretty good:

"Cloud native is the software approach of building, deploying, and managing modern applications in cloud computing environments."

An example of this would be how Podman Desktop includes tooling that allows you to easily deploy and manage pods in Kubernetes.

[0] https://aws.amazon.com/what-is/cloud-native/
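Podman's Kubernetes angle looks roughly like this (the container name is made up; on current versions these also exist as `podman kube generate`/`podman kube play`):

```shell
# Turn a running container or pod into Kubernetes YAML:
podman generate kube mycontainer > pod.yaml

# Re-create it locally from that YAML...
podman play kube pod.yaml

# ...or feed the same YAML to a real cluster:
kubectl apply -f pod.yaml
```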


Anyone using Podman on a macOS? How's the experience compared to Docker Desktop?


I tried using Podman for about 4 or 5 months on my M1 Mac, but eventually just gave up in frustration. I'm not doing anything too crazy, just running Minikube for some local development, but it just isn't stable enough and requires a lot of digging into threads to find the workarounds or the proper config.

I was wasting more and more time on it. It was constantly crashing/hanging so one day I just said fuck it and switched back to Docker Desktop, which not only worked better than when I switched but seems to have improved quite a lot in the intervening time as well.


I've been using OrbStack on macOS and it's been wonderful. Much faster than Docker for Mac.


May not be a concern of yours, but eventually it will have the same licensing concerns as Docker Desktop. (Free now, but only during beta; business use will require licensing)


It works well, but I found I had to configure several items inside the VM to get it to work within our corporate environment. Timezones, ntpd, proxy, and arm64/amd64 cross platform all needed me to tweak some stuff. But that was quite a while ago and may be fixed now.

Colima on the other hand works pretty much seamlessly, but really works best on macOS Ventura because you can enable vz virtualization.

Edited to add: colima works seamlessly on the command line. So if you're command-line-averse, you'll need to learn some new commands and things won't be as convenient. I haven't tried rancher desktop yet.


> Anyone using Podman on a macOS? How's the experience compared to Docker Desktop?

Don't forget there is also Rancher Desktop[1]

[1] https://rancherdesktop.io


Why would one use Docker for Desktop when Rancher Desktop exists?


I wonder if Rancher Desktop, et al, works with testcontainers.org?


> I wonder if Rancher Desktop, et al, works with testcontainers.org?

I don't use testcontainers myself, but it looks like as long as you are using Rancher Desktop >= 1.0.1 you should be just fine.[1]

[1] https://github.com/testcontainers/testcontainers-java/issues...


I wanted to try a while ago so I downloaded Podman Desktop, but I couldn't get past the initial setup due to issues in the desktop app. I was able to reproduce this on two or three Macs, can't recall exactly.

It has been a long-standing issue but seems as though it is on its way to resolution: https://github.com/containers/podman-desktop/issues/1633.

I am waiting for a few months to make sure it is all sorted and will try again to see if it works then.


There are some problems especially with the QEMU integration. E.g. two bugs that irritated me immensely:

* podman with --network host won't expose the ports.

* If you suspend the laptop the time on the VM goes haywire.

I always found docker to be unreasonably buggy too so am not sure how I feel about this.


Not podman, but I'm using Rancher Desktop; it has either nerdctl + containerd or docker + dockerd options, and I have been using it for a month without issue, replacing Docker Desktop. The Kubernetes options work well too.


Just joined a team and received an M1 Pro laptop but licensed docker isn't in the budget... tl;dr a bit early to tell, but I think it works.

podman-desktop is pretty rough, but it gave me enough confidence to start using podman generally. podman-mac-helper solved a few docker compatibility problems I had in previous attempts in 2021 and 2022.

It seems like my problems getting started with podman just last week were mainly due to the abundance of already outdated tutorials. Maybe those who are more active in this space can weigh in, but fixes and common workarounds documented as recently as 6 months ago aren't necessary any longer? For me, podman works without nearly as much fuss as I remember.


It doesn't work as a 100% replacement but I'd really like to switch. They need to work on better docker compose compatibility. Anything with custom network settings and it pukes.


My guess is Kubernetes would be the way to go here? You can run k3s locally.


Or kind.

And use Skaffold or Tilt to manage it all.

This setup makes Docker Compose look like a toy.

Developers should stop with this impedance mismatch of using Docker Compose when developing and K8s in prod.


Man, compose can be so much simpler for local dev though, you can easily bind local files and you don't need a fancy setup for local dev. We just switched from docker compose to k8s so that it was closer to prod, but I think if we had to do it again I'm not sure I'd bother.


With Skaffold, you can bind (or rather transparently sync) local files to a Pod.

I agree the initial setup is more difficult with a K8s local dev than with a Docker Compose local dev.

But the impedance mismatch makes using Docker Compose a bad idea when you deploy to a K8s cluster.

The secrets management is different. The way pods talk to each other is different. App-level configuration and updates are different. There is so much that diverges, that you end up with two independent systems. One for dev and one for prod. This is not good.


It's not about just running the containers. It's that applications are distributed with a docker compose file as the setup medium.


What do you mean by that? You use Docker Compose to distribute to production? Into what?


Here is netbox's compose file: https://github.com/netbox-community/netbox-docker/blob/relea... -- there is no way I am going to spend the time to translate that into whatever k8s wants. If it isn't broken don't fix it.


You could use Kompose https://kompose.io, just sayin :)
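Kompose usage is basically one command (assuming the compose file sits in the current directory):

```shell
# Convert a compose file into Kubernetes manifests,
# one Deployment/Service pair per compose service:
kompose convert -f docker-compose.yml

# Then apply the generated manifests:
kubectl apply -f .
```

Whether the generated manifests are good enough for something as large as the netbox compose file is another question, but it beats translating by hand.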


With 8.7k stars, plus the last commit was made 4 days ago and the last release was 3 weeks ago.

That's a pretty strong option!


Yeah, it is a pity people think this way. If you create something that just works and doesn't need bugfixes for a year, then you are punished! Probably those people should add an empty commit ("nothing to do!") and cut a release ("release notes: nothing to add or remove").


I partially agree with you.

Docker Compose is still a moving project. Look at the version of the schema. It evolves.

Kubernetes is also a platform that is moving. And it is moving fast.

If Kompose was considered `done` by its authors in 2020, I don't think I would even bother trying it out. It probably wouldn't be able to parse recent Docker Compose schemas and it would probably output outdated Kubernetes manifests.

But, see, yesterday I was looking for a Time Boxing application and stumbled on that one: https://github.com/khrykin/StrategrDesktop/. I dismissed it because the last release was in 2020. But that is an example of an app that could be considered `done` and still useful 3 years later.

Hence why I partially agree with you. :)


StrategrDesktop still has three unticked boxes for platform support, though :-)


okay, but that only allows you to deploy to a docker engine.

at best, you can deploy that app to a Docker Swarm. Docker Swarm is not reasonably comparable to K8s.

there are tools (and colleagues ! :) ) to transform docker compose files to K8s manifests.


Too bad; podman is technically not bad, but IBM/Red Hat made it too (voluntarily?) dependent on the Red Hat ecosystem.


Which parts? I know there are various "support/tool/components" that go into Podman, some can be swapped out such as the container runtime, some can't so much (afaik) such as the build tool or container network creator, but those don't really seem part of the RH ecosystem besides maybe RH employing some of the devs - which is maybe enough for it to qualify I guess?


How?


Was interested to see whether Podman is shiny new technology or a likely replacement for docker.


New? It was released ~2018. Docker was released ~2013.


Far from “shiny new”, but at this time the pace of development is such that there doesn’t exist a stable version that accepts bugfixes only, make of it what you will.

As a development tool it’s awesome, and having no centralized daemon is sure a boon.


It does have some quirks, which is why I keep using docker for containers on servers.

I do use podman daily in the form of distrobox since both the steam deck and my desktop are immutable systems.


I really wanted to like immutable OS and Distrobox but I’ve run into so many issues and there are just some things that are significantly harder to do inside a container. Not to mention the subtle differences between podman and docker and it’s just too much tinkering for me when I’m trying to actively work on a project.

As much as it pains me to say this, I think nixos might be my next path.


Podman is a Docker implementation without the architectural bugs.


and twice the networking bugs


It has this awkward situation where CNI is old and boring and missing just a couple of things, while Netavark/Aardvark is the new shiny Wayland of container networking but can't handle half of the use cases CNI used to handle just fine.

I wanted to have a separate network on a bridge, visible to the host, where IP addresses and DNS would be managed by a dedicated DHCP/DNS service (like what dnsmasq can do). Well imagine, unless you jump through a whole lot of hoops and use macvlan and whatnot, netavark just plain cannot do it.

I heard one could make a netavark plugin quite easily, but interfaces for DNS and IPAM are missing from the puzzle.


Waiting for podman to support ports < 1024
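In the meantime, the usual rootless workaround is to lower the kernel's privileged-port threshold (note this is a host-wide setting, so it loosens it for every unprivileged process, not just podman):

```shell
# Allow unprivileged processes (rootless podman included) to bind ports >= 80:
sudo sysctl net.ipv4.ip_unprivileged_port_start=80

# Persist it across reboots:
echo 'net.ipv4.ip_unprivileged_port_start=80' \
  | sudo tee /etc/sysctl.d/80-rootless-ports.conf
```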



