Podman: A Daemonless Container Engine (podman.io)
321 points by lobo_tuerto on Feb 11, 2021 | 241 comments



I've been trying to use podman in production and it is not working very well. I'm excited for this technology but it is not ready.

- podman layer caching during builds does not work. When we switched back to docker-ce, builds went from 45 minutes to 3 minutes with no changes to our Dockerfile

- fuse-overlayfs sits at 100% CPU all day, every day on our production servers

- podman loses track of containers. Sometimes they are running but don't show in podman ps

- sometimes podman just gets stuck and can't prune or stop a container

(edit: formatting)


I could very well be wrong but Podman seems to have missed its window of opportunity. It was always a knife fight between Red Hat and Docker with regard to tooling, with Red Hat wanting to own the toolchain for containers so they didn't have to deal with (and could box out) competitors like the now basically defunct Docker Enterprise.

I've taken a look at podman from time to time over the years but it seems like it's just never been formalized, never been polished, and has almost always been sub-par in execution. Of the issues on this list, the builds and container control are things that I've run across. I guess - what's the point? The rootless argument leaned on so heavily is pretty much gone, the quality of Podman hasn't (seemingly) improved, and now IBM owns Red Hat (subjective, but a viable concern/consideration given what's recently happened with CentOS).

You're more than safe leveraging Docker and buildkit (when and where needed). Quite honestly, given the relatively poor execution of Red Hat with these tools over the years, I don't see the point. I'm sure there are some niche use cases for Podman/Buildah, but overall it just seems to come up as an agenda more than an exponentially better product at this point. Red Hat could have made things better, instead they just created a distraction and worked against the broader effort in the container ecosystem.


> Red Hat wanting to own the toolchain for containers so they didn't have to deal with

This is a misrepresentation. The situation was that Docker didn't take patches; some were very specific changes for systemd and the lack of unionfs, etc., but over time this applied to most patches from RH-associated people.


I'm not sure that's entirely true. If you look at the history of container tooling, all of the FUD Dan Walsh used to spread, and Red Hat pulling support for Docker packages on RHEL, I believe what was stated is fairly accurate.

Edit: You also may have a different perspective given you work for Red Hat / IBM.


Not to mention podman's historic[1] lack of compose support (or similar), which is frequently used in CI and local dev environments. I think it has really held it back for a lot of folks.

[1]: Apparently initial support is planned for version 3.0?


I don't know if I'm misunderstanding, but if by 'compose support' you mean docker-compose, then podman-compose has existed for a lot longer than v3.0 (which got released yesterday). First commit was nearly two years ago.

Sadly it doesn't feel as polished as docker-compose; I can only assume it's due to the lack of an API. Specifically, podman-compose just translates all the docker-compose.yml directives into podman cli commands, which doesn't seem to be handled gracefully.
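As a rough sketch of what that translation boils down to (not the exact commands podman-compose emits, and "myapp"/"web" are just example names), a one-service compose file ends up as a pod plus a run:

    # roughly what a single "web" service becomes:
    podman pod create --name myapp -p 8080:80
    podman run -d --pod myapp --name myapp_web_1 \
        -v "$PWD/html:/usr/share/nginx/html:z" docker.io/library/nginx

Since every directive becomes a separate CLI invocation, there's no long-lived state to reconcile, which is probably where the rough edges come from.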


I've been running multiple services through podman on a pi without any issues. Certainly not the same as production though.

Layer caching seems to work for me. Note that rootless podman stores images (and everything else) in a different place from rootful. It may be that you're caching the wrong directory.
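For example (these are the usual default locations; worth double-checking on your machine):

    # rootful storage:
    sudo podman info | grep -i graphroot
    #   graphRoot: /var/lib/containers/storage
    # rootless storage is per-user:
    podman info | grep -i graphroot
    #   graphRoot: /home/<you>/.local/share/containers/storage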


I'm surprised it's using fuse-overlayfs instead of the overlay2 driver. Is it that you're using rootless containers in production? For rootful containers I think you'd have the same experience as docker here (or at least much closer).


Reading through the comments, though, it looks like I should try making a BTRFS mount for the container storage, and that might help with the fuse-overlayfs problems.


I know it tends to be a controversial technology around here, but I've had great luck with OpenZFS for container storage.


It feels weird that a containerisation technology imposes requirements on the underlying storage technology to work well, tho.

Now I understand that btrfs is not really a requirement.

But I've been thinking about using podman to replace docker-in-docker on our fleet of gitlab runners where we build our images, and running btrfs is a deal breaker. We really don't want to add complexity to the mix. Fuse-overlayfs burning through a whole CPU is not a good look either. Mh.

I still have to properly dig deep into this, but I'm keeping my hopes high.


>It feels weird that a containerisation technology imposes requirements on the underlying storage technology to work well, tho.

Why? Storage is literally the backbone of technology. From databases to webservers there are unique and real requirements. There's a reason why the enterprise storage market is still worth several billion dollars, and it's not because storage is easy.

In 2021, with billions of dollars of R&D at their disposal, neither Amazon, Google, nor Microsoft has even a mediocre NAS stack in comparison to the likes of Dell/EMC (Isilon) or NetApp (ONTAP). Heck, they can't even compete with startups like Qumulo or Vast.


Are you sure Amazon, Google and Microsoft don't have this competitive stack because they don't want to enter that market, or because they are in that market solely for themselves? Storage may be a multi-billion dollar market - but it's also razor thin margin compared to where those brands focus their R&D spend. I'd argue those you've named have no interest in selling storage to the enterprise in the form of legacy boxes. They make far more revenue leasing storage to the enterprise. Cloud storage is also recurring revenue, whereas enterprise storage is considered perpetual. The latter is generally frowned upon from the Street's perspective these days (good, bad or otherwise).


>Are you sure Amazon, Google and Microsoft don't have this competitive stack because they don't want to enter that market or are in that market solely for themselves

They have "competition" in the market, it's just horrible. Because storage is hard.

>Storage may be a multi-billion dollar market - but it's also razor thin margin compared to where those brands focus their R&D spend.

I'm going to let you go back and review the earnings reports from the aforementioned companies. Razor thin margins? You wouldn't survive in the market with razor thin margins.

> I'd argue those you've named don't have interest selling storage to the enterprise in the form of legacy boxes.

I'd argue Azure Stack, AWS Outpost, and Google Anthos readily prove you wrong.

> Cloud storage is also recurring revenue where enterprise storage is considered perpetual.

Just... no. Enterprise storage has a shelf life of 3-5 years before maintenance or technology make it obsolete.

>The latter is generally frowned upon from the Street's perspective these days (good, bad or otherwise).

https://www.google.com/finance/quote/NTAP:NASDAQ

5 years ago they were at $20/share, today they're at $69/share. Reality doesn't appear to match your claim as to the street's opinion of enterprise storage.


First, if you review the financials of the storage companies compared to the earnings of the cloud companies, they're abysmal.

Yes, razor thin margins - comparatively. The cost of buying physical storage is commoditized. I've worked in technology from the pre-sale engineering side for a number of years. Margins on hardware are fractional compared to software and subscription offerings.

Stack, Outpost and Anthos are not revenue drivers today. They're vendor lock-in tools.

3-5 years is a horrible argument in the position you're attempting to make considering it's an eternity in technology. Consider that you sell the storage once in that 3-5 years. Your cloud provider bills you monthly and you don't own it. It's recurring revenue vs a singular sales event.

NTAP over 5 years is a decent return. 215% over that timeframe. Amazon returns 543% in the same timeframe. And let's use a hot space that's 100% subscription based, like Crowdstrike: 265% in less than a year. Software and subscription margins crush the legacy perpetual hardware model.

Storage is an old, stable, and commoditized market. There's no huge growth (any storage earnings report shows that very clearly). And while the demand for storage continues to increase, the NetApps of the world aren't capturing where that growth is.


>First, if you review the financials of the storage companies compared to earnings of all cloud companies they're abysmal.

I'm sorry but you're just flat out wrong and didn't bother to look up margins like I suggested you do.

NTAP 2020 margins: 66% https://www.macrotrends.net/stocks/charts/NTAP/netapp/gross-...

AMZN 2020 margins: ~41% https://www.macrotrends.net/stocks/charts/AMZN/amazon/profit-...

An all time high... and not even close.

>Storage is an old, stable, and commoditized market.

Which is why... per my original post... the cloud providers struggle to provide basic features in 2021 that have been available to enterprise storage customers for 20+ years.


Your chart takes all of Amazon into account, not just AWS. You'll have to actually dig into earnings to understand that.

Second thing is gross margin is not the same thing as net margin. Net margin is a far better indicator in the case of pure profitability / revenue. Just because a company reports $1.42B in revenue doesn't mean it has $1.42B in profits. If you look at NetApp's net profit margin, it's off 45% year over year. The latest net profit margin is 9.68%. So on that $1.42B we have $137M in profit. Not too hot.

The reality is hardware sales have thin margin (comparatively) and NetApp's software model is much weaker than comparative storage offerings in the cloud. It's very expensive to design, build, ship and house inventory of hardware for purchase. There's no way around this, which chews through that profit margin.


>Your chart takes all of Amazon into account, not just AWS. You'll have to actually dig into earnings to understand that.

Yes, I'm aware Amazon continually buries their AWS profitability. It's 57% of the overall margin, and it's not significantly better than NetApp's.

>Second thing is gross margin is not the same thing as net margin. Net margin is a far better indicator in the case of pure profitability / revenue.

Net margin is a dial they can and do turn by shifting money in and out of R&D and sales staff. Furthermore you can't simultaneously say that for Amazon we only take into account AWS and not the rest of the business... which is a part of the engine that drives the AWS business. That's no different than saying we shouldn't account for hardware in NetApp's number because they also sell software.

>It's very expensive to design, build, ship and house inventory of hardware for purchase. There's no way around this, which chews through that profit margin.

What exactly do you think Amazon's datacenters are full of? Hardware they design, build, and ship inventory around the country for. They are working with the exact same ODMs that NetApp or any other storage vendor works with to design their custom hardware variations.


I'd wager heavily on AWS (i.e. Amazon) outperforming NetApp over the next decade. Why? Their operation model is far better positioned for profits. Honestly, if you consider storage vendors a significant growth market you and I have a very different opinion of what's driving profits in technology today.

All balance sheets can be and are manipulated to differing extents. The reality, though, is that net margin takes more into account than what you were proposing, which is a cherry-picked angle to make a point. Growth companies move profits to R&D. NetApp isn't highlighting R&D on earnings calls. Amazon highlights R&D very publicly multiple times per year. Low R&D costs and investment are indicators of a stagnant market.

Amazon's hardware model is not the same as NetApp's. You realize Amazon consumes almost 100% of their own hardware offerings for AWS, right? They have a much more palatable JIT model. They can build as they need. NetApp has to house dead weight inventory for potential sales so they can recognize revenue, especially for end of quarter / end of year sales pushes. Amazon doesn't have this problem.

Also NetApp has to package their product for customer distribution. You may ignorantly shrug this off but it's an additional cost that adds up quickly. NetApp also has to maintain documentation and support staff for customer facing hardware related issues. Amazon has a much lighter requirement because they have specialists within their DCs.

It's not even remotely comparable. Amazon isn't working with "the exact same ODMs". Many ODMs NetApp is forced to work with, Amazon doesn't need. Take a look at what Amazon is doing internally and it's clear their stack continues to evolve towards in-house designs and builds. NetApp is far more reliant on external support than Amazon is, given their positions in the market, and Amazon has far greater control over the hardware stack from top to bottom. Maybe peruse the career openings at AWS vs NetApp for some insights.


> Which is why... per my original post... the cloud providers struggle to provide basic features in 2021 that have been available to enterprise storage customers for 20+ years.

Netapp is love, Netapp is life.

I have fond memories of cloning a volume, from a snapshot, in seconds. On EBS it takes minutes.


Have you tried Kaniko? That's how I got rid of DinD in my Gitlab Runners.


Could you maybe provide a small example of the relevant part of the gitlab CI definition? I assume you run kaniko inside a Docker container on the runner?


What exactly are you excited about?

It's just a minor difference to me, except it doesn't work well.

Not sure what the laziness from RedHat on podman development was about.


What are the Github issue IDs for these 4 problems?


Where are the four unit tests in podman's codebase that should be catching these issues?

I find it very strange that your reaction is, "the software is infallible, it is _YOU_ who have failed the software by not logging bugs!" It is perfectly reasonable to have a conversation about the quality or issues with a piece of software SEPARATE TO the logging or finding of bugs in said software.


What are you trying to say here? Your comment feels a bit passive-aggressive.


> What are you trying to say here?

I want to look at the problems. But I don't want to play a guessing game with the 159 open issues over which ones may or may not relate to these points.


I agree with the sentiment that I should go find them. But I also agree with the sentiment that if RHEL 8 is going to make it hard to install docker and promote podman instead as a drop-in replacement, it's a bit frustrating.


> I agree with the sentiment that I should go find them.

I can sympathize with feeling overwhelmed when searching for similar Issues in a bug database, as I was in a similar situation in the past.

What happened was this: I went to the central starting point of the repo's database (e.g., https://github.com/containers/podman/issues), removed the filters like `is:open` from the search bar, and entered one or two keywords that sounded similar to the problem I faced.

As I didn't find any Issues that seemed similar, I just created a new one. Of course, I was worried about creating a possible duplicate, but I figured that it would still be a valuable contribution by adding more keywords (with the maintainers linking my duplicate to the ticket where the problem gets resolved) for others to search for. And last but not least, I thought this would also add additional debugging information for the developers, as well as an indication of the community-wide impact of the problem for the release management team.

In the end, I was quite happy I went this way, because I didn't just make a small impact on a project that would be useful for me, but it also enabled me to connect to the amazing people that build the tools I use on a daily basis.

I hope that you can also find a way that gives you some happiness in dealing with your situation, yobert.


One very interesting piece of tech coming from it, is toolbox (https://github.com/containers/toolbox). Basically throwaway (or keeparound) rootless containers with their own root directory but shared HOME. Install hundreds of dev-dependencies to build this one piece of software? Yeah, not gonna install those packages permanently. Spin up a toolbox, build it, install it in my home/.local.

You have root in the container without having root in the host system. That takes care of a lot of issues as well.
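A minimal session looks something like this (package names are just an example; exact flags depend on the toolbox version):

    toolbox create              # rootless container that shares your $HOME
    toolbox enter               # shell inside; you have root here without root on the host
    sudo dnf install -y gcc make ninja-build   # heavy build deps stay inside the toolbox
    # build the project, `make install` into ~/.local, then leave and clean up
    exit
    toolbox list
    toolbox rm <container-name>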


I usually share a volume between containers (e.g. a volume for wp-cli cache, another for -g npm_modules).

What benefits would toolbox add?


A lot of software expects a usable home directory. There was another type of containerization (syos) designed for HPC use cases (think deploying 10k nodes doing one thing many times in parallel, divvying up and pulling a shared dataset on an academic or industry (pharma) cluster with a high-performance distributed filesystem underneath) that did this; however, syos is not appropriate for most webscale use.


You could, for example, host the whole dev environment for a project in a container and still develop on the code in home.


Ah, I see; put the whole dev toolchain in the container.

I use bindfs to mount the volume. I have a $HOME/Dev folder with WPProjectA and WPProjectB folders. Each has a volume subfolder mounted like this (the script has more variables but that's the gist of it):

    /usr/bin/bindfs \
        --force-user=johnchristopher \
        --force-group=johnchristopher \
        --create-for-user=www-data \
        --create-for-group=www-data \
        /var/lib/docker/volumes/WPProjectA-web/_data \
        $HOME/Dev/WPProjectA/volume
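    # (the flags above make the volume's files show up as my user, while files
    #  created through the mount are owned by www-data on the underlying volume)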
This setup allows using VSCode+xdebug and editing the code in the mounted volume while running the container and remote debugging.


>Basically throwaway (or keeparound) rootless containers

Toolbox emphasises "keeparound" containers since it's intended to be the primary command-line environment for image-based systems like Silverblue or CoreOS. Such systems try to keep a small, atomically updated rootfs and push users to install everything in containers.


Under the covers, both podman AND docker use runc. Redhat is writing a new version named "crun" which is lower overhead and faster:

https://github.com/containers/crun
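If you want to try it, podman can be pointed at crun per invocation or via its config (a sketch; the config location and keys may vary by version):

    # one-off: use crun instead of the default runtime
    podman --runtime /usr/bin/crun run --rm docker.io/library/alpine echo hi
    # or persistently, in containers.conf:
    #   [engine]
    #   runtime = "crun"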


Great idea - let's rewrite the most security-critical piece of container tech in plain C!

Red Hat literally took a memory safe Go program and rewrote it in C for performance, in 2020.


Yeah. I just don’t think “runc” has ever been a bottleneck in any systems I’ve worked with.

It’s been years since I’ve been able to get excited about rewriting systems from a memory safe language into a memory unsafe language for “performance reasons”. As an industry, we have too much evidence that even the best humans make mistakes. Quality C codebases still consistently run into CVEs that a memory safe language would have prevented.

Rust exists, if performance is so critical here and if Go were somehow at fault... but it sounds like most of the performance difference is due to a different architecture, so rewriting runc in Go probably could have had the same outcome.

I wish I could be excited about crun... I’m sure the author has put a lot of effort into it. Instead, I’m more excited about things like Firecracker or gVisor that enable people to further isolate containers, even though such a thing naturally comes with some performance impact.


“Any of the systems you’ve worked with”

I suspect it has been a bottleneck in some of the systems redhat has worked with, or they would have no benefit in writing it. As the authors of openshift I’m sure they’ve seen some pretty crazy use cases. Just because it isn’t your use case doesn’t make it invalid.


> I wish I could be excited about crun... I’m sure the author has put a lot of effort into it. Instead, I’m more excited about things like Firecracker that attempt to enable people to further isolate containers, even though such a thing naturally comes with some performance impact.

gVisor is also very exciting due to how it handles memory allocation and scheduling. Syscalls and I/O are more expensive, though.


Heh, I actually edited gVisor in right before your reply.

I agree completely!


I did some hacking on crun this week to fix a bug, so maybe I can offer a perspective about why it’s written in C.

A huge part of a container runtime is pretty much just issuing Linux syscalls - and often the new ones too, that don’t have any glibc bindings yet and need to be called by number. Plus, very careful control of threading is needed, and it’s not just a case of calling runtime.LockOSThread() in go - some of these namespace related syscalls can only be called if there is a single thread in the process.

It’s _possible_ to do this in go, of course; runc exists after all. But it’s certainly very convenient to be able to flip back and forth between kernel source and the crun code base, to use exactly the actual structs that are defined in the kernel headers, and to generally be using the lingua franca of interfacing with the Linux kernel. It lowers cognitive overhead (eg what is the runtime going to do here?) and lets you focus on making exactly the calls to the kernel that you want.


Go is actually a really poor choice for the container runtime because much of the container setup cannot be done from multithreaded code[0], so it has to be done in C before the go runtime initializes. I do think rust is a better choice for this layer than C because there are still security risks, but getting rid of Go for this layer is a win. I'm not sure why RH chose to rewrite it in C rather than using rust[1].

[0]: https://www.weave.works/blog/linux-namespaces-and-go-don-t-m... [1]: https://github.com/drahnr/railcar


> Go is actually a really poor choice for the container runtime because much of the container setup cannot be done from multithreaded code[0]

This was addressed in 2017/2018 [0], it's no longer a poor choice.

[0]: https://github.com/golang/go/commit/2595fe7fb6f272f9204ca3ef...


While the particular issue of network namespaces and locking the OS thread was fixed, there is still C code that must run before the Go runtime starts, to work around the issue that you cannot do some of the necessary nsenter calls once you have started additional threads. The C code to make runc work is encapsulated in libcontainer/nsenter[0]

[0]: https://github.com/opencontainers/runc/tree/master/libcontai...


this seems like a pretty trivial amount of C, by comparison, and a pretty solved problem now.


This I entirely agree with you on, but I'd expect it's because the resources available at the time were more fluent with C than Rust. A shame really.


Or they want to be more platform-agnostic than rust and llvm.


Which platforms does Red Hat support Linux containers on that Rust doesn't support? Which platforms is anyone else frequently using Linux containers on that Rust doesn't support? I'm pretty sure the venn diagram here would be a single circle. Rust supports the major architectures.

Portability seems like an unlikely reason here, but I would pick memory safety over portability every single day of the week. Rust is far from the only language that offers memory safety, but it is certainly one of the most compelling ones for projects that would otherwise be written in C.

"runc" is battle tested, written almost entirely in Go, which offers memory safety, and the performance delta under discussion is only about 2x for a part of the system that seemingly only has a measurable performance impact if you're starting containers as quickly as possible.

This just isn't a very compelling use case for C.


Who knows? Who cares?

If I worked for RedHat I would still seek to write as portable as possible just on general principle or for the sake of my own future self.

But I agree it's merely a possible reason and may not be a major one.


It is almost 50% faster for launching containers. That’s considerable, and at scale could be an actual cost savings. If nothing else, efficiency is good.

Also, small c projects can be written properly with discipline. This is a very talented team on a very small and well scoped project. It can be written properly.


> This is a very talented team on a very small and well scoped project. It can be written properly.

Talent does not preclude you from making mistakes

Most projects start out small and well scoped. That doesn’t mean it’s going to last.


There's also the copious amount of static analysis and warnings built into gcc and clang these days, not to mention safer build options. Writing C today is quite a bit better than even 10 years ago.


Better, yes. Safe, no. Even with ASAN and modern C++, vulnerabilities happen all the time.


This is C, not C++. Modern C++ is kind of a mess with how much stuff they’ve added to the language. This is not relevant.


> Modern C++ is kind of a mess with how much stuff they’ve added to the language. This is not relevant.

Yeah, and C++ would be a way better language than C to write a critical system daemon in, in 2020. Both safer and more productive while keeping the exact same portability and performance as C when necessary.

Most safety issues of C (buffer overflows, use-after-free, stack smash) are not a problem anymore in modern C++.

Yes, writing new userland software in C in 2020(1) is nonsense.

- Use at least C++ if you are conservative.

- Use Rust if you aim for absolute safety.

- Use Go if you accept the performance hit and do not need libraries.

There is zero excuse for C in userland in 2020.

The only excuse is that some Red Hat folks seem to practice C++ hating as some kind of religion.

That's exactly what gives us "beautiful" monstrosities like systemd or pulseaudio with their associated shopping list of CVEs [^1]

Even GCC maintainers switched to C++; for the sake of god, do the same.

----

[^1]: https://www.cvedetails.com/product/38088/Freedesktop-Systemd...


> There is zero excuses to C in userland in 2020.

I wouldn't go quite that far. I'm generally in the C++ camp (rather than C, that is) but there are significant advantages to a compact and stable language, and real disadvantages to a sprawling disaster like C++ that grows still more monstrously complex every few years.

This topic turned up a year ago: https://news.ycombinator.com/item?id=21946060

Your point stands though: well written modern C++ should be much less prone to memory-safety issues than well written modern C.


The Linux kernel still has memory safety issues to this day.

The C++ approach is to offer a feature-rich language so the programmer doesn't have to reinvent common abstractions. The C approach is to offer a minimal and stable language and let the programmer take it from there. It's not obvious a priori which approach should result in fewer memory safety issues. If I had to guess my money would be on C++ being the better choice, as it has smart pointers.


Saying C is easier to secure than Modern C++ is just confusing. The point of a lot of recent C++ features is to make memory safety easier to achieve. I personally don't think it gets anywhere close to safe enough for me to recommend, when Rust is such a competent option that people can choose instead. I would still rather people have the tools of C++ at their disposal instead of having to do everything themselves in C.

Please find me a well-known C code base that doesn't suffer from memory safety CVEs. You're the one making the claim that they exist... I can't prove that such a thing doesn't exist, but I can point to how curl[0][1], sqlite[2][3], the linux kernel[4][5], and any other popular, respected C code base that I can think of suffers from numerous memory safety issues that memory safe languages are built to prevent.

Writing secure software is hard enough without picking a memory unsafe language as the foundation of that software.

"Just find/be a better programmer" isn't the solution. We've tried that for decades with little success. The continued prevalence of memory safety vulnerability in these C code bases shows that existing static analysis tools are insufficient for handling the many ways that things can go wrong in C.

I think you mentioned railcar elsewhere in this thread, and I would find it easier to be interested in a runc alternative that is written in a memory safe language... which railcar is.

[0]: https://curl.se/docs/CVE-2019-5482.html

[1]: https://curl.se/docs/CVE-2019-3823.html

(among numerous others)

[2]: https://www.darkreading.com/attacks-breaches/researchers-sho...

[3]: http://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=SQLite

(the second link has quite a few memory safety issues that are readily apparent, including several from SQLite that affected Chrome)

[4]: https://www.cvedetails.com/cve/CVE-2019-11599/

[5]: https://www.cvedetails.com/cve/CVE-2018-15471/


And yet NASA uses C, not Rust.


I don't think you want to develop your software project the way NASA develops its critical software projects.


Indeed. Pretending it isn’t is equally silly.


External static analyzers also improved a lot.

You can ensure memory safety in C by enforcing strict rules from the first commit.


Can you share any good examples of C codebases that have enforced these rules and successfully avoided having memory safety problems? Can you share any details on what rules a team could use to reliably avoid having memory safety problems when writing C?


The most security-critical part of container tech lives in Linux kernel and has always been in plain C.



Is what appears to be less than 2,000 lines including detailed comments etc. "quite a lot"?

This seems like a manageable amount to very carefully maintain and not at all like writing something entirely in C.


To be fair, much of runc is written in C as well. And go is really terrible specifically for the case of runc.


Most security software is written in C: WireGuard, OpenSSH, OpenSSL, OpenVPN, all the software from the OpenBSD folks. Crun is used without root privilege in most cases, so it's not a huge problem if the runtime is ever escaped.


Contrary to popular belief, memory safe C is possible to write, and with modern linters it’s even easier.

Yes it’s easier to shoot yourself in the foot, but with practice you can learn to miss every time.


It probably isn't worth trying.

djb wrote remotely exploitable C for qmail, applications written by the OpenBSD team have had exploitable memory issues, Microsoft still doesn't get it right, neither do Linux kernel devs.


You say that as if the likes of Microsoft or Linux kernel devs consist exclusively of elite C devs.

I'm very glad languages like Go and Rust exist, but saying you shouldn't write C because you might create memory leaks is kind of like saying you shouldn't write multi-threaded code because you might create race conditions. Yeah, it adds complexity to your code, but it's sometimes worth the overhead it saves. Whether that trade-off is worth it is always up for debate.


Oh, I agree it is a tradeoff. But the parent said "you can learn to miss every time". Can anyone point me to a C language project where the developers consistently miss every time?


It’s possible with rules, experience and guidance.

Granted, I learned to write C 25+ years ago and have worked as a C programmer for 15 of them, writing mostly embedded software for mobile phones (pre-smartphone) and airport sorters, but I have also written financial software (mostly for parsing NASDAQ feeds). The point is that most of the software I've written has had close to a decade of "runtime", and while I started out making the same mistakes as everybody else, you learn to put ranges on your heap pointers, like strncpy instead of just blindly doing strcpy.

Checking the size of your input vs your allocated memory takes care of a lot of it. As for memory leaks, it’s not exactly hard to free a pointer for every time you malloc one.

People are terrified of C, and yes, Go, Rust, Java, C# makes it harder to make _those_ mistakes, but that doesn’t mean it’s impossible to write good C code.

And it’s not like projects written in Go or Rust are error free. They just struggle with different errors.

As for good C projects, check stuff like postfix, nginx, and yes Linux or FreeBSD.


Are you saying that every project written in C has suffered from a memory leak issue at some point? Most C/C++ code I've personally written doesn't even use the heap.

I work on avionics and we use C/C++ now and then. We have a ton of rules regarding memory management (pretty much everything stays on the stack) and I can't recall anything I've ever been involved with suffering from a memory leak.


If basically everything stays on the stack, you’ll have a much lower chance of seeing a memory leak, by definition.


Exactly. Using C/C++ doesn't have to mean using the heap.


Using the heap has nothing to do with the code being safe or unsafe.


All I'm saying is that it's a trivial oversimplification to say you can learn to miss every time.


Don't need a leak for C code to be vulnerable, in fact not using the heap helps - just one missing bounds check on user input and you can write to the stack. OTOH, W^X should catch those.


It’s not just the developers. The code gets reviewed and merged as well. I don’t know which projects but those are the highest quality C code based.

Writing in C doesn’t create complexity but lots of traps to fall in. Writing in C doesn’t save overhead over Rust. Your rust code can be very low level and safe and C isn’t that close to hardware as it used to be anyway.


It’s not for performance. It’s to settle an old grudge with Docker. All code coming from Docker must be erased.

See also: podman; cri-o; buildah.


Grudge? You mean when Jessie Frazelle walked around dockercon with a badge saying, “I will not merge patches for systemd”, just out of principle? In hindsight it is fairly obvious picking a fight at redhat summit was a very poor long term strategy from docker. It forced them to ultimately sell the orchestration parts of their business and massively limit the free tier of docker hub.


Thank you for illustrating my point perfectly.

“Grudge?” (proceeds with details of grudge).


What was described could be the basis of a grudge, or it could be that Red Hat were convinced that the current project would not accept changes Red Hat would benefit by even if it helped many other people. At that point, and at the size of Red Hat and how important this technology is becoming for them, it only makes sense for them to secure some of the technological foundations they rely on. It costs them very little in resources in relation to the business they are basing on that technology.

I don't think it's a stretch to look at the events as described and see them as Red Hat doing what they need to work around someone else's grudge. It could be any number of things, but I don't think the information presented so far lends itself towards Red Hat having a grudge.


runc, containerd and the docker client are critical infrastructure to a whole lot of people, and they seem to be doing just fine running it and, when needed, contributing back. Only Red Hat seems to have a problem with using those projects without forking or re-writing them. Could it be because Red Hat considers it of strategic importance for their business, not just for those projects to be reliable (they are), but to be controlled by them to a sufficient degree that they can act as gatekeepers to upstream, thus preserving their business model?


That could be, but if Frazelle did actually walk around with a button saying “I will not merge patches for systemd” I think it's simpler and safer to assume Red Hat decided that relying on Docker was risky given how important it was to their future products and the apparent animus on display from the Docker maintainer.

If I carpool with someone and they like to go around and loudly proclaim "I refuse to listen to anything kbenson has to say", maybe I find someone else to carpool with or drive myself? That only seems prudent. That doesn't mean I have a grudge against that person, it just means I don't want to trust them with getting me to and from work anymore.


There is a picture of her badge in this article. It is not something I was alleging. It is fact:

https://lwn.net/Articles/676831/


Sorry, it was poorly worded on my part. I didn't actually doubt that, but prefer not to state other people's assertions as fact unless I know it is so for myself. I prefer to have my assertions grounded by what I'm basing them on, which is less about doubt of the source and more about keeping myself honest when arguing and not taking too hard a stance on stuff I haven't verified myself, since it's easy to come across as authoritative when I don't intend to otherwise.

Thanks for the evidence though, it's always best to have, even if it can be a pain to drum up when it might not really be called into question anyways. :)


Sorry the way I replied came off as defensive when it wasn’t meant to be. Your approach is to assume the best and that is always a good one.


It’s no secret that there were tensions between Docker and Red Hat that culminated around 2016, when Red Hat’s desire to capitalize on the success of containers ran into Docker’s own ambitions to compete with Red Hat. Those tensions were eventually resolved (or at least diminished) by Red Hat shifting gears to Kubernetes as their “next Linux”.

The drama you’re talking about is from that period - 2015-16. But “crun” was launched in 2020, a full four years later. Frazelle left Docker years ago, as well as most of the people involved in those feuds. Runc, containerd and docker are chugging along drama-free, and are maintained by multi-vendor teams. There is no interest from anyone outside of Red Hat in forking or re-writing those tools. It’s just a tremendous waste of energy and in the end will be problematic for Red Hat, because they will be building their platform on less used and therefore less reliable code.

The world’s container infrastructure runs on containerd, runc and docker. Not crio, crun and podman. Someone at Red Hat clearly is having trouble accepting that reality, in my opinion because they’re having trouble moving on from old grudges. I really wish they did, because all of that wasted engineering effort could be deployed to solve more pressing problems.


> It’s just a tremendous waste of energy

> The world’s container infrastructure runs on containerd, runc and docker

I don't consider alternate implementations of widely used software to be wasted energy, almost ever. Was Clang a waste of energy because everyone was using GCC? There are many reasons why multiple implementations may be useful; competition and being able to cater to slightly different common use cases are obvious ones.

I'm not sure why any end user would wish for there not to be multiple open source projects looking to satisfy a similar technical need.

> in the end will be problematic for Red Hat

That may be, but I don't really care about whether it's good for Red Hat. I care that it increased user choice and maybe at some point features and capabilities offered to users (whether through podman or pressure on docker).


That’s fair, there’s always a benefit to alternate implementations. I think those were started for the wrong reasons (interpersonal conflict rather than technical requirements) but perhaps in the end it doesn’t matter.


I'll gladly use crio, crun and podman instead of the docker alternatives.


runc and containerd wouldn't exist if Redhat hadn't created the OCI specification and worked around docker to make things interoperable. Docker, Inc was not playing ball, and then redhat had to work around them. It wasn't a grudge; it was just them keeping their business safe. The Open Containers Initiative was essentially designed to force Docker Inc to try to play better with others. They donated containerd to the OCI due to the reality that docker was about to be left in the dust.

For reference, this was the shirt that Docker Inc was giving away at Redhat summit years ago. I was there. I saw the shirt, and thought it was in very poor form. https://twitter.com/SEJeff/status/1125871424126767104

As Rambo said, "But Colonel, they drew first blood."


Your history is wrong. Docker created the OCI, not Red Hat. Docker donated both the spec and the runc implementation.

Containerd was donated (also by Docker) to CNCF, a different organization, and I believe a few years later.

You are correct that Docker did those things because of pressure to be more interoperable.

Reading your linked tweet (“don’t mess with an engineering behemoth”) you seem to agree that Red Hat is rewriting away Docker code because of a past grudge?


Docker and Redhat were both founding members of OCI. CoreOS probably was responsible for a lot of the initial conversations. They wanted a standardized container runtime after finding some limitations in docker which caused them to create rkt. Docker inc wasn’t super into changing anything and as history as shown, wasn’t terribly into working well with others. The industry was going to move forward without them but they were offered a spot on the OCI if they played ball.

OCI would have made docker inc entirely irrelevant if they didn’t join it. They donated runc (I was wrong originally, dyslexia sucks sometimes) as the reference implementation so docker continued to stay at the forefront.

They donated a very minimal implementation of containerd to the CNCF when kubernetes wanted to do things with the container runtime interface (CRI) to make it more pluggable. As a more purpose built CRI, crio is better suited for k8s.

Docker did not create OCI any more than Redhat did. Redhat and a collection of other orgs did. Docker and Redhat were besties after Alexander Larsson added device mapper support to docker. This is what allowed docker to run on any Linux distro that didn’t include the out-of-tree aufs. Look it up. Redhat then made a lot of money doing container multihost orchestration with openshift v3, which deprecated their own tool, geard. When Docker inc accepted a ton of VC money, they realized they actually had to monetize things. This is the part where the relationship started to seriously sour. The company that helped make them so popular was now suddenly a huge competitor. This is what led to the problem.

It wasn’t a grudge from Redhat I mentioned in the linked tweet. It was docker inc thinking docker was anything other than commodity. It just is some very nice UX and cli tooling with a container registry around Linux namespace and Linux control groups. They just put it all together very nicely for developers to ship code faster. Podman is also a commodity. At its core, it just is some json parsing and pretty wrapping around Linux namespace and cgroups. Redhat knows this, and they don’t hide it. They monetize openshift and quay, not podman.

Sorry if this is a bit scrambled. Long posts on mobile are difficult.


I understand what you’re saying but it is incorrect. Docker was in fact the originator of OCI in every way that matters. They made the decision to create it; drafted the founding documents and went back and forth with Linux Foundation on the content; chose the names (initially OCF, then later OCI); negotiated with LF the governance structure and initial board composition; negotiated the list of external maintainers with commit access, including a Red Hat and CoreOS employee; chose the announcement date and venue; drafted the announcement content along with LF; coordinated with PR teams of other founding members. All this in addition to being the original authors of both the spec and implementation.

Neither Red Hat nor CoreOS were involved in any of those steps, nor were they even aware of them until the last minute. When they were invited it was on a “take it or leave it” basis. They took it, then tried to pre-empt the Docker/LF announcement by a few hours to make it look like they were launching it. Those were tense days and there was very little trust between those companies.

Source: I was involved in the process of creating OCI.


Well thanks for your input, and I stand corrected. I liked Solomon and he always seemed like a super nice guy when I met him, but it did seem that there was a bit of a crisis when Docker Inc realized, "Oh shit, we have to make money for these VCs!" and then it got really awkward. I was sad to see it happen as Dan Walsh and Solomon are both really good peeps to me.


Yeah the tensions were really unfortunate. In the long run it resulted in more competition which benefits all of us; but in the short term it made people’s lives more stressful than they needed to be.

I actually have a slightly different explanation of what set it off. I don’t buy the “docker is suddenly under pressure to make money” explanation. Docker was a VC funded startup since 2010. They already had a business when they pivoted to Docker. As far as I know they kept the same investors and board. So it seems unlikely that they suddenly remembered that they needed to make money. More likely they planned to make Docker as ubiquitous a platform as they could, at which point there are many known avenues to monetize. Also, the CEO they brought on board, Ben Golub, had just spent a year at Red Hat after selling his previous startup to them. So he was certainly familiar with Red Hat’s business and strategy, and probably on friendly terms with their leadership (this is speculation on my part).

And in the early days Red Hat and Docker were in fact partners. Docker engineers even implemented features specifically to please Red Hat; for example devicemapper support and storage drivers, which were developed by Docker for Red Hat.

The problem is that the partnership was built on a misunderstanding. Docker was hoping for a distribution deal between the two businesses, with revenue share etc. and intended to keep control of their open-source project. Whereas Red Hat was hoping to make Docker their “new Linux” which involved no partnership with Docker and instead taking or at least sharing control of the project as they did with Linux. Each side was slow to realize the true intentions of the other, whether by naivete or deception I don’t know. This still surprises me because both side’s intentions were rational and entirely predictable. I think a lot of wishful thinking, and possibly some individual incompetence in leadership were involved. For example Red Hat has never actually partnered with a smaller startup and shared revenue with them in the way Docker was hoping, so I’m not sure what made them so confident it would happen. In any case, when the fog lifted, bitterness and conflict quickly followed.

In the end Red Hat shifted gears to Kubernetes as their “new Linux” which was a much better match for them. Then proceeded to rewrite history to minimize Docker’s role; which is unfortunate but understandable, I guess. I know a lot of Docker employees are personally hurt by it, to this day. It doesn’t feel good to see your work swept under the rug for reasons of competition and ego.


Alexander Larsson did the lion's share of the work on device mapper, and here are 29 PRs to show it (including the very first with the devicemapper label):

https://github.com/moby/moby/pulls?page=2&q=is%3Apr+is%3Aclo...

In fact, Solomon's own words from https://github.com/moby/moby/pull/2609 "Integrate lvm/devicemapper implementation by @alexlarsson into a driver", so unless you're more knowledgeable about the history than Solomon, I'm going to have to discount your take on this entire thing.

Regardless, I think things are at a much better place now. It is unfortunate that there were tensions and more so that Solomon had such a tough time through it all. He's a super nice guy. But this is tech, and there will always be lots of personality conflicts. It is one of the precious few constants.


It is absolutely not correct that Larsson did the lion’s share. What he did was implement a Go wrapper for libdevmapper, which exposes a very low-level API. It is the Docker team that implemented devmapper-based container storage, as well as the whole storage plugin system which was now required to support more than one storage method. The original devmapper lib is utterly undocumented and Larsson’s wrapper did not fix that. So getting that feature to work was an all-consuming task and it is the Docker team that did the bulk of it.

You can see all this from the early history of the devmapper directory: https://github.com/alexlarsson/docker/commits/a14496ce891f1f...


So why does podman include code from docker?


Excellent question, especially since the code is copy-pasted without crediting the original authors. Could it be that they’re trying to minimize the role of Docker as much as possible even when re-using their code?


> Excellent question, especially since the code is copy-pasted without crediting the original authors.

That seems like a serious allegation. Can you back that up?


It’s right there in the repo history. Start from the first commits. I don’t know how serious of an allegation it is: copy-pasting open-source code without crediting it properly is rude, but not illegal.


No, that’s totally illegal. The Docker Engine is licensed under the Apache-2.0 terms, which require attribution.


It is actually against copyright law, and redhat legal would take any allegations very seriously. What in specific do you think is copy and pasted code?


Docker is properly attributed to, see https://github.com/containers/storage/blob/a4cc7aa79e050c976...

I think OP wanted to say that Podman hates Docker, which is not what I feel when I'm interacting with the community there. People who use Podman do it because of its additional features that Docker does not have, like starting a container from a rootfs or mounting the current directory in a container using "." as the path. Also, building containers using a shell script is supported - no need to learn the Dockerfile syntax. There are a lot of small things that make Podman better integrated into Linux systems: running a container via systemd is a lot more intuitive.
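A few of those in practice (rough examples; flag details may differ between versions, and "web" is just a placeholder container name):

    # run a container straight from an unpacked root filesystem, no image required
    podman run --rm --rootfs /path/to/extracted-rootfs /bin/sh
    # mount the current directory using "." as the source path
    podman run --rm -v .:/src -w /src docker.io/library/alpine ls /src
    # hand an existing container named "web" to systemd
    podman generate systemd --name web > ~/.config/systemd/user/container-web.service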


Adduser vs useradd is older than I am and I still can't keep them straight. Just put this with the rest of the fire, I guess, heh!


I think it is sort of a play on words. Since runc is written in go and is slower, crun is written in c and is faster. Stupid nerd joke I know.


Indeed. After 30 years of Linux, we should have got better at naming things.


Podman targets crun too:

> Runtime: We use the OCI runtime tools to generate OCI runtime configurations that can be used with any OCI-compliant runtime, like crun and runc.


Upside: At least they used a different name.

On other occasions, they just hijack the name. Like they did with dstat.


That was a misunderstanding where they saw that there had been no commit activity for 3 years, and no responses to issues/PRs filed in the past 18 months, and assumed the upstream was dead.

Making that assumption was reasonable - moving forwards without at least making an attempt to contact the maintainer "just in case" was not. It would have been courteous to at least try.

But it's not exactly the deliberate hostile takeover that you make it out to be.

https://bugzilla.redhat.com/show_bug.cgi?id=1614277#c9

>> To my knowledge, there is no need (legally) to obtain consent to use the name 'dstat' for a replacement command providing the same functionality. It might be a nice thing to do from a community perspective, however - if there was someone to discuss with upstream.

>> However, dstat is dead upstream. There have been no updates for years, no responses at all to any bug reports in the months I've been following the github repo now, and certainly no attempt to begin undertaking a python3 port.

>> Since there is nobody maintaining the original dstat code anymore, it seemed a futile exercise to me so I've not attempted to contact the original author. And as pcp-dstat is now well advanced beyond the original dstat - implementing features listed in dstat's roadmap for many years, and with multiple active contributors - I think moving on with the backward-compatible name symlink is the best we can do.


The ability to start containers as a normal user ("rootless mode") is quite interesting and imo the most significant feature missing from LXD, which requires you to either use sudo or be a part of the "lxc" group (equivalent to having sudo privileges).
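A quick way to see it (assuming your user has subuid/subgid ranges set up, which most distros do by default):

    # no daemon, no sudo, no special group membership
    podman run --rm docker.io/library/alpine id
    # reports uid=0(root) inside the container, but that "root" is mapped
    # back to your own unprivileged uid on the host via /etc/subuid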


You can afair configure LXD to disallow escalating privileges via group membership.


Wouldn't the lxc group have much more focused permissions than sudo?


You can mount the root fs in a container with uid mapping, and because the daemon is running as root you can pretty much do whatever you want.


Well, "daemonless" is kind of marketing - there is still this daemon-per-container 'conmon' thing https://github.com/containers/conmon and I don't get why it is needed because 1) who actually needs to re-attach anyway? 2) container's streams are already properly handled by whatever supervisor (e.g. systemd). You can't disable conmon and I'm not sure if its usage is not hardcoded throughout the codebase.

I would very much like to use Podman as a finally proper container launcher in production (non-FAANG scale - at which you maybe start to need k8s), but having an unnecessary daemon moving part in thousands of lines of C makes me frown so far.


It's great to see how far Podman (and its sister projects) have come. I think it's a reliable tool and I'm a happy user both personally and professionally.

We make heavy use of Podman in our infrastructure and it's mostly a pleasure. My current pet peeves are that:

1) Ansible's podman_container module is not as polished as docker_container. I regularly run into idempotency issues with it (so lots of needlessly restarted containers).

2) Gitlab's Docker executor doesn't support Podman and all our CI agents run on CentOS 8. I ended up writing a custom executor for it and it's working quite well though (we're probably not going back to the container executor even if it supported Podman, since the custom executor offers so much more flexibility).

3) GPU support is easier/more documented on Docker. For this reason, the GPU servers we have are all Ubuntu 20.04 + Docker since it's the more beaten path.

4) Podman-compose just needs more work. Luckily for us, it seems that Podman 3.x will support docker-compose natively [1].

As mentioned, our CI environment is very dependent on Podman. The first step of every Gitlab pipeline is to build the container image in which the rest of the jobs will run. I find that it's simpler to have a shell executor in an unprivileged, restricted environment (i.e. one that can only run `podman build`) than setting up dind just for building images. All jobs that follow are run in rootless containers, for that nice added layer of security.
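For reference, the build job in that setup can be as small as a few podman invocations run by the shell executor (a sketch; $CI_REGISTRY* and $CI_COMMIT_SHORT_SHA are standard Gitlab variables, the rest is whatever your restricted user is allowed to do):

    # script: section of the image-build job, run by the unprivileged shell executor
    podman login -u "$CI_REGISTRY_USER" -p "$CI_JOB_TOKEN" "$CI_REGISTRY"
    podman build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    podman push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"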

Wishing all the best to the Podman, Buildah and Skopeo teams.

[1]: https://www.redhat.com/sysadmin/podman-docker-compose


What's the status on ease of running on Mac? I know last time I seriously considered testing it out in my workflow, it was kind of crazy that I had to have a separate machine (VM or real) just to run container images...

I see in the docs [1]:

"Podman is a tool for running Linux containers. You can do this from a MacOS desktop as long as you have access to a linux box either running inside of a VM on the host, or available via the network. You need to install the remote client and then setup ssh connection information."

And yes, I know that Docker (and any other Linux container-based tool) also runs a VM—but it sets it up almost 100% transparently for me. I install docker, I run `docker run` and I have a container running.

[1] https://podman.io/getting-started/installation#macos


> I know last time I seriously considered testing it out in my workflow, it was kind of crazy that I had to have a separate machine (VM or real) just to run container images...

Why does that seem 'kind of crazy' to you? A container is really just a namespaced Linux process, so … you either need a running Linux kernel or a good-enough emulation thereof.

What seems kind of crazy to me is that so many folks who are deploying on Linux develop on macOS. That's not a dig against macOS, but one against the inevitable pains which arise when developing in one environment and deploying in another. Even though I much prefer a Linux environment, it would seem similarly crazy to develop on Linux and deploy on Windows or macOS. Heck, I think it is kind of crazy to develop on Ubuntu and deploy on Debian, and they are generally very very close!


The docker machine or whatever it’s called these days is pretty impressive but when we’ve tried to use it in developer workflows on mac it would eat all of the CPU allotted to it in order to marshal filesystem events over the host/guest VM boundary. We tried a number of workarounds but none solved the problem. I’m sure it’s possible, but we ultimately gave up and moved back to developing against native processes on MacOS.

For other use cases I imagine it’s just fine.


This is an issue I have battled as well. Docker is great for normalizing the execution environment, which should be a huge boon for developer tooling, but on MacOS having your laptop sound like a hovercraft just to have some files watched for hot rebuilds is no bueno.


It seems like they haven't gotten to this, don't understand the needs and the potential, or aren't doing this intentionally. My understanding is that Docker maintains what amounts to a wrapper VM with nice integration to enable the workflow. This doesn't exist for Podman as far as I've seen.

On Windows, the WSL2 feature gives you that easy wrapper in a different way. It sets up and manages the underlying VM, although you have to choose your distro etc. Once this is running after a simple setup, you are just using Linux from that point on. It's less specialized than how Docker on macOS seems to work.

If someone knows of something that follows the WSL2 approach without VirtualBox or VMware Fusion I'd be all ears. That would be more versatile than how Docker seems to work right now. Docker's business interests probably aligned well with getting a workflow running, so unless someone is motivated to do similar for Podman, you are going to be out of luck. At least recognizing this deficiency would be a start though.


Yeah without a transparent workflow like that for Mac, there is no reason for me to switch to this project.

I wish they had tried to collaborate with Docker and contribute upstream instead of this project.


> And yes, I know that Docker (and any other Linux container-based tool) also runs a VM—but it sets it up almost 100% transparently for me. I install docker, I run `docker run` and I have a container running.

That's exactly not true, considering you said Linux.

The utility of Linux containers is that they share the same OS instance, but have an embellished notion of a Unix process group: within a container (i.e. an embellished process group), processes see their own filesystem and their own numbering for OS resources, as if they were on individual VMs, but they're not.


> it was kind of crazy that I had to have a separate machine (VM or real) just to run container images...

Containers use Linux namespaces to decouple from the main OS. MacOS doesn't support those, so no matter what you do, you can't run them directly on MacOS. That's why you need a VM, and why you need WSL (the Windows Subsystem for Linux) on Windows.

BSD has jails, but they don't have the same functionality. In particular, on MacOS I sorely miss the network namespaces that Linux has.


> And yes, I know that Docker (and any other Linux container-based tool) also runs a VM—but it sets it up almost 100% transparently for me. I install docker, I run `docker run` and I have a container running.

Comments like this help reinforce the stereotype of lazy macos developers ;)


It doesn't matter what platform I'm on, I like tools that let me be lazy ;)


Anyone have experience with this? I love the idea of it being daemonless.


If you run it as non-root it is significantly slower than docker as root. Docker can use the overlay2 kernel driver, whereas podman would use fuse-overlayfs in userspace. This has a high CPU overhead (e.g. don't try to run AFL inside podman), and a 1024 FD limit for the entire container (so a 'make -j40' usually dies).

There are ways around it: raise the ulimit for your user and run a new enough podman to raise the limit for fuse-overlayfs, or use the 'vfs' driver (it has other perf issues). I heard (but haven't tested yet) that the 'btrfs' driver avoids all these problems and works from userspace. Obviously it requires an FS formatted as btrfs...
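For reference, a minimal sketch of those workarounds, assuming the rootless per-user config lives at ~/.config/containers/storage.conf and your distro honors /etc/security/limits.conf (paths, option names and the 'youruser' placeholder are illustrative and may differ by distro and podman version):

  # ~/.config/containers/storage.conf (rootless storage settings)
  [storage]
  driver = "vfs"           # sidesteps fuse-overlayfs, at the cost of speed/disk usage
  # driver = "overlay"     # the usual rootless default, backed by fuse-overlayfs

  # /etc/security/limits.conf -- raise the per-user FD limit (needs root once)
  youruser  soft  nofile  65536
  youruser  hard  nofile  65536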

There are also compatibility issues with Docker. E.g. one container was running sshfs inside Docker just fine, but fails with a permission error on /dev/fuse with podman.


You can run podman as root, but it doesn't default to it, for generally sensible security reasons.

Also, docker runs as root, so it won't have permissions problems. You can change the permissions of /dev/fuse if you want to allow podman containers to access it or update the group of the user launching podman.


If I understand correctly support for native rootless mounts is currently under development: https://github.com/containers/storage/pull/816 The functionality requires Linux kernel 5.11 (soon to be released)


That is correct. We're targeting that work for RHEL 8.5 in November-ish, so you'll likely see that drop in Fedora and other Linux distros sooner.


Are these fuse limitations or fuse-overlayfs limitations ?


I think FUSE limitations: fuse is served by a single userspace process, which is limited the same way as any other userspace process by ulimit. It is not a fundamental limitation of podman, just of podman's default rootless behaviour.


I tried it, and went back to Docker with BuildKit.

* The newer version of Docker+BuildKit supports bind mounts, caching, secrets, in-memory filesystems, etc.; we used it to significantly speed up builds (see the sketch at the end of this comment). You could probably find a way in buildah to achieve the same things, but it's not standard.

* Parallel builds - Docker+BuildKit builds Dockerfile stages in parallel; in podman things run serially. Between caching, parallel builds, and Docker being faster even for single-threaded builds, Docker builds ended up an order of magnitude faster than the Podman ones.

* Buggy caching - There were a lot of caching bugs (randomly rebuilding when nothing changed, and reusing cached layers when files have changed). These issues are supposedly fixed, but I've lost trust in the Dockerfile builds.

* Various bugs when used as a drop-in replacement for Docker.

* Recurring issues on Ubuntu. It seemed all the developers were on Fedora/RHEL, and there were recurring issues with the Ubuntu builds. Things might be better now.

* non-root containers require editing /etc/subuid, which you can't do unless you have root access.

More information about the new BuildKit features in Docker:

https://pythonspeed.com/articles/docker-buildkit/
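To make the cache-mount point concrete, here's a rough sketch of the feature described in the linked article (image, package manager and paths are purely illustrative, not our actual Dockerfile):

  # syntax=docker/dockerfile:1.2
  FROM python:3.9-slim
  COPY requirements.txt .
  # BuildKit persists /root/.cache/pip between builds, so dependencies
  # aren't re-downloaded on every rebuild
  RUN --mount=type=cache,target=/root/.cache/pip \
      pip install -r requirements.txt

Build it with `DOCKER_BUILDKIT=1 docker build .` (or enable BuildKit in the daemon config).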


Buildkit's caches are a killer feature. They go a long way towards making Docker builds usable for iterative development.

We also migrated back to Docker from Podman because of Buildkit and the general bugginess of Podman.


FWIW, I'm using Podman and I'm happy with it given that it's fast enough for my purposes and I haven't encountered any caching bugs.


One huge advantage over docker is that if you mount a directory into the container, and the container writes files there, on the host system those files are always owned by the user that started the container.

That makes it much more convenient for build environments, without having to hard-code user IDs both in the container and on the host.
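A quick way to see it with rootless podman (output abridged and illustrative; 'myuser' stands in for whoever ran the command):

  $ podman run --rm -v "$PWD:/work" -w /work docker.io/library/alpine touch artifact.bin
  $ ls -l artifact.bin
  -rw-r--r-- 1 myuser myuser 0 Feb 11 12:00 artifact.bin

Inside the container the process ran as root, but on the host the file is owned by the user who started the container.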


If you're on a Linux system you can actually make this work better with sssd, whose architecture is client-server over a unix socket.

So all you need to do is create a very simple base container layer that just installs sssd-client, and wires up /etc/nsswitch.conf to use it (your package manager will almost surely do this automatically). Then just bind mount the sssd socket into your container and boom, all your host users are in the container.

If you already log in with sssd you're done. But if you only use local users then you'll need to configure the proxy provider so that sssd reads from your passwd on your host. In this case the host system doesn't actually have to use sssd for anything.
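Roughly what that looks like - a sketch based on the description above, so treat the package name, socket path and image names as assumptions that may vary by distro:

  # base layer: a Containerfile that just adds the sssd client bits
  #   FROM fedora:33
  #   RUN dnf install -y sssd-client
  # (depending on the base image you may still need 'sss' in /etc/nsswitch.conf)

  # at runtime, bind mount the host's sssd sockets into the container
  podman run --rm -v /var/lib/sss/pipes:/var/lib/sss/pipes myimage getent passwd someuser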


Sounds interesting, do you have a link to an example where this is done/demonstrated?

(Also I'm not sure how that's better (and not just different), except maybe it allows more than one host user in the container, but I haven't had a use case for that).


This is glorious and just the thing I was looking for. I am trying to move towards an even more container-based dev environment, basically shelling into long-running containers. Maybe even a window manager in docker.

Totally eliminates dependency hell, e.g. ROS heavy workflows where it wants to control every part of your environment.


From limited experience: podman and the associated toolset (buildah, skopeo) show a lot of promise. That said, they are evolving and may currently require administrative care and attention when used in production.

If your environment would benefit from smaller, isolated containerization tools then my two cents would be that it's worth keeping an eye on these as they mature, and perhaps perform early evaluation (bearing in mind that you may not be able to migrate completely, yet).

The good:

- Separation of concerns; individual binaries and tools that do not all have to be deployed in production

- A straightforward migration path to build containers from existing Dockerfiles (via "buildah bud")

- Progressing-and-planned support for a range of important container technology, including rootless and multi-architecture builds


I have a Makefile for a Rust project which binds the local repository to a Docker volume, builds it in the container using muslrust, and then does a chown to change the target directory back from root ownership to my own user.

All I had to do was 's/docker/podman/g' and remove the chown hack and it works fine: https://github.com/sevagh/pq/commit/6acf6d05a094ac2959567a9a...

It understands Dockerfiles and can pull images from Dockerhub.


I had pretty negative experiences with it. I'm not sure how much of my experience is _actually_ the fault of Podman and how much of it was around using it as a drop-in replacement for Docker. It baffled me though and I'm a long time Docker and Linux user.

I used Arch Linux and followed their docs[1], mainly because I wanted it to also run without root. Big mistake. Attempting to run a simple postgres database w/ and w/o root permissions for Podman resulted in my system getting into an inconsistent state. I was unable to relaunch the container due to the state of my system.

I mucked around with runc/crun and I ended up literally nuking both the root and non-root directories that stored them on my computer. I reset my computer back to Podman for root only, hoping that I had just chosen a niche path. It still would leave my system broken, requiring drastic measures to recover. No thanks.

After much debugging, finally switching back to Docker, I realized my mistake: I had forgotten a required environment variable. Silly mistake.

Docker surfaced the problem immediately. Podman did not.

Docker recovered gracefully from the error. Podman left my system in an inconsistent state, unable to relaunch the container and unable to remove it.

Again, I'm not sure how much of my experience was just trying to force Podman into a square hole. I'm sure there are other people who make it work just fine for their use cases.

Edit: I should note that I used it as a docker-compose replacement, which is probably another off-the-beaten path usage that made this more dramatic than it should have been

1. https://wiki.archlinux.org/index.php/Podman


I don't know when you tried this but I recall it having sucked in the summer when I tried it and I gave up. New job in the fall, decided to give podman a shot "if it doesn't work on the first try I'm going to use docker", then podman-compose 3rd party instructions (I can't remember which) just works out of the box. I'm running a fleet that includes postgres and redis.


I've been using my company's docker compose script transparently with podman-compose for three months now and I have basically forgotten that I've switched. I actually didn't use docker before this, so I can't comment on what the difference is.

I feel like the podman experience is binary, either it works perfectly or it sucks balls. My suggestion is to give it a try, if it fails, then maybe file a bug report, fail fast and fall back on docker.


I've used podman-compose for a few web projects. Rootless means you can't bind to privileged ports etc. but after a bit of fiddling I can now spin up my choice of DB/server/mock pretty quickly - just like I could with docker.
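For the privileged-port bit, the two workarounds I know of (the nginx image is just an example, and the sysctl changes behaviour for the whole host, so treat it as a trade-off):

  # publish on an unprivileged host port instead
  podman run -d -p 8080:80 docker.io/library/nginx

  # or (root needed once) lower the threshold for what counts as a privileged port
  sudo sysctl net.ipv4.ip_unprivileged_port_start=80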

The lack of 'cannot connect to dockerd' mysteries makes for a much-improved developer experience if you ask me.


Out of curiosity, why do you prefer daemonless setups?


If I have 30 containers running, why should a single daemon being restarted cause all 30 to shutdown as well?

Similarly, the docker security model is that there isn't a security model. If you can talk to the docker socket you have what ever privileges the daemon is running as.


So Docker supports live restore https://docs.docker.com/config/containers/live-restore/ which addresses the first point.
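From the linked docs, enabling it is roughly a one-liner in the daemon config (sketch; double-check against the docs for your version):

  $ cat /etc/docker/daemon.json
  {
    "live-restore": true
  }
  $ sudo systemctl reload docker   # running containers stay up while dockerd restarts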

Second point, yep if you run Docker as root and someone can access the socket file they get root.

If that's a concern, you can run Docker rootless.

And as we're talking about file permissions on a local host to allow that access, the same applies to podman containers, does it not? If there are permission issues allowing another user to access the container filesystems, you have the same problem.


Rootless Docker is basically a joke, I've tried to run production workloads on it for about a year before I gave up. Numerous docker images will refuse to work, a bunch of things will subtly fill your error logs with warnings and it doesn't mesh well with running docker swarm at all.


Rootless docker only left experimental status with 20.10 which came out in December 2020, so maybe they would have addressed some of those issues...

As to swarm, I was comparing Docker rootless to podman, which is more a developer use case than prod. container clusters.


When you use a system-level daemon[0], the daemon has to have privileges to start a container as anyone (that is... root). In a daemon-less environment, you only need the privileges of the user who is starting the container.

[0] I suppose you could have a user-level daemon that runs for each user that needs to run containers, but that's even more overhead.


Docker does allow for daemonless execution, but as you say one daemon per user will add a bit of overhead.

There's some tradeoff I guess though, between a rootful setup and per-user, as image duplication per user could add up.


It's a pain having to set up root access or a user for Docker. At a financial institution I worked at we had to waste about half a day to get this set up (and that's once we worked out who we had to speak to).


FWIW you can run Docker rootless (as an ordinary user) now.


At my firm you're forbidden from getting root access, docker is not allowed, podman is.


I’m a GPU driver dev and you guys are talking ~alien~, right now. What does “daemonless” mean? Why is that good?


In Docker each container process is a child of the Docker daemon process. If you need to apply security patches to the Docker daemon it kills all your running containers.


Technically the parent process for a contained process with Docker is containerd-shim.

Also, Docker does support live restore if you want to keep containers running over daemon restarts https://docs.docker.com/config/containers/live-restore/


Isn't Singularity[1] also daemonless?

[1] https://sylabs.io/


Yes, but singularity is a gigantic suid-root binary, whereas podman uses user namespaces for unprivileged stuff.


Even less IPv6 support than docker. With docker you can at least get it to work somehow, even if it is totally different from IPv4, weirdly. Podman just has no IPv6 support to speak of.


Not entirely true.

I was researching this a few hours ago and according to https://github.com/containers/podman/issues/6114#issuecommen... it just works when you add another network.

Docker registry having no IPv6 is another fun story tho.


Well, from that issue from 2020-12-24: "--ipv6 has not landed yet", so you cannot assign static external v6 addresses to a container. No activity on that bug since. There is tons of stuff in docker that works automatically, where in podman it is like "Please be aware you have to configure ip6tables (or whatever your OS firewall is) for forwarding.". Yeah, well, if I have to do everything by hand or program a ton of scripts around it, it is quite useless, isn't it?

IPv6 support is always like this: half-baked at best, someone got it to work somehow, developers declare it done since it worked for someone. Then crickets...

IPv6 support isn't done until you can replace IPv4 with it, with no changes to your config except addresses. Even docker isn't there yet. And podman's is still in a larval state.


For airgapped networks, this is great.


I still miss an easy way to set up multiple containers in a single network like with docker-compose. podman-compose is not really usable.


You can use docker-compose with podman.

Though it has similar drawbacks as using docker-compose with docker.

As far as I know there is currently no rootless way on Linux to set up custom internal networks like docker-compose does (hence why podman-compose doesn't support such things).


There definitely is a way, as creating custom networks works perfectly fine with rootless docker. I'm surprised if podman doesn't support that, since it uses rootlesskit (same as rootless docker), which does the heavy lifting for rootless networking.


podman 3.x natively supports docker-compose, but it requires running a podman daemon and has the same drawbacks as docker:

https://www.redhat.com/sysadmin/podman-docker-compose

That said, you can literally use the same docker-compose app shipped by docker and it works using podman.
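The rootless setup from that article boils down to roughly this (commands from memory, so double-check against the post):

  # enable the per-user Podman API socket
  systemctl --user enable --now podman.socket

  # point docker-compose (and the docker CLI) at it
  export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
  docker-compose up -d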


An Ansible playbook with the podman_container module is my go-to. It's not as easy because it's less magic, but it's also less magic, which I count as a win.


You can also use pod files for this, which is nice in that it's easy to migrate to k8s.
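For example, with a hypothetical pod called 'mypod':

  # snapshot a running pod as a Kubernetes YAML manifest
  podman generate kube mypod > mypod.yaml

  # recreate it later with podman -- or hand the same YAML to a k8s cluster
  podman play kube mypod.yaml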


What's wrong with it?


With docker, all containers have their own IP address and (potentially) an internal DNS name.

But this can't be done rootless.

So with rootless podman all container map to the same ip address but different ports.

This is for some use cases (e.g. spinning up a DB for integration testing) not a problem at all. For others it is.
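E.g. something like this works fine rootless, since the published port is unprivileged (image tag and password just illustrative):

  podman run -d --name testdb -p 5432:5432 \
      -e POSTGRES_PASSWORD=test docker.io/library/postgres:13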

Moreover, you can run multiple groups of docker containers in separate networks; you can't do so with rootless podman.

Though you can manage networks with rootful podman (which still has no daemon and as such works better with e.g. capabilities and the audit subsystem than docker does).

Though to get the full docker-compose experience you need to run it as a daemon (through systemd), in which case you can use docker-compose with podman but it has most of the problems docker has.


I recently passed the RHCSA exam, which as of last October added a containers/podman section to the test. I usually work with lxcs under libvirt and directly, so working with podman was new to me. The test requirements are very simplistic, but I went further and spent a decent amount of time finding annoyances and issues.

I am not impressed with podman. It's buggy and slow. Documentation is very basic and the underlying mechanics are not covered where needed. Want to reconfigure the command parameters you start your container with, to add an exported variable? Gotta remake the entire setup because there's no edit command.


I really enjoy working with LXC. And the fact that I can now switch between container mode and VM mode makes it even better. That, and being able to do LXC in LXC easily to test an infrastructure locally.


I have a container I stick in Syncthing that I do 100% of my development within across all my machines. One day I decided to switch to podman on a whim and forgot I made the switch. That was about nine-ish months ago.


I was a big fan of Podman and switched to it wholeheartedly[0][1] but after a while I ran into a few issues that made me have to switch off of it and back to docker:

- image unpacking issues (invalid tar header)

- linking containers isn't supported

[EDIT] - I wrote up the post real quick[2]

[0]: https://vadosware.io/post/rootless-containers-in-2020-on-arc...

[1]: https://vadosware.io/post/bits-and-bobs-with-podman/

[2]: https://vadosware.io/post/back-to-docker-after-issues-with-p...


I have recently found out you can actually run Docker containers with LXC as well. I've yet to get deep into it, but it seems to work fine*.

* Manual config tweaking may be required.


IIRC, Docker's first versions were only a high-level API to LXC. containerd/runc/... came after.


I liked podman for what I was working on with it. The only hangup I found was that it couldn't do networking related routing? Does anyone know more?


Quite a lot is possible with CNI [1]. For example, we use this setup to give real IPs to containers:

  # /etc/cni/net.d/testnet.conflist
  # (the main host interface is part of the br0 bridge)
  {
    "cniVersion": "0.4.0",
    "name": "testnet",
    "plugins": [
      {
        "type": "bridge",
        "bridge": "br0",
        "ipam": {
          "type": "host-local",
          "subnet": "10.0.0.0/16",
          "gateway": "10.0.0.1",
          "routes": [{ "dst": "0.0.0.0/0"}]
        }
      }
    ]
  }
You can then start a container and operate on its network namespace for added flexibility:

  podman run -it --net testnet --ip 10.0.0.2 ...

  ns=$(basename $(podman inspect $id | jq -r '.[0] .NetworkSettings .SandboxKey'))
  ip netns exec $ns ip route add ...
[1]: https://github.com/containernetworking/cni


As far as I know it can, but not when running rootless; I also don't think that's currently possible to do rootless.

You can still run podman as root without a daemon.

Or you can run a podman daemon (through systemd), which is even compatible with docker-compose but has most of the drawbacks of running docker.


Is the title of this page out of date?

AFAIU, Podman v3 has a docker-compose compatible socket and there's a daemon; so "Daemonless Container Engine" is no longer accurate.

"Using Podman and Docker Compose" https://podman.io/blogs/2021/01/11/podman-compose.html


Podman does not run in daemon mode by default. There's a socket-activated systemd service unit that starts the API service when you try to use Docker Compose with it.


Sincere question: why would I want to use Podman over Docker?

What does being "daemonless" actually buy me in practice?


It removes a privilege escalation scenario. With docker I can easily gain root; with podman I can't.

Might not be relevant if it’s your own machine though but for $BIG_CORP desktop that might be useful.


podman runs out of the box in normal user mode and doesn't require sudo.


This. SO MUCH THIS. We use docker for our CI, which is mostly okay. But we devs don't have root on our workstations, so we can't just load the docker image and test stuff in the CI env. Enter podman: I've written a small perl wrapper around it and now everyone can just call `ci-chroot` to drop to a CI-equivalent shell. They can get virtual root (default) or normal user (--user), and the script takes care of everything (binding some useful folders, fetching/updating our internal image, starting/stopping the actual container and cleaning up,...). Only change necessary to our [Manjaro] environment was adding podman to the package list and a small bash script that generates the `/etc/sub{u,g}id` from our LDAP.


I'm intrigued that you must run Manjaro and kind of want work with you because of it. :)

Manjaro is great. Best Linux experience I've had so far. It's good that people recognize it.


It's just great. But I also like Arch@home.


I tried Arch but they are a bit too trigger-happy with the rolling releases and this has caused problems for me in the past.

I recognize it's a minor pet peeve and would not try to convince anyone though. It's just that I found Manjaro's release policy more to my liking (and I was unwilling to keep fixing a broken installation, which, granted, was easy every time yet took time nonetheless).


> we devs don't have root on our workstations

What's the justification for this... while at the same time allowing the use of containers?


With root it's easy to mess up the system badly ("I'll just fix that small error") and/or let all systems slowly diverge. And if something fails on my workstation, it will fail on many others as well and needs a coordinated solution. Also, giving every dev root is a security liability, especially when there is absolutely no work reason that requires us to become root. So only a few "core IT" people have root access.

I don't see how that contradicts the use of containers? When we only used docker for the CI we didn't have access to them, because escalating to root from within docker is still pretty easy. But with podman they're running with our privileges (and due to gid/uid mapping, I can run our installer "as root" in the container and see if it sets up everything correctly - while in reality the process runs under my uid).

Disclaimer: This is in a professional environment. At home I despise containers and prefer to have all my services installed and easily updated by the system's package manager.


Developing anything without root on your own machine these days must be an absolute nightmare. If I may ask, what stack are you using? I can imagine Java, but that's about it. With Go, Python, Node, etc., I've needed root access to test some packages, for instance.

I would guess developers must be using a lot of "workarounds" without management's knowledge.


For Python, you can get most packages as non-root via `pip install --user $package` - but you are right, you might need root if these packages depend on system libraries (that they often wrap).

However, it maybe should not be a requirement. AFAIK nix allows you to install stuff as a normal user (and so does dnf on Fedora, but it depends on the package).


C++. And it's no problem :) I don't need privileged ports, and I don't need write access to the OS.

Only thing that annoyed me was when I did some work on an internal web tool (php+js) and had no php-fpm & httpd. But by now I've got a container for that.


Well, I suppose if your dependencies are very locked down it might work in C++.

I would guess, though, that having to change the prefix and default locations in all tarballs you download must be quite a pain. You can't install DEBs or any other tools either?


We have all deps in gits. A build tool takes care of everything. I get all necessary source gits checked out, configured (e.g. change prefix) and built. If I change something (which is my job), rebuilding is just a 'make' away; and rebuilds only the changed parts. Pretty frictionless and painless.

We use some external stuff like sqlite, qt,... but that's all versioned in our gits as well (-> easily reproducible builds). Since we sell a commercial product we can't just add random deps anyway. Plus, I think there is very little code that would benefit from being replaced by an external libs.

Relevant tools are on a global package list. We can get stuff added there on a short notice, and with little questions asked. Eg when I needed some lib for a FPGA dev kit that was a matter of minutes.

Uaaah, Firefox on Android is messing up the input again (I can only append, not edit). I guess I should just hit that "reply" button then.


That actually sounds pretty cool to work with, then. If you truly have support from the company to work like that... Sadly not how it works in most places


Why does it matter what happens to your machine?

The big corp I work for has no issue with any of this so I genuinely don’t understand the motivation for this restriction.

Your machine shouldn’t have anything important on it and if it does I’m not sure that root/non-root will protect you from that.


It's only an issue because your environment is fragile, running stuff on metal instead of in a virtual environment.

Why not just give everyone an isolated space on a server with something like systemd-nspawn and let people do whatever they want, including using docker inside? If they screw it up, that's a lesson for them, and you can quickly redeploy their environment from a daily backup of the entire space or from a base image.


I gave your post some thought, but there is not too much I can really say. You throw around some conclusion/assertion ("it's fragile"), but that's just plain wrong. And then you propose a perceived solution for that non-problem ("put people on sliced servers") even though our workloads are poorly suited for that AND that's mostly orthogonal to the stated "problem" to begin with.


In big companies there isn't always a good justification for a restriction ;-)


Either you lack the courage to voice a concern, in which case being strangled by such limitations is your problem, or there's actually a valid concern that you're not aware of.


Queue "you don't know me" ;) if I was strangled, I would voice concern very much. But as noted several times, everything is setup so that we never need root. The only thing that was an issue, was docker for about a year (we used chroot before that). That's why I had the sysadmin install podman (no discussion necessary) and why I build a wrapper for super easy usage (which is globally available now).

And please note that getting a CI-like env is usually not necessary. I only need that exact env to prebuild things that should run in the CI, e.g. target-specific GCCs. And we can't upgrade the CI because some customers pay good money to have our product run on ancient Linux versions (safety-critical industries; once something is certified, you use it a looooong time).


$ for your favorite infra oss startup or volunteer

rhel ~forces you to do podman in rhel 8+. They make it hard enough to do a docker install that corp policies will reject docker (non-standard centos 7 repos etc), while allowing ibm's thing. Ex: Locks podman over docker for a lot of US gov where rhel 8 is preferred and with only standard repos, and anything else drowns in special approvals.

subtle and orthogonal to podman's technical innovations.. but as a lot of infra oss funding ultimately comes from what enterprise policies allow, this is a significant anti-competitive adoption driver for podman by rhel/ibm.


Because people can’t install non-Red Hat software due to internal policy, that’s somehow anti-competitive?


The main repos have plenty of non-rhel sw, and people can expend social capital working around IT policies, but that doesn't change it being against the grain.

We are fine because we use Ubuntu for the generally better GPU support, but I don't have control over the reality of others. I repeatedly get on calls with teams where this comes up, and I feel sorry for the individuals who are stuck paying the time etc. cost for this. Docker for RHEL 8 is literally installing Docker for Centos 7, so this isn't a technology thing, but political: a big corporate abusing their trusted neutral oss infra position to push non-neutral decisions in order to prop up another part of its business. This is all bizarre to operators who barely know Docker to begin with, and care more about, say, keeping the energy grid running. ("The only authorized thing will likely not work b/c incompatibilities, and I have to spend 2w getting the old working thing approved?")

And again, none of this is technical. Docker is great but should keep improving, so competitive pressure and alternatives are clearly good... but anti-competitive stuff isn't.


But RHEL repositories are always out of date, sorry, I mean "stable". The disconnect is when you try to `yum install docker` on a RHEL system, you get an old version. If you want a newer version of Docker, you need to use the Docker provided repositories.

But this is the case for all software on RHEL. It's all older and more stable. Try installing any programming language, database, etc... it's always a version or two older in the official RHEL repositories. RedHat written stuff is always going to be more "current" because they control the repositories, but this isn't any different from how any other package is treated. RHEL has always been slow to update. (Which is why Docker created their own repositories in the first place).

It makes sense though... because RHEL has to check all packages that are part of their repository. Specifically because things in their repository are safe and stable. There is only so much time to go around testing updates to packages.


Oof I wish more enterprises/govs could do that. In RHEL 8, a team would need to modify their trust chain to include centos 7 / epel to get at the old and still working `yum install` binaries, and that gets into IT signoffs before we even get into discussing updated versions.

I'd totally expect a trade-off for old & stable over new and shiny for RHEL: They're getting paid for that kind of neutral & reliable infra behavior. That'd mean RHEL is packaging docker / docker-compose, and maybe podman as an optional alternative. Instead IBM/RHEL is burning IT dept $ and stability by not allowing docker and only sanctioning + actively promoting IT teams use their alternative in unreliable ways.


I can't answer the question well, but I think the opposite is just as valid a question: What does a daemon buy you in practice? Why do you need to run the Docker daemon if it is possible (as appears so from podman's existence) to do the same thing without a daemon?


100 years ago I wrote a simple sysv init script that started/stopped/queried/enabled/disabled lxc containers, graceful host shutdown (all containers shut down gracefully before the host shuts down), you could start/stop/query/enable/disable individual containers as well as the the meta-service of all containers, all from a few k of bash, and no daemon, not even in the form of a shell loop that stayed running.

Right about then systemd came along (as in, my distro at the time adopted it) and essentially required a service to monitor (some process, tcp port, or file to monitor), but my efficient little script had no service by design, because none was needed and I hate making and running anything that isn't needed to get a given job done. (Plus about that time my company went whole-hog into vmware and a managed service provider and so my infant home-grown lxc containers on colo hardware system never grew past that initial tiny poc)

I laugh so many years later reading "Hey! Check it out! You don't need no steeenkeeeng daemon to run some containers!"


It is almost 100% compatible with docker. I only had to make minimal changes to my Dockerfiles to use it.


But not Buildkit and docker-compose.


Podman v3 is compatible with docker-compose (but not yet swarm mode, FWIU), has a socket and a daemon that services it.

Buildah (`podman buildx`, `buildah bud --arch arm64`) just gained multiarch build support; so also building arm64 containers from the same Dockerfile is easy now. https://github.com/containers/buildah/issues/1590

IDK what BuildKit features should be added to Buildah, too?


> IDK what BuildKit features should be added to Buildah, too?

Cache mounts are what we rely on for incremental rebuilds.


... but not docker-compose


If you'd like to know what it is like to use Podman, I've found these terminal recordings by Matthew Heon (Podman contributor) quite helpful: https://asciinema.org/~mheon


Glad to see Podman here; I worked with it at a past job where services were deployed on RedHat. Converting all existing deployment scripts to podman was as easy as doing a recursive search and replace `s/docker/podman/`, and there was no learning curve since all the commands were the same. By default root is not required, which is great also.


I tried to use Podman a few months ago, but the time spent executing the "podman" command was considerably higher than that of the "docker" command. I am worried that this is a fundamental problem of the daemonless design. If that is the case, it would not be possible to solve Podman's speed problem.


I redid some docker builds yesterday with podman, and filled up my laptop's 99GB entirely with layers. Docker was handling it somehow better.

I used --squash, cleaned up the various apt-gets but boy it made me realise that for every abstraction we get leakage - sometimes serious leakage


What’s the difference with systemd-nspawn? (with or without -U)


Podman can run under your user and use your UID (e.g., 1000) as UID 0 for the root user in the container. Any other user in the container gets mapped to a temporary unprivileged UID on your system (the range to pick from is specified in /etc/subuid).
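For example (output illustrative; 'myuser' and UID 1000 stand in for your own account):

  $ grep myuser /etc/subuid       # the range of host UIDs this user may map into containers
  myuser:100000:65536

  $ podman unshare cat /proc/self/uid_map
           0       1000          1
           1     100000      65536

The first mapping line is container UID 0 onto your own UID; the second maps the remaining container UIDs onto the subuid range.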


How are you meant to pull package from docker hub with nspawn?


I don’t. I’m using Nix containers.


...that nspawn is already available on 90% of Linux systems.


Can anyone explain what daemonless means in this case and what the advantages are? (I don't work in this space and my knowledge ends with knowing what a container is.)


IIRC, docker by default runs a daemon as root, which spawns all your containers. It is possible to run it rootless as well (https://docs.docker.com/engine/security/rootless/), though you’d need a separate daemon for each user you’d want to run containers as.

Podman doesn’t have that. It spawns containers without the help of a controlling daemon, and can spawn containers both as root and rootless.

Rootless is of course a fairly big deal considering if you run docker containers as root, and runc has a vulnerability, you could potentially escape the container and become root, where a rootless installation would just let you escape to whatever user is running the container.


So, it's only about a tiny security concern, but it has much bigger usability problems, and what's worse, RedHat has been forcing people to use podman since RHEL 8 when it's not even ready.


You don't need root access to run the containers


Happy to see a docker alternative, but the road is still long for it to replace docker.


I don't think so.

It's doing a better job for most of the use cases I had recently.

You can run it rootless, which makes it a far better fit for a lot of dev use cases (where you don't need to emulate a lot of network aspects or heavy filesystem usage).

Also you can run it as root without a daemon, which is much better for a bunch of use cases and eliminates any problems with fuse-overlayfs. This is the best way to e.g. containerize systemd services and similar (and security-wise it's still better, as e.g. it works well with the audit subsystem).

If worse comes to worst you can run it as a daemon, in which case it's compatible with docker-compose and has more or less the same problems.

In my experience, besides some integration-testing use cases, it's either already a good docker replacement without running it as a daemon, or the use case shouldn't be handled by docker either way (sure, there are always some exceptions).

Lastly, I had far fewer problems with podman and firewalls than with docker, but I haven't looked into why that's the case. (It seems to be related to me using nftables instead of iptables.)


I use podman in my projects, no problems. And since the CLI is compatible with docker, all the work I have done using docker simply runs with podman.


Who cares? It’s one command to start the daemon.


Every complaint here boils down to: “it’s not docker, it doesn’t work exactly like docker, therefore it’s bad”.


Which is a remarkable position considering how bad docker is. You’d think that not being docker would be a feature by itself.



