Show HN: Unregistry – “docker push” directly to servers without a registry (github.com/psviderski)
646 points by psviderski 1 day ago | 148 comments
I got tired of the push-to-registry/pull-from-registry dance every time I needed to deploy a Docker image.

In certain cases, using a full-fledged external (or even local) registry is annoying overhead. And if you think about it, there's already a form of registry present on any of your Docker-enabled hosts: Docker's own image storage.

So I built Unregistry [1] that exposes Docker's (containerd) image storage through a standard registry API. It adds a `docker pussh` command that pushes images directly to remote Docker daemons over SSH. It transfers only the missing layers, making it fast and efficient.

  docker pussh myapp:latest user@server
Under the hood, it starts a temporary unregistry container on the remote host, pushes to it through an SSH tunnel, and cleans up when done.
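
For reference, the manual equivalent of what the plugin automates looks roughly like this (the ports and image name below are illustrative, not the exact commands the script runs):

  # assuming unregistry is already listening on port 5000 on the server
  ssh -N -L 55000:localhost:5000 user@server &   # tunnel a local port to it
  docker tag myapp:latest localhost:55000/myapp:latest
  docker push localhost:55000/myapp:latest       # only the missing layers get uploaded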

I've built it as a byproduct while working on Uncloud [2], a tool for deploying containers across a network of Docker hosts, and figured it'd be useful as a standalone project.

Would love to hear your thoughts and use cases!

[1]: https://github.com/psviderski/unregistry

[2]: https://github.com/psviderski/uncloud






Docker creator here. I love this. In my opinion the ideal design would have been:

1. No distinction between docker engine and docker registry. Just a single server that can store, transfer and run containers as needed. It would have been a much more robust building block, and would have avoided the regrettable drift between how the engine & registry store images.

2. push-to-cluster deployment. Every production cluster should have a distributed image store, and pushing images to this store should be what triggers a deployment. The current status quo - push image to registry; configure cluster; individual nodes of the cluster pull from registry - is brittle and inefficient. I advocated for a better design, but the inertia was already too great, and the early Kubernetes community was hostile to any idea coming from Docker.


Hey Solomon, thank you for sharing your thoughts, love your work!

1. Yeah agreed, it's a bit of a mess that we have at least three different file system layouts for images and two image stores in the engine. I believe it's still not too late for Docker to achieve what you described without breaking the current model. Not sure if they care though; they're having a hard time.

2. Hm, push-to-cluster deployment sounds clever. I'm definitely thinking about a distributed image store, e.g. embedding unregistry in every node so that they can pull and share images between each other. But triggering a deployment on push is something I need to think through. Thanks for the idea!


I naively sent the Docker developers a PR[1] to add this functionality into mainline Docker back in 2015. I was rapidly redirected into helping out in other areas - not having to use a registry undermined their business model too much I guess.

[1]: https://github.com/richardcrichardc/docker2docker


You're the OG! Hats off, mate.

It's a bummer docker still doesn't have an API to explore image layers. I guess their plan is to eventually transition to the containerd image store as the default. Once we have the containerd image store both locally and remotely, we will finally be able to do what you've done without the registry wrapper.


Nice. And the `pussh` command definitely deserves the distinction of being one of the most elegant puns: easy to remember, self-explanatory, and just one letter away from its sister standard command.

It's fine, but it wouldn't hurt to have a more formal alias like `docker push-over-ssh`.

EDIT: why I think it's important: in automations that are developed collaboratively, "pussh" could be seen as a typo by someone unfamiliar with the feature and cause unnecessary confusion, whereas "push-over-ssh" is clearly deliberate. Think of them maybe as shorthand/full flags.


That's a valid concern. You can very easily give it whatever name you like. Docker looks for `docker-COMMAND` executables in the ~/.docker/cli-plugins directory, making COMMAND a `docker` subcommand.

Rename the file to whatever you like, e.g. to get `docker pushoverssh`:

  mv ~/.docker/cli-plugins/docker-pussh ~/.docker/cli-plugins/docker-pushoverssh
Note that Docker doesn't allow dashes in plugin commands.

I can easily see an engineer spotting pussh in a CI/CD workflow or something, thinking "this is a mistake", and changing it.

> The extra 's' is for 'sssh'

> What's that extra 's' for?

> That's a typo



and prone to collision!

Indeed so! Because it's art, not engineering. The engineering approach would require a recognizably distinct command, eliminating the possibility of such a pun.

I used to have an alias em=mg, because mg(1) is a small Emacs, so "em" seemed like a fun name for a command.

Until one day I made that typo.


I'm a fan of installing sl(1), the terminal steam locomotive. I mistype it every couple months and it always gives me a laugh.

https://github.com/mtoyoda/sl


In the same spirit there's gti https://r-wos.org/hacks/gti

This is a cool idea that seems like it would integrate well with systems already using push deploy tooling like Ansible. It also seems like it would work as a good hotfix deployment mechanism at companies where the Docker registry doesn't have 24/7 support.

Does it integrate cleanly with OCI tooling like buildah etc., or do you need a full-blown Docker install on both ends? I haven't dug deeply into this yet because it's related to some upcoming work, but it seems like bootstrapping a mini registry on the remote server is the missing piece for skopeo to be able to work for this kind of setup.


You need containerd on the remote end (Docker and Kubernetes use containerd) and anything that speaks the registry API (OCI Distribution Spec: https://github.com/opencontainers/distribution-spec) on the client. Unregistry reuses the official Docker registry code for the API layer, so it looks and feels like https://hub.docker.com/_/registry

You can use skopeo, crane, regclient, BuildKit, or anything else that speaks the OCI registry protocol on the client, although you will need to manually run unregistry on the remote host to use them. The 'docker pussh' command just automates this workflow using the local Docker.

Just check it out, it's a bash script: https://github.com/psviderski/unregistry/blob/main/docker-pu...

You can hack your own way pretty easily.
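
For example, with skopeo and an unregistry instance already running on the remote host, something like this should work (the port here is an assumption, check the repo for the exact run command):

  ssh -N -L 55000:localhost:5000 user@server &
  skopeo copy --dest-tls-verify=false \
    docker-daemon:myapp:latest \
    docker://localhost:55000/myapp:latest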


I agree! For a bunch of services I manage I build the image locally, save it and then use ansible to upload the archive and restore the image. This usually takes a lot longer than I want it to!

It needs a Docker daemon on both ends. This is just a clever way to share layers between two daemons via SSH.

This should have always been a thing! Brilliant.

Docker registries have their place but are overall over-engineered and an antithesis to the hacker mentality.


As a VC-funded company Docker had to make money somehow.

I think the complexity lies in the dance required to push blobs to the registry. I've built an OCI-compliant pull-only registry before and it wasn't that complicated.

I recommend using GitHub's registry, ghcr.io, with GitHub Actions.

I invested just 20 minutes to set up a .yaml workflow that builds and pushes an image to my private registry on ghcr.io, and 5 minutes to allow my server to pull images from it.

It's a very practical setup.
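
The server side is basically just a login with a read-only token and a pull, along the lines of (names and token are placeholders):

  echo "$GHCR_READ_TOKEN" | docker login ghcr.io -u my-github-user --password-stdin
  docker pull ghcr.io/my-github-user/my-app:latest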


Ooh this made me discover uncloud. Sounds like exactly what I was looking for. I wanted something like dokku but beefier for a sideproject server setup.

There is also https://skateco.github.io/ which (at quick glance) seems similar

Skate author here: please try it out! I haven’t gotten round to diving deep into uncloud yet, but I think maybe the two projects differ in that skate has no control plane; the cli is the control plane.

I built skate out of that exact desire to have a dokku-like experience that was multi-host and used a standard deployment configuration syntax (k8s manifests).

https://skateco.github.io/docs/getting-started/


Looks like uncloud has no control plane, just a CLI: https://github.com/psviderski/uncloud#-features

I'm glad the idea of uncloud resonated with you. Feel free to join our Discord if you have questions or need help

A recommendation for Portainer if you haven't used or considered it. I'm running two EC2 instances on AWS using Portainer Community Edition and the Portainer agent, and it works really well. The stack feature (which is just docker compose) is also super nice. One EC2 instance, running the Portainer agent, runs Caddy in a container which acts as the load balancer and reverse proxy.

I'm actually running portainer for my homelab setup hosting things like octoprint and omada controller etc.

Takes a look at the pipeline that builds an image in GitLab, pushes it to Artifactory, triggers a deployment that pulls from Artifactory and pushes to AWS ECR, then updates the deployment template in EKS, which pulls from ECR to the node and boots the pod container.

I need this in my life.


My last projects pipeline spent more time pulling and pushing containers than it did actually building the app. All of that was dwarfed by the health check waiting period, when we knew in less than a second from startup if we were actually healthy or not.

It's very silly that Docker didn't work this way to start with. Thank you, it looks cool!

You can already achieve the same thing by making your image into an archive, pushing it to your server, and then running it from the archive on your server.

Saving as archive looks like this: `docker save -o my-app.tar my-app:latest`

And loading it looks like this: `docker load -i /path/to/my-app.tar`

Using a tool like Ansible, you can easily achieve what "Unregistry" is doing, automatically. According to the GitHub repo, save/load has the drawback of transferring the whole image over the network, which could be an issue, that's true. And managing images instead of archive files seems more convenient.


If you have an image with 100MB worth of bottom layers, and only change the tiny top layer, the unregistry will only send the top layer, while save / load would send the whole 100MB+.

Hence the value.


Yeah, I deal with horrible, bloated Python machine learning stuff; >1 GB images are nothing. This is excellent, and I never knew how much I needed this tool until now.

Docker also has export/import commands. Those only export a container's flattened filesystem, without the layer history.

If you read the README, you'll see that replacing the "save | upload | load" workflow is the whole point of this, to drastically reduce the amount of data to upload by only sending new layers instead of everything, and you can use this inside your ansible setup to speed it up.

Good advice, and beware the difference between docker export (which can fail if you lack enough storage, since it exports the container's entire flattened filesystem) and docker save. Running the wrong command might knock your only running Docker server into an unrecoverable state...

Functionality-wise this is a lot like docker-pushmi-pullyu[1] (which I wrote), except docker-pushmi-pullyu is a single relatively-simple shell script, and uses the official registry image[2] rather than a custom server implementation.

@psviderski I'm curious why you implemented your own registry for this, was it just to keep the image as small as possible?

[1]: https://github.com/mkantor/docker-pushmi-pullyu

[2]: https://hub.docker.com/_/registry


After taking a closer look it seems the main conceptual difference between unregistry/docker-pussh and docker-pushmi-pullyu is that the former runs the temporary registry on the remote host, while the latter runs it locally. Although in both cases this is not something users should typically have to care about.

Do docker-pussh or docker-pushmi-pullyu verify container image signatures and attestations?

From "About Docker Content Trust (DCT)" https://docs.docker.com/engine/security/trust/ :

  > Image consumers can enable DCT to ensure that images they use were signed. If a consumer enables DCT, they can only pull, run, or build with trusted images. 

  export DOCKER_CONTENT_TRUST=1
cosign > verifying containers > verify attestation: https://docs.sigstore.dev/cosign/verifying/verify/#verify-at...

/? difference between docker content trust dct and cosign: https://www.google.com/search?q=difference+between+docker+co...
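
For comparison, key-based verification with cosign looks roughly like this (image reference and key file are placeholders):

  cosign verify --key cosign.pub ghcr.io/owner/app:tag
  cosign verify-attestation --key cosign.pub --type slsaprovenance ghcr.io/owner/app:tag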


Is this different from using a remote docker context?

My workflow in my homelab is to create a remote docker context like this...

(from my local development machine)

> docker context create mylinuxserver --docker "host=ssh://revicon@192.168.50.70"

Then I can do...

> docker context use mylinuxserver

> docker compose build

> docker compose up -d

And all the images contained in my docker-compose.yml file are built, deployed and running in my remote linux server.

No fuss, no registry, no extra applications needed.

Way simpler than using docker swarm, Kubernetes or whatever. Maybe I'm missing something that @psviderski is doing that I don't get with my method.


Assuming I understand your workflow, one difference is that unregistry works with already-built images. They aren't built on the remote host, just pushed there. This means you can be confident that the image on your server is exactly the same as the one you tested locally, and also will typically be much faster (assuming well-structured Dockerfiles with small layers, etc).

This is probably an anti-feature in most contexts.

The ability to push a verified artifact is an anti-feature in most contexts? How so?

It is fine if you are just working by yourself on non-prod things and you’re happy with that.

But if you are working with others on things that matter, then you’ll find you want your images to have been published from a central, documented location, where it is verified what tests they passed, the version of the CI pipeline, the environment itself, and what revision they were built on. And the image will be tagged with this information, and your coworkers and you will know exactly where to look to get this info when needed.

This is incompatible with pushing an image from your local dev environment.


Totally valid approach if that works for you, the docker context feature is indeed nice.

But if we're talking about hosts that run production-like workloads, using them to perform potentially CPU-/IO-intensive build processes might be undesirable. A dedicated build host and context can help mitigate this, but then you again face the challenge of transferring the built images to the production machine; that's where the unregistry approach should help.


Neat project and approach! I got fed up with expensive registries and ended up self-hosting Zot [1], but this seems way easier for some use cases. Does anyone else wish there was an easy-to-configure, cheap & usage-based, private registry service?

[1]: https://zotregistry.dev


Your SSL certificate for zothub.io has expired in case you weren’t aware.

Neat idea. This probably has the disadvantage of coupling deployment to a service. For example, how do you scale up or do red/green deployments (you'd need the thing that does this to be aware of the push)?

Edit: that thing exists; it is uncloud. Just found out!

That said it's a tradeoff. If you are small, have one Hetzner VM and are happy with simplicity (and don't mind building images locally) it is great.


For sure, it's always a tradeoff and it's great to have options so you can choose the best tool for every job.

This is excellent. I've been doing the save/load and it works fine for me, but I like the idea that this only transfers missing layers.

FWIW I've been saving then using mscp to transfer the file. It basically does multiple scp connections to speed it up and it works great.


Nice to only have to push the layers that changed. For me it's been enough to just do "docker save my-image | ssh host 'docker load'" but I don't push images very often so for me it's fine to push all layers every time.

As a long ago fan of chef-solo, this is really cool.

Currently, I need to use a docker registry for my Kamal deployments. Are you familiar with it, and does this remove the 3rd-party dependency?


Yep, I'm familiar with Kamal and it actually inspired me to build Uncloud using similar principles but with more cluster-like capabilities.

I built Unregistry for Uncloud but I believe Kamal could also benefit from using it.


I think it'd be a perfect fit. We'll see what happens: https://github.com/basecamp/kamal/issues/1588

I'm so glad there are tools like this and swing back to selfhosted solutions, especially leveraging SSH tooling. Well done and thanks for sharing, will definitely be giving it a spin.

I've been using ttl.sh for a long time, but only for public, temporary code. This is a really cool idea!

Wow ttl.sh is a really neat idea, thank you for sharing!

This is really cool. Do you support or plan to support docker compose?

Thank you! Can you please clarify what kind of support you mean for docker compose?

Right now, I use ssh to trigger a docker compose restart that pulls all the latest images on some of my servers (we have a few dedicated hosting/on-premise setups). That then needs to reach out to our registry to pull images. So it's this weird mix of push and pull that ends up needing a central registry.

What would be nicer instead is some variation of docker compose pussh that pushes the latest versions of local images to the remote host based on the remote docker-compose.yml file. The alternative would be docker pusshing the affected containers one by one and then triggering a docker compose restart. Automating that would be useful and probably not that hard.


I've built a setup that orchestrates updates for any number of remotes without needing a permanently hosted registry. I have a container build VM at HQ that also runs a registry container pointed at the local image store. Updates involve connecting to remote hosts over SSH, establishing a reverse tunnel, and triggering the remote hosts to pull from the "localhost" registry (over the tunnel to my buildserver registry).

The connection back to HQ only lasts as long as necessary to pull the layers, tagging works as expected, etc etc. It's like having an on-demand hosted registry and requires no additional cruft on the remotes. I've been migrating to Podman and this process works flawlessly there too, fwiw.
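
A rough sketch of a single update, assuming the registry container on the build VM listens on port 5000 (names and ports are illustrative):

  ssh -R 5000:localhost:5000 user@remote-host \
    'docker pull localhost:5000/myapp:latest && docker tag localhost:5000/myapp:latest myapp:latest'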


I assume that he means "rather than pushing up each individual container for a project, it could take something like a compose file over a list of underlying containers, and push them all up to the endpoint."

Yes, pushing all containers one by one would not be very convenient.

The right yq|xargs invocation on your compose file should get you to a oneshot.
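
Something along these lines, assuming the Go yq (v4) and that every service in the compose file has an image key:

  yq '.services[].image' docker-compose.yml | xargs -I{} docker pussh {} user@server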

I would prefer docker compose pussh or whatever

That's an interesting idea. I don't think you can create a subcommand/plugin for compose but creating a 'docker composepussh' command that parses the compose file and runs 'docker pussh' should be possible.

My plan is to integrate Unregistry in Uncloud as the next step to make the build/deploy flow super simple and smooth. Check out Uncloud (link in the original post), it uses Compose as well.


You can wrap docker in a bash function that passes through to `command docker` when it's not a compose pussh command.
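
A minimal sketch of such a wrapper (the compose-file parsing via yq is just illustrative):

  docker() {
    if [ "$1" = "compose" ] && [ "$2" = "pussh" ]; then
      local host="$3"
      # xargs invokes the real docker binary from PATH, not this shell function
      yq '.services[].image' docker-compose.yml |
        xargs -I{} docker pussh {} "$host"
    else
      command docker "$@"
    fi
  }

Then `docker compose pussh user@server` pushes the images, while every other docker invocation passes straight through.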

You can do these image acrobatics with the dagger shell too, but I don't have enough experience with it to give you the incantation: https://docs.dagger.io/features/shell/

I assume you can do these "image acrobatics" in any shell.

The dagger shell is built for devops, and can pipe first class dagger objects like services and containers to enable things like

  github.com/dagger/dagger/modules/wolfi@v0.16.2 |
  container |
  with-exec ls /etc/ |
  stdout
What's interesting here is that the first line demonstrates invocation of a remote module (building a Wolfi Linux container), of which there is an ecosystem: https://daggerverse.dev/

I’ve wanted unregistry for a long time, thanks so much for the awesome work!

Me too, you're welcome! Please create an issue on GitHub if you find any bugs.

I think it will be a good fit for me. Currently our 3 GB docker image takes a lot of time to push to the GitHub package registry from GitHub Actions and to pull from EC2.

What’s the difference between this and skopeo? Is it the ssh support ? I’m not super familiar with skopeo forgive my ignorance

https://github.com/containers/skopeo


Skopeo lets you work with remote registries and local images without a docker/podman/etc daemon.

We use it to 'clone' images across deployment environments and across providers, outside of the build pipeline, as an ad-hoc job.


"skopeo" seems to related to managing registeries, very different from this.

Skopeo manages images, copies them and stuff.

I think this is great and have long wondered why it wasn’t an out of the box feature in Docker itself.

What is the container for / what does this do that `docker save some:img | ssh wherever docker load` doesn't? More efficient handling of layers or something?

Yeah exactly, which is crucial for large images if you change only the last few layers.

The unregistry container provides a standard registry API you can pull images from as well. This could be useful in a cluster environment where you upload an image over ssh to one node and then pull it from there to other nodes.

This is what I'm planning to implement for Uncloud. Unregistry is so lightweight that we can embed it in every machine daemon. This will allow machines in the cluster to pull images from each other.


Relatively early on the page it says:

"docker save | ssh | docker load transfers the entire image, even if 90% already exists on the server"


This is nice; hopefully DHH and the folks working on Kamal adopt it.

The whole reason I didn't end up using Kamal was the 'need a docker registry' thing, when I can easily push a Dockerfile/compose to my VPS, build an image there, and restart to deploy via a make command.


I don't see a reason not to adopt this in Kamal. I'm also building Uncloud, which took a lot of inspiration from Kamal; please check it out. I will integrate unregistry into uncloud soon to make the build/deploy process a breeze.

Build the image on the deployment server? Why not build somewhere else once and save time during deployments?

I'm most familiar with on-prem deployments and quickly realised that it's much faster to build once, push to registry (eg github) and docker compose pull during deployments.


I think the idea with unregistry is that you're still building somewhere else once, but then instead of pushing everything to a registry, you push only the unique layers directly to each server you're deploying to.

THANK you. Can you do the same for kubernetes somehow?

A few thoughts/ideas on using this in Kubernetes are discussed in this issue: https://github.com/psviderski/unregistry/issues/4; generally, it should be possible with the same idea, but with some tweaking.

Also have a look at https://spegel.dev/, it's basically a daemonset running in your k8s cluster that implements a (mirror) registry using locally cached images and peer-to-peer communication.


Oh this is great, it's a problem I also have.

I like the idea, but I'd want this functionality "unbundled".

Being able to run a registry server over the local containerd image store is great.

The details of how some other machine's containerd gets images from that registry to me is a separate concern. docker pull will work just fine provided it is given a suitable registry url and credentials. There are many ways to provide the necessary network connectivity and credentials sharing and so I don't want that aspect to be baked in.

Very slick though.


They're unbundled already. You can run unregistry as a standalone service and use your own way to push/pull from it: https://github.com/psviderski/unregistry?tab=readme-ov-file#...

Does it start an unregistry container on the remote/receiving end or the local/sending end? I think it runs remotely. I wonder if you could go the other way instead?

It starts an unregistry container on the remote side. I wonder, what's the use case on your mind for doing it the other way around?

You mean ssh'ing into the remote server, then pulling the image from local? That would require your local host to be accessible from the remote host, or setting up some kind of SSH tunneling.

`ssh -R` and `ssh -L` are amazing, and I just learned that -L and -R both support unix sockets on either end and also unix socket to tcp socket https://manpages.ubuntu.com/manpages/noble/man1/ssh.1.html#:...

I would presume it's something akin to $(ssh -L /var/run/docker.sock:/tmp/d.sock sh -c 'docker -H unix:///tmp/d.sock save | docker load') type deal


This is what docker-pushmi-pullyu[1] does, using `ssh -R` as suggested by a sibling comment.

[1]: https://github.com/mkantor/docker-pushmi-pullyu


The problem with running a registry locally is that Docker doesn't provide an API to get individual image layers to be able to build a registry API on top. You have to hook into the containerd Docker uses under the hood. You can't do this locally in many cases, for example, on macOS the VM running Docker Desktop doesn't expose the containerd socket. I guess the workaround you implemented in docker-pushmi-pullyu is an extra copy to the registry which is a bummer.

That's also what the submitted tool does; I want to do the same thing, just in the reverse direction. I just don't want to start extra containers on the prod machine.

No, the second one (docker-pushmi-pullyu) runs the registry on the build host.

I meant to reply to you, whoops.

docker-pushmi-pullyu does an extra copy from build host to a registry, so it is just the standard workflow.

I think Spegel does what I want (= serve images from the local cache as a registry), so I might be able to build from that. It is meant to be integrated with Kubernetes though, so making a simple transfer tool probably requires some adaptation.


How about using docker context? I use that a lot and it works nicely.

How do docker contexts help with the transfer of images between hosts?

I assume OP meant something like this, building the image on the remote host directly using a docker context (which is different from a build context)

  docker context create my-awesome-remote-context --docker "host=ssh://user@remote-host"

  docker --context my-awesome-remote-context build . -t my-image:latest
This way you end up with `my-image:latest` on the remote host too. It has the advantage of not transferring the entire image but only transferring the build context. It builds the actual image on the remote host.

This is exactly what I do: make a context pointing to the remote host, then use docker compose build / up to launch it on the remote system.

This is super slick. I really wish there was something that did the same, but using torrent protocol, so all your servers shared it.

Not a torrent protocol but p2p, check out https://github.com/spegel-org/spegel it's super cool.

I took inspiration from spegel but built a more focused solution to make a registry out of a Docker/containerd daemon. A lot of other cool stuff and workflows can be built on top of it.


Sweet. I've been wanting this for a long time.

This is timely for me!

I personally run a small instance with Hetzner that has K3s running. I'm quite familiar with K8s from my day job so it is nice when I want to do a personal project to be able to just use similar tools.

I have a MacBook and, for some reason, I really dislike the idea of running docker (or podman, etc.) on it. Now of course I could have GitHub Actions building the project and pushing it to a registry, then pull that to the server, but it's another step between code and server that I wanted to avoid.

Fortunately, it's trivial to sync the code to a pod over kubectl, and have podman build it there - but the registry (the step from pod to cluster) was the missing step, and it infuriated me that even with save/load, so much was going to be duplicated, on the same effective VM. I'll need to give this a try, and it's inspired me to create some dev automation and share it.

Of course, this is all overkill for hobby apps, but it's a hobby and I can do it the way I like, and it's nice to see others also coming up with interesting approaches.


This is awesome, thanks!

Very cool. Now let's integrate this such that we can do `docker/podman push localimage:localtag ssh://hostname:port/remoteimage:remotetag` without extra software installed :)

I was informed that Podman at least has a `podman image scp` function for doing just this...


Does this work with Kubernetes image pulls?

I guess you're asking about the registry part (not the 'pussh' command). It exposes the containerd image store as a standard registry API, so you can use any tools that work with a regular registry to pull/push images from/to it.

You should be able to run unregistry as a standalone service on one of the nodes. Kubernetes uses containerd for storing images on nodes. So unregistry will expose the node's images as a registry. Then you should be able to run k8s deployments using 'unregistry.NAMESPACE:5000/image-name:tag' image. kubelets on other nodes will be pulling the image from unregistry.
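
So on the Kubernetes side it could be as simple as pointing a workload at that address, assuming unregistry is exposed as a Service named 'unregistry' on port 5000 and containerd on the nodes is configured to allow it as a plain-HTTP registry:

  kubectl set image deployment/my-app my-app=unregistry.NAMESPACE:5000/image-name:tag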

You may want to take a look at https://spegel.dev/ which works similarly but was created specifically for Kubernetes.


this is useful. thanks for sharing

Very nice! I used to run a private registry on the same server to achieve this - then I moved to building the image on the server itself.

Both approaches are inferior to yours because of the load on the server (one way or another).

Personally, I feel like we need to go one step further and just build locally, merge all layers, ship a tar of the entire (micro) distro + app and run it with lxc. Get rid of docker entirely.

The size of my images are tiny, the extra complexity is unwarranted.

Then of course I'm not a 1000 people company with 1GB docker images.


Love it!

I always just use "docker save" to generate a TAR file, then copy the TAR file to the server, and then run "docker load" (on the server) to install the TAR file on the target machine.

See the README; this results in only changed layers being sent instead of _everything_, which can save a lot of time.

I'll do that. Thank you.

I've been very happy doing this:

DOCKER_HOST="ssh://user@remotehost" docker-compose up -d

It works with plain docker, too. Another user is getting at the same idea when they mention docker contexts, which is just a different way to set the variable.

Did you know about this approach? In the snippet above, the image will be built on the remote machine and then run. The context (files) are sent over the wire as needed. Subsequent runs will use the remote machine's docker cache. It's slightly different than your approach of building locally, but much simpler.


This approach is akin to the prod server pulling an image from a registry. The OP's method is push-based.

No, in my example the docker-compose.yml would exist alongside your application's source code and you can use the `build` directive https://docs.docker.com/reference/compose-file/services/#bui... to instruct the remote host (Hetzner VPS, or whatever else) to build the image. That image does not go to an external registry, but is used internal to that remote host.

For 3rd party images like `postgres`, etc., then yes it will pull those from DockerHub or the registry you configure.

But in this method you push the source code, not a finished docker image, to the server.


Seems like it makes more sense to build on the build machine, and then just copy images out to PROD servers. Having source code on PROD servers is generally considered bad practice.

The source code does not get to the filesystem on the prod server. It is sent to the Docker daemon when it builds the image. After the build ends, there's only the image on the prod server.

I am now convinced that this is a hidden docker feature that too many people aren't aware of and do not understand.


Yeah, I definitely didn't understand that! Thanks for explaining. I've bookmarked this thread, because there's several commands that look more powerful and clean than what I'm currently doing which is to "docker save" to TAR, copy the TAR up to prod and then "docker load".

Considering the nature of servers, security boundaries and hardening,

> Linux via Homebrew

Please don't encourage this on Linux. It happens to offer a Linux setup as an afterthought but behaves like a pigeon on a chessboard rather than a package manager.


Brew is such a cute little package manager. Updating its repo every time you install something. Randomly self updating like a virus.

That made me laugh lol

Well put, but it's a shame this comment is the first thing I read, rather than comments about the tool itself!

We're using it to distribute internal tools across macOS and Linux developers. It excels in this.

Are there any good alternatives?


100% Nix, it works on every distro, MacOS, WSL2 and won't pollute your system (it'll create /nix and patch your bashrc on installation and everything from there on goes into /nix).

Downside: it's Nix.

I tried it, but I have not been able to easily replicate our Homebrew env. We have a private repo with pre-compiled binaries, and a simple Homebrew formula that downloads the utilities and installs them. Compiling the binaries requires quite a few tools (C++, sigh).

I got stuck at the point where I needed to use a private repo in Nix.


> We have a private repo with pre-compiled binaries, and a simple Homebrew formula that downloads the utilities and installs them.

Perfectly doable with Nix. Ignore the purists and do the hackiest way that works. It's too bad that tutorials get lost on concepts (which are useful to know but a real turn-off) instead of focusing on some hands-on practical how-to.

This should about do it, and is really not that much different or more difficult than formulas or brew install:

    git init mychannel
    cd mychannel
    
    cat > default.nix <<'NIX'
    {
      pkgs ? import <nixpkgs> { },
    }:
    
    {
      foo = pkgs.callPackage ./pkgs/foo { };
    }
    NIX
    
    mkdir -p pkgs/foo
    cat > pkgs/foo/default.nix <<'NIX'
    { pkgs, stdenv, lib }:
    
    stdenv.mkDerivation {
      pname = "foo";
      version = "1.0";
    
      # if you have something to fetch
      # src = fetchurl {
      #   url = http://example.org/foo-1.2.3.tar.bz2;
      #   # if you don't know the hash, put some lib.fakeSha256 there
      #   sha256 = "0x2g1jqygyr5wiwg4ma1nd7w4ydpy82z9gkcv8vh2v8dn3y58v5m";
      # };

      buildInputs = [
        # add any deps
      ];
    
      # this example just builds in place, so skip unpack
      unpackPhase = "true"; # no src attribute
    
      # optional if you just want to copy from your source above
      # build trivial example script in place
      buildPhase = ''
        cat > foo <<'SHELL'
        #!/bin/bash
        echo 'foo!'
        SHELL
        chmod +x foo
      '';
    
      # just copy whatever
      installPhase = ''
        mkdir -p $out/bin
        cp foo $out/bin
      '';
    }
    NIX

    nix-build -A foo -o out/foo  # you should have your build in './out/foo'
    ./out/foo/bin/foo  # => foo!

    git add .
    git commit -a -m 'init channel'
    git remote add origin git@github.com:OWNER/mychannel
    git push origin main
    
    nix-channel --add https://github.com/OWNER/mychannel/archive/main.tar.gz mychannel
    nix-channel --update
    
    nix-env -iA mychannel.foo
    foo  # => foo!
(I just cobbled that together; if it doesn't work as is, it's damn close; flakes left as an exercise to the reader)

Note: if it's a private repo then in /etc/nix/netrc (or ~/.config/nix/netrc for single user installs):

    machine github.com
        password ghp_YOurToKEn
> Compiling the binaries requires quite a few tools (C++, sigh).

Instantly sounds like a whole reason to use nix and capture those tools as part of the dependency set.


Hm. That actually sounds doable (we do have hashes for integrity). I'll try that and see how it goes.

> Instantly sounds like a whole reason to use nix and capture those tools as part of the dependency set.

It's tempting, and I tried that, but ran away crying. We're using Docker images instead for now.

We are also using direnv to transparently exec commands inside Docker containers; this works surprisingly well.


A quick and dirty version:

    docker -H host1 image save IMAGE | docker -H host2 image load
note: this isn't efficient at all (no compression or layer caching)!

On Podman this is built in as the native command podman image scp [0], which perhaps could be more efficient with SSH compression.

[0] https://docs.podman.io/en/stable/markdown/podman-image-scp.1...


Ah neat I didn't know that podman has 'image scp'. Thank you for sharing. Do you think it was more straightforward to implement this in podman because you can easily access its images and metadata as files on the file system without having to coordinate with any daemon?

Docker and containerd also store their images using a specific file system layout and a boltdb for metadata but I was afraid to access them directly. The owners and coordinators are still Docker/containerd so proper locks should be handled through them. As a result we become limited by the API that docker/containerd daemons provide.

For example, Docker daemon API doesn't provide a way to get or upload a particular image layer. That's why unregistry uses the containerd image store, not the classic Docker image store.


So with Podman, this exists already, but for docker, this has to be created by the community.

I am a bystander to these technologies. I've built and debugged the rare image, and I use Docker Desktop on my Mac to isolate db images.

When I see things like these, I'm always curious why Docker, which seems so much more bureaucratic/convoluted, prevails over Podman. I totally admit this is a naive impression.


Something that took me 20 years to learn: Never underestimate the value of a slick gui.

> why Docker, which seems so much more bureaucratic/convoluted, prevails over Podman

First mover advantage and ongoing VC-funded marketing/DevRel


That method is actually mentioned in their README:

> Save/Load - `docker save | ssh | docker load` transfers the entire image, even if 90% already exists on the server


I use a variant with ssh and some compression:

    docker save $image | bzip2 | ssh "$host" 'bunzip2 | docker load'

If gzip-level compression is enough for you, you could also just use `ssh -C` to enable automatic compression instead of piping through bzip2.

I simply use "docker save <imagename>:<version> | ssh <remoteserver> docker load"

This is great! I wonder how well it works in case of disaster recovery though. Perhaps it is not intended for production environments with strict SLAs and uptime requirements, but if you have 20 servers in a cluster that you're migrating to another region or even cloud provider, the pull-from-registry model seems like the safest and most scalable approach.


