
Could someone explain why tip #9 is a good idea? To me it makes more sense to build the application in the CI pipeline and use the Dockerfile only to package the app.

The post is focused on Java apps but, for example, there is a distinction between runtime and SDK images in .NET Core. If you want to build in Docker, you have to pull the heavier SDK image. If you copy the built binaries into the image, you can use the runtime image. I guess there could be similar situations on other platforms too.
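For illustration, here is roughly what the two starting points look like; the mcr.microsoft.com image names and the MyApp.dll path are illustrative, not from the original post:

    # building inside Docker: you have to start from the heavier SDK image
    FROM mcr.microsoft.com/dotnet/sdk:8.0

    # packaging pre-built binaries: the slimmer runtime image is enough
    FROM mcr.microsoft.com/dotnet/runtime:8.0
    COPY ./publish /app
    ENTRYPOINT ["dotnet", "/app/MyApp.dll"]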

Other than that, it looks like a decent guide. Thanks to the author.




For me, the big advantage of doing more in Docker and less in the CI environment is that I have less lock-in to (and dependency on) whatever my CI provider does. I try to reduce my CI scripts to something like:

    docker build -t image .
    docker run image test
All the complexity of building and collecting dependencies goes into Dockerfiles, so I can reproduce the build locally, or anywhere else, and importantly without messing with any system settings or packages. No more makefiles or shell scripts that make a ton of assumptions about your laptop, all of which need to be set just right to build something from source; just docker build and off you go. It's such a hassle when you need to follow pages of a README just to build something from source (plus install a lot of dependencies that you have to clean up afterwards).


The same problems that apply to production environments also apply to CI systems: you need to make sure those build agents are project-aware and up to date; if you decide to move one project to a new JDK, you'll need to update your build servers; and good luck if you want to update only some of your projects.

The appeal of Docker is completely and reproducibly owning production (what runs on your laptop runs on prod), and that also applies to the build (what builds on your laptop builds in CI). Not to mention the add-on benefit that you can now use standard build agents across every tech stack and project, with no need to customize them or keep them up to date.


With multi-stage builds you get a bunch of benefits. You can pull the heavy SDK image when you start building the app, and that layer gets cached. Then, when you package the image, you copy in the jar that was built, but not the heavy SDK. When you run the build again, the heavy/expensive steps are skipped because they're cached. Now you have a single set of operations that builds both your app and its production image, so there's no chance of inconsistency between them.
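A minimal sketch of this for a Java app (image tags, paths, and the jar name are illustrative):

    # build stage: based on the heavy SDK image; layers are cached between runs
    FROM maven:3.9-eclipse-temurin-17 AS build
    WORKDIR /app
    COPY pom.xml .
    # resolve dependencies first so source changes don't invalidate this layer
    RUN mvn dependency:go-offline
    COPY src ./src
    RUN mvn package -DskipTests

    # package stage: copy only the jar into a slim runtime image
    FROM eclipse-temurin:17-jre
    COPY --from=build /app/target/app.jar /app.jar
    ENTRYPOINT ["java", "-jar", "/app.jar"]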

In addition, you can build a separate image from a specific stage of your multi-stage build (for example, if you want to build more apps based on the SDK stage, or run tests that require debugging). So from one Dockerfile you can produce multiple images or tags to use in different parts of your pipeline. The resulting production image is still based on the same source, so you have more confidence that what's going to production is what was tested in the pipeline.
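For example, with a stage named `build` as in the sketch above (stage and tag names are illustrative):

    # build only up to the named stage, tag it, and run the tests inside it
    docker build --target build -t app-build .
    docker run app-build mvn test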

Furthermore, devs can iterate on this Dockerfile locally, rather than trying to replicate the CI pipeline in an ad-hoc way. The more of your pipeline you stuff into a Dockerfile, the less you have to focus on "building your pipeline".


As I read it, the tip is to always build in a consistent environment. I think a CI pipeline counts in that regard.

The way I read it, they're talking more about local development: everyone should build the application inside a container rather than on their personal machines with differing setups.



