Hacker News

That's why, in my opinion, you should always have the option of building locally: just what you need, when you want it, instead of being forced through a slow CI/CD pipeline.



I built up our CI pipeline until it was the faster way of running tests. You can rent more compute than you can carry...


Just provisioning a new node, deploying a new Kubernetes worker, and downloading the source and pre-built artifacts takes longer than building incrementally on my machine.

Also, my local machine has resources dedicated entirely to me and isn't held back because someone else decided to rebuild the world.


Why does your CI do all of that after you start the build, if it happens every time? Developer time is expensive.

Scale up ahead of time, so there's always a machine ready. Prefetch the repository and packages when CI workers start, so they're already in place before a build is triggered. Use your imagination - CI doesn't have to suck.
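A minimal sketch of that prewarming idea as a worker-boot script (the repo URL, cache path, and image name are all hypothetical placeholders; adapt to whatever CI system you run):

```shell
#!/bin/sh
# Runs once at CI-worker boot, before any job is scheduled, so the
# first build doesn't pay for a full clone or cold image pulls.
set -eu

# Hypothetical values -- substitute your own.
REPO_URL="https://example.com/yourorg/monorepo.git"
CACHE_DIR="/var/cache/ci"

mkdir -p "$CACHE_DIR"

# Keep a bare mirror warm; jobs clone from this local mirror
# instead of hitting the network for the whole history.
if [ -d "$CACHE_DIR/monorepo.git" ]; then
    git -C "$CACHE_DIR/monorepo.git" fetch --prune
else
    git clone --mirror "$REPO_URL" "$CACHE_DIR/monorepo.git"
fi

# Pre-pull the build image so the first job's `docker run` starts instantly.
docker pull yourorg/build-image:latest
```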


If you scale up ahead of time, then it's not really on-demand; it means you're paying considerably more for what amounts to dedicated hardware.


The marginal cost of keeping a large-ish spot instance running 99% of the time is dirt cheap (e.g. ~$400/mo to keep one extra c7g.8xlarge running).
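A quick back-of-the-envelope check on that figure, taking the commenter's ~$400/mo number at face value (actual spot pricing varies by region and over time) and assuming a $200k/yr fully-loaded engineer cost for comparison:

```python
# Sanity-check the ~$400/mo figure against engineer cost.
HOURS_PER_MONTH = 730  # 8760 hours/year / 12

monthly_cost = 400.0   # USD/mo for one warm c7g.8xlarge spot instance
hourly_rate = monthly_cost / HOURS_PER_MONTH
print(f"instance ~= ${hourly_rate:.2f}/hr")

# Assumed $200k/yr fully-loaded engineer, 52 weeks * 40 hours.
engineer_hourly = 200_000 / (52 * 40)
print(f"engineer ~= ${engineer_hourly:.0f}/hr")
```

At roughly $0.55/hr for the instance versus ~$96/hr for the engineer under these assumptions, the always-warm worker really is a rounding error.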

If you're paying for an engineering team, that's a rounding error.


