Thank you for the kind words. We’re always trying to share our knowledge even if Depot isn’t a good fit for everyone. I hope the scripts get some mileage!
This is what we focus on with Depot. Faster builds across the board without breaking the bank. More time to get things done and maybe go outside earlier.
This is a neat idea that we should try. We've tried the `eatmydata` trick to speed up dpkg, but the slow part turned out not to be the fsyncs; it was the dpkg database itself.
Founder of Depot here. For image builds, we've done quite a bit of optimization work on BuildKit in our image builders to make certain aspects of a build fast, like image load and cache invalidation.
We also do native multi-platform builds behind one build command. So you can call `depot build --platform linux/amd64,linux/arm64` and we will build on native Intel and ARM CPUs, skipping all the emulation. All of that adds up to really fast image builds.
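To make that concrete, here's a rough sketch of what that could look like inside a GitHub Actions workflow; only the `--platform` flag comes from the command above, while the checkout step, the token secret name, and the trailing build context are assumptions:

    jobs:
      build:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          # Assumes the depot CLI is already available on the runner and
          # that a project token is stored as a repo secret (names here
          # are illustrative).
          - run: depot build --platform linux/amd64,linux/arm64 .
            env:
              DEPOT_TOKEN: ${{ secrets.DEPOT_TOKEN }}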
Just flagging that Depot now has macOS and Windows runners [0] as well if you're looking for even faster builds. I also recognize that constantly reevaluating runners isn't on everyone's priority list.
This was an interesting read and highlighted some of the author's top-of-mind pain points and rough edges. However, in my experience, this is definitely not an exhaustive list, and there are actually many, many, many more.
Things like the 10 GB cache limit in GitHub, concurrency limits based on runner type, and the expensive price tag for larger GitHub runners, and that's before you even get to the security issues.
Having been building Depot [0] for the past 2.5 years, I can say there are so many footguns in GitHub Actions that you don't realize exist until you start seeing how folks are bending YAML workflows to their will.
We've been quite surprised by the `container` job. Folks want to use it to create a reproducible CI sandbox for their build to happen in, but it's surprisingly difficult to work with. Permissions are wonky, Docker layer caching is slow and limited, and paths don't quite work the way you'd expect.
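For anyone who hasn't bumped into it, a `container` job looks roughly like the sketch below; the image, options, and steps are just examples, not from any specific workflow:

    jobs:
      test:
        runs-on: ubuntu-latest
        # Every step in this job runs inside the container rather than
        # directly on the runner VM, which is where the permission and
        # path surprises tend to show up.
        container:
          image: node:20-bullseye
          options: --user 1001
        steps:
          - uses: actions/checkout@v4
          - run: npm ci && npm test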
With Depot, we've been focused on making GitHub Actions dramatically faster and removing as many of these rough edges as possible.
We started by making Docker image builds dramatically faster, and we've since brought that architecture and performance to our own GHA runners [1], building up and optimizing the compute and processes around the runner to make jobs extremely fast. For example, caching is 2-10x faster without you having to replace your cache steps or use any special cache actions of ours. Our Docker image builders sit right next door on dedicated compute with fast caching, which makes the `container` job a lot better: we can build the image quickly, and then you can use that image straight from our registry in your build job.
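As a hedged illustration of the cache point: a workflow like the one below keeps the stock actions/cache step as-is and only swaps the runner label (the `depot-ubuntu-22.04` label is an example; check the labels configured for your setup):

    jobs:
      build:
        # Example Depot runner label; everything else is unchanged.
        runs-on: depot-ubuntu-22.04
        steps:
          - uses: actions/checkout@v4
          - uses: actions/cache@v4
            with:
              path: ~/.npm
              key: npm-${{ hashFiles('package-lock.json') }}
          - run: npm ci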
All in all, GHA is wildly popular. But the sentiment, even among its biggest fans, is that it could be a lot better.
Depot looks nice, but it also looks fairly expensive to me. We're a small B2B company, just 10 devs, but we'd be looking at $200 + $500 = $700/mo just for building and CI.
I guess that would be reasonable if we really needed the speedup, but if you're also offering a better QoL GHA experience then perhaps another tier for people like us who don't necessarily need the blazing speed?
We're rolling out new pricing in the next week or two that should likely cover your use case. Feel free to ping me directly, email in my bio, if you'd like to learn more.
This is just another item in a laundry list of things from Docker that feel developer-hostile. Does it make sense? Sure, it might, given the old architecture of Docker Hub.
I'm biased (i.e., co-founder of Depot [0]) and don't have the business context around internal Docker decisions, so this is just my view of the world as we see it today. There are solutions to the egress problem that negate needing to push that cost down to your users. So this feels like an attempt to get even more people onto their Docker Desktop business model, not something explicitly driven by egress costs.
This is why, when we release our registry offering, we won't have this kind of rate limiting. There are also ways to avoid the rate limits in CI. For example, our GitHub Actions runners come online with a unique public IP address for every job you run, avoiding the need to log in to Docker at all.
> There are solutions to the egress problem that negate needing to push that cost down to your users.
Please do elaborate on what those are!
There are always lots of comments like this providing extremely vague prescriptions for other people's business needs. I'd love to hear details if you have them, otherwise you're just saying "other companies have found ways to get someone else besides their customers to pay for egress costs" without any context for why those people are willing to pay the costs in those contexts.
As someone mentioned, GitHub has something to prevent this, but it's unclear (or at least undocumented) what.
We at Depot [0] work around this by guaranteeing that every new runner brought online has a unique public IP address, which avoids the need to log in to Docker to pull anything.
We do the same thing for our Docker image build product, which helps with builds that pull public base images, etc.
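As a rough illustration, the login step below (commented out) is the part that becomes unnecessary when every job pulls from its own public IP; the image and secret names are just examples:

    jobs:
      pull:
        runs-on: ubuntu-latest
        steps:
          # Normally you'd authenticate to raise the anonymous pull limit:
          # - uses: docker/login-action@v3
          #   with:
          #     username: ${{ secrets.DOCKERHUB_USERNAME }}
          #     password: ${{ secrets.DOCKERHUB_TOKEN }}
          - run: docker pull alpine:3.19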
One of the things we're thinking about is automatic method/function call tracing. Something like attaching the full stack of calls made to handle an API request. Ideally using the same UI, so that you can see the headers/payload that was sent and the function-level stack trace right next to each other. None of the OpenTelemetry verbosity, all of the observability!
This idea is cool and well-scoped to a specific pain point that can be solved today. This is something that I think we need more of when it comes to YC batches.
However, you all should publish pricing before launching, and forcing me to book a call with you to use it is a nonstarter for me.
I don't want to get tied to this tool and then be charged for it in some weird way; give me your v0 pricing so that I can pay for it in a transparent way. As a fellow founder, I think you also know how little time I have for calls to check out demos for tools. So, just let me sign up and give it a spin.
Yeah, this is very valid. We opened up a self-service trial this week so that people can more easily give it a spin before deciding to join a paid plan, but we're still working on a lot of the customer experience things to offer a truly complete onboarding experience. Appreciate the feedback!