mshekow's comments | Hacker News

oci.dag.dev is fantastic. You can also self-host it, because it's just a Golang CLI.

I compared it with various other registry browser tools, and it was clearly the best one. See here for more details: https://www.augmentedmind.de/2025/03/30/the-9-best-docker-re...


I'm German myself. To me this looks like a category of problem where you can no longer translate the word literally, because chances are low that the consumer will understand it ("Bremsschwelle" or whatever you end up picking).

Wouldn't it make more sense to think of a completely different analogy, one that is really well known by the target audience? From what I understand, you are building an app that inhibits people from doomscrolling. That is a well-established "German" word, too. Using that, people immediately understand what you mean, rather than having to follow a broken analogy.


I know from my own experience that we folks in the US tend to use a lot of idioms in our communication, and these can be a struggle to translate effectively. Most translation software translates literally instead of figuratively.

When I first went to work with an international company, I found myself having to break that habit, because my European colleagues would look at me funny half the time when I used idioms. Even though they spoke English perfectly, they often couldn't understand me.

Took some time to break me of that habit.

https://youtu.be/mY9gVIcRkkI?si=rdSofwaAH4bQCIBK


This guide looks really nice; it targets novice Git users.

I've also written a guide, targeting devs with basic Git experience. It is much shorter; maybe you or your team can benefit from it [1].

[1] https://www.augmentedmind.de/2024/04/07/ultimate-git-guide-f...


I found two ideas / techniques helpful in this context:

1) Conventional comments (https://conventionalcomments.org/) as an (agreed-upon) language to be used in PR comments

2) Ship / Show / Ask (https://martinfowler.com/articles/ship-show-ask.html), where "Show" and "Ship" are non-blocking PRs (or even directly committing to trunk, if you use trunk-based development), since not every(!) PR needs reviewing and/or should block the PR creator
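
On point 1, for readers who haven't seen conventional comments before: the idea is a fixed prefix format, `<label> [decorations]: <subject>`, so the intent and severity of a review comment is unambiguous. Two illustrative (made-up) examples:

```
nitpick (non-blocking): this variable name is a bit cryptic
suggestion: extract this into a helper so both call sites can reuse it
```

Because the label says whether a comment is blocking, the reviewee immediately knows what they must address versus what is just food for thought.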


Looks nice. I'm a Clockify fan myself. Your app and homepage also remind me a lot of https://timemator.com/ (which I ended up not using because it was unable to generate reports that show me the percentage(!) of time spent on different projects throughout the day).


Thanks for the compliment! I appreciate the comparison. I’m glad you like the look of Taim. I'm aiming to include advanced reporting features, including the ability to show the percentage of time spent on different projects throughout the day. If you have any other suggestions or features you’d like to see, feel free to share!


I agree.

Also, in my experience, writing UI code is usually more(!) work than writing the functionality underneath, because

a) styling / layout has to be learned from scratch (e.g. because of a proprietary language or API, such as QML or QWidgets for Qt)

b) you have to take care of every frikkin' single user interaction (which becomes worse the more dynamic and custom your UI is), and building proper accessibility is also no walk in the park


I took a detailed look at Docker's caching mechanism (actually: BuildKit's) in this article: https://www.augmentedmind.de/2023/11/19/advanced-buildkit-ca...

There I also explain that IF you use registry cache import/export, you should use the same registry to which you also push your actual image, and use the "image-manifest=true" option (especially if you are targeting GHCR; on Docker Hub, "image-manifest=true" is not necessary).
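
To make this concrete, here is a sketch of such an invocation (the image and cache ref names are placeholders; `--cache-to`/`--cache-from` with `type=registry`, `mode=max`, and `image-manifest=true` are real BuildKit options):

```shell
# Push the image and its build cache to the SAME registry (here: GHCR).
# image-manifest=true makes BuildKit export the cache as an OCI image
# manifest, which registries such as GHCR or Artifactory accept.
docker buildx build \
  --tag ghcr.io/myorg/myapp:latest \
  --cache-to type=registry,ref=ghcr.io/myorg/myapp:buildcache,mode=max,image-manifest=true \
  --cache-from type=registry,ref=ghcr.io/myorg/myapp:buildcache \
  --push .
```

`mode=max` additionally exports cache for intermediate stages of a multi-stage build, not just the final stage.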


After years of lurking, I made an account to reply to this.

"image-manifest=true" was the magic parameter that I needed to make this work with a non-DockerHub registry (Artifactory). I spent a lot of time fighting this, and non-obvious error messages. Thank you!!

We use a multi-stage build for a DevContainer environment, and the final image is quite large (for various reasons), so a better caching strategy really helps in our use case (smaller incremental image updates, smaller downloads for developers, less storage in the repository, etc.).


Thanks, this is a very thorough explanation.

Is there really no way to cache the 'cachemount' directories?


The only option I know is to use network shares/disks, but you need to make sure that each share/disk is only used by one BuildKit process at a time.
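
For context, this is what a cache mount looks like in a Dockerfile (a minimal sketch; the target path and the `pip` example are placeholders that depend on which tool's cache you want to persist):

```dockerfile
# syntax=docker/dockerfile:1
FROM python:3.12-slim
# /root/.cache/pip is a BuildKit cache mount: it persists across builds
# on the same builder host, but it is NOT part of the image layers and
# is NOT exported by --cache-to, which is why sharing it is hard.
RUN --mount=type=cache,target=/root/.cache/pip \
    pip install requests
```

This is exactly why cache mounts are fast locally but "disappear" on ephemeral CI runners: the mount lives only on the builder's disk.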


Sounds really interesting, and I'd also love a Firefox version :)


Will try to make one soon!


Yes. Sounds useful.


I also looked at this topic, see [1]. Some points are similar to the article posted by OP. My findings were:

- Docker Desktop and Docker Engine (CE) behave differently, e.g. regarding bind mounts or file system ownership.

- CPU/platform differences (ARM vs. AMD64): many devs don't realize they use ARM on their Mac, thus ARM images are used by default, and tools you run in them (or want to install) may behave differently, or may be missing entirely

- Incompatible Linux kernel APIs (when containerized binaries make syscalls that are not supported by the host's kernel, for whatever reason)

- Using the same version tags, expecting the same result (--> insanity, as you know it :D)

- Different engines (e.g. Docker Desktop vs. colima) change the execution behavior (RUNNING containers)

- Different build engines (e.g. kaniko vs. BuildKit vs. buildah) change the BUILD behavior

For anyone who is interested: more details in [1].

[1] https://www.augmentedmind.de/2023/04/02/docker-portability-i...
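
Regarding the ARM vs. AMD64 point: one common mitigation (sketched here with a placeholder image name) is to pin the target platform explicitly, so a build on an ARM Mac still produces the image your AMD64 servers expect:

```shell
# On an Apple Silicon Mac, builds default to linux/arm64.
# Pinning the platform avoids accidentally shipping ARM-only images;
# the AMD64 image then runs under QEMU emulation locally (slower, but correct).
docker build --platform linux/amd64 --tag myapp:latest .
```

The same `--platform` flag also works with `docker run`, which helps when you need to locally reproduce behavior of an AMD64-only image.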


I think a lot of this comes down to a broader difference between Mac/Windows Docker Desktop and "plain" Docker on Linux. The former is actually backed by a VM, so a lot of the painless simplicity comes from having a true virtual machine involved, rather than just a layer of namespacing.

A lot of people are in here complaining about how Docker is not reproducible enough. But reproducibility of image builds is a matter of diminishing returns, and there are other problems to worry about, like the ones you are pointing out.

Speaking of which, it's probably good to get in the habit of installing some Linux OS in a VM and trying to run your container images inside that (with "plain" Docker, no inner VM), before pushing it to your cloud host and waiting for it to fail there.


It took me a while to understand what your tool is doing. It looks like an abstraction layer for concrete monitoring/alerting solutions.

What confuses me is:

- You introduce Keep like this: "Think of Keep as Prometheus Alertmanager but for all observability tools", but then there is no Alertmanager provider. Is this planned?

- You mention that you support the 3 hyperscaler clouds (AWS, ...), yet I do not see any examples or code that back this up

- You mention that Keep can be used to _test_ alerts. How? Examples? Otherwise, make it clear that you _plan_ for Keep to be able to do this at some point.

In general though it looks very interesting :).


Thanks for your feedback @mshekow! Why was it hard to understand what Keep does? Lack of some concrete examples, maybe?

Regarding your questions:

- Working on the Grafana/Prometheus provider and it'll be shipped in <1 week

- AWS and GCP support is in final testing; we thought we could release it before Keep gets any traction ^^" Anyway, PRs will be available in <2 days

- There are two things here: 1. You can simply test whether your alert "triggers" by running the CLI with the --verbose and --alerts-file options. 2. We plan on adding tests that will both allow you to "mock" the alert to verify that it acts as you planned, and source-control-integrated tests (e.g. a GitHub Action) that will tell you when a code change breaks your alerts.

Anyway, if you have any other questions/thoughts, feel free to open an issue and we'll address it right away.

