It's not really custom. It's industry-standard protocols for connecting independent (one might say "autonomous") networks.
Tailscale is this on easy mode, of course. There's a blog post by apenwarr somewhere (that I can't find right now) that lays out the fundamental thesis of Tailscale, and it's very similar to these folks' manifesto.
1Password is amazing for this, IMO. My spouse and I have been using 1Password together for more than a decade. One of the first things I set up was an "AAA Read Me First" note with links to a bunch of other notes and documents, including our estate planning stuff.
The biggest thing that makes me stick with 1Password, despite the semi-recent VC shenanigans, is that if for some reason we fall behind on billing (for example, because the credit card got cancelled because I died), the account goes into read-only mode forever. As long as 1P is a going concern, the data we choose to put there is safe from the biggest risk in our threat model.
Separation of services is orthogonal to separation of concerns. There's nothing stopping you from having multiple entry points into the same monolith, e.g. web servers run `puma` and workers run `sidekiq`, but both are running the same codebase. This is, in fact, the way every production Rails app that I've worked with is structured in terms of services.
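Concretely, a deployment sketch (image and file names are made up here, not from any specific app) where both process types boot the exact same codebase:

```yaml
# docker-compose.yml -- one image, two entry points
services:
  web:
    image: myapp:latest                      # hypothetical app image
    command: bundle exec puma -C config/puma.rb
    ports:
      - "3000:3000"
  worker:
    image: myapp:latest                      # same codebase, different process
    command: bundle exec sidekiq -C config/sidekiq.yml
  redis:
    image: redis:7                           # queue backend for Sidekiq
```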
Concerns (in the broad sense, not ActiveSupport::Concern) can be separated any number of ways. The important part is delineating and formalizing the boundaries between them. For example, a worker running in Sidekiq might instantiate and call three or four or a dozen different service objects, all living in different engines, to accomplish what it needs, but all of that runs in the same Sidekiq thread.
Inserting HTTP or gRPC requests between layers might enforce clean logical boundaries, but often what you end up with is a distributed ball of mud that is harder to reason about than a single codebase.
By "concern" I meant performing different actions and owning write access to different tables, not having completely separate codebases, which is what you seem to have construed for some reason.
I would also never connect services without a queue unless the message can be discarded (in which case I can use pub/sub). Using HTTP is one of the most amateurish ways I can imagine to connect two services that I wrote. Even the thought is cringe. Is this common?
Portland, Oregon completed a project in 2011 that successfully eliminated almost all of the combined sewer overflows into the Willamette River and the Columbia Slough (a swampy area on the south side of the Columbia River near the airport).
There was a ton of work done to reduce the amount of water ending up in the sewer during storms, followed by some large infrastructure projects to increase the carrying capacity of the sewer itself.
To preface, I'm not a Kubernetes or Mosquitto expert by any means.
I'm confused about one point. A k8s Service sends traffic to pods matching the selector that are in the "Ready" state, so wouldn't you accomplish HA without the pseudocontroller by just putting both pods in the Service? The Mosquitto bridge mechanism is bidirectional, so you're already getting data re-sync no matter where a client writes.
edit: I'm also curious whether you could use a headless service and an init container on the secondary to set up the bridge to the primary by selecting the IP that isn't its own.
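Roughly what I have in mind for the first question, as a sketch (labels and names are made up, not from the article):

```yaml
# A single Service fronting both brokers; traffic only goes to pods
# whose readiness probe is currently passing.
apiVersion: v1
kind: Service
metadata:
  name: mosquitto
spec:
  selector:
    app: mosquitto        # matches both the primary and secondary pods
  ports:
    - name: mqtt
      port: 1883
      targetPort: 1883
```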
> so wouldn't you accomplish HA without the pseudocontroller by just putting both pods in the Service?
I'm not sure how fast that would be; the extra controller container is needed for the almost-instant failover.
To answer your second question (why not an init container in the secondary): because this way we can scale that failover controller up over multiple nodes. Otherwise, if the node where the (fairly stateless) controller runs goes down, we'd have to wait until k8s schedules another pod instead of failing over almost instantly.
I am making an assumption: I assume that you mean the Deployment. The Deployment is responsible for individual pods; if a pod goes away, the Deployment brings in a new one.
To answer your question: yes, you can simply create pods without a Deployment, but then you are fully responsible for their lifecycle and failures. The Deployment makes your life easier.
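A minimal sketch of what I mean (names and image are illustrative):

```yaml
# The Deployment keeps `replicas` pods running from the same template;
# if one disappears, the controller creates a replacement.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mosquitto
spec:
  replicas: 2
  selector:
    matchLabels:
      app: mosquitto
  template:
    metadata:
      labels:
        app: mosquitto
    spec:
      containers:
        - name: mosquitto
          image: eclipse-mosquitto:2    # assumed image
          ports:
            - containerPort: 1883
```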
I was referring to the pod running the kubectl loop. As far as I can tell (I could be wrong! I haven't experimented yet), the script is relying on the primary Mosquitto pod's ready state, which is also what a Service relies on by default.
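My mental model of that loop is something like this (a guess on my part, not the actual script; pod, service, and label names are placeholders):

```sh
# Watch the primary's Ready condition and repoint the Service if it drops.
while true; do
  ready=$(kubectl get pod mosquitto-primary \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}')
  if [ "$ready" != "True" ]; then
    kubectl patch service mosquitto \
      -p '{"spec":{"selector":{"role":"secondary"}}}'
  fi
  sleep 1
done
```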
BSL is a source-available license that by default forbids production use. A set period after the date of any particular release, not to exceed four years, that release automatically converts to an open-source license, typically the Apache license.
Projects can add additional license grants to the base BSL. EMQX, for example, adds a grant for commercial production use of single-node installations, as well as production use for non-commercial applications.
You don't even need a dedicated GPU. Any Intel Core-series CPU past 5th gen has a built-in Quick Sync engine that can handle many simultaneous transcodes.
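For instance, a one-liner like this (exact flags depend on your ffmpeg build; treat it as a sketch, file names are placeholders):

```sh
# H.264 transcode on the iGPU via Quick Sync (QSV), audio copied through.
ffmpeg -hwaccel qsv -i input.mkv -c:v h264_qsv -b:v 4M -c:a copy output.mp4
```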
PostgreSQL defaults (last I looked; it's been a few years) are/were set up for spinning storage and very little memory. They absolutely work for tiny things like what self-hosting usually implies, but for production workloads, tuning the DB parameters to match your hardware is essential.
Correct, they're designed for maximum compatibility. Postgres doesn't do even basic adjustments out of the box, and the defaults are meant to run on tiny machines.
IIRC the default shared_buffers is 128MB; the usual recommendation is ~25% of system RAM for shared_buffers, with effective_cache_size set to 50-75%.
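For example, rough starting points for a 16GB-RAM box on SSD (my rules of thumb, adjust to taste):

```ini
# postgresql.conf -- illustrative values, not defaults
shared_buffers = 4GB           # ~25% of RAM (default is 128MB)
effective_cache_size = 12GB    # planner hint, ~50-75% of RAM
work_mem = 64MB                # per sort/hash operation (default 4MB)
random_page_cost = 1.1         # default 4.0 assumes spinning disks
```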