Running Nomad for a Home Server (mrkaran.dev)
278 points by elliebike on Feb 15, 2021 | 149 comments



We're pulling the trigger tomorrow to migrate the first production system to Nomad, and launch 1-2 new products on Nomad first. It's quite exciting.

We chose Nomad there because it's a business requirement to be able to self-host from an empty building, due to the data we process - that's scary with K8. And K8 is essentially too much for the company. It's like 8 steps into the future from where most teams in product development and operations are. Some teams haven't even arrived at the problems Ansible solves, let alone K8.

The HashiCorp stack of Consul/Nomad/Vault/Terraform lets us split this into 4-8 smaller changes and iterations, so everyone can adjust and stay on board (though each change is still massive for some teams). This results in buy-in even from the most skeptical operations team, who are now rolling out Vault because it's secure and solves their problems.

Something that really impressed me overall: one of our development teams has a PoC that uses Nomad to schedule and run a 20-year-old application with Windows domain requirements and C++ bindings to the COM API. Sure, it's not pretty, it's not ready to fail over; Nomad mostly copies the binaries and starts the system on a prepared, domain-joined Windows host... but still, that's impressive. And it brings minor update times down from days or weeks to minutes.

Being able to handle that workhorse on the one hand, and flexibly handle container deployments for our new systems in the very same orchestrator on the other, is highly impressive.


This is off-topic trivia, but since it's clear it wasn't a typo: it's "k8s", not "k8", and it originates from "kubernetes" having 8 letters between "k" and "s". Similar to "a11y" ("accessibility") or "i18n" ("internationalization") or "a16z" ("andreessen horowitz").


I assumed it was a typo but I like it. I’ll start calling it Kate.


What did you migrate from?


Currently, the automated parts are on Terraform/Chef, and there is a bunch of manually set-up stuff in the company.


I did/do run both myself, Kubernetes and Nomad, and it was a million times easier to set up Nomad (including Consul) on bare metal than it was to set up Kubernetes. Kubernetes offers more features, but you most likely don't need them, and the increase in complexity makes it a pain to maintain. I'm running a three-node cluster on Hetzner [0] for Pirsch [1] right now and haven't had any difficulties whatsoever when upgrading and modifying the cluster. Adding Vault is a bit tricky, but I'm sure I'll get that done too. I highly recommend trying it if you think about maintaining your own infrastructure!

Let me know if you have any questions. I plan to write an article/kind-of-tutorial about the setup.

[0] https://www.hetzner.com/de/cloud

[1] https://pirsch.io/
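
For a sense of scale, the agent configs on bare metal can be tiny. A rough sketch, assuming a three-server cluster (IPs, paths and filenames below are made up):

  # /etc/nomad.d/server.hcl - one of the three server nodes
  datacenter = "dc1"
  data_dir   = "/opt/nomad/data"

  server {
    enabled          = true
    bootstrap_expect = 3
  }

  # /etc/nomad.d/client.hcl - a worker node
  datacenter = "dc1"
  data_dir   = "/opt/nomad/data"

  client {
    enabled = true
    servers = ["10.0.0.10:4647", "10.0.0.11:4647", "10.0.0.12:4647"]
  }

If a local Consul agent is running, the clients can even discover the servers through it and the explicit servers list isn't needed.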


I would be very interested in a more detailed write-up on Nomad vs Kubernetes for bare metal. I'm working through getting Kubernetes stood up, but I'm running into a dearth of features--namely you have to bring your own load balancer provider, storage provider, ingress controller, external DNS, monitoring, secret encryption, etc, etc before you can run any real world applications on top of it. I would be interested in how Nomad compares.

EDIT: Downvoters, I'm really curious what you're objecting to above, specifically.


It's much easier with Nomad as you are not forced into a "black box" with networking layers and surrounding requirements.

Bare metal Nomad - use it with Consul and hook up Traefik with the Consul backend. This would be the simplest, most "zero conf" way to go.
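
Roughly, that ends up as tagging the service in the Nomad job and letting Traefik's Consul catalog provider pick it up - a sketch (service name, host and image are made up):

  group "app" {
    network {
      port "http" { to = 8080 }
    }

    service {
      name = "my-app"                 # hypothetical service name
      port = "http"
      tags = [
        "traefik.enable=true",
        "traefik.http.routers.my-app.rule=Host(`app.example.com`)",
      ]

      check {
        type     = "http"
        path     = "/health"
        interval = "10s"
        timeout  = "2s"
      }
    }

    task "app" {
      driver = "docker"
      config {
        image = "example/my-app:1.0"  # hypothetical image
        ports = ["http"]
      }
    }
  }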

I've used this setup for a few years of heavy production use (e-commerce & 50 devs).

As Consul presents SRV records you can hook up an LB using those, or use Nomad/Consul templating to configure one.

Service mesh with mTLS is actually rather approachable, and we've deployed it on selected services where we need to track access and have stricter security. (This however had us move off Traefik and onto nginx + OpenResty.)

Now if you want secrets management on steroids you'll want Vault. It's really in many ways at the heart of things. It raises complexity, but the things you can do with the Nomad/Consul/Vault stack are fantastic.

Currently we use vault for ssl pki, secrets management for services & ci/cd, and ssh cert pki.

These things really form a coherent whole, and each component is useful on its own.

Compared to k8s it’s a much more versatile stack although not as much of a “framework” and more like individual “libs”.

I always come back to the description: “more in line with the unix philosophy”.

In a mixed environment where you have some legacy and/or servers to manage I think using the hashicorp stack is a no brainer - consul and vault are tools I wouldn’t want to be without.


I'm currently working on an article exploring Nomad and how it compares in some aspects with Kubernetes, which I'll post on HN soon-ish.


I'd be really interested, too. Have you looked at k3s at all? We're considering trying to run https://github.com/rancher/k3os on bare metal servers.


Yeah, that's what I used. It comes with some providers out of the box, but they strike me as toys. For example, it gives you support for node-local volumes, but I don't really want to have to rely on my pods being scheduled on specific nodes (the nodes with the data). Even if you're okay with this, you still have to solve for data redundancy and/or backup yourself. The Rancher folks have a solution for this in the form of LongHorn, so maybe we can expect that to be integrated into k3s in the future. There's also no external DNS support at all, and IIRC the default load-balancer provider (Klipper LB, which itself seems to be not very well documented) assigns node IPs to services (at random, as far as I can tell) so it's difficult to bind a DNS name to a service without something dynamically updating the records whenever k8s changes the service's external IP address (and even then, this is not a recipe for "high availability" since the DNS caches will be out of date for some period of time). Basically k8s is still immature for bare metal; distributions will catch up in time, but for now a lot of the hype outpaces reality.


There's MetalLB, which lets you announce routes over BGP to upstream routers. Another solution would be to just announce it via a DaemonSet on every node and set up a NodePort. Or just add every frontend node IP into DNS. Obv all highly non-standard as it depends on your specific setup.


Yes, to be clear, these problems can be worked around (although many such workarounds have their own tradeoffs that must be considered in the context of the rest of your stack as well as your application requirements); I was observing that the defaults are not what I would consider to be production-ready.


I don’t think kubernetes ever promised to be a turnkey system at least outside of cloud. There are many commercial vendors though willing to fill that gap.


> I don’t think kubernetes ever promised to be a turnkey system

No one is arguing that they did.


Obviously original authors want users to use their cloud.


I wonder how their Rio thing stacks up? https://rancher.com/blog/2019/introducing-rio


Any thoughts on RKE which runs a full k8s distro? Was able to deploy bare metal with just a single cluster config and “rke up”.


You have to bring all of those same things to a Nomad deployment as well. It’s generally more lightweight than Kubernetes, so it might be easier to wire those other components in, but you do still need to do that work either way.


> it might be easier to wire those other components in

IMO the few lines of yaml to set the path/host for an Ingress definition seems cleaner to me than using consul-template to spit out some LB config (as in the post's example).

For simplicity, a few years ago I preferred Traefik + Swarm. Add a label or two and you're done. But Swarm died :/


Let me try to do some quick mapping...

> load balancer provider

consul connect handles this, how you get traffic to the ingresses is still DIY... kinda. you can also use consul catalog + traefik (I've actually put in some PRs myself to make traefik work with a really huge consul catalog so you can scale it to fronting thousands of services at once). there's also fabio. you can also get bgp ip injection with consul via https://github.com/mayuresh82/gocast run as a system job to get traffic to any LB (or any workload) if that's an option.

i've also run haproxy and openresty without any problems, getting stuff from the consul catalog via nomad's template stanza and just signaling them on catalog changes.
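
a rough sketch of that pattern (the "my-app" service and the haproxy fragment are illustrative, not a full config):

  task "haproxy" {
    driver = "docker"

    config {
      image   = "haproxy:2.3"
      volumes = ["local/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg"]
    }

    template {
      destination   = "local/haproxy.cfg"
      change_mode   = "signal"
      change_signal = "SIGHUP"      # assuming the proxy reloads gracefully on SIGHUP

      data = <<-EOH
        # fragment only - a real haproxy.cfg needs global/defaults/frontend sections
        backend my_app
        {{ range service "my-app" }}
          server {{ .Node }} {{ .Address }}:{{ .Port }} check
        {{ end }}
      EOH
    }
  }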

> storage provider

anything CSI that doesn't have a 100% reliance on k8s works. if you're also just running docker underneath you can use anything compatible with docker volumes, like Portworx.
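
a rough example of consuming a CSI volume from a job, assuming a volume with ID "db-data" was already registered with `nomad volume register`:

  group "db" {
    volume "data" {
      type      = "csi"
      source    = "db-data"       # hypothetical, pre-registered volume ID
      read_only = false
    }

    task "postgres" {
      driver = "docker"

      config {
        image = "postgres:13"
      }

      volume_mount {
        volume      = "data"
        destination = "/var/lib/postgresql/data"
      }
    }
  }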

> ingress controller

consul connect ingress! or traefik, both kinda serve double duty here.

> external DNS

no good story here -- with one exception, if by "external" you mean "in the same DC but not the same host," consul provides a full DNS interface that we get a lot of mileage out of.

if you're managing everything with terraform though there's no reason you can't tie tf applies to route53/ns1/dyn or anything else though!

> monitoring

open up consul/nomad's prometheus settings and schedule vmagent on each node as a system job to scrape and dump somewhere. :)
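
concretely that's just a telemetry block in each agent's config, then scrape the agents' metrics endpoints with ?format=prometheus:

  # nomad agent config
  telemetry {
    publish_allocation_metrics = true
    publish_node_metrics       = true
    prometheus_metrics         = true
  }

  # consul agent config
  telemetry {
    prometheus_retention_time = "60s"
  }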

we also use/have used/will use telegraf in some situations -- victoriametrics outright accepts influx protocol so you can do telegraf/vector => victoriametrics if you want to do that instead.

> secret encryption

this is all vault. don't be afraid of vault! vault is probably hashicorp's best product and it seems heavy but it's really not.

there's a lot here that doesn't really compare at all, like the exec/raw_exec drivers. we use those today to run some exotic workloads that do really poorly in containers or that have special networking needs that can map into containers but require a lot of extra operational effort, e.g.: glb-director and haproxy running GUE tunnels.

something interesting about the above is i'm testing putting the above in the same network namespace, so you can have containerized and non-containerized workloads in the same network cgroup namespace so you can share local networking across different task runners.


I had the opposite experience (in 2 different companies). Setting up K8s was quite straightforward and docs were helpful. We ended up building a deployment UI for it though.

Consul is nice and easy to use.

Nomad has been a painful experience: the default UI is confusing (people accidentally killed live containers), we have some small bits and pieces that don't quite behave as we expect and have no idea how to fix them. Error rate is too low to care and there are more pressing issues so likely WONTFIX. We often found ourselves looking into github issues for edge cases or over-allocating resources to overcome scheduling problems.

We considered just switching to their paid offering, just not to have to worry about this.

It kind of feels like that's their business model: attract engineers with OSS software and then upsell the paid version without all the warts.


Yup. Setting up k8s with kubeadm on bare metal is very straightforward and can be done within a few minutes on any Linux host that is supported by kubeadm and Docker, plus SSH access.


>> I plan to write an article/kind-of-tutorial about the setup.

Please post it here when you do. :)


bear metal sounds so sick.

I'm in for an industrywide rename :D.


There's probably a startup name in there somewhere.

Bear Metal Semiconductor. Bear Metal Fabrication. Bear Metal Labs. You could get so creative with the branding and logo.


It's sold as Thermal Grizzly Conductonaut


Woops :D


Are you using Nomad to schedule containers on those nodes? I'd also be really interested in a blog post or write-up about your setup!


Sure, that's what it is for :) We also run cron jobs, Postgres, and Traefik.


Nice product by the way. I might try it later on one of my upcoming projects.


I administer k8s daily at my full-time job and administer a 14-node nomad cluster in my homelab. This accurately captures my sentiments as well. My nomad cluster is even spread across three different CPU architectures (arm, arm64, and amd64) but still works great.

One of the points I'd highlight in this post is just how good the combination of nomad, consul, and consul-template is. Even when nomad lacks some sort of first-class integration that k8s might have, the combination of being able to dynamically generate configuration files using consul-template plus nomad automatically populating the consul catalog means that you can do almost anything, and without much hassle. I use consul-template in the homelab to dynamically wrangle nginx, iptables, and dnsmasq, and it continues to work well years after I initially set it up.
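
For a flavour of it, a standalone consul-template config is just a couple of blocks - a sketch (the template path and reload command here are illustrative):

  consul {
    address = "127.0.0.1:8500"
  }

  template {
    source      = "/etc/consul-template/nginx-upstreams.ctmpl"   # hypothetical template
    destination = "/etc/nginx/conf.d/upstreams.conf"
    command     = "systemctl reload nginx"
  }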

I often wish I had the luxury of relying on vault+consul+nomad in all the environments I operate in.


I haven't used nomad yet, but I use consul and consul-template for configuration on a couple clusters (work and personal) and they're great.


We moved from k8s to Nomad at my workplace, and I'm currently running almost all my self-hosted software on a 10-node Nomad cluster (with Consul and Vault) as well. The servers for each of the three leave plenty of headroom resource-wise when run on any recentish arm64 SBC, so you can get an HA cluster without it being expensive.

If you integrate properly with them (which does take quite a bit of work with the ACL policies and certs), it really starts shining. With terraform, of course.
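
The wiring itself is mostly two stanzas in the Nomad agent config - a hedged sketch (addresses and token handling will vary by setup):

  # Nomad agent config: Consul and Vault integration
  consul {
    address = "127.0.0.1:8500"
    token   = "<consul ACL token>"    # placeholder
  }

  vault {
    enabled = true
    address = "https://vault.service.consul:8200"
    # Nomad derives short-lived per-task Vault tokens from its own token/role
  }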

For these core services themselves and other OS packages, I use ansible, mostly because of the huge resources in the community.

It's fun and doesn't come with all the cognitive overhead of k8s. I'm a fan and will tell everyone they should consider Nomad.

It's obviously less mature, though. One thing that has been frustrating for a while is the networking configuration - a simple thing like controlling which network interface a service should bind to (internal or external?) was supposedly introduced in 0.12 but completely broken until 1.0.2 (the current version is 1.0.3).

Consul Connect is really awesome conceptually to make a service mesh, but is also just coming together.

There are really only two things I miss dearly now:

1) Exposing consul-connect services made by Nomad (aka an ingress gateway). It seems to be theoretically doable but requires manual configuration of Connect and Envoy. If you want to expose a service from the mesh through e.g. an HTTP load balancer, you need to either expose it raw (losing the security benefits) or manually plumb it (no load balancer seems to play nicely with Connect without a lot of work, yet).

2) Recognize that UDP is a protocol people actually use in 2021. This is a critique of the whole industry.


What Ansible resources do you use for Nomad, Consul and Vault though? I've found a few but they all seem to lag behind releases. No one is up to date with Nomad 1.0.3. It would be nice to have some halfway-standard way of setting it up, like k8s has with quite a few projects.


I'm running a Traefik instance on each node, so that I can expose a service by adding a bunch of labels. The load balancer is not part of the cluster and routes the traffic to the nodes. You might want to consider that too :)


I'm doing exactly that, actually!

Two bastion hosts/lbs sharing a virtual IP (keepalived), with two Traefik instances each (private and public). I actually schedule them through Nomad (on the host network interfaces) as well - since they solved the host networking issue I mentioned above it's properly set up with service checks. Super smooth to add and change services with consul catalog, and ACME-TLS included.

Things I don't like that make me want to try envoy instead:

* It eats A LOT of CPU once the number of peers ramps up - there's an issue on their GH suggesting this was introduced in 2.1.

* UDP is completely broken. For the time being I'm doing per-job nginxes for that until I have a better solution.

* It's very brittle. Did you ever misspell something in a label? If so, you probably also wondered why half of your services stopped working as Traefik arbitrarily let it hijack everything.

* The thing I mentioned above with Consul Connect. Each service can integrate with either, but not both.

It was great for months though, but I guess I grew out of it just by the time I started properly understanding how all the configuration actually works (:


I recently set up Traefik 2.x to front a self-hosted Docker Registry, with automated Let's Encrypt renewals - I found the config to be really unintuitive and confusing! It feels like an awful lot of really finicky config for such a simple setup. Next time I'll try something else.


Traefik v1 is much simpler, v2 seemed to introduce so many extra layers which makes the simple stuff harder.


Caddy v2 seems to be doing something right, although I don't think it comes with the same number of features out of the box. Plus it's more of an HTTP reverse proxy.


Maybe it's because Nomad is a commercial product as well, but it's really sad that it doesn't get more usage. Kubernetes is the new hotness, to the point where people don't care that they don't really need Kubernetes. They just know that the big boys are using it, so regardless of scale, so should they.

Kubernetes is exceedingly complex, and it needs to be, to support all its use cases. However, I would claim that the vast majority of small projects running on Kubernetes are just after one thing: easier deployments.

These users would be just as well served, if not better, by Nomad. It eliminates much of the complexity of running a Kubernetes cluster, but it still utilizes hardware better and you get a convenient way to deploy your code.


I’ve often thought that when developers saw Kubernetes, what they actually wanted was some form of PaaS


Are there any lightweight but production-ready PaaS offerings out there?

I know there's Dokku, https://flynn.io/ looked super promising but I think it's basically dead now, same for Deis that is dead and forked to https://web.teamhephy.com/.


There seems to be a significant lack of open source PaaS as you mentioned. And Dokku is unfortunately a very leaky bucket of abstractions - fun if you're learning docker, annoying otherwise.

This kind of makes sense given how much work it is and how easy it is to monetize as a paid service.


As the maintainer of Dokku, I'd love to hear more about what you think we can improve here. It is certainly an abstraction over basic Docker commands, but so are other schedulers and platforms.

That said, any feedback is more than welcome.


I can't recall the specifics, but I keep running into issues when setting up port forwarding, attaching networks, etc., where I end up chasing the reason things don't work down to "docker inspect"-ing things. Usually it's some step I missed or overlooked, but the state I'm in just doesn't do anything useful. It makes sense that I end up in that situation because I'm using something that maps to docker commands. I guess I expect a PaaS solution to own more of the system and tell me something like "the way you try to proxy doesn't make sense with the container you spawn", or "you added a network to attach to, and made other changes; the app will be restarted to apply it".

To be clear, I didn't want to say Dokku does things wrong. I'd like something that can validate changes and expected state. But if Dokku is supposed to be a "thin layer", that's cool.


Others I've come across on HN are CapRover, Flynn and Apollo.


CapRover seems like the best option. I have the same perception of Flynn. Dokku is nice, but I don't really see how you could use it in production since it's limited to a single server (there are definitely some cases where that's all you need, but I can't imagine you'd need a PaaS for most of them).


Maintainer of Dokku here:

It isn't limited to a single system, and has a scheduler component for Kubernetes (https://github.com/dokku/dokku-scheduler-kubernetes) and Nomad (https://github.com/dokku/dokku-scheduler-nomad).


Cool! It looks like those are in beta/not very widely used though. And my impression is that I'd probably have to understand k8s/nomad pretty well to use them; really I'd like something with an experience closer to Heroku where that layer is as abstracted as possible.


I love the developer experience of Dokku!

That said, I would be very nervous to advocate using Dokku for any critical production systems. Are my fears ill founded? Or if not, is there any roadmap to making Dokku production-stable? How can people help?


If you're on AWS, you might be interested in our product https://apppack.io.


Thanks for the link. Honestly, one of my goals is to avoid becoming too dependent on proprietary clouds. If I wanted something that only works on Amazon, there's always Elastic Beanstalk/Amplify & friends.


It's not a PaaS per se, but HashiCorp's new product Waypoint looks promising, at least when it comes to deployments. A common deployment interface over several of the tools we love/and hate (docker, k8s, nomad, ...), making them feel like good old Heroku. The operators are probably going to keep wrestling with the underlying complexity, but at least the developers won't need to add YAML dev to their CV anymore (HCL is much better in my opinion).


I think we have passed the first stage, where people have gone through a couple of disasters and have learned they actually didn't need the majority of the features k8s offers at their scale. They are now actively looking for simpler tools, which opens space for Nomad and co. Plus, success stories with Nomad from companies like https://fly.io (yes, I like them) are piling up.


Nomad looks useful to me for use cases that have nothing to do with "a simple k8s" but more with distributed/HA systemd and cron.

Just deployed a small 3 node cluster in prod last week for this: run some binaries with some parameters, make sure they restart on failure, have them move to another node if one fails or is rebooted, and don't waste resources by having a classical active/passive 2 node setup that doesn't scale either.

It took me a couple of days to read the documentation which is good but not always up to date (I did have to dig in some github issues in one case), create a test setup and check failure scenarios. Gotta say I'm mostly impressed.
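
For anyone curious, the job spec for that kind of "distributed systemd and cron" use barely needs anything beyond the defaults. A sketch (binary paths and the schedule are made up):

  job "worker" {
    datacenters = ["dc1"]
    type        = "service"

    group "worker" {
      count = 1

      # local restarts on the same node; if the node itself dies,
      # Nomad's default reschedule behaviour moves the allocation elsewhere
      restart {
        attempts = 3
        interval = "5m"
        delay    = "15s"
        mode     = "delay"
      }

      task "run" {
        driver = "exec"
        config {
          command = "/usr/local/bin/worker"   # hypothetical binary
          args    = ["--verbose"]
        }
      }
    }
  }

  # and the "distributed cron" case is just a periodic batch job
  job "nightly-report" {
    datacenters = ["dc1"]
    type        = "batch"

    periodic {
      cron             = "0 3 * * *"
      prohibit_overlap = true
    }

    group "report" {
      task "run" {
        driver = "exec"
        config {
          command = "/usr/local/bin/report"   # hypothetical binary
        }
      }
    }
  }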


How easy is setting up nomad for this? Does this need a consul cluster too?


Yes, Nomad does not include the service mesh, so you need to set up Consul too. Setting up Consul is a little harder than Nomad, but still simple, and Nomad will integrate with it without additional configuration, so it "just works" when it discovers the Consul service.


It took me 2 work days to read the docs and tutorials, do the setup, and run tests. I had no prior knowledge of Nomad, so I would say it was very easy. Consul is optional; for my use case I did not use/need it.


Hashicorp has very cool stuff, but I am not a fan of the config language they use on all their projects. It's fine when I'm in an infrastructure-type role and recency recall is fine, but when HashiCorp's tools are only in my periphery it is a pain. Anyone share this? I guess the alternative is templated TOMLs/YAMLs/pseudo-JSONs. Wish we'd all agree on one templated configuration format.


I myself would love to see more usage of HCL... after writing a lot of Terraform configurations, it feels so much nicer for me than any JSON or YAML/Helm configuration I have written to this day. We should agree on some kind of HCL-based industry standard and leave all these workarounds via JSON/YAML behind us... e.g. doing Helm logic with these Go templates just looks like a step backwards after writing Terraform HCL code for the last 2-3 years.

I understand JSON as a simple interchange format between systems, and is here to stay, but I don't understand all this YAML stuff, with all its quirks, from the K8s/DevOps people, when we have the much nicer HCL...

For anyone not used to HCL: https://github.com/hashicorp/hcl
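
A tiny, purely illustrative block (not from any real schema) to show the feel of it:

  # illustrative only - not a real Nomad/Consul/Terraform schema
  service "web" {
    port = 8080
    tags = ["frontend", "public"]

    check "http" {
      path     = "/healthz"
      interval = "10s"
    }

    # conditionals/expressions like this are possible in e.g. Terraform's flavour of HCL
    replicas = var.environment == "prod" ? 3 : 1
  }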


The thing I dislike about YAML is how unwieldy it gets and how quickly that happens. "Simple" YAML files tend to be half a dozen pages long.

HCL is more information-dense.


I too prefer HCL to YAML and especially to helm templating.

Loops and conditionals in HCL could still use some real work though. They are still clunky to work with.


I liked HCL until I did some infra writing in Pulumi Typescript, and now I just want all my config files to be code.


> Wish we’d all agree to one templated configuration format.

If you mean a standard configuration file format, that has almost never happened in the entire history of computing. There are standard data formats, sure, but to standardize a configuration file, all the applications need to be limited to the functionality expressed in a single config file format. Most applications hate that, because they want to add infinite features, and none want to wait to standardize some new version of a config file in order to release their features. (If they release before the config file changes happen, now you have a vendor-specific extension, which means waving goodbye to standardization)

The following are the only configuration file formats I am aware of that have been standardized:

  - bind zone configuration
  - inittab
Crontab isn't standard because different crons have different features. passwd, group, shadow, etc aren't standard configurations because they are actually databases. And anything else which goes over the wire instead of a file is a protocol, not a configuration.

Now, it's still not a bad idea to have standardization. The trick is to make it so they can change their configuration, but still be compatible with each other, and the general way to do that is to abstract the configuration elements via schemas. That way you can have the same basic functionality for each app, but they can define it however they want so that it can be re-mapped or interpreted by a different program.

However, that is in no way simple for a regular user to use. So to convince app developers to adopt a "standard configuration format", you need to reframe the value proposition. What's the value add? If you say it's just so a user can use one config file on many competing products, the developers won't care about that, because they want you to use their product, not be able to move to another one. If instead you reframe it as "an extensible language for integrating and composing multiple independent pieces of software/functionality into one distributed system", then their ears might perk up a bit. Basically, reformulate the proposed solution as a language to compose programs, like Unix pipes, but slightly more verbose/abstracted. The end result should be able to be read by any program supporting the config format, and "declaratively" (modern programmers love this ridiculous cargo cult) compose and execute distributed jobs across any system.


you basically described xml.

not sure I want to go back to that.


There was pretty broad ad hoc standardization in dos/windowsland on the ini-file format, which was broadly formalized as toml years later.


Those are really data formats, not configuration. Their only features seem to be expressing simple data in different types and structures. The only advantages toml even advertises are types and mapping to hash tables. Those are all things computers appreciate, but humans need more nuance and functionality.


I find it particularly hard to manage Vault from the CLI. I deployed it some time ago and set up a bunch of backends; I don't recall the names of things or how I configured them, and I want to back up that config to replicate it.

The auto complete will give you the top level command and that's it.

I haven't looked too hard because I got fed up, but I just want to dump the entire config so I can replicate it without having to remember each thing I configured.


I've been there. You basically want to be able to `cd` into Vault and list the contents interactively, but you can't.

While the Web UI is probably the best vault explorer available, you might want to take a look at Vaku[1].

[1]: https://github.com/lingrino/vaku/blob/main/docs/cli/vaku.md#...


Thanks, I'll check it out.


We use Terraform to manage as much of the configuration of Vault (and Consul and Nomad) as we can. The provider is pretty good.

This makes it a lot easier to get a 1000ft view of the configuration.
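
For anyone who hasn't seen it, a hedged sketch with the Vault provider (address, paths and policy contents are made up):

  provider "vault" {
    address = "https://vault.example.internal:8200"   # hypothetical address
  }

  resource "vault_mount" "kv" {
    path = "secret"
    type = "kv-v2"
  }

  resource "vault_policy" "ci_read" {
    name   = "ci-read"
    policy = <<-EOT
      path "secret/data/ci/*" {
        capabilities = ["read"]
      }
    EOT
  }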


This is ultimately my goal. Our current state has been prototyped iteratively over a year or so. The goal is to replicate and document the current state so that we can automate it such as with Terraform.

Unfortunately, unlike, say Kubernetes, which encourages a declarative approach, learning to integrate and use Vault comes with a lot of imperative operations, none of which is stored anywhere for reuse.


Even better, a standard API that you can create dynamic config against. :)

My best experience writing config files has been with Aurora config files for mesos deployments in pystachio-Python.

Once you’re used to using a real programming language to write/generate config, you never want to go back to YML.


It's really frustrating for me that there's no easy way to use YAML/JSON for configuring HashiCorp products. It's supported at a low level, but not from the CLIs for some reason...

I really like Dhall as a configuration language and use it across my whole system, so it's a shame I'm forced to use HCL or write a dhall backend for it.


I just use JSON for all hashi config, you don't get comments but it is less mental overhead for me to keep track of.


Do their tools not support json5? That seems like an omission to me, if so.


> Wish we’d all agree to one templated configuration format.

I'm going to post the obligatory XKCD comic because your comment is exactly what this was created for: https://xkcd.com/927/


Not really, the parent wasn’t suggesting developing a new standard, only settling on one.


if all the current ones haven't been good enough then I think it's implied that a new one would be created.


Well I disagree, if nothing else for the fact that they never expressed that none was good enough, just frustration that there are too many (or more than one, really). Perhaps they’d be happy if everyone just settled on YAML or something.


From the article: > I always maintain "Day 0 is easy, Day N is the real test of your skills".

Would be interesting to see how this applies to the author's use of Nomad. It's easy to shit on Kubernetes because of its complexity, but this article seems to be comparing the Nomad Day 0 experience with the Kubernetes Day N experience.

I'm firmly of the opinion that you don't need much more than systemd (or equivalent) + SSH for your home server.


The Nomad Day N experience is pretty good. I help maintain 4 Nomad clusters running approx 40k allocations (like Pods) for work. We have basically no problems with Nomad itself, and it requires pretty much no day-to-day intervention. Upgrades are pretty painless too. We've gone from 0.7.x to 0.12.x with these same clusters, and will be going to 1.x soon.

Happy to try to answer specific questions.


Do you run other services (Vault, Consul, etc.) for service discovery, configuration management, etc.?

Genuinely curious about the load of managing this on the infrastructure team.


Yep, we run the full stack. Consul for service discovery and as the storage backend for Vault. We use Vault for config, PKI, Nomad/Consul ACL auth, and we're just starting to experiment with MSSQL dynamic credentials.

Of the three systems, Vault probably takes the most of our time and effort, and that's probably only a few hours per month. We've struggled a bit with performance at least partially because the Consul backend is shared with service discovery.

All of the VMs are built and managed with Terraform using images built with Packer+Ansible. We also use the Nomad/Consul/Vault Terraform providers to apply and manage their configurations.

We have an SRE/Platform Engineering team of 12 (and hiring) that's responsible for the overall orchestration platform additionally including Prometheus/Thanos/Grafana for metrics and ELK for logs.

Hope that's helpful!


I only have praise for the HashiStack (Nomad, Consul, Vault). Folks often reach for Kubernetes as part of a larger initiative to containerize their apps, add ephemeral test environments, etc, but I really think Nomad is better suited to this.

Choosing k8s is just the first step - you have to deal with what 'distribution' of k8s to use, upgrades, pages of yaml, secrets stored in plaintext by default...

Once you've got Nomad running, it just works.

I help people move to Nomad, my email is in my profile if you want to chat :)


Or you can pay the price for it and use GKE.

Of course, not for a home lab :)


Nomad is good at running all sorts of workloads, not just container workloads. Also, there is a plugin [0] to run containers without Docker, so Nomad is headed in the right direction for scenarios where you only need a scheduler.

[0] https://github.com/Roblox/nomad-driver-containerd
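
If I read the plugin's examples right, a task using it looks roughly like this (the image is arbitrary; check the repo for the exact driver name and options):

  task "redis" {
    driver = "containerd-driver"
    config {
      image = "docker.io/library/redis:alpine"
    }
  }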


Nomad already has a first-party driver for podman, doesn't that do it?


Author (https://github.com/Roblox/nomad-driver-containerd) here.

Not really; Podman and containerd are two different technologies, although both allow you to move away from Docker for various reasons (smaller CPU and memory footprint, better security, etc.). If you are invested in the Red Hat container stack, Podman makes more sense. However, containerd is more universal.

K8s is already moving away from Docker, and directly to containerd. Most recently they deprecated dockershim, and users now need to switch to containerd (since Docker also uses containerd under the hood, and it doesn't make sense for the orchestration system to run a monolithic service like Docker when it just needs to launch the workloads).

Some reference links on k8s, or PaaS built on top of k8s, moving to containerd:

- https://kubernetes.io/blog/2018/05/24/kubernetes-containerd-...

- AWS Fargate: https://aws.amazon.com/blogs/containers/under-the-hood-farga...

- Azure kubernetes service (AKS): https://docs.microsoft.com/en-us/azure/aks/cluster-configura...

This driver is similar to what CRI-containerd is doing in kubernetes (if you are coming from the k8s world)


Appreciate the thoughtful reply. So it does address the immediate issue of being able to run OCI containers without the intrusive docker daemon, but not in the same way (and as you say, less standardized/universal).


yes, that's correct. With both the drivers (podman, containerd) you can run OCI containers without the need to run docker daemon.

IMO, or at least when we were making that decision, a key factor that we considered: since Docker has been more ubiquitous as a technology and millions of users have already been running Docker in their infrastructure (and containerd under the hood without even knowing that it exists), containerd has stood the test of time a lot more than e.g. Podman.

Also to add, containerd as a system is very pluggable and flexible. You can swap out an entire subsystem and write (plug) your own with containerd. e.g. you can write your own custom snapshotter and use that if you don't want to go with the default. e.g. The firecracker-containerd project (https://github.com/firecracker-microvm/firecracker-container...) wrote their own devicemapper snapshotter. From my limited understanding of podman, I believe it's geared more towards security and making containers rootless, which was a pain point in docker initially, since everyone ran containers with root.


One area where containerd didn't have first-class support was the CLI. The default containerd CLI, "ctr", has a very naive implementation. The reason for that, I believe, is that containerd as a system was never meant to be consumed by humans, and was designed to be consumed by higher layers, e.g. orchestration systems like Nomad or k8s. However, with the deprecation of dockershim in k8s, and users moving to containerd, a new Docker-compatible CLI came out:

https://github.com/AkihiroSuda/nerdctl

If you just have containerd running on your system (with no docker daemon running), you can just install nerdctl and add

  alias docker="nerdctl"

to your ~/.bashrc file.

Then you can just run any docker commands the way you used to with docker, and it will run those commands against the containerd API giving you the same CLI experience that you used to have with docker.


That's really neat, thanks for the link :)


We're using Nomad to power several clusters at Monitoro[0].

We're a super small team and I have to say our experience operating Nomad in production has been extremely pleasant - we're big fans of Hashicorp's engineering and the quality of their official docs.

Our biggest gripe is the lack of a managed Nomad offering, along the lines of GKE (on Google Cloud) for Kubernetes. However, once things are set up and running it requires minimal to no maintenance.

I also run it locally for all sorts of services with a very similar experience.

As another comment mentioned, it's more of a better / distributed scheduler such as systemd. The ecosystem of tools around it is the cherry on top (terraform, consul, fabio ...)

[0]: https://monitoro.xyz


The selling point of GKE etc. is “minimal to no maintenance,” but of course somebody else is doing the maintenance and the customer is paying a premium for it. Says great things about Nomad.


Yeah, when making the decision it was quite harrowing to think of maintaining a cluster in production. Nomad had very little operational complexity compared to what we imagined.

We've had two main outages in months:

- Server disks were filling up and we hadn't set up monitoring properly at the time (ironic for the name of our company :) ). Not Nomad's fault.

- A faulty healthcheck caused all the servers of a cluster to restart at the same time, which caused complete loss of the cluster state (so all the jobs were gone. I like to call it a collective amnesia of the servers).

We're still looking for a good/reliable logging and tracing solution though. Nomad has a great dashboard, but only with basic logging, and it only gets you so far.

Overall, would recommend again!


Jaeger is pretty great for tracing, and can integrate with Traefik/Envoy (or whatever you use for ingress/inter-service communication).

We're running Loki for the logs (via a Nomad log forwarder/shipper and promtail) and so far it's going great. I'll have to do a write-up about the whole thing.


Thank you for the pointers, very helpful. I'd love to see that write up too!


I'd love to see your write-up on the logging thing. Please do!


Would love to see that write-up!


Coincidentally my home server is also named Hydra, running Nomad on CoreOS/Flatcar. I've had this setup for years now running without issue (unless I managed to mess it up myself by, for example, changing the IP and not following proper procedures to restore the Raft cluster).

Recently I deployed K3s on the same node to add some new workloads, but now I want to move those workloads to Nomad to get rid of the CPU and memory usage of K8s. In doing so, I'm running into what is becoming my main problem with Nomad, one I never thought I had.

With all its complexities, getting something to run on K8s is as simple as adding a Helm chart name to a Terraform config file and running apply. Maybe I need to set a value or volume, but that's it. Everything below is pretty much standardised.

With Nomad however, its benefit of doing only one thing very well also means that all the other things like ingress and networking need to be figured out yourself. And since there is no standard regarding these, everyone invents their own, preventing something like k8s-at-home [0] from emerging. Also, K8s is pretty agnostic about the container backend, whereas Nomad needs configuration for every driver.

I think writing your own Helm charts for everything would suck more than writing the Nomad configs. Though a lot could be automatically generated for both of them. But I'm missing a community repository of sorts for Nomad.

[0] https://github.com/k8s-at-home/charts


> Coincidentally my home server is also named Hydra, running Nomad on CoreOS/Flatcar

Author here. Ha, nice coincidence indeed :)

> But I'm missing a community repository of sorts for Nomad.

Indeed. I think it's time to build one!


> - Job: Job is a collection of different groups. Job is where the constraints for type of scheduler, update strategies and ACL is placed.

> - Group: Group is a collection of different tasks. A group is always executed on the same Nomad client node. You'll want to use Groups for use-cases like a logging sidecar, reverse proxies etc.

> - Task: Atomic unit of work. A task in Nomad can be running a container/binary/Java VM etc, defining the mount points, env variables, ports to be exposed etc.

> If you're coming from K8s you can think of Task as a Pod and Group as a Replicaset. There's no equivalent to Job in K8s.

Is that right? I haven't used Nomad (yet?) but, as described, it sounds to me more like Job ~= Deployment; Group ~= Pod; Task ~= Container?


I think your description is more accurate.

A Task is an individual unit of work: a container, executable, JVM app, etc.

A Task Group/Allocation is a group of Tasks that will be placed together on nodes as a unit.

A Job is a collection of Task Groups and is where you tell Nomad how to schedule the work. A "system" Job is scheduled across all Nodes somewhat like a DaemonSet. A "batch" Job can be submitted as a one-off execution, periodic (run on a schedule) or parameterized (run on demand with variables specified at runtime). A "service" Job is the "normal" scheduler where all Task Groups are treated like ReplicaSets.

Placement is determined by constraints that can be specified on each Task Group.


Author here. Hm, now that you point this out, I do get your point. However you can't run multiple different Pods in a deployment (you can run multiple containers in a Pod), that's why a Job isn't really comparable to a ReplicaSet.

I could very well be wrong, but this is my understanding.


I think the tricky part is Nomad merges replica count + definition into Group, while K8s separates Pod/RS/Deployment (and uses RS to orchestrate upgrades, which I think Nomad handles on its own somehow?).

Job ~= Deployment

Group ~= Replicaset/Pod combined

Task ~= container in pod
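
A skeleton job makes that mapping concrete (image names are placeholders):

  job "web" {                         # ~ Deployment
    datacenters = ["dc1"]
    type        = "service"

    group "app" {                     # ~ ReplicaSet + Pod: replica count and co-location
      count = 3

      network {
        port "http" { to = 8080 }
      }

      task "server" {                 # ~ container in the Pod
        driver = "docker"
        config {
          image = "registry.example.com/web:1.2.3"   # placeholder image
          ports = ["http"]
        }
      }

      task "log-shipper" {            # sidecar task, co-scheduled with "server"
        driver = "docker"
        config {
          image = "grafana/promtail:2.1.0"
        }
      }
    }
  }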


The biggest problem with these posts is that authors seem to fail to see how K8s isn't targeting home servers or small companies. Sure, there's K3s which allows you to have a more minimalistic experience but it's still K8s.

K8s is enterprise stuff and should be seen as such. It's complex because the problems it attempts to solve are complex and varied. That doesn't discount that Nomad is a great piece of software though.


Nomad's support for CNI and CSI is improving with just about every release, and they just proved you can run over 2 million containers in a globally distributed cluster.

https://www.hashicorp.com/blog/hashicorp-nomad-meets-the-2-m...

So what's an example of "enterprise" and "complex" problems you think it can't do?


K8s targets a long list of things that Nomad just doesn't want to deal with in the name of simplicity - and that's ok, if that suits your use case.

Some are mentioned in the post but I can think of secret management, load balancing, config management, routing, orchestration beyond workloads (e.g. storage), rollbacks/rollouts and many more. Perhaps in few areas there are some support but it's not what Nomad intends to do anyway. I also like these points from another comment: https://news.ycombinator.com/item?id=26142658

In order to supply those needs in Nomad you'll need to spend time finding out how to integrate different solutions, keep them up-to-date, etc. At that point, K8s may be a better answer. If you don't need any of those, use Nomad or anything else that's simpler (e.g. ECS if you're in AWS, K3s if that's simple enough for your home server, etc).


> Some are mentioned in the post but I can think of secret management

Hashicorp Vault seamless integration.

> In order to supply those needs in Nomad you'll need to spend time finding out how to integrate different solutions, keep them up-to-date, etc

Like how k8s secrets are secure out of the box?


I can understand how they seamlessly integrate with their own products (and I like Vault a lot) - not sure how it'd work with other secret backends if you'd prefer them over Vault. It's also fine you picked an individual item of a long list to rebut what I said. But this is what I said:

>Perhaps in few areas there are some support but it's not what Nomad intends to do anyway

Look, I think we can agree that K8s has many features that Nomad doesn't, and that's just how it is (not good or bad, just different). This comes with added complexity. If you wanted to have all these features in Nomad it'd cost you a lot, to the point of being impractical. "I don't need all of those"? Then don't use K8s - but don't bring a vast machine that does 1000 things to do 100 and then complain it's too complex.


> of secret management, load balancing, config management, routing, orchestration beyond workloads (e.g. storage), rollbacks/rollouts

Nomad does all of these either natively (rollbacks/rollouts) or via external tooling (Vault, Consul, a load balancer). Unix philosophy and all that, and besides, Kubernetes doesn't do load balancing, and its secret management is a joke.
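
For the rollout/rollback part specifically, that's the update stanza on a job - e.g. a canary deploy with auto-revert (values are illustrative):

  update {
    max_parallel     = 1
    canary           = 1
    auto_promote     = false
    auto_revert      = true
    healthy_deadline = "2m"
  }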


If it does these things through external integrations, it’s not really something Nomad does.


By that logic, Kubernetes does nothing that is offloaded to operators, CRDs, CSI plugins, etc. The fact that you can extend it with extra features and it supports that is kinda doing something.


> authors seem to fail to see how K8s isn't targeting home servers or small companies

Well, I do run K8s in prod at my org. And my comparison was based off my experience with it running in prod.

> It's complex because the problems that are being attempted to solve are complex

Most people just want to deploy their workloads in a consistent, repeatable manner. That's why they look to run an orchestrator. K8s is the most popular choice but it's time we look at other tools which can also help us reach that goal with less headache.


> The biggest problem with these posts is that authors seem to fail to see how K8s isn't targeting home servers or small companies.

Who would you say are in the target audience of Kubernetes?

I doubt most medium-to-large companies I see implementing Kubernetes could be considered a good fit for Kubernetes. If you want to run on-prem / colo you are probably better off with something simpler like Nomad. If you want Kubernetes it's probably a better idea to use a hosted Kubernetes solution like Google's offering. For most teams it's probably too much complexity to be able to maintain, troubleshoot, secure, update, etc.


>Who would you say are in the target audience of Kubernetes?

Everything else - I literally said "K8s is enterprise stuff". Now if we go to specifics then it depends on what the company does; maybe they'd do just fine with Nomad or a managed solution like ECS.

>If you want Kubernetes it's probably a better idea to use a hosted Kubernetes solution like Google's offering.

Well, I agree? All of the big companies I've been in used EKS, and before EKS was decent there was some maintenance overhead. It'd still be less than the overhead of recreating with Nomad the full list of features that K8s provides, as Nomad doesn't provide any of those and you'd need to seek solutions outside of the product and try to fit them in.

The same way you'd not buy a car if you're going to drive yourself a quarter of a mile once a week, you'd not use such a complex solution to run a few dozen containers.


> I doubt most medium to large companies I see implementing Kubernetes could be considered a good fit for Kubernetes. If you want to run on-prem / colo you are probably better of with something simpler like Nomad.

Our path has been Ansible -> Ansible+Docker -> Docker Swarm -> k8s. We absolutely don't need k8s, but the other options all had downsides.

1. Nomad was on our list and probably would've been better, but there were no managed Nomad solutions at the time and it was not as widely used as other solutions

2. Our time on Swarm was /ok/, but it was more and more obvious that being on the lesser-walked path was a problem, and its future made us run away from it

3. k8s gave us a nice declarative deployment mechanism

4. We can switch to a managed solution down the road with less friction


> If you want Kubernetes it's probably a better idea to use a hosted Kubernetes solution like Google's offering.

This may not be true in the future with distributions like k0s[1]

[1]: https://k0sproject.io/


> K8s is enterprise stuff

The comment you’re replying to already said whom.


> The biggest problem with these posts is that authors seem to fail to see how K8s isn't targeting home servers or small companies.

Indeed.

On the same bus one should be reading posts like "how I switched to a minivan for my family and dropped the complexity of enterprise-grade multi-carriage trains".


> (Not joking) You are tired of running Helm charts or writing large YAML manifests. The config syntax for Nomad jobs is human friendly and easy to grasp.

I write all of my kubernetes resources in terraform because I don't want to fight with helm charts. I was going to have to write something to monitor my deployments anyway and alert my co-workers that their deploys failed so why not just use terraform that tells you:

- what will change on deploy
- fails when a deploy fails
- times out

I didn't want to tell developers to kubectl apply and then watch their pods to make sure everything deployed ok when terraform does this out of the box..


Do you use terraform without helm (charts)? If not, don't you have to write charts from time to time?


We do use some Helm charts for the bigger things: GitLab runners, Istio, Thanos, Prometheus, Argo, etc. Some of those are run directly from Helm, but many are being converted to use the Terraform helm provider.

Our initial rollout on Kubernetes had me writing about 30 Helm charts for internal services. Once we saw Helm's shortcomings, they were converted to Terraform. It was easy if you:

- helm template > main.yaml
- use k2tf (https://github.com/sl1pm4t/k2tf)
- some manual cleanup for inputs and such

So now all of our product is terraformed, each as a module deployed to a namespace as an entire stack.


> Increase developer productivity by making it easier to deploy/onboard new services.

> Consistent experience of deployment by testing the deployments locally.

> (Not joking) You are tired of running Helm charts or writing large YAML manifests. The config syntax for Nomad jobs is human friendly and easy to grasp.

Seems like most of the UX issues stem from configuring kube easily. I wonder what the author would say about using a metaconfig language like Cue or Jsonnet to make it super easy to define new workloads.


We used Nomad on Windows with the raw exec driver to schedule services on a project I worked on a year back. It worked great. Two binaries on each host along with a configuration file (for Nomad and Consul) and you are up and running.

Since then I have been a fan. Also worked a lot with Kubernetes, which has its merits, but the simplicity of Nomad is great.
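
For reference, that kind of raw_exec task is about as small as job specs get - a sketch (the path and args are made up):

  task "legacy-service" {
    driver = "raw_exec"

    config {
      command = "C:\\services\\legacy\\service.exe"   # hypothetical path
      args    = ["--port", "8080"]
    }
  }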


In my previous job, I used Nomad and Consul to deploy and scale my machine learning models on Linux and Windows VMs without using any container technology. It was so fun and easy to work with.


> host_network as of present cannot bind to a floating IP address (DigitalOcean/GCP etc). I've to resort to using my droplet's public IPv4 address for now.

You should be able to bind to the anchor ip to make the floating ip work. https://www.digitalocean.com/docs/networking/floating-ips/ho...
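
In Nomad terms that would be a host_network block in the client config pointing at the anchor interface or its CIDR - a sketch (the CIDR below is a placeholder):

  client {
    host_network "public" {
      cidr = "10.19.0.0/16"   # placeholder: the range the droplet's anchor IP lives in
    }
  }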


Author here.

Thank you for the tip. I'll try it out.


I really liked Nomad last time I tried it (~2 years ago) but I was driven away by the subtle inconsistencies between Nomad, Consul, and Vault's HCL configs. Things that should be the same in all three (e.g. bind port, simple permissions options, etc. - standard boilerplate) inexplicably had different config keys or HCL structures between them.

Maybe this has improved these days, I'll have to give it another shot.


this is compelling:

> Nomad shines because it follows the UNIX philosophy of "Make each program do one thing well". To put simply, Nomad is just a workload orchestrator. It only is concerned about things like Bin Packing, scheduling decisions.

so is the stuff about saner config syntax

'k8s is bad at everything and nomad is only trying to be bad at one thing' actually makes sense of my reality


I looked at Nomad about a year ago but deployment of a secure production cluster involved more moving parts than I liked. Especially the security seemed to be very complicated (with a private CA, certificates, Vault, etc. necessary). A simple shared secret would have sufficed imo, but that was not available as an option.


I ran a Nomad cluster a few years ago and the documentation wasn't great for solving problems. I ran into some issues with several cluster nodes, and wasn't able to properly troubleshoot.

I guess this is an ecosystem problem, but in general I find HashiCorp documentation (SRE/Ops/Admin specific) lacking.


A whole lot of anti- Kubernetes "you most likely don't need them and the increase in complexity makes it a pain to maintain" and "Kubernetes is exceedingly complex" in this thread & somewhat in this article.

I agree that you probably don't need Kubernetes, and perhaps yeah it could be considered complex.

But I think it's the right fit for most developers/doers & over time most operators too. Kubernetes is not Kubernetes. Kubernetes is some base machinery, yes, but it's also a pattern, for writing controllers/operators that take Kubernetes Objects and turn them into things. Take a Postgres object and let postgres-operator turn it into a running, healing, backing-up replicated postgres cluster. Take a SQS object and let ACK turn it into a real SQS. Take a PersistentVolume and with Rook turn it into a Ceph store.

Kubernetes & cloud native in general proposes that you should have working models for the state of your world. In addition to the out-of-the-box machinery you get for running containers (deployment-sets), exposing them (services), &c, you get this pattern. You get other folks building operators/controllers that implement this pattern[1]. You get a consistent, powerful, extensible way of building.

Nothing else comes close. There's nothing remotely as interesting in the field right now. The Cult of Easy is loud & bitterly angry about Kubernetes, hates its "complexity", but what is actually complex is having a dozen different operational environments for different tools & systems. What is actually complex is operating systems yourself, rather than having operators to maintain systems. Kubernetes has some initial costs, it can feel daunting, but it is radically simpler in the long run because _it has a paradigm,_ an all-inclusive paradigm that all systems can fit into, and the autonomic behaviors this paradigm supports radically transfer operational complexity from human to computer, across that broad/all-inclusive range of systems.

There's a lot of easier this/harder that. No one tries to pitch Nomad or anything else as better, as deeper, as being more consistent, having a stronger core. Every article you hear on an alternative to Kubernetes is 98% "this was easier". I think those people, largely, miss the long game, the long view. A system that can adapt, that operationally can serve bigger & bigger scopes, ought to pay dividends to you as years go by. Kubernetes may take you longer to get going. But it is time enormously well spent, that will increase your capability & mastery of the world, & bring you together with others building radically great systems whether at home[2][3] or afar. It will be not just a way of running infrastructure, but help you re-think how you develop, and how to expose your own infrastructure & ideas more consistently, more clearly, in the new pattern language of autonomic machines that we have only just begun to build together.

I encourage the bold explorers out there, learn Kubernetes, run Kubernetes. And to those of you pitching other things, please, I want you to talk up your big game better, tell me late-game scenarios, tell me how your system & I are going to grow together, advance each other.

[1] https://kubernetes.io/docs/concepts/architecture/controller/...

[2] https://github.com/onedr0p/home-cluster

[3] https://github.com/k8s-at-home/awesome-home-kubernetes


Yeah, if there were a one-click-install _complete_ local distribution of Kubernetes with a GUI that just read and migrated all your docker-compose.yaml files, I think it would see far fewer complaints and much less teeth-gnashing from new folks. The existing tools like minikube, kind, and microk8s are still too clunky and don't include everything you'd want for a good local setup (login/authentication, ingress, a registry, easy mounting of local host volumes, a good operations workflow with gitops primitives, etc.). Docker Desktop comes close with a one-click way to turn on a Kubernetes cluster, but the writing is on the wall for the end of the tight Docker/Kubernetes integration. All of the pieces are out there to make a buttery-smooth and slick local Kubernetes stack, but unfortunately you have to seek them out and kludge it all together yourself.

But if you do feel ambitious, k8s + the flux gitops toolkit + tekton CI/CD + knative serving & eventing + skaffold is one heck of a productive and amazing stack for development (and for bonus points, switch your code to a bazel monorepo with rules_k8s for another awesome experience).
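
For anyone curious what the flux piece of that stack amounts to in practice, here is a minimal sketch, assuming Flux v2's v1beta1 CRDs (the repo URL and path are made up):

    # Point Flux at a git repository...
    apiVersion: source.toolkit.fluxcd.io/v1beta1
    kind: GitRepository
    metadata:
      name: home-cluster
      namespace: flux-system
    spec:
      interval: 1m
      url: https://github.com/example/home-cluster   # hypothetical repo
      ref:
        branch: main
    ---
    # ...and have it continuously reconcile the manifests under ./apps.
    apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
    kind: Kustomization
    metadata:
      name: apps
      namespace: flux-system
    spec:
      interval: 10m
      path: ./apps
      prune: true               # delete cluster objects removed from git
      sourceRef:
        kind: GitRepository
        name: home-cluster

Everything you run is then just files in git, and the reconciler keeps the cluster in sync with them.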


I agree pretty roundly here.

Some what-color-do-we-paint-the-bikeshed comments on your particular tools:

* flux seems to be doing great. the onedr0p home-cluster repo i linked is built around it.

* tekton looked very promising when i was evaluating event-driven-architecture systems ~18 months ago, but since then they've re-branded as a CI/CD tool. it's just branding, it's still generally useful, but i very much worry about drift, & about using a product against the grain of how the community around it uses it. i think there is a really, epically sad story here: this is a huge mistake for Tekton, which is much more promising/useful than "CI/CD" alone allows. talked about this some two weeks ago[1].

* knative was on my todo list. its resource requirements are fairly daunting. i'm trying to pack a lot of work onto my 3GB k3s VPS and knative seems right out. it's weird to me that the requirements are so high. serving seems a bit on the complex side, but the abstractions are useful, they make sense. eventing is very high on my list of interests, and i would prefer having an abstraction layer over my provider, to give myself freedom, but again the cost seems very high.

* need to try some skaffold. i don't know where it would fit in my world yet; i kind of forget about it sometimes.

* k8s, tekton, knative, skaffold are all somewhat from the googleverse. honestly i'm hoping we see some competing takes for some of these ideas, see different ideas & strategies & implementations. kubernetes is such great material for innovation, for better ways of composing systems. let's try some stuff out! please kindly think of those who don't have a lot of memory too.

[1] https://news.ycombinator.com/item?id=25993294


My problem is there is no way to get started with those things without knowing all of them.


Author here. I explicitly mentioned the Operator Pattern in my post and why K8s probably makes more sense in this context :)


I agree with a lot of the points you make.

Have you looked into Nomad, Consul, and Vault, along with everything they provide?


I really like all their offerings! Hashicorp writes incredibly easy to run & operate software. They have a wonderful software-design sense, identifying core problems & driving a well-defined approach forward. Rarely are companies so technically savvy in their offerings. It's clear from having run into all manner of Hashicorp people at conferences (back when that was a thing) that they have really great engineers too, talented & knowledgeable & engaged!

My Nomad familiarity is definitely a bit on the low side, & that's something I wouldn't mind changing. Consul & Vault I used to be able to operate & understand ok, but my knowledge has faded some.


Why would you run Nomad for a home server? This isn't for home server use. This is for practicing skills that are only required in commercial environments. Skills you only have a need for because someone pays you: because it's not fun, and it's stupid.

Taking that process home and doing all of that, instead of just running nginx from your distro's repos on the SBC's OS, is cargo-cult insanity in 99% of cases.


I guess the #1 reason to run Nomad on your home server is "because you're interested in it". Which is also probably the #1 reason to run a home server to begin with, for most people.

Also, while I don't wanna dispute the effectiveness of it, you should evaluate if "just running nginx from your distro repos" is fine under your threat model when you start exposing stuff to the internet or start introducing services that handle personal information.


I would generally agree with you when talking about things like running a k8s/k3s cluster to host your weblog on your Raspberry Pi. With Nomad it's different though: setup on a single server is basically a 10-line systemd service definition, and then you can start submitting Nomad jobs to it that are equally easy to write. You get things like cron jobs (more complex with systemd), parametrised jobs with multiple parameters (not possible with systemd), restart policies (easy to mess up with systemd), and a really polished admin UI that lets you start/stop/restart jobs and sh into running jobs/containers. And all for the overhead of a single systemd service.
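
To give a flavour of "equally easy to write", here is a minimal sketch of a periodic (cron-style) Nomad job; the job name, schedule and image are made up, and it assumes the Docker driver is available:

    # backup.nomad - a nightly cron-style batch job (illustrative only)
    job "nightly-backup" {
      datacenters = ["dc1"]
      type        = "batch"

      # Run at 03:00 every day; skip a run if the previous one is still going.
      periodic {
        cron             = "0 3 * * *"
        prohibit_overlap = true
      }

      group "backup" {
        # Retry a couple of times before marking the run as failed.
        restart {
          attempts = 2
          mode     = "fail"
        }

        task "run" {
          driver = "docker"
          config {
            image   = "alpine:3.13"
            command = "/bin/sh"
            args    = ["-c", "echo 'backing up...'"]
          }
        }
      }
    }

Submit it with "nomad job run backup.nomad" and it shows up in the web UI like any other job.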


It’s software? It’s all just triggering an ordered copy-paste process (downloading and installing is just a less random copy-paste).

I dunno about you but I don’t download Ubuntu’s package repo. I have to run an install command and then customize nginx. What’s the real logistical difference to the user if they run this command or that command? Or set config values in this file or that?

Why are you using Linux at home? Unix is for servers!

Nginx “replaced” Apache. Did we expect nothing would replace it?

https://en.m.wikipedia.org/wiki/Functional_fixedness


As the article says,

>Nomad is also a simpler piece to keep in your tech stack. Sometimes it's best to keep things simple when you don't really achieve any benefits from the complexity.

Simple is better. But in this case he doesn't realize that he's stuck way up the complexity stack, in a local minimum that's far more complex than most of the potential software landscape. None of that is needed to expose a webserver. Just run nginx on the machine from the repos, with no containers, no deployment machinery, none of that complexity, and forward the port.


The second sentence of the article answers your question.

And if you follow the link in the third sentence, the image at the top of the README also answers your question.


If we are to take your assertion and those sentences at face value, then the article's title and hook are false. It is not about running a home server. It is about padding his resume and practicing skills for providing services for pay at a corporation. It has nothing to do with running a home server. Perhaps a better title would be, "How to practice paid work (that's irrelevant at home) at home".


Yes I get how it works in the base case.

What we disagree on is the level of complexity.

It’s cli commands and text editing regardless of which method is used.

There is no iron-clad hierarchy for organizing a computer's files on top of the OS, only personal experience. What may seem more complex to you is still just cli commands and text to me.

I do not see config as having weight, heft. Data, sure. Config is arbitrary.



