Ask HN: What do you think will come after Kubernetes?
141 points by spIrr on May 14, 2021 | 170 comments
I mean, Kubernetes became a thing after Docker became a thing before it (and so on, all the way down to VMs), so it's reasonable to assume that something new will eventually emerge and become the industry standard.



As businesses start to realize microservices aren't really worth the complexity and cost they entail, they will start reverting back to hosting monoliths on VMs or services like Heroku/Netlify/Laravel Forge/Beanstalk, and they'll find out they can save a lot of money on compute and man-hours by doing this.

For simpler workloads, e.g. a single API endpoint, there are serverless and other SaaS offerings out there which let you build up from that, and save you a huge amount of time and money compared to building it traditionally with a web framework.

Now that we've had a few years of experience with how people are actually using the platform, I could see a simplified version of Kubernetes taking hold for businesses still running complex sets of services: something that's easy to install, actually comes with batteries included, and is secure by default.


The biggest problem with microservices is the lack of a uniform way to inject cross-cutting concerns: security, monitoring, circuit breakers, resource metering, alerting, service dependencies, logs, resource isolation, on-the-fly upgrades, migrations, backups/restores of data, and configuration. These all have to be handled.

Docker with k8s is awful when it comes to resource metering. Docker is terrible with security and resource isolation. K8s, in general, is awful IMO because of the way it was architected and implemented.

Xen-ified "containers" (regular VMs but managed) with an ability to inject uniform, templated, configured concerns is how to do services: micro or macro; ephemeral app, infrastructure, or backend. Also, backups and logs have to live somewhere else.

Basically, an APPaaS/PaaS/IaaS in the spirit of K8s + OpenStack but much cleaner, workable, easier, simpler, fewer choices, more usable, more reliable, less stupid, and more standardized. (Discovery is insecure and fundamentally unworkable; top-down provisioning knows exactly what exists and is alive, so use that as the authoritative single source of truth.)

There is no other way I can see.


> but much cleaner, workable, easier, simpler, fewer choices, more usable, more reliable, less stupid, and more standardized.

You can say this about 80% of tech yet here we are.


You can do circuit breakers with a monolith. "Release It!" is an excellent book that covers many patterns that do not involve the last resort of distributed systems.
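The circuit breaker pattern really does not need a distributed system. A minimal sketch in Python (class name and thresholds are illustrative, not from the book):

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: trips open after N consecutive failures,
    then fails fast until a cooldown period has passed."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open; failing fast")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # success resets the failure count
        return result
```

Wrap an outbound HTTP call or DB query in `breaker.call(...)` and the monolith stops hammering a dependency that is already down.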


No sane business is saving money with Heroku. It offers zero mid- or back-tier flexibility. PaaS and serverless are here to stay.


Heroku savings come in your payroll, not your monthly hardware budget.


For a startup perhaps. Regardless, it’s Opex. Heroku’s low barrier to entry does not provide value in the mid/long term.


Only so long as you're based in Silicon Valley, NYC, or a similar US hub. Maybe in the UK, but not so much.

The decision to go with GKE saved me enough money to cover payroll and then some compared to Heroku.


Heroku is so damn expensive. It's basically worthless because it's competing with free.


I had the same experience with AWS Lambda. Every app at mid-to-big scale was 10x more expensive on Lambda compared to the kops-managed k8s cluster that was already there.


Microservices are worth it, just not for every use case. It sounds like you're blaming Google for misleading PMs into using something they didn't need. However, Google made a great product for those who need it. It's not google's fault that PMs love cargo-culting.


What makes you think it’s PMs making architecture decisions? That’s usually the engineers and engineering leaders (as it should be).


Are μservices complex, or do ppl just not like learning new dev patterns?

In my relatively short but rich career, I rarely came across a problem that was caused by the tech. Be it k8s or PHP, the problem was ppl using PHP while thinking in terms of Go/Python, or using Kubernetes for stateful monoliths, etc.

We blame the tech because we don't spend time understanding how it works. So we use the next tool very happily until we come across a new design pattern, context, etc. The cycle repeats...

Microservices have very well-documented pros and cons. Monoliths too. At this point it's all about the engineers using the tool at hand properly.


A microservice architecture can introduce a lot of complexity that, depending on the developers and their management, can bring with it a lot of downsides.

However, what seems to be missed in a lot of assessments is what I've started calling "flying buttress architecture": you have your central monolithic structure, but rising around it are dozens of smaller supporting elements. This isn't the same as having a second task-queue system; it's building these components as standalone parts. These "flying buttresses" can be spun up and down ephemerally in a Kubernetes environment, built as lightweight services or as "run once" jobs and cronjobs scheduled by Kubernetes to use up the spare resources left over on each node around the main monolithic application.

This makes Kubernetes more useful as a deployment and ops tool when dealing with traditional or existing/legacy software.


How does serverless make more sense for a simple web api than a traditional web framework?


Adding on to the other points.

- Zero to very minimal devops/sysadmin knowledge required.

- Monitoring and a few other important pieces of tooling usually come out of the box.

- Add-on/related services in serverless stacks like Vercel/Netlify etc. also handle email/auth-type components; a framework will only provide a library for these, and the implementation is in the developer's hands.

- Simplified deployment: most services can sync with Git repos, provide one-click deploys, and have simple CLI binaries.


- Cheaper (you don't have a VM running 24/7 just for you)

- Easier to auto-scale because it doesn't hold any state

- Can easily run at the edge (look into Lambda@Edge or Cloudflare Workers)


Serverless? There are always servers. APIs don't exist in a vacuum or scale magically.

If you don't understand how to build what's underneath, then you don't understand what's involved and are at the mercy of vendors or someone else.


I have built what was underneath — and then threw it away, because the managed-services vendor built it better, and the OpEx of paying them to manage it was less than the OpEx of paying myself to maintain it.


It absolutely depends on scale. Smaller teams of better staff can sustain cloud infrastructure, especially if they take on as many internal needs as possible so as to gather the critical mass to make it pay for itself.


This has been the story repeated over the past 30 years.


Not thirty years. 10-15. In-house IT was a big industry that everyone took part in up until about 2010.


I’ve been involved in IT for over 25 years, I’ve seen a lot :)


By that logic, I as an app dev should know systems programming and OS internals. A systems programmer should know kernel development? A kernel developer should know assembly and microprocessor architecture?

Everybody has, at best, little or shaky knowledge of the underlying abstraction. Yes, it helps to know more, but it is not really practical to expect that knowledge as the minimum requirement.


Yes, you should have at least some knowledge of the architectural principles and design of an API layer you directly depend on. How is that not just common sense?


Yes. You should know one layer of abstraction below the one you are using. Always. Because everything leaks, and eventually you have to fix it.


Every time someone mentions serverless, some smartass feels obligated to mention that "there are servers". Yes, we know. It's a specific technical term referring to a category of products like AWS Lambda.


The Herokus of the future will run on top of kubernetes.


Can confirm, building a Heroku of the future on top of Kubernetes.


Portainer comes to mind.


I think the pendulum has swung a bit too far from its optimum with serverless and microservices. The main issue I have with both is the huge complexity and friction between how it runs on a developer machine vs how it runs in production.

For me the sweet spot is monolith(-ish) 12-factor applications packaged up in containers. In this setup I can just `docker-compose up` my dependencies (postgres, redis, rabbitmq, other services, ...) and run the application from my IDE against those (so I can use the debugger instead of printf-to-cloudwatch). For production I can package the app in a container and deploy to a container orchestration platform (Kubernetes, ECS, or something else).
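That local setup can be as small as something like this (a sketch; image tags and credentials are placeholders):

```yaml
# docker-compose.yml for local dependencies only; the app itself
# runs from the IDE against these, with a debugger attached
version: "3.8"
services:
  postgres:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: dev
    ports:
      - "5432:5432"
  redis:
    image: redis:6
    ports:
      - "6379:6379"
  rabbitmq:
    image: rabbitmq:3-management
    ports:
      - "5672:5672"
```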

To answer your question of what comes after Kubernetes: I'm hoping for a platform that is somehow a consolidation of the good ideas we've seen in Terraform/Pulumi/Kubernetes/CMake/Docker Compose/Swarm. I want to write a portable, idempotent "deployment build script" that I can apply against a cloud provider, bare metal, or localhost in a similar way. With good support for different configurations depending on the environment (like C ifdefs or CMake options).

For example: when I apply the script against my localhost it spins up a postgres container; when I apply the same script against my AWS account it configures an RDS postgres instance. Both invocations pass along the connection string to dependent services.

Basically, morph docker-compose.yml into a portable CMake for container orchestration.
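A rough sketch of what such a deployment script could look like (entirely hypothetical Python; no real tool exposes this API, and the endpoints are placeholders):

```python
# One declarative entry point, different provisioning strategies per target.

def provision_postgres(target: str) -> str:
    """Return a connection string, provisioning differently per target."""
    if target == "localhost":
        # e.g. start a postgres container and point at it
        return "postgresql://dev:dev@localhost:5432/app"
    if target == "aws":
        # e.g. create/update an RDS instance and read back its endpoint
        return "postgresql://app@my-db.us-east-1.rds.amazonaws.com:5432/app"
    raise ValueError(f"unknown target: {target}")

def deploy(target: str) -> dict:
    """Idempotent 'apply': provision dependencies, then pass their
    connection details along to the services that depend on them."""
    db_url = provision_postgres(target)
    return {"api": {"env": {"DATABASE_URL": db_url}}}
```

The same `deploy("localhost")` / `deploy("aws")` call would give you the container-vs-RDS behaviour described above.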


There are three main classes of services:

0. 12factor applications

1. Backend datastores

2. Infrastructure and miscellaneous, which is sort of turtles all the way down.

Disambiguating them is key to maintaining proper infrastructure of any kind.


What I'd really like to hear is the story for after you're gone and the business has several hundred services spread out over 5-10 years of development, with varying historical strata, plenty of them left on life support, where the business barely wants to pay anything to keep them running.

And then, how do the teams dealing with cross-cutting concerns like security, logging and monitoring deal with them?


I’m not sure what you are getting at. Several hundred services probably means going too far to the microservices side.

Cross-cutting concerns like logging and monitoring are handled by 12factor. Services log to stdout and have the container orchestration pipe it to a logging backend. Monitoring can be standardised as well with healthcheck and metric endpoints. Security always has a maintenance cost attached, unless you want to keep running on a 12-year-old Tomcat.
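The 12factor part is concrete enough to sketch (a minimal illustration; the JSON field names and the healthcheck shape are common practice, not any standard):

```python
import json
import sys
import time

def log(event: str, **fields):
    """12-factor style: write structured log lines to stdout and let the
    orchestrator pipe them to whatever logging backend is configured."""
    record = {"ts": time.time(), "event": event, **fields}
    sys.stdout.write(json.dumps(record) + "\n")

def healthz() -> tuple:
    """Standardised health endpoint: the orchestrator probes this
    instead of parsing logs."""
    return 200, {"status": "ok"}
```

The service never knows (or cares) whether stdout ends up in Elasticsearch, CloudWatch, or a file; that wiring belongs to the platform.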


> Several hundred services probably means going too far to the microservices side.

You can easily hit that in a small Fortune 500 company with ~10,000 employees, without microservices.


I'm pretty sure you just described the value prop for CloudFoundry.

Seriously though you're talking about another abstraction layer on top of whatever cloud provider(s) you're using. And the issue with such abstraction layers is that to really make them portable you're stuck with the lowest common denominator of features. Such a thing doesn't really play well with the immaturity and velocity of cloud providers - we need and use the latest features from the cloud providers and get stuck waiting for TF providers to add support. And that's just talking control plane - you also need abstraction layers for all your data plane interactions to homogenize those APIs.

Back in the day RightScale tried to do this but they consistently lagged support for AWS feature releases by upwards of 6 months making it a nonstarter (this was back when things like IAM and VPCs were just getting launched and critical features to make them usable were coming fast). CloudFoundry's approach is to provide an API to run everything on top of a compatible hypervisor control plane API - the lowest common denominator.


The abstraction layer with lowest common denominator is the tricky problem that The Next Big Thing will likely need to solve.

Terraform tries to do it but as you mentioned it’s often a frustrating experience because of lagging providers and Terraform itself is a moving target (at least it was 1-2 years ago). This quickly leads to code rot and dependency hell.

I think containers are the right abstraction for packaging, but orchestration-wise we're still trying to figure out the optimal workflow. K8s, TF and Ansible have shown that idempotency is an interesting concept to have. Pulumi improves on that by having procedural logic describe the desired state. And in my eyes The Next Big Thing should build on those ideas while avoiding or fixing the current rot/lag/dependency-hell problems.


> For me the sweet spot is monolith(-ish) 12-factor applications packaged up in containers.

exactly this.

Once you go with this, you take technology out of a microservices definition, and all that remains is how you scope your service...


I think there will be a rise in monoliths and a trend towards reducing as much distributed state as possible.

I think this will be driven by a few trends:

- Very fast disks to store data SQLite-style, auto-replicated transparently at the disk level.

- Typed languages becoming more usable (tooling and language improvements), which makes it easier to design and operate systems (stack traces vs distributed debugging/tracing).

- Improvements in Heroku-like systems from AWS/GCP etc.
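The first trend is the easiest to sketch today: an embedded SQLite database living next to the monolith (the transparent disk-level replication imagined above would happen underneath this and is not shown):

```python
import sqlite3

# Embedded datastore: no network hop, no separate DB cluster to operate.
conn = sqlite3.connect(":memory:")  # in practice: a file on fast local NVMe
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.execute("INSERT INTO orders (total) VALUES (?)", (99.5,))
conn.commit()

# Queries are ordinary in-process calls with full SQL semantics.
total = conn.execute("SELECT SUM(total) FROM orders").fetchone()[0]
```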


For some services in a Kafka-based system, I replaced the Postgres those services were using with file-based BoltDB stores or SQLite DBs.

With Kafka as the data source, every instance was autonomous and would keep itself up to date using the Kafka partition offsets.

It went really well; it took the DB cluster out of the scaling equation and offloaded a lot of the cost to the SSD instead of memory or a managed DB cluster.

If eventual consistency is enough, I feel this is a nice way to handle things.
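The pattern can be sketched roughly like this (plain Python standing in for a real Kafka consumer; the list plays the role of a partition, and names are illustrative):

```python
# Each instance keeps a local store plus the last offset it applied,
# so after a restart it replays only what it hasn't seen yet.
events = [  # (offset, key, value): one partition's worth of events
    (0, "user:1", {"name": "ada"}),
    (1, "user:2", {"name": "grace"}),
    (2, "user:1", {"name": "ada lovelace"}),
]

local_store = {}   # in practice: BoltDB / SQLite on local SSD
last_offset = -1   # persisted alongside the store

for offset, key, value in events:
    if offset <= last_offset:
        continue  # already applied before the restart
    local_store[key] = value  # last write wins per key
    last_offset = offset
```

Because the log is the source of truth, any instance can be rebuilt from scratch by replaying from offset 0; the local store is just a cache of the latest state.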


Kafka works very well when you have an "if receive-this, then do-that" kind of problem.

I find it a bit complicated to use streams/queues for everything, because it is a paradigm shift (turning the database inside out).

BUT when you have a clear use case like ingestion -> refinement step 1 -> refinement step 2, it is a _very_ natural way to do stuff, with the added benefit that I can have as many parallel workers as topic partitions.


I very much hope to see HashiCorp Nomad getting more traction as a simpler, yet feature-rich, solution for probably most use cases in our industry. It's pretty interesting to see the adoption of podman and also micro-VMs (like fly.io does it), and other Nomad drivers that do not necessarily need to be Docker.

I know there are great Terraform modules out there already, but I wonder if many of HashiCorp's products will soon be available as "managed" services as well, allowing people to try something new with less maintenance cost.


If anyone wants to test nomad/consul on AWS, and you just want to type 'terraform apply' and then have a consul cluster, nomad cluster, and example app to play around with: https://github.com/groovemonkey/tutorialinux-hashistack/

It's a work in progress but might be helpful to someone!


Thank you!


Mh. We are pushing production to nomad, and I'm not sure, because Nomad has a different focus than K8s.

K8s in my book tries to solve arbitrarily complex deployment problems. To do so, K8s overall accepts being a complex monster. You have to fully commit to use K8s, you have to put everything you have into K8s, and then K8s solves a staggering amount of problems.

Nomad is much simpler. A good way I've found to describe Nomad is: Nomad+Consul+Vault is essentially a distributed init/systemd + cron + secret store. That's all. It's powerful, because there are no problems you can't solve by scheduling + discovering VMs, containers and applications. But it's at a more fundamental, lower level, and it requires more work to utilize well. It's also less of a jump compared to K8s, but some stuff more or less solved in K8s is not in Nomad, so far.

I strongly enjoy working with Nomad, and we see other operations teams grow interested, because Nomad functions and handles similarly to something like VMware. Sure, containers are weird, but it can do VMs, and it's still familiar. For example, for an ES cluster on hardware with Grafana/Kibana/2 Logstashes running "somewhere on these hosts", Nomad is very pleasant to use.

But I doubt Nomad will be as hyped as K8s, because e.g. you cannot throw a Helm chart at it to create a postgres/patroni cluster on abstracted storage for a service with a relational storage requirement, as the hyper-abstracted way needs.

CSI support exists and is being enhanced, but you will still need a CSI provider, and e.g. the GlusterFS CSI provider got eaten by K8s, so now you're stuck with S3 or running Ceph. And running Ceph is a ... thing. You'll also have to run the Nomad autoscaler beside it, and other orchestration and management systems around it that K8s has, or has integrations for. There are just a lot of edges to handle, compared to just buying hosted K8s.


Problem is, HashiCorp stuff is so scattered, and it's really not documented how to combine all the things (TF, Nomad, Consul, Vault: all documented and released independently). Plus you need a bunch of extra computers lying around to run the Nomad and Consul servers before you can even begin running jobs on them.

The benefit is huge though: you can orchestrate VMs and processes in addition to containers.


You can pay HashiCorp (a lot) and they'll manage things for you: https://www.hashicorp.com/contact-sales

One company I worked with went managed, another one didn't. I think hashicorp stuff is too barebone in general. I noticed way less headaches with the managed solution and I would just host it with them.

In contrast, I feel comfortable running a k8s cluster for a smaller business of mine.


Yeah, I love Nomad! We've been testing migrating to it.


I love how easy it is to get up and running with Nomad + Consul, I really hope to see them gain more traction as well


They are indeed working on that! https://www.hashicorp.com/cloud-platform


Simple, yes; feature-rich compared to k8s? Nope.


The new thing that's going to "emerge" is already here. VMs and containers were around for a decade or more before they became really widespread.

Frankly, I think the serverless model will end up dominating in the long run. What's really missing there isn't a better abstraction for k8s; it's the DX workflow and local development experience, which still require lots of work. Emulators for services, workflows, good IDE integration, an open service standard. That is where the next layer will happen.


I think as long as we are emulating the cloud runtime and services, it will not reach its full potential.

Developers need to be able to run the real platform, like they can with k8s/k3s.


Serverless functions built on computronium.


Docker enabled Kubernetes. Before then, we really didn't have a unit of compute that could be reproduced easily in different environments.

However, Docker has some downsides:

(1) It centers the unit of computation at the operating-system layer.

(2) Docker relies on a completely homogeneous infrastructure (same chipset, almost the same operating system).

(3) It relies on a virtualization layer (usually KVM) to be run securely.

I believe (1) caused a vast amount of complexity when scaling, since you are scaling "OS instances", not applications or functions, which caused Kubernetes to inherit most of these issues and be perhaps much more complex than it should be.

I think (2) is key because, with new trends coming up (such as edge computing, serverless, IoT and 5G), we are starting to need to run software universally across a more diverse infrastructure (different chipsets).

And (1) and (3) cause a significant slow-down in startup times (see Firecracker, a state-of-the-art project with a ~250ms best startup time: not good enough).

I believe most of these issues are solved with WebAssembly: simpler scaling, universal execution across diverse infrastructure, and fast startup times. And I'm actually working full-time at Wasmer [1] to make this vision a reality! :)

[1] https://wasmer.io/


It can take longer than you think. While our focus is on the parts of the industry that churn endlessly, the real typical pattern is that there's cutting-edge/early-adopter churn, then some set of technologies "wins" and prevents anything new from developing, if only because they raise the bar so high for what the competition has to do to be even remotely credible that it becomes impossible to do the work to displace the entrenched incumbent.

I think the most likely outcome is that K8S continues to entrench itself and essentially nothing ever displaces it on its own turf. Eventually, the game will move to some other turf and the process starts over.

My statement here is about general trends, and not a full explanation of how all software lifecycle works. My point here is more that if you look more carefully, it is not the case that everything everywhere is always in constant churn, therefore you can safely assume that K8S is on the verge of being displaced. The antecedent is not as true as it may appear on the surface. Under the boiling froth, there are many more things in the software space that are relatively stable.


I think Kubernetes will be a part of things (as Docker still is) but it'll be the next level of abstraction -- the thing that turns Kubernetes into Heroku.

There have been a few popping up recently; time will tell who wins.


What have you seen in that space so far?


OpenShift is also an alternative: https://www.redhat.com/en/technologies/cloud-computing/opens...

Had some fun a few years ago with toy projects. Can't tell its current state...


OpenShift has been Kubernetes since OpenShift 3. Their serverless/functions-as-a-service capabilities are powered by the Kubernetes project Knative.


Space Cloud https://news.ycombinator.com/item?id=26997595

and Porter https://news.ycombinator.com/item?id=26993421

were the ones I've seen recently, although admittedly on re-look I may have misunderstood Space Cloud and I could have sworn there was a third


Convox[1] is another one

[1] https://convox.com/


Knative is the obvious first answer.


Kubernetes automates and standardizes some of the best practices around building reliable systems in a platform agnostic way. We never got there with VMs: Amazon ec2 instances are not the same as GCE instances are not the same as VMware VMs but a kubernetes api is the same everywhere (and providers are incentivized to standardize their offerings). Whatever comes next would have to be something that can standardize even more.

For the next 10 years, I don't see Kubernetes going away, but rather evolving into something that's dead simple for anyone to set up and use. Its flexibility as a platform capable of easily running whatever you want is somewhat unmatched.

Honestly, the only way I see k8s being replaced is by a fork, if there is some governance dispute. I don't see the platform changing for at least 10-20 years.


> We never got there with VMs: Amazon ec2 instances are not the same as GCE instances are not the same as VMware VMs

This is true but is that such a huge deal? We successfully moved from AWS to GCP and it wasn't a massive deal; just different terminology for the same concepts.


It really depends on the kind of engineering org that you operate. As a general rule, more standards and easier migration paths win.

Compare that to redeploying your k8s manifests from an EKS cluster to a GKE cluster or vice versa. It's definitely not fully platform-agnostic, but it requires much less engineering effort.


I've been looking into this for about a year. This is going to sound crazy right now, but I'm saying Fuchsia is going to just straight up change computing in the medium-term future. I'm expecting its first release next week at I/O, and right now everyone just thinks it's the "new Android", but it becomes kind of clear when you read the docs that it's actually much, much more than that.

Things that are interesting about it:

- any device and any workload from IoT to servers.

- totally new security model from the ground up which is how I expect it to be the main driver for business use cases

- new app delivery model which is kind of like a weird mix of the web as you know it currently, native apps and kubernetes.

- seems to be developing an interesting interop story, where the goal is that you will also be able to run Android and Linux apps on it, despite the fact that it's not Linux and not based on Linux. Everything is custom-made from the ground up.

- the development story is also super interesting: all IPC happens over a gRPC-like protocol, which provides a nice language-neutral way to develop for the platform and to integrate with everything else.

Some of the other projects I've seen coming out of Google make way more sense in this context too. Flutter for web is a good example, which I know is not exactly a crowd favorite here on HN, but I'm serious when I say that now might be a good time to go and look at Dart again. It's actually a really nice, modern language; if you are coming from JavaScript, TypeScript or Java you will be up and running very quickly.

I think this is about to become a hugely disruptive force. If Google manage to not screw this up that is.

For a slightly longer-term bet, I'd say by the time it hits, say, v5 it will be the biggest OS/platform in the world by a considerable margin.


Serverless compute solutions like AWS Lambda, but with increased capabilities.

Why do I need to build a container image when I can just list my dependencies and provide some code to run? For example: "I need Programming_Lang version 4.14, Library_A version 2.2, and Library_B version 1.5".

I don't care what the underlying operating system / system libraries are as long as it is fast and secure.

I just need to run my code. Why should I need to manage the rest as long as it works?
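The wish above amounts to something like this (entirely hypothetical; the field names are invented and match the placeholder versions in the example):

```yaml
# Hypothetical deploy manifest: dependencies declared, no image built.
runtime: programming_lang@4.14
dependencies:
  library_a: "2.2"
  library_b: "1.5"
entrypoint: handler.main
```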


Lambda supports container images now and it works really well for letting you choose what languages and runtimes you want to support.

The next step might be standardizing activation and dispatch, possibly with knative or a common API surface over vendor libraries.


Agreed, but the functionality you see as a dev still needs an implementation layer, and it will likely be containers. Knative is relatively mature and offers something like the experience you are describing.


Simpler solutions that behave the same on your laptop as they do in a cloud setup. More Heroku-like. Most people/companies don't need all the intricate settings of Kubernetes (or Docker). You basically want to have a docker-compose file, put in some CPU, memory, scaling, and replication numbers, and that's it. Deploy locally or on a cluster and it'll just work, scaling as you go, if available on your deployment platform.
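A sketch of that file, borrowing the Swarm-style `deploy` keys as one existing example of the shape (values are placeholders, not a recommendation of any one tool):

```yaml
services:
  web:
    image: myapp:latest
    deploy:
      replicas: 3            # scaling
      resources:
        limits:
          cpus: "0.5"        # cpu
          memory: 512M       # mem
```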

I know this is already possible, but it's not very easy, especially across laptop, cloud and bare metal; Kubernetes on bare metal is a pain (no simple ingress if you don't have access to the router or load balancing).

I hope someone tells me this exists, but I tried a lot of solutions and they all fail at something (complexity, dependency on big cloud providers, dependency on specific network capabilities, etc).


Docker Swarm was closer to this but never really got the traction to become a serious contender. I really liked being able to use a compose-like file to deploy. It had a much smaller feature set than k8s and some gotchas, but I felt like it was a lot easier to grok than k8s.

The thing that led to my moving away from it was the lack of a mainstream hosted version.


I think the question is: what comes after microservices/distributed systems? Whatever that is will make k8s obsolete.

My bet: rich fat clients. Super powerful phones that can run your entire e-commerce monolith to process my checkout. I’m talking about hosting your backend (most of it) and DB on my phone. Cheap syncs from my phone to your “main” server is fine because I’m always online and I have 10G connection. This will also force a shift in the way we develop programs.

So you no longer need to deploy and maintain hundreds of services. You’ll only need to deploy and maintain one that gets synced to millions of devices (no app stores) rather fast (because, yes, a new compression algorithm will emerge and your whole monolith is now only a couple of megabytes).


How would such a fat client setup handle authorisation?

The closest thing to what you mention is Firebase as a backend. I guess that can work.


- More serverless platforms with more heterogeneity. Each one will be better at certain things. fly.io seems to be specializing on computing at the edge (networking). Others will be better at GPU computation; others will have different types of storage and databases.

- More people using multiple clouds. Because putting all your eggs in Amazon's will start to be less acceptable, for cost reasons and lock-in reasons. Kubernetes was supposed to be a neutral interface but I don't think that has panned out. I'm curious if anyone actually uses Kubernetes in a cloud-portable way.

- Less reliance on cloud SDKs and cloud-specific features, and more open source tools. Less reliance on Docker (which is rightly being refactored away.) I hope to do something with Oil here: http://www.oilshell.org/blog/2021/04/build-ci-comments.html .

- In case it needs to be said: more usage of the cloud. It does make sense to offload things rather than host your own, but right now it's still in an awkward state.

Good thread from a few months ago about problems with serverless dev tools: https://news.ycombinator.com/item?id=25482410

Good list of obstacles and use cases for serverless: https://arxiv.org/abs/1902.03383


As an addendum to the specialization thing, I think that Snapchat was built on Google App Engine basically because it happened to have a really good XMPP gateway or something like that? If anyone has details I'd be interested


Compiling every dependency into a single WASM binary (including database engine, language runtime, etc.), then just deploying and scaling it on a serverless platform.

No more containers to develop or to deploy, and eventually, no UNIX filesystem around the runtime.


This, WASM, some virtual OS API, and edge deployment (5G), even on user devices. Also some energy efficient blockchain mechanisms and smart contracts may bring us closer to trusted computing. Although that won't happen quickly enough, not next after K8s.


I'm thinking of this as well. Given how frontend frameworks like Svelte are moving towards a "framework as compiler" approach, and how newer runtimes like Deno now support compiling to a self-contained standalone binary, my guess is that WASM will become the main target for cross-platform development.


Something much, much, simpler. I'm not convinced that /for most companies/ k8s adds enough value when compared to all the added complexity (not to mention the yaml soup!).


Working for a small startup that uses both k8s and more traditional infrastructure, I've found the yaml soup that comprises a reliable k8s system is not much worse than the yaml/hcl soup of our ansible/terraform provisioning scripts.


The best thing is that, unlike Ansible's horrible text-templated YAML soup (though kudos to them for making text templating indentation-aware, unlike Helm), with k8s YAML is optional: I can use things like Jsonnet, CUE, or any random programming language to build manifests with full reasoning about objects, which makes it much nicer than most.
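The "build manifests in a real language" point in practice might look like this (a sketch; the Deployment fields are the standard Kubernetes ones, but nothing here is tied to any particular tool):

```python
import json

# Manifests as plain data structures, then serialised:
# no text templating of YAML, and you can use functions,
# loops, and type checks to build them.
def deployment(name: str, image: str, replicas: int) -> dict:
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

manifest = json.dumps(deployment("web", "myapp:1.2", 3))
```

Since every valid JSON document is valid YAML, the output can be fed straight to `kubectl apply -f -`.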


I find Ansible simple: it's just a way of sending idempotent commands over the wire. It's like bash scripts, but better. On the other hand, k8s is a beast with hundreds of heads.

Sure, yaml is a pita in both of them.


I don't like running all my stuff with yaml. We have to introduce templating and then things get messy. There needs to be a simpler, declarative way to share values, variables, etc between k8s types.


Try Pulumi.


I genuinely worry about this community when after >100 comments nobody has said "k9s"


Kubernetes/Docker are still fundamentally "CaaS" (container-as-a-service). Definitely an improvement over the mostly IaaS period before, but still too low level for anyone to run services reliably on with low effort. From someone who runs 400+ services on kubernetes – I'd like to make this clear: Kubernetes is not a "platform" for services, it's just the low-level container infrastructure. You have to build the platform yourself, which is not easy.

I don't think the replacement for kubernetes will be something equal or even-more-lower level (more barebones, like nomad). It would be something higher level, enabling more features, not just equal.

What next could be a proper, self-hostable PaaS. There are a few out there, but most are either closed wall (fly.io, heroku, app engine, beanstalk) or self-hostable but complicated or not easily scalable (cloudfoundry, etc).

In a way, Kubernetes also did the same thing that most of its predecessors did. But the main difference was – it offered a common low-level abstraction of APIs and operators which allowed a lot of solutions to be built on top. It was not just a CaaS, it was also the "standard model" to run things underneath. The unit of the model was always a "workload" or container.

Similarly, the next PaaS could also do the same thing as solutions today – but if it becomes the new "standard model", where the core unit of the model is an "application" (not just a container), it would be amazing. Deploy applications with hundreds of standard, open ended plugins like distributed tracing, etc. Open ended heroku.


The pendulum swings back somehow and we'll have offline capable personal computers again.


Offline? I think we've almost lost the capability to deploy and run software without an always on network. I maintain an "allowlist" for various things, and people really expect to be able to hit random endpoints to deploy. Even the security monitoring stack needs to have external IPs reachable from the otherwise most locked down networks.


I've been agonizing over devops lately and for me K3s looks like a lighter version of the full API. Another option is NixOS, if we integrate some Kubernetes orchestration features with an autoscaling group of NixOS nodes we get a ton of benefits - version controlled declarative configuration of the hosts enables Dev Prod parity, shared immutable dependencies across containers makes them pull and cold start faster, automatic and easy rebuilds with latest updates...

NixOS is weird and Nix language is super weird but the concept is powerful and the dev community can benefit tremendously from it

If Hashicorp does substantial work to put the pieces together on a full Hashistack deployment to compete with K8s that will be a good option, too.

Also, there's the option of just using cloud stuff instead of an orchestrator. You could use AWS ALB and autoscaling groups to do much of the same thing, and even manage the infrastructure in Typescript or Python with both Pulumi and CDK (or just use TF or CF)


Somebody wins the market and stays until the market disappears due to innovation. If we were to say that it is not established enough and it could be disrupted within the existing market then I like Nomad, good enough for most cases I think, a lot simpler.

Disclaimer: I have no Kubernetes experience; I read a few documents and opted for Nomad, with great success so far.


There will have to be something to solve all the problems that are created by Kubernetes, which itself exists to solve the problems with Docker, which exists to solve the problems with VMs, which exist to solve the problems with Operating Systems.

I can't wait for the next layer of complexity to be added. It will no doubt be called something like WizzlyBobATron.


> WizzlyBobATron

Tron?

The Master Control Program?


Something will almost certainly replace Kubernetes at some point, but I'm fairly certain what replaces it will look a LOT like it.

By that I mean, entirely API driven, declarative, etc.
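The declarative core is small enough to sketch. This toy reconcile function (not the real Kubernetes API, just the shape of the idea) compares desired and observed state and emits the actions a controller would take to converge them:

```python
def reconcile(desired: dict, observed: dict) -> list:
    """One pass of a declarative control loop.

    desired/observed map resource name -> spec. Returns the
    actions an operator would take to converge observed state
    toward desired state.
    """
    actions = []
    for name, spec in desired.items():
        if name not in observed:
            actions.append(("create", name, spec))
        elif observed[name] != spec:
            actions.append(("update", name, spec))
    for name in observed:
        if name not in desired:
            actions.append(("delete", name, None))
    return actions

desired = {"web": {"replicas": 3}, "worker": {"replicas": 1}}
observed = {"web": {"replicas": 2}, "old-job": {"replicas": 1}}
print(reconcile(desired, observed))
```

Whatever replaces Kubernetes will almost certainly keep this loop: you state what you want, something else runs the diff forever.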


A new programming language which hides the fact that your code is running across multiple machines from you. Similar to OOP, you'll define some level of structure and the compiler will figure out how to best distribute it across the resources given to it.


Hiding the network has been tried before, but I don't believe it's ever worked well. (See the CORBA and "distributed objects" craze in the 90's.) The costs of networking are too high.


I worked with a Corba based system back in the nineties and what killed ours wasn't network costs, it was circular hidden dependencies making the system fragile.

The support guys had a complicated procedure for restarting it that involved a very carefully orchestrated sequence, which sometimes didn't work so they had to start from scratch.

It was awful to watch.


I was on a team that used CORBA to distribute across five machines: input, output, logic, UI, and whatever the CORBA object discovery thing was. (I don’t know what they were thinking.)

They didn’t try to run it all together until it was time to ship.

Networking overhead definitely killed us. That, and incompetence.


For serverless, the move to much lighter runtimes like cloudflare workers or deno deploy seems like an obvious improvement. If you build your entire app on the Cloudflare "Stack" that could mean serious lock in, but it could also go in an open standard direction. I think to "replace" k8s it needs to be something that many people can run in different places. Would love to see an open serverless application runtime based on v8 workers. I think a future based on a whole lot more JS and WASM is not unlikely.

But maybe monoliths are the way, as many commenters are hoping for, because sure, distributed systems are hard, no matter how nice the abstraction.


A meta OS that works like Linux, but corresponds to an array of machines that is represented via one interface. Think Plan 9. Map all resources as mount points. Make standard commands work across machines by default. ps, ls, ip, taskset and so forth. The OS will be the same regardless of which machine you log in to. "Just a bunch of machines" data stores.

Applications will probably be flatpaks instead of containers. With some config for required data stores, NICs, port openings, CPU and RAM requirements.

In short. Plan 9 using Linux tooling.


Something simpler. Please god.


I think the good of Kubernetes may be the sweep-away standardization in the network layer. IIRC it prescribes a certain network visibility of all the services that a lot of IT departments may have resisted under the banners of various motivations (security! obstinance! laziness! security!).

Also, k8s adoption standardized that visibility and forced security on everyone (SSL everywhere, encryption, authentication for each request, etc.).

That mass organizational evolution in enterprises opens up a lot more simplicity for just-containers or bespoke scaling strategies. Go ahead and run micro, nano, macro, or mega services. Don't be limited by docker "conventions" on size or complexity.

Of course serverless was the natural evolution to k8s, but the "DC OS" is still a very very very nascent thing that is far behind something like a POSIX standard or anything like that for portability. Serverless is all lock-in right now. k8s was nice in a way because at least it was SOMEWHAT non-lockin as an architecture/framework. If you squinted. Hard.

Standards and portability breed true flexibility and good tools, and we probably need a lot more of that in the cloudrealm.


I think it will be abstracted kubernetes - something a little like ec2, but where the cpu, memory, and disk are multi-tenant, overprovisioned, and your code and apps run on the OS provided, but are suspended during non-use and unsuspended by the supervisor service on demand. Common architectures are available (ELK, ssh/scp, TLS load balancer with LAMP/tomcat-postgres/etc.) out of the box, with autoscaling, overprovisioned, multi-tenant bandwidth, storage, etc.

Virtual private servers, but suspended/resumed/scaled for you without you needing to worry about the underlying method of deployment. AWS lambda + aurora but running actual operating systems. You want more threads or memory - they're there for you. Every user process is metered and monitored. Charged by cpuops + memoryops + iops + hdstorage bytes by the second. You never need to worry about how much disk you need, it's there, unlimited except by your pocket book. You never need to worry about how much compute you need - it's there, unlimited by your pocketbook. You never need to worry about how much memory you need. Backups of your drives are automatic.

All data is encrypted to allow it to live amongst all the other tenants' data. All wire traffic is encrypted.

It works exactly like ec2, except it's on-demand usage and overprovisioned and multi-tenant. Alarms and user-set limits for price and scale are respected. The only thing that will leak about the abstraction is that you'll have to mark processes as suspendable and dependent. You don't need to run your metrics collector if your app isn't running. Cron jobs should wake up the server, etc. Ssh/scp just works. You get service discovery out of the box, and your point of entry is the app lb dashboard. A real, on-demand, virtual private cloud.
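The metering model described above is easy to sketch; the rates here are invented purely for illustration:

```python
# Hypothetical per-unit rates (made up for illustration only).
RATES = {
    "cpu_seconds": 0.00002,
    "gb_ram_seconds": 0.000005,
    "io_ops": 0.0000001,
    "gb_stored_seconds": 0.0000000005,
}

def bill(usage: dict) -> float:
    """Charge by actual metered use across every dimension:
    no provisioning decisions, just sum(usage * rate)."""
    return sum(usage[k] * RATES[k] for k in usage)

# One hour of a 2-vCPU, 4 GB process doing 100k IOs with 50 GB stored:
hour = 3600
cost = bill({
    "cpu_seconds": 2 * hour,
    "gb_ram_seconds": 4 * hour,
    "io_ops": 100_000,
    "gb_stored_seconds": 50 * hour,
})
print(round(cost, 4))  # -> 0.2261
```

The point of the model is that "how much disk/compute/memory do I need" stops being a question you answer up front and becomes a line item after the fact.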


An abstract layer to automate the creation of k8s which is an abstract layer to automate application deployment and application management

What we need is an abstract layer to automate the systems underneath so we can leverage that abstraction and create more complexity

There are still some folks who understand the abstraction all the way up and down. We can't have that


Ironically the success of unix/linux is that it is a very effective and uniform abstraction to accessing network, compute, storage and memory resources.

The big win of kubernetes is not that it lets you get access to computational resources easier (yaml rather than systemd scripts) but that it is a temporary escape on the "file a ticket to get new compute resources" (temporary because just like picking up hardware for the data center goes from just buy it to get it approved; VM creation goes from just click a button to what's your cost center and VP signoff; new namespace creation hasn't had time to get put into service now, but it will be). Like Docker let developers bypass restrictive OS imaging, k8s lets developers spin up more resources with less constraint. It's not really about the technical abstractions, running a go binary on bare metal is not different (only faster) than running it in a k8s cluster except I have no way to get permission to run it on bare metal.

And my skills are shifting from "maximizing the product value delivered by this heap of hardware" to "minimizing the cost of the MVP in this cloud billing environment." Before if we had unused compute, we'd stick a cache in or precompute something that will reduce latency for customers down the line. Now, we are like, don't do that computation, it mightn't be needed.


... and my skills are getting used to fix the issues that folks don't understand when these layers break

Keep building those layers... that's job security


And strace and tcpdump (and now perf and friends) can still fix almost any problem with access to the source and enough thought.


add sysdig to your tool box


I don't really see any value in automation of creating k8s clusters.

From my point of view, 80-90% of software people don't really need k8s.

There is a strong trend towards "low code/no code"; at some point we'll want CI/CD abstracted away too, and small apps that can be built by specialists without needing developers. With k8s managed by cloud vendors we are in the middle of "no infra": cloud vendors will manage the k8s clusters, but it is not going to be the case that everyone wants to spin up his own cluster. There is not enough market for a higher infra abstraction; there is enough market for the level we are at.


Whatever we do, please do not solve the actual root issue under any circumstances. That will not do. We need to stick another layer on top of the pile so we can have conferences and hashtags about it.


Maybe what is needed is what zfs did to raid. A freaking layering violation that breaks old assumptions and puts pieces together in better ways.


Something compatible with the Kubernetes API.


Like Dynamo did with Mongo XD


https://temporal.io

if the lesson from k8s is that after containers we needed container orchestration - the business logic analog is that services will need service orchestration.

(disclaimer: i work here so i'm both biased and have skin in the game)


Temporal cannot replace a general purpose compute orchestrator.


Hey, I'm the head of product at Temporal. I would love to understand what you mean by this.


I can’t have an api running on top of temporal. Temporal is great for my api to delegate data processing but I can’t deploy a go http server purely “on temporal”. I still need it to run somewhere, on VMs or kubernetes.


Maybe not quite the right specific area here, but IMO the next "big thing" will probably be AI-generated code of some form. This will let organizations employ fewer developers and instead license AI services (e.g. GPT-3) to have it generate code for some app, then have maybe 1/10th the developer overhead they otherwise would go over that generated code and tweak it as necessary to make the end result better and compensate for the shortcomings of AI at present, with the eventual goal being to replace the developer altogether with AI-generated apps.

Then, hosting those apps in some form will likely need to adopt AI-centric workflows in some way. We might even see AI-driven request routing and AI-driven WAF features at some point, too.


For small to medium sized applications; Edge workers like Cloudflare workers. Complexity and cost are much lower.

And perhaps geographical regulations will become so demanding that you need something like durable objects to store & process data in country of origin.


I feel like it’s a bit like asking what’s next after linux. hard to say because the solution is good enough it’s going to last. There almost has to be a fundamental revolution in computing to overthrow linux at this point.

That will come, but it's a ways out and difficult to see. Most of what we'll see are different abstractions that will utilize linux under the hood. Same with kubernetes. It's not that a new kind of OpenStack is going to usurp it, it's that what comes next will be built on top of it as it becomes more and more the norm… until the next major computing revolution. (completely decentralized storage on phones a la pied piper? quantum computing?)


I agree completely. Look at all of the things that K8s is growing: service mesh, etc. The platform is doing more and more for you.


Plenty of comments here have documented the headaches around microservices and predict a shift back to monoliths.

However, an alternate perspective is that the microservices were not micro enough to be worth the overhead + cost. My opinion is that the reason none of this feels great is that a container is not a small enough base unit to let you completely forget about infrastructure.

That's why I'm betting the next paradigm that gains traction is fully serverless architectures. The overall direction things have been going for decades is to make hardware more invisible and I think we finally get pretty close with serverless.


Here is my dockerfile. Give me an endpoint and a simple ui to manage it. The end. Why is this so hard in 2021? I don't want nodes, pods, yamls, puppets, chefs, swarms, terraforms, ghosts or even monsters. Thanks.


You may already be aware of the options, but fly.io can do it in a clean/simple way right from the command line. Google Cloud Run can do it if you're willing to wrangle the cloud console.


people want to go back to monoliths, but don't want to lose the upsides of microservices. once you get a taste of serverless, it's really hard to go back to overprovisioning. at the same time, distributed state and logic that crosses service boundaries is hard. I don't know what comes next, but what I _want_ to come next is powerful static analysis that works across languages and services, that can "compile down" to serveless services


Maybe something like ballistacompute.org? I can see a language agnostic distributed solution powered by async/await with type definitions constantly automatically compiled to the target language, all wrapped up in an ide with team-level versioning. Native support for event driven, streaming, and batch data with little ceremony. Throw in distributed ipfs-like storage with persistent references and eager caching and a strongly-typed graph database for good measure.


Now I don't need to download, setup and install Postgresql, I can run it with Docker with one command.

Why not do the same for a typical Kubernetes setup with Hasura, an API server, gateway, load balancer, Postgresql, Redis, and a couple of services for logs and monitoring, all in one, with all the networking stuff already set up? Just give me an image or template made by someone else, maintained on Github, and ready to customise and develop for my needs.


Isn't this Helm?


It is often said that while not many people bought Velvet Underground records, those who did went out and started a band. For software developers of a certain era, Heroku carries a similar legacy. Every developer who came into contact with Heroku continues to chase some version of that legendary developer experience today. “It absolutely is the Velvet Underground of developer platforms”

We're still waiting for that magical band to appear.


But Erlang / Elixir seemed very promising to me.

- Supervisor processes.
- Small process size.
- Failure-friendly design.

All of which allows easy vertical / horizontal scaling, robust concurrency and scalability, and automated process management.

Seemed promising the way WhatsApp / Discord have used this. Obviously not a classic DevOps deployment or a direct competitor for Kubernetes. But doesn't disruption happen from the sidelines?


Microservices will be integrated into a single binary developed with clear boundaries in mind, so they can be refactored out where scale is needed.

There will be languages to design distributed systems without having to manually design each component for it. The language rather allows generating interfaces which generate a monolith while certain parts are generated as microservices to scale.


I think complete CI/CD+hosting solutions are somewhat likely; open versions of things like AWS lambda or GCP cloud functions. Roll the entire tech stack into an automated service.

What most backend devs want is to write code and stick it behind a URI endpoint and make sure it keeps running with networking, auth, security, monitoring, scaling, and reliability taken care of by someone else.


I agree. That’s the idea behind https://www.ibm.com/cloud/blog/ibm-cloud-code-engine-enjoy-y... . (Full disclosure, I work for IBM.)


I think edge computing with docker support will be the next big thing. Right now cloudflare lambda at edge only supports specific runtimes but someone will support docker images soon. Eventually there will be edge computing monoliths for websockets or something persistent.

They might even allow on prem edge compute. Why not have a device in each employees home or a few devices at the office.


Infrastructure as code.

One way to do it is you have semi-imperative code that runs, the output of the code is a description of the system to be deployed. Then you have some kind of diffing system that figures out how to take your existing cloud deployment and turn it into the new version described by the output of your code.

This is how Pulumi works for example.
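A toy illustration of one piece of such an engine (not Pulumi's actual API; the resource names are made up): the code's output is a dependency graph, and the engine orders it into an apply plan so dependencies are created before dependents.

```python
from graphlib import TopologicalSorter

# Toy resource graph: the program's output is a *description* of
# the system, not the system itself. Keys are resource names,
# values list the resources they depend on.
resources = {
    "vpc": [],
    "subnet": ["vpc"],
    "database": ["subnet"],
    "app": ["subnet", "database"],
}

# The engine turns that description into an ordered plan; the
# diffing step would then compare each resource against what is
# already deployed and emit create/update/delete operations.
plan = list(TopologicalSorter(resources).static_order())
print(plan)  # -> ['vpc', 'subnet', 'database', 'app']
```

Real engines also track previously-applied state so that removing a resource from the description produces a delete in the plan.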


Something I am waiting for is a cloud platform that competes with AWS/GCP/Azure where IaC is the only way to access it.

No dashboard where you can muck about, no shared account that mixes together resources from all your environments, no messy state management errors because you are always working with the same state: the truth of what is running right now.

The console would only be for monitoring, observability and maybe some disaster recovery actions.


It sounds like you're describing Kubernetes. It already does container and volume management that way. All that's left is to continue building on top of it so that it can provision managed services like storage buckets and databases.


You can manage Kubernetes with infrastructure as code.

The idea of infrastructure as code in relation to Kubernetes is that you have the full power of a programming language to build with, not just yaml files. You can also tie into other functionality of cloud providers as long as your IAC provider supports it. So you could have your Kubernetes cluster connect to some serverless code, or to a managed database, all deployed from one codebase.

Having said all this I didn’t actually have a great experience using Pulumi and switched to plain K8s. It makes a ton of sense in theory though and I’ll probably try again.


Skimming over the comments I get the impression that a majority is quite sure that K8s is going to pass away any time soon. This seems delusional given its rate of adoption and reputation. My gut feeling is it won't make way for something new any time soon. It will evolve but not go away.


A declarative solution that combines terraform + k8s + hardware management, that elegantly partitions between stateful and stateless resources. Stateful resources will have well defined, easy to understand modes of behavior, explicitly describing caching policies and persistence layers.


It may abstract many differences between monoliths and microservices


Kubernetes was precipitated by the development of containers in the kernel while the industry was focused on virtual machines. The next iteration will be whatever foundational technology is introduced that requires more abstraction, usability, and distribution.


Fully distributed state management.

You want a collective that nodes can be added to simply, with compute, state, storage and networking to be fully distributed autonomously.

I imagine being able to address an object via some hash (IPFS style) rather than through some networking kludge.


Webassembly sandbox maybe?

I had great hope in google Native Client [1] when it came out.

[1] https://en.wikipedia.org/wiki/Google_Native_Client


k8s provides standardisation (technologically as well as business-wise); that's why it now lives at the CNCF. Google wants everybody to use it as it allows for easier migration of these workloads onto their cloud platform. k8s is at the heart of their Anthos offering.

see https://www.nextplatform.com/2019/02/20/google-wants-cloud-s... for better explanation from Google's Urs Hölzle


I think the VM isolation in Dalvik and iOS is probably the right model, if we can fix hardware isolation, library compatibility, and resource provisioning. Otherwise, unikernels are the next logical step.


the truth is developers at individual companies should only have to focus on the business logic that individuate those companies. all the other concerns - logging, autoscaling, security, messaging, routing, data storage - are very similar company to company. this leads me to believe that some sort of serverless workflow engine will be the next big thing. problem right now is that the developer story behind lambdas and step functions is horrible and leads to lock in. that will need to be solved. whoever solves it will be very rich indeed.


Does anyone make anything that just takes your docker compose file and run it really easily with a lot more features?

I’m happy with compose. It works. I know swarm was supposed to solve this but is dead now (?).


ECS can take a docker-compose file and run it, although it is not turnkey. You still need to create the ECS cluster, set up auto scaling, and create any load balancers needed so I would not say it is “simple”, but it is definitely simpler than k8s.


I am wondering about this too. After playing around with all of this, to me it seems this is what people actually want.


The simplicity of docker compose and ease of using it for local development is really nice


I'd like something like Azure container instances but crossed with Azure app services.

I upload a chart or compose file to the cluster and kubernetes just happens.

No tweaking, no command line - just a GUI.


>> No tweaking, no command line - just a GUI

This is the problem that we need to solve though; nobody wants to customize with a CLI script, until they do.

A universal GUI interface is as unrealistic as 100% visual programming; it's easy to do simple things and impossible to do hard things.


Isn't Helm already after K8s?

Hopefully we have more iteration on configuring k8s. I'm not convinced we'll so easily go back to managing instances with monoliths.


and for devs tilt replaces helm. It's turtles all the way down.


I think it will be a mix of serverless, k8s, vms and unikernels. I personally am very interested in last one but not sure if it will ever become mainstream.


#serverless


Simpler and monolith (an endlessly re-suggested reply) has its place, but I see it as impossible for any "simple" solution to ever gain enough mindshare to win. A lot of people suggesting monoliths & hosted services, but they are never going to have the community, the presence of something like Kubernetes, which unites people, which people collaborate over, in the same way we all got to learn & experience & co-develop for Docker. The question posted somewhat gets it wrong, "Kubernetes became a thing after Docker became a thing (and so on," implies that they're different things, that tech is about different things, but in many ways Kubernetes is a natural extension & outgrowth, it is a part of the Docker scene, & has continuity with it.

Kubernetes is a thing now, but its patterns are still underspoken of, underpracticed, underdeployed to the rest of the software world. We will get better at being like Kubernetes, for great benefit. Folks learning how control-loops are advantageous, and learning to use universal API Servers for all their systems will continue to drive not just Kubernetes, but the patterns underlying Kubernetes further into our applications & services & technologies. Tech like KCP[1] is an early indicator of this interest, in using the soul of kubernetes if not its specific machinery, by creating an independent, non-Kubernetes, but Kubernetes compatible API Server. Having universal storage, having autonomic system control are huge advantages when system building, and gaining those benefits is fairly easy, with or without Kubernetes itself.

I'm hoping we see a DIY cloud'ing become more of a thing. Leaving everything in the hands of hyperscalers is a puzzling and hard to imagine state of affairs, given the hardware nirvana we've experienced in the past two plus decades. Kubernetes is the first viable multi-system operational paradigm, the first diy-able cloud we have, and it's shocking it took that long, but smaller practitioners getting good at converting their sea of individual boxes into something more resembling the Single System Image dreams of old, albeit through a highly indirect Kubernetes-ish route, is a decade or decades long quest we seemingly just started in to a couple years ago.

I'm hoping eventually ubiquitous and pervasive computing starts to dovetail with this world, that we start to have better view & visibility of all the computing resources around us, via standardized, well known interfaces. Rather than the hodgepodge of manufacturer controlled, invisible, un-debuggable overlay networks that alas, constitute the vast majority of the use of the internet these days. Alas the news there is never good, the new Matter standard is, like Thread, inaccessible, unviewable; consumers are expected to remain dumb, ignorant, unaware of how any of it works, merely thankful for whatever magic they receive[2]. But as a home-cloud, as the manor re-establishes computing as base & competency for itself (#ManorCompute), & as good projects like WebThings[3] or whatever takes its place light the darkened damp corridors only robots patrolled, I hope for a reawakening, hope a silent majority becomes more real & known, hope the fed up, sick of this shit, disgusted with remote-service-based technology world starts to manifest & apply real pressure to emerge a healthy, pro-human, pro-user ubiquitous & pervasive computing that gives us an honest shake, that shows what it is, that integrates into our personal home clouds.

I think there's a huge pent up demand & desire for flow-based/evented systems, for Yahoo Pipes, for Node-RED[4]. The paradigm needs help, I think there's too many missing pieces for something like Node-RED to be the one, but re-emerging user-control, giving us ALL flexible means to compute, is key. Exposing & embracing some level of technical literacy is something people want, but no one knows how to articulate it or what they want. We're mired in a "faster horses" stage, and it's fundamentally incorrect.

Last, I have huge hopes for the web. There are incredibly awesome advances being made in the range of peripherals, devices, capabilities the web supports. The web can do so much more. We're barely beginning to use the cutting edge ServiceWorkers, barely beginning to use Custom Elements ("WebComponents"), and these aren't even that new any more. These are fundamentally revolutionary technologies. Things like File System Access just came back on the scene after over a decade of going nowhere. Secondary screen working group is tying together multiple systems in interesting ways. There's a lot of high-tower shit, in WebAssembly (especially when Interface Bindings starts to allow real interop with JS), in TypeScript, but to me, I think rather than just building up up up there's some very real re-assessments we ought to be making about how and what we build. Trying to make self-documenting machines, trying to make computing visible, these aren't concerns of industrial computing, but they are socially invaluable advances that have been somewhat on hold in the age of Pax React-us, and we're well over half a decade in & while there's endless areas to learn about, improve, get better at in this highly industrialized toolset, I want to think there are some slumbering appetites, some desires to re-assess. I'm a bit afraid/scared of WebAssembly being a huge tower-of-babel time-sink/re-industrializing-focus that distracts from the need for a new vision quest, but I have hope too, I see the yearning. Albeit often expressed in lo-fi counter-culture, which to me is a distraction & avoidance, rather than the socially empowering act that has been a quiet part of the web's promise[5].

So I have a lot of techs that seem promising to me. I want to just leave off somewhat with where I started, which is about communities and winners. Whatever happens, for it to go gangbusters, it needs to be accessible. It needs to be participatory, allow a rife ecosystem to grow up & flourish around it. VMs, Docker, Kubernetes, these examples each spun up huge waves of innovation around them. They pass the Tim O'Reilly "Create more value than you capture" test, which is core to the tech metastatizing from a specific technology into a wide social practice, into something deeply engaged with by a wide range of users, each testing & advancing various frontiers of the idea, of the tech. Tech that can't seed & keep vital its ecosystem ossifies & becomes boring. Tech that can grow a healthy community of adept, knowledgeable, driving practitioners has a chance of gaining the social collaboration, social presence, to matter & become a new pivot, has the chance to leave a real mark. Each of the techs I've mentioned struggles with an industrial vs social good problem, struggles to become free enough, to matter enough to become interesting again, but I think we're in a much better place than we've ever been to take any one of these- diy clouds, ubicomp, flow-based systems, the web- to the stars.

[1] https://github.com/kcp-dev/kcp

[2] https://staceyoniot.com/project-chip-becomes-matter/ https://news.ycombinator.com/item?id=27123944

[3] https://webthings.io/

[4] https://nodered.org/

[5] https://webdevlaw.uk/2021/01/30/why-generation-x-will-save-t... https://news.ycombinator.com/item?id=27083699


I took a pretty wide swing at what's next in computing.

In the ops world, I think getting much better at Kubernetes is going to be ongoing for a while. Devs running k3s or microk8s for local dev will become more common. We'll need better deployment tools, and GitOps, and projects like Flux, are going to help a lot.
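The core GitOps idea, reduced to a toy: desired state lives in git, and a controller (Flux, Argo CD) continually diffs it against what's actually running and applies the difference. This sketch uses invented names and in-memory maps rather than a real cluster, just to show the shape of the reconcile loop:

```go
package main

import (
	"fmt"
	"sort"
)

// state maps deployment name -> replica count.
type state map[string]int

// reconcile returns the actions needed to move actual toward desired,
// sorted so output is deterministic.
func reconcile(desired, actual state) []string {
	var actions []string
	for name, want := range desired {
		got, ok := actual[name]
		switch {
		case !ok:
			actions = append(actions, fmt.Sprintf("create %s with %d replicas", name, want))
		case got != want:
			actions = append(actions, fmt.Sprintf("scale %s from %d to %d", name, got, want))
		}
	}
	// Anything running that git no longer declares gets pruned.
	for name := range actual {
		if _, ok := desired[name]; !ok {
			actions = append(actions, fmt.Sprintf("delete %s", name))
		}
	}
	sort.Strings(actions)
	return actions
}

func main() {
	desired := state{"web": 3, "api": 2} // what the repo says
	actual := state{"web": 1, "worker": 1}
	for _, a := range reconcile(desired, actual) {
		fmt.Println(a)
	}
}
```

A real controller runs this loop on a timer (and on git webhooks), which is what makes drift self-healing rather than a one-shot deploy.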

Helm does OK, but it could definitely be disrupted. The various code-oriented approaches are OK but honestly uncompelling. Attempts to build and promote new templating languages seem never-ending, but Helm remains the baseline expectation.


I appreciated the read.


Fuchsia, Genode, or some other capability-based operating system. VMs and everything that followed them are a crude approximation of capabilities.

The mental model of capabilities is something a 5-year-old can grasp: like taking a dollar out of a wallet, the most you can lose when you give it to the child is the dollar.

You can't say "give this task N cycles, and these 4 files only" on any of the current round of frameworks. This is 50 years overdue.
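The wallet analogy translates directly into code. In a capability style, a task never opens resources by name (ambient authority); it can only use the handles it was explicitly handed. A toy sketch, with invented types rather than any real OS API:

```go
package main

import (
	"fmt"
	"strings"
)

// ReadCap is the only power the task receives: read this one resource.
type ReadCap interface {
	Read() string
}

// memFile stands in for a file handle the caller already holds.
type memFile struct{ contents string }

func (f memFile) Read() string { return f.contents }

// countWords can only touch the files it was explicitly given. It has no
// way to name, open, or write anything else, so the most it can "lose"
// or leak is exactly these handles.
func countWords(files ...ReadCap) int {
	n := 0
	for _, f := range files {
		n += len(strings.Fields(f.Read()))
	}
	return n
}

func main() {
	a := memFile{"hello capability world"}
	b := memFile{"least authority"}
	fmt.Println(countWords(a, b))
}
```

The point of a capability OS is that this is enforced by the kernel for every resource (files, sockets, CPU budget), not just a discipline inside one program.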


Twobernetes: Tokyo Drift

Realistically, VMs. Their toolsets are very mature, and it was an incredible waste to spend so much time getting them to work so well and then dump all of it for containers, which put you back at zero again: immature, scattered tooling that requires tons of elbow grease to integrate and make work for your use case.


BEAM-based Elixir projects may have a say in what eventually replaces Kubernetes.


Hopefully some sanity! But realistically serverless functions.


Native desktop apps running on personal PCs.

No, really. It's a cycle.


A lighter weight self-install toolchain that fits into modern Terraform workflows and allows people to remove vendor dependencies while improving SLA/SLO at the same time.


Makefiles. One big circle.


Websites using SQLite.


Containers on the edge!


A competitor with a simpler API



