Hacker News

It feels like Docker (Inc) is becoming less and less "relevant" with each year that passes. At least from my perspective, they led the popularisation of containerisation and the whole cattle-not-pets approach to deploying apps. They created big and long-lasting change in the industry.

But they seem to have lost the production environment race to Kubernetes, at least for now. They are the biggest player in the dev-machine market, but more alternatives are popping up making it even harder to monetise. And containerd isn't a part of Docker (Inc) any more.

They do have Docker Hub, and its privileged position as the default registry of all Docker installs. But I don't really see why paying (i.e. enterprise) customers would pick Docker Hub over their friendly neighbourhood cloud provider registry, where they already have contracts.

Will Docker start rate limiting the public free repos even harder? Maybe making big orgs pay for the privilege of being hosted in the default docker registry? Charging to have the images "verified"?

Anyways, I hope Docker finds some viable business model; it would be sad to see them fail commercially after arguably succeeding in changing the (devops) world.




Kubernetes is so much more than most people should want or need. It's far too complicated and heavyweight for smaller or simpler deployments. In AWS, most people should use ECS/Fargate instead. There are other competing container environments as well. Your point still stands: Docker popularized containerization and is in danger of becoming irrelevant because it ceded the container execution environment to others.


I beg to differ. The jump from learning Docker (and containers generally) to learning Kubernetes is not “hard”. Sure it’s a different paradigm of application deployment but I’ve seen far too many posts on HN that completely undermine its value in the name of difficulty.

You can use it if you’re not “at scale” completely fine and reap all the benefits as if you were.

Idk if it’s because people hate Google so they hate Kubernetes, or whether they’re “get off my lawn” DevOps heads who want to maintain the complicated walled-garden deployments they hand-rolled to maintain job security, but it’s frankly embarrassing.


Using k8s to deploy is easy, setting up a cluster with the 'new' admin command is also straightforward...

Doing maintenance on the cluster isn't. Debugging routing issues with it isn't either, and configuring production-worthy routing to begin with isn't easy. It's only quick if you deploy weave-net and call it a day.

I would strongly discourage anyone from using k8s in production unless it's hosted or you have a full team whose only responsibility is its maintenance, provisioning and configuration.


Very few people who suggest using kubernetes are suggesting using kubespray or kubeadm. 99% of companies will want to just pay for a managed kubernetes cluster which, for all intents and purposes, is basically AWS ECS with more features and less vendor lockin.

It should also be known that all "run your code on machines" platforms (like ECS) have similar issues. I remember using ECS pre-Fargate and dealing with a lot of hardware issues with the instance types we were on. It was a huge time sink.

> it's only quick if you deploy weave-net and call it a day

That's exactly the benefit of kube. If something is a pain, you can walk up to one of the big players and get an off-the-shelf solution that works, and spend very little time integrating it into your deployment. No CloudFormation stacks or other mess. Just send me some yaml and tell me some annotations to set on my existing deployments.
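To make the "send me some yaml and set some annotations" workflow concrete, here is a hedged sketch: assuming an off-the-shelf ingress controller and cert-manager have been installed from their vendors' published manifests, integration really is mostly annotations on resources you already have. The host name, issuer name, and service name below are illustrative, not from the thread.

```yaml
# Hypothetical wiring: the ingress controller and cert-manager were
# installed from their vendors' published manifests; all that's left
# is annotations on an Ingress you already have.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
  annotations:
    kubernetes.io/ingress.class: nginx            # route through the nginx ingress controller
    cert-manager.io/cluster-issuer: letsencrypt   # have cert-manager provision the TLS cert
spec:
  tls:
    - hosts: [app.example.com]
      secretName: web-tls
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
```

Compare the equivalent in CloudFormation or hand-rolled scripts: here the TLS certificate issuance and traffic routing are entirely delegated to components someone else maintains.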

> I would strongly discourage anyone from using k8s in production unless it's hosted or you have a full team whose only responsibility is its maintenance, provisioning and configuration

If you have compute requirements at the scale where it makes sense for you to manage bare metal it should be pretty easy for you to find budget for 2 to 5 people to manage your fleet across all regions.


So 1/4 to 3/4 of a million per year in salary.

Plus disrupting all the developers.

So far every large-scale implementation I have seen has cost the developers a year in productivity.


Hi. I run my production 7 figure ARR SaaS platform on google hosted k8s. I spend under 10 minutes a week on kubernetes. Basically give it an hour every few months. Otherwise it is just a super awesome super stable way for me to run a bunch of bin-packed docker images. I think it’s saved me tons of time and money over lambda or ECS.

It’s not F500 scale, but it’s over 100-CPU scale. Confident I have a ton of room to scale this.
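For readers wondering what "bin-packed" buys you here: the scheduler packs pods onto nodes according to their declared resource requests, so many small services share a machine instead of each getting its own VM. A minimal sketch, with all names and numbers illustrative rather than taken from the commenter's setup:

```yaml
# Illustrative Deployment: the scheduler bin-packs pods onto nodes
# based on these requests; limits cap what each pod may burst to.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 4
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: registry.example.com/api:1.2.3   # hypothetical image
          resources:
            requests:
              cpu: 250m        # scheduler reserves a quarter core per pod
              memory: 256Mi
            limits:
              cpu: "1"
              memory: 512Mi
```

With requests like these, a 16-core node can host dozens of small pods, which is where the "saved tons of time and money over lambda or ECS" claim plausibly comes from.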


If you end up making a blog post about how you do your deployments/monitoring and what it's enabled you to do I think it'd be a great contrast to the "kubernetes is complicated" sentiment on HN.


This sounds like fun. Kind of a “how to use Kubernetes simply without drowning”. Though would it just get downvoted for not following the hacker news meme train?


Tbf you are an experienced, operations-savvy engineer. Your hourly is astronomically high, so you’ve minimized your costs via experience.


Hey, you worked with me and know I am neither experienced nor savvy :)


I have heard of people taking "years" to migrate to kube, but only on HN, and only at companies whose timelines for "let's paint the walls of our building" stretch into decades. But even once you move, you get benefits that other systems don't have.

1. Off the shelf software (even from major vendors [0])

2. Hireable skill set: You can now find engineers who have used kubernetes. You can't find people who've already used your custom shell scripts.

3. Best practices for free: zero-downtime deploys can now be a real thing for you.

4. Build controllers/operators for your business concepts: rather than manually manage things make your software declaratively configured.

[0] - https://cloud.netapp.com/blog/cvo-blg-kubernetes-storage-an-...
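On point 3, the "zero-downtime deploys for free" come from the stock RollingUpdate strategy combined with readiness probes. A minimal Deployment fragment as a sketch; the image name and probe path are illustrative assumptions:

```yaml
# Deployment fragment (illustrative): the stock RollingUpdate strategy
# plus a readiness probe gives zero-downtime deploys out of the box.
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never dip below the desired replica count
      maxSurge: 1         # bring up one extra pod at a time
  template:
    spec:
      containers:
        - name: app
          image: registry.example.com/app:2.0.0   # hypothetical image
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080
            periodSeconds: 5
```

New pods only receive traffic once their readiness probe passes, and old pods are kept serving until then, which is exactly the behaviour teams otherwise hand-roll in deploy scripts.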


I might have misunderstood you, but there is a huge difference between a developer being able to use Docker and understand the basics of containerization and CI/CD, and a devops/ops person managing servers/clusters using Docker Swarm or Kubernetes. The latter is far more difficult to master than the former.

Managing a Kubernetes cluster has so many possibilities to shoot yourself in the foot without realizing it. There are dozens of tutorials online on how to set up a simple linux/nginx/python/postgres cluster (including lots of results for common error Google searches), while routing problems in your legacy PHP application behind an istio-controlled ingress running on a specific Kubernetes version will leave you on your own.

Sure, you won't be able to scale indefinitely, but switching a solid containerized project running on your self-managed machines to a Kubernetes setup will be quite easy (if you heeded devops best practices).


In my experience, adopting Kubernetes is seldom a well-informed decision weighing the pros and the cons. Usually it's a stampede effect of higher-ups pushing for Kubernetes because everyone else is, without really understanding what it entails.

The truth is, Kubernetes is awesome, it brings many features to the table. But it also requires ~10% additional very expensive headcount, ~20% more tasks overall, and prolongs the release cycle by ~20%. Figures are from my experience. Those drawbacks are rarely ever discussed - it's just dumped onto existing teams on top of their existing responsibilities, leading to struggle and frustration.


Speaking from personal experience, I feel like you just pulled those numbers out of thin air.

At my job, we went from overly complex Elastic Beanstalk deployments to pushing out new releases via Helm charts into k8s... deployment time vastly improved, as did the cognitive load on what was actually happening.

I'd never go back.


Elastic Beanstalk is a halfhearted attempt at reproducing Google App Engine or Heroku. It is not comparable.

GAE, on the other hand, is dreamy compared to K8s. I once moved some infrastructure to K8s because it was costing too much on GAE; I ended up moving it back because it was worth it. We've subsequently moved it to Digital Ocean's PaaS but that's a different story...


Beanstalk, while great when it came out, is not a great solution now. It was also never really meant for teams who run things at scale. It also got quite complicated because it just didn't expose a lot of knobs.

I think you'd find ECS or similar as easy to work with as k8s and all of them will be faster than beanstalk.

Beanstalk is ALWAYS purposefully slow; this is by design. It mimics how Amazon deploys internally: slow and steady wins the race to safety. It also has some really bad issues rolling back from a bad deploy to a broken app; e.g. you can wedge it pretty badly.

Anyway at this point I don't think Beanstalk is a fair thing to compare to. It's good you moved off.


To add to that, a step to use anything from Google is a step onto Google's infamous "deprecation treadmill". A rather frustrating lifestyle (unless you are inside Google and your code gets updated/maintained in the monorepo).


Go to any Kubernetes page and it's all heavyweight "nodes" and "containers" and "tasks" and "resources" (some of which seem to have very special meanings). It's not easy to get into.

I don't think this is some oversight on behalf of the technical writers. They don't lack the ability to explain it in simple terms; they're putting up a warning sign. If you want to join Kubernetes, it's going to mean your entire way of doing things will now be the Kubernetes way; it's not just going to be a few lines of code you add to a Makefile.

A lot of people are wary of these heavyweight systems, because it's going to end up a fairly hard dependency.


> I beg to differ. The jump from learning Docker (and containers generally) to learning Kubernetes is not “hard”.

Unless you are at a scale where you can employ a full-time Kubernetes team, you probably don't need Kubernetes, and if you insist on using it for production anyway, you absolutely should use one of the many managed offerings (DO is probably cheapest; I have no affiliation with them) or a shrink-wrapped product like Tanzu.

Bootstrapping from scratch on bare metal remains non-trivial and an in-place upgrade is an order of magnitude harder.


Almost two years ago it was MUCH easier for me to learn how to deploy on DO-managed Kubernetes than ECS or Fargate.

Probably didn't need it, but whatever, it worked and I had to learn something new anyways.


Actually with k3s it’s pretty dang simple!


Production workload on k3s? It’s what you run on a laptop!


What if you run production on your laptop? :)


Or a NUC ;)


You can just run Nomad to reap the benefits of a cluster manager without all the headaches with Kubernetes.


I think you missed the biggest value proposition of Kubernetes: the scheduling and container orchestration are nice, but that's not why you use it. Portability is the killer feature. Being able to reproducibly stand up a clustered application without relying on custom shell scripts and Ansible playbooks is a godsend. Using ECS goes against portability and just coerces you into more vendor lock-in.


This is big in enterprise software.

Some corporate customers have bare-metal servers, some OpenStack, some run in AWS, some on VMWare, some on Azure, more exotic options are not rare either.

Kubernetes smooths out the differences, letting you develop an application against a standard, google-able API that is deployable anywhere.


I don't know about "ceded". The container execution environment wasn't a very defensible position. People recognized very quickly that Docker's execution environment was a very thin layer over existing Linux kernel functionality. At my company in 2013, we launched our own internal containerization effort around the same time Docker came out, based on LXC.

That said, I agree about a higher level PaaS-style offering being a better fit for most companies.


I find Docker Compose really useful for single server deployments.
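A hedged sketch of what that looks like in practice: one file describing an app, its database, and a reverse proxy, brought up with `docker compose up -d`. Service names, images, and credentials below are illustrative, not a recommendation.

```yaml
# Hypothetical single-server stack: app, database, and reverse proxy
# described in one file; `docker compose up -d` brings it all up.
services:
  web:
    build: .
    restart: unless-stopped
    environment:
      DATABASE_URL: postgres://app:secret@db:5432/app
    depends_on:
      - db
  db:
    image: postgres:16
    restart: unless-stopped
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: app
    volumes:
      - dbdata:/var/lib/postgresql/data
  proxy:
    image: caddy:2
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
volumes:
  dbdata:
```

For a single box this covers restarts, networking between services, and persistent data, which is most of what a small deployment needs from an orchestrator.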


Is there any profit to be gained from knowing which repositories are the most active? Which get downloaded the most? I mean... you'd think there would be some "market research" type of thing that could be sold, but now that I think about it more, I'm not sure. I assume most of the repositories are either OSS that are pulled a ton, or are pulled by an individual or small team. I'm not sure what the business opportunity is from that knowledge. If there was such a market for that information, I assume they'd have already tried to exploit it...


There's definitely this, but it's more of a thing that I'd expect Deloitte or a consultancy to want. There's a huge amount of traction here: undiscovered gems, trends in adoption of new technology, etc.

There's also a lot of things they aren't tapping. Just from a security POV alone, a "dependabot for Docker" would probably do a lot for the ecosystem, but hasn't yet happened.



IMO Docker drove an amazing paradigm shift of many apps from heavyweight VMs to microservices, and a lot in CI/CD etc. But none of that makes sense without an orchestration platform, just like a standalone VM does not make any commercial sense. Their revenue model has been a question for a long time. I guess they did try with Compose, Swarm, etc., but the space was already taken by Kubernetes. I don't know whether Docker as a company will be profitable.


I was already doing containers with HP-UX Vaults in version 11 back in 1999.

Just like any tool that doesn't offer more than an abstraction layer over OS features, eventually it becomes irrelevant as OS tooling improves.


> Just like any tool that doesn't offer more than an abstraction layer over OS features, eventually it becomes irrelevant as OS tooling improves

You'd think. But I think what we're seeing here is the opposite side of the coin flip of that thread that smug idiots like to continually link here where people were saying Dropbox could be implemented in a day using basic Linux tools. Those people in the thread were always correct (I mean, this is "Hacker" news, so people will approach every problem with their hammer... shocking).

Dropbox just happened to get lucky. Docker, not so much. Both have serious competitors, including Google.


From [1]

> This is a Virtual Vault release of HP-UX, providing enhanced security features. Virtual Vault is a compartmentalised operating system in which each file is assigned a compartment and processes only have access to files in the appropriate compartment and unlike most other UNIX systems the superuser (or root) does not have complete access to the system without following correct procedures.

It's cgroups + chroot, in its closest form.

I took "I was already doing containers with HP-UX Vaults in version 11 back in 1999" as a very technically incorrect implication. Docker is a development product that removes the OS as the core concept of the application development process. That is a milestone at least as fundamental as VMware's VM tech.

The commercial failure of Docker container is unfortunate.

But if the technology community cannot appreciate its significance, and lets the VM-driven mindset belittle it, that's a true tragedy that dampens the drive to innovate.

[1] https://en.wikipedia.org/wiki/HP-UX


You forgot to look up what happened since 1999, like Virtual Vault having been replaced by proper containers on HP-UX:

https://support.hpe.com/hpesc/public/docDisplay?docLocale=en...

And Tru64, Solaris, BSD also had similar capabilities on the UNIX linage, and naturally IBM and Unisys also had their own versions of the theme on their platforms.


And Slack is just IRC for people who don't know better am I right?


It pretty much is. A lot of people still use Git GUIs and automatic transmission has handily beaten manual transmission in the US - not everyone understands or even wants to understand the tech they use.


Regarding transmission, though: why hasn't the automatic transmission handily beaten the manual transmission in the rest of the world? My guess is the increased cost of maintenance and repair. I guess people are more willing to pay for support when abstracting the internals of their VCS away, compared to others who understand it at a low level.


Interesting analogy vs car transmission. I always find auto frustrating because it doesn’t give me the level of control I’m comfortable with ...


Probably neither here nor there, but I always see manual transmission in new cars as an anachronism bordering on placebo, primarily because everything else is still an abstraction. Specifically steering: I recall BMW, or maybe Porsche, getting raked over the coals for the lifeless, floaty steering in a few of their newer models. Modern steering is all emulated anyway, giving you that "road" feel, along with cars piping in engine noise via the speakers (ugh).


It's purely preference at this point and likely mainly for older people like me who grew up in the era where manuals were cheaper and more efficient. Neither is really true anymore, manuals have become an expensive option in most cases in the USA and the new automatics are more efficient. Complexity and cost to repair on the other hand...


There was a huge uprising against its removal from Porsche cars until they finally relented and added the option back, starting with the GT4; now it is also in the 911s.


Point being that there is value in the abstraction, people value it, people pay for it. I know how to use a stick shift and I'll still pay for automatic for the ease of use.


Though for one who is experienced driving a vehicle with a manual transmission, a lot of the actions become second nature, meaning that it's not really more or less difficult to use. The only time a manual transmission vehicle is arguably more difficult to drive is in stop and go traffic, but I've handled that by maintaining a larger following distance and trying to maintain pace at idle speed in first or second gear.


Portability is key. Being able to run an Ubuntu container on macOS is a killer feature.


FreeBSD can use a Linux jail to run an Ubuntu userland (it’s not a VM). I wonder why a billion-dollar company like Docker can’t do this with Linux on macOS. It seems so obvious.


Containers aren't virtual machines.


In the macOS case, they actually are: Docker runs in a VM on macOS.

Actually, I believe one text file (the Dockerfile) is Docker's killer feature.


Nowhere in what makes a proper container does a VM come into play, unless we are speaking about Docker and the idea of shipping a full-blown OS in a zip to work across heterogeneous hardware.


Yeah so the Docker daemon on a Mac is run inside of a VM.


Because they bundle an x86 GNU/Linux package as a runtime.

It doesn't work bare metal on the new Macs, and it is extra bloat instead of making use of macOS capabilities.


It doesn't work bare metal on any Mac. It runs in a VM.

Too bad Apple didn't help Docker out with a macOS-native version.

  FROM macos:10.13.3 
  RUN xcode-build
would have been really useful


> It feels like Docker (Inc) is becoming less and less "relevant" with each year that passes.

This is underscored for me by the fact that their latest end-user (dev) tools aren't even free software any longer. They started off being unixy as hell, doing one thing and doing it well (and being hackable in the process), and now they ship closed-source spyware under the exact same brand.


When I went to install docker on macos and it started phoning home from inside the installer, my opinion of them changed.


> They are the biggest player in the dev-machine market, but more alternatives are popping up making it even harder to monetise.

As someone who loves the feature set of Docker for development but grows increasingly disillusioned with its performance on Mac, would you mind elaborating what these alternatives are?


I haven't done a lot with it, but VMWare Fusion has a `vctl` command you can execute in Terminal:

https://github.com/VMwareFusion/vctl-docs/blob/master/docs/g...

Looks like it supports doing some kubernetes stuff using a `kind` cluster.

I've had pretty good experiences with Fusion, so, yeah, there are some real Docker alternatives up and coming. I think Docker's great, though, and I feel we'd never have seen a `vctl` on a Mac without its existence.


I want Docker to succeed, but I agree with you... I just love typing the docker command, and the registry was great.


To be relevant they need to fork Google Test and add cgroup eBPF expectation tests. Run integration tests with thousands of mini-instances that no-op the network stack.

Also start making pull requests for a Kubernetes killing feature in the Linux kernel - distributed cgroups and ulimits.


> Anyways, I hope Docker finds some viable business model; it would be sad to see them fail commercially after arguably succeeding in changing the (devops) world.

If it had a sustainable business model, it would be deploying it now.



