Docker Raises $23M (docker.com)
299 points by nickjj on March 16, 2021 | 268 comments



It feels like Docker (Inc) is becoming less and less "relevant" with each year that passes. At least from my perspective, they led the popularisation of containerisation and the whole cattle-not-pets approach of deploying apps. They created big and long-lasting change in the industry.

But they seem to have lost the production environment race to Kubernetes, at least for now. They are the biggest player in the dev-machine market, but more alternatives are popping up making it even harder to monetise. And containerd isn't a part of Docker (Inc) any more.

They do have Docker Hub, and its privileged position as the default registry of all Docker installs. But I don't really see why paying (i.e. enterprise) customers would pick Docker Hub over their friendly neighbourhood cloud provider registry where they already have contracts.

Will Docker start rate limiting the public free repos even harder? Maybe making big orgs pay for the privilege of being hosted in the default docker registry? Charging to have the images "verified"?

Anyways, I hope Docker finds some viable business model; it would be sad to see them fail commercially after arguably succeeding in changing the (devops) world.


Kubernetes is so much more than most people should want or need. It's far too complicated and heavyweight for smaller or simpler deployments. In AWS, most people should use ECS/Fargate instead. There are other competing container environments as well. Your point still stands; Docker popularized containerization and are in danger of becoming irrelevant because they ceded the container execution environment to others.


I beg to differ. The jump from learning Docker (and containers generally) to learning Kubernetes is not “hard”. Sure it’s a different paradigm of application deployment but I’ve seen far too many posts on HN that completely undermine its value in the name of difficulty.

You can use it completely fine even if you're not “at scale”, and reap all the benefits as if you were.

Idk if it's because people hate Google so they hate Kubernetes, or whether they're “get off my lawn” DevOps heads who want to maintain the complicated walled-garden deployments they hand-rolled for job security, but it's frankly embarrassing.


Using k8s to deploy is easy, setting up a cluster with the 'new' admin command is also straightforward...

Doing maintenance on the cluster isn't. Neither is debugging routing issues with it, and configuring production-worthy routing to begin with isn't easy either. It's only quick if you deploy weave-net and call it a day.

I would strongly discourage anyone from using k8s in production unless it's hosted or you have a full team whose only responsibility is its maintenance, provisioning and configuration.
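
(For reference, the "quick" path described above is roughly this, a sketch assuming kubeadm is already installed on the hosts and using the apply URL from weave-net's own docs:

  # on the first control-plane node
  sudo kubeadm init
  # install weave-net as the pod network
  kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
  # workers then join via the `kubeadm join ...` command printed by init

Everything after that, upgrades, certificate rotation, debugging the overlay network, is where the real work starts.)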


Very few people who suggest using kubernetes are suggesting using kubespray or kubeadm. 99% of companies will want to just pay for a managed kubernetes cluster which, for all intents and purposes, is basically AWS ECS with more features and less vendor lockin.

It should also be noted that all "run your code on machines" platforms (like ECS) have similar issues. I remember using ECS pre-Fargate and dealing with a lot of hardware issues with the instance types we were on. It was a huge time sink.

> it's only quick if you deploy weave-net and call it a day

That's exactly the benefit of kube. If something is a pain you can walk up to one of the big players and get an off-the-shelf solution that works, and spend very little time integrating it into your deployment. No CloudFormation stacks or other mess. Just send me some yaml and tell me some annotations to set on my existing deployments.

> I would strongly discourage anyone using k8s in production unless it's hosted or you have a full team whose only responsibility is it's maintenance, provisioning and configuration

If you have compute requirements at the scale where it makes sense for you to manage bare metal it should be pretty easy for you to find budget for 2 to 5 people to manage your fleet across all regions.


So 1/4 to 3/4 of a million per year in salary.

Plus disrupting all the developers.

So far every large scale implementation I have seen has cost the developers a year in productivity.


Hi. I run my production 7 figure ARR SaaS platform on google hosted k8s. I spend under 10 minutes a week on kubernetes. Basically give it an hour every few months. Otherwise it is just a super awesome super stable way for me to run a bunch of bin-packed docker images. I think it’s saved me tons of time and money over lambda or ECS.

It’s not F500 scale, but it’s over 100 CPU scale. Confident I have a ton of room to scale this.


If you end up making a blog post about how you do your deployments/monitoring and what it's enabled you to do I think it'd be a great contrast to the "kubernetes is complicated" sentiment on HN.


This sounds like fun. Kind of a “how to use Kubernetes simply without drowning”. Though would it just get downvoted for not following the hacker news meme train?


Tbf you are an experienced, operations-savvy engineer. Your hourly is astronomically high, so you've minimized your costs via experience.


Hey, you worked with me and know I am neither experienced nor savvy :)


I have heard of people taking "years" to migrate to kube, but only on HN and only at companies whose timelines for "let's paint the walls of our buildings" stretch into the decades. But even once you move, you get benefits that other systems don't have.

1. Off the shelf software (even from major vendors [0])

2. Hireable skill set: You can now find engineers who have used kubernetes. You can't find people who've already used your custom shell scripts.

3. Best practices for free: zero-downtime deploys can now be a real thing for you (see the snippet after this list).

4. Build controllers/operators for your business concepts: rather than manually manage things make your software declaratively configured.
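
For point 3, a minimal sketch of what "for free" means here, an excerpt of the rolling-update knobs on a stock Deployment spec (the numbers are just illustrative):

  # k8s Deployment excerpt
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never take a replica down before its replacement is ready
      maxSurge: 1         # allow one extra replica while rolling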

[0] - https://cloud.netapp.com/blog/cvo-blg-kubernetes-storage-an-...


I might have misunderstood you, but there is a huge difference between a developer being able to use docker and understand the basics of containerization and CI/CD, and a devops/ops person managing servers/clusters using docker swarm or kubernetes. The latter is far more difficult to master than the former.

Managing a kubernetes cluster has so many possibilities to shoot yourself in the foot without realizing it. There are dozens of tutorials online on how to set up a simple linux/nginx/python/postgres cluster (including lots of results for common error google searches), while routing problems of your legacy php application behind an istio-controlled ingress running on a specific kubernetes version will leave you on your own.

Sure, you won't be able to scale indefinitely. But switching a solid containerized project running on your self-managed machines to a kubernetes setup will be quite easy (if you heeded devops best practices).


In my experience, adopting Kubernetes is seldom a well-informed decision weighing the pros and cons. Usually it's a stampede effect of higher-ups pushing for Kubernetes, because everyone else is, without really understanding what it entails.

The truth is, Kubernetes is awesome, it brings many features to the table. But it also requires ~10% additional very expensive headcount, ~20% more tasks overall, and prolongs the release cycle by ~20%. Figures are from my experience. Those drawbacks are rarely ever discussed - it's just dumped onto existing teams on top of their existing responsibilities, leading to struggle and frustration.


Speaking from personal experience, I feel like you just pulled those numbers out of thin air.

At my job, we went from overly complex Elastic Beanstalk deployments to pushing out new releases via Helm charts into k8s... deployment time vastly improved, as did the cognitive load on what was actually happening.

I'd never go back.


Elastic Beanstalk is a halfhearted attempt at reproducing Google App Engine or Heroku. It is not comparable.

GAE, on the other hand, is dreamy compared to K8s. I once moved some infrastructure to K8s because it was costing too much on GAE; I ended up moving it back because it was worth it. We've subsequently moved it to Digital Ocean's PaaS but that's a different story...


Beanstalk, while great when it came out, is not a great solution now. It was also never really meant for teams who run things at scale, and it got quite complicated to work around because it just didn't expose a lot of knobs.

I think you'd find ECS or similar as easy to work with as k8s and all of them will be faster than beanstalk.

Beanstalk is ALWAYS purposefully slow; this is by design. It mimics how Amazon deploys internally: slow and steady wins the race to safety. It also has some really bad issues when rolling back from a bad deploy to a broken app; e.g. you can wedge it pretty badly.

Anyway at this point I don't think Beanstalk is a fair thing to compare to. It's good you moved off.


To add to that, a step to use anything from Google is a step onto Google's infamous "deprecation treadmill". A rather frustrating lifestyle (unless you are inside Google and your code gets updated/maintained in the monorepo).


Go to any Kubernetes page and it's all heavyweight "nodes" and "containers" and "tasks" and "resources" (some of which seem to have very special meanings). It's not easy to get into.

I don't think this is some oversight on behalf of the technical writers. They don't lack the ability to explain it in simple terms; they're putting up a warning sign. If you want to join Kubernetes, your entire way of doing things will now be the Kubernetes way; it's not just going to be a few lines of code you add to a Makefile.

A lot of people are wary of these heavyweight systems, because it's going to end up a fairly hard dependency.


> I beg to differ. The jump from learning Docker (and containers generally) to learning Kubernetes is not “hard”.

Unless you are at a scale where you can employ a full-time Kubernetes team, you probably don't need Kubernetes, and if you insist on using it for production anyway, you absolutely should use one of the many managed offerings (DO is probably cheapest; I have no affiliation with them) or a shrinkwrapped product like Tanzu.

Bootstrapping from scratch on bare metal remains non-trivial and an in-place upgrade is an order of magnitude harder.


Almost two years ago it was MUCH easier for me to learn how to deploy on DO-managed Kubernetes than ECS or Fargate.

Probably didn't need it, but whatever, it worked and I had to learn something new anyways.


Actually with k3s it’s pretty dang simple!
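
For anyone curious, the quick start really is this short (from k3s.io; single node, server and agent in one):

  curl -sfL https://get.k3s.io | sh -
  # kubectl ships with it:
  sudo k3s kubectl get nodes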


Production workload on k3s? It’s what you run on a laptop!


What if you run production on your laptop? :)


Or a NUC ;)


You can just run Nomad to reap the benefits of a cluster manager without all the headaches with Kubernetes.
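
The getting-started path is genuinely short, e.g. (dev mode only, not a production setup):

  nomad agent -dev &           # single-node agent with server and client in one
  nomad job init               # writes a runnable example job file (example.nomad)
  nomad job run example.nomad
  nomad job status example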


I think you missed the biggest value proposition of Kubernetes: the scheduling and container orchestration are nice, but that's not why you use it. Portability is the killer feature. Being able to stand up a clustered application reproducibly, without relying on custom-made sh scripts and ansible playbooks, is a godsend. Using ECS goes against portability and just coerces you into more vendor lock-in.


This is big in enterprise software.

Some corporate customers have bare-metal servers, some OpenStack, some run in AWS, some on VMWare, some on Azure, more exotic options are not rare either.

Kubernetes smooths out the differences, letting you develop an application against a standard, google-able API that is deployable anywhere.


I don't know about "ceded". The container execution environment wasn't a very defensible position. People recognized very quickly that Docker's execution environment was a very thin layer over existing Linux kernel functionality. At my company in 2013, we launched our own internal containerization platform around the same time Docker came out, based on LXC.

That said, I agree about a higher level PaaS-style offering being a better fit for most companies.


I find Docker Compose really useful for single server deployments.
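
Something like this is often all a single box needs (the image name and ports here are made up for illustration):

  # docker-compose.yml
  version: "3"
  services:
    web:
      image: myapp:1.2.3            # hypothetical app image
      restart: unless-stopped
      ports:
        - "80:8000"
    db:
      image: postgres:13
      restart: unless-stopped
      volumes:
        - pgdata:/var/lib/postgresql/data
  volumes:
    pgdata:

Then `docker-compose up -d` on the server and you're done.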


Is there any profit to be gained from knowing which repositories are the most active? Which get downloaded the most? I mean... you'd think there would be some "market research" type of thing that could be sold, but now that I think about it more, I'm not sure. I assume most of the repositories are either OSS that are pulled a ton, or are pulled by an individual or small team. I'm not sure what the business opportunity is from that knowledge. If there was such a market for that information, I assume they'd have already tried to exploit it...


There's definitely this, but it's more of a thing that I'd expect Deloitte or a consultancy to want. There's a huge amount to mine here: undiscovered gems, trends in adoption of new technology, etc.

There's also a lot of things they aren't tapping. Just from a security POV alone, a "dependabot for Docker" would probably do a lot for the ecosystem, but hasn't yet happened.



IMO docker drove an amazing paradigm shift, moving many apps from heavyweight VMs to microservices, plus a lot in CI/CD etc. But none of that makes sense without an orchestration platform, just like a standalone VM does not make any commercial sense. Their revenue model has been a question for a long time. I guess they did try with compose, swarm, etc., but the space was already taken by Kubernetes. I don't know if docker as a company would be profitable.


I was already doing containers with HP-UX Vaults in version 11 back in 1999.

Just like any tool that doesn't offer more than an abstraction layer over OS features, eventually it becomes irrelevant as OS tooling improves.


> Just like any tool that doesn't offer more than an abstraction layer over OS features, eventually it becomes irrelevant as OS tooling improves

You'd think. But I think what we're seeing here is the opposite side of the coin flip of that thread that smug idiots like to continually link here, where people were saying Dropbox could be implemented in a day using basic Linux tools. Those people in the thread were always correct (I mean, this is "Hacker" news, so people will approach every problem with their hammer... shocking).

Dropbox just happened to get lucky. Docker, not so much. Both have serious competitors, including Google.


From [1]

> This is a Virtual Vault release of HP-UX, providing enhanced security features. Virtual Vault is a compartmentalised operating system in which each file is assigned a compartment and processes only have access to files in the appropriate compartment and unlike most other UNIX systems the superuser (or root) does not have complete access to the system without following correct procedures.

It's cgroups + chroot, in the closest form.

I took "I was already doing containers with HP-UX Vaults in version 11 back in 1999" as a very technically incorrect implication. Docker is a development product that removes the OS as the core concept from the application development process. This is a milestone at least as fundamental as VMware's VM tech.

The commercial failure of Docker container is unfortunate.

But if the technology community cannot appreciate its significance, and lets the VM-driven mindset belittle it, that's a true tragedy that puts a damper on the drive to innovate.

[1] https://en.wikipedia.org/wiki/HP-UX


You forgot to look up what happened since 1999, like Virtual Vault having been replaced by proper containers on HP-UX:

https://support.hpe.com/hpesc/public/docDisplay?docLocale=en...

And Tru64, Solaris and BSD also had similar capabilities in the UNIX lineage, and naturally IBM and Unisys also had their own versions of the theme on their platforms.


And Slack is just IRC for people who don't know better am I right?


It pretty much is. A lot of people still use Git GUIs and automatic transmission has handily beaten manual transmission in the US - not everyone understands or even wants to understand the tech they use.


Regarding transmission though, why hasn't the automatic transmission handily beaten the manual transmission in the rest of the world? My guess is because of the increased cost of maintenance and repair. I guess people are more willing to pay for support when abstracting the internals of their VCS away, compared to others who understand it at a low level.


Interesting analogy vs car transmission. I always find auto frustrating because it doesn't give me the level of control I'm comfortable with...


Probably neither here nor there, but I always see manual transmission in new cars as an anachronism bordering on placebo, primarily because everything else is still an abstraction. Specifically steering: I recall BMW or maybe Porsche getting raked over the coals for the lifeless, floaty steering in a few of their newer models. Modern steering is all emulated anyway, giving you that "road" feel. Along with cars piping in engine noise via the speakers (ugh).


It's purely preference at this point and likely mainly for older people like me who grew up in the era where manuals were cheaper and more efficient. Neither is really true anymore, manuals have become an expensive option in most cases in the USA and the new automatics are more efficient. Complexity and cost to repair on the other hand...


There was a huge uprising against its removal from Porsche cars until they finally relented and added the option back starting with the GT4 but now it is also in the 911s.


Point being that there is value in the abstraction, people value it, people pay for it. I know how to use a stick shift and I'll still pay for automatic for the ease of use.


Though for one who is experienced driving a vehicle with a manual transmission, a lot of the actions become second nature, meaning that it's not really more or less difficult to use. The only time a manual transmission vehicle is arguably more difficult to drive is in stop and go traffic, but I've handled that by maintaining a larger following distance and trying to maintain pace at idle speed in first or second gear.


Portability is key. Being able to run an Ubuntu container on macOS is a killer feature.
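
e.g. this on a Mac transparently boots the Linux VM behind the scenes and drops you into an Ubuntu shell:

  docker run --rm -it ubuntu:20.04 bash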


FreeBSD can use a Linux jail to run an Ubuntu jail (it's not a VM). I wonder why a billion-$ company like Docker can't do this with Linux on macOS. It seems so obvious.


Containers aren't virtual machines.


In the macOS case, they actually are: Docker runs in a VM on macOS.

Actually, I believe the one text file (the Dockerfile) is the Docker killer feature.


Nowhere in what makes a proper container does a VM come into play, unless we are speaking about Docker and the idea of shipping a full blown OS in a zip, to work across heterogeneous hardware.


Yeah so the Docker daemon on a Mac is run inside of a VM.


Because they bundle an x86 GNU/Linux package as a runtime.

It doesn't work bare metal for the new Macs, and it is extra bloat instead of making use of macOS capabilities.


It doesn't work bare metal for any Mac. It runs in a VM.

Too bad Apple didn't help out Docker with a macOS-native version.

  FROM macos:10.13.3 
  RUN xcode-build
would have been really useful


> It feels like Docker (Inc) is becoming less and less "relevant" for each year that passes.

This is underscored for me by the fact that their latest end-user (dev) tools aren't even free software any longer. They started off being unixy as hell, doing one thing and doing it well (and being hackable in the process), and now they ship closed-source spyware under the exact same brand.


When I went to install docker on macos and it started phoning home from inside the installer, my opinion of them changed.


> They are the biggest player in the dev-machine market, but more alternatives are popping up making it even harder to monetise.

As someone who loves the feature set of Docker for development but grows increasingly disillusioned with its performance on Mac, would you mind elaborating what these alternatives are?


I haven't done a lot with it, but VMWare Fusion has a `vctl` command you can execute in Terminal:

https://github.com/VMwareFusion/vctl-docs/blob/master/docs/g...

Looks like it supports doing some kubernetes stuff using a `kind` cluster.

I've had pretty good experiences with Fusion, so, yeah, there are some real Docker alternatives up and coming. I think Docker's great, though, and I feel that we'd never have seen a `vctl` on a Mac without its existence.


I want docker to succeed but I agree with you... I just love typing the docker command and the registry was great.


To be relevant they need to fork Google Test and add cgroup eBPF expectation tests. Run integration tests with thousands of mini-instances that no-op the network stack.

Also start making pull requests for a Kubernetes killing feature in the Linux kernel - distributed cgroups and ulimits.


> Anyways, I hope Docker find some viable business model, it would be sad to see them fail commercially after arguably succeeding in changing the (devops) world.

If it had a sustainable business model, it would be deploying it now.


I really see this as trying to buy enough time for someone to rescue the investors, maybe the founders and definitely not the employees with an acquihire.

Docker is a good example of a company that (IMHO) should never have raised so much capital. It just doesn't have the moat to justify the valuation.

HN has seen several submissions on this (e.g. [1]). Containers aren't new; anyone can do it. So where is the moat? Possibilities include orchestration (which they lost to Google's Kubernetes). There's no barrier to creating images or even having a public registry of images.

It always seemed like containers were just going to be another feature on cloud platforms. Don't get me wrong: I think containers are a really good technology, for building, testing, deployment and so on.

Docker never had a clear value add and over the years has failed to develop one.

[1]: https://news.ycombinator.com/item?id=22244706


While I am normally critical of the unicorn pizza-kitchen-on-wheels type of excess, I think this is only true in hindsight and has a lot to do with Docker lacking a commercially minded founder. Open source isn’t a business model. There are very synergistic business models that go hand in hand with OSS but that distinction is important. Github does not really have a defensible moat. I think the network effects there are mostly trying to back-solve for its popularity. But it’s just a good product that became synonymous with modern version control. Docker had similar potential, but needed more commercial creativity.


Github is a good comparison here because obviously anyone can run a Git server, but Github did create a lot of value adds, and the UI helped build a network effect through ease of use, cloning, PRs and so on.

What's more, Github became the engine for dependency management. Go springs to mind here. I actually thought this was a terrible system (eg putting repo owner names in import strings) but it speaks to ubiquity of Github.

But what are Docker images? Maybe a few hundred lines of Dockerfile at the end of the day.

Losing in orchestration I think was the obvious big fail. But they had an uphill battle here anyway because you really need to integrate such a thing with cloud platforms.

I'm really not sure what Docker could've done differently here.


> I'm really not sure what Docker could've done differently here.

Completely agree, it’s trickier than GitHub. But this is why founders of these companies can potentially make billions: if it were easy, everyone could do it.

I think they realized the CI/CD potential far too late. In another universe you push to GitHub, Docker builds and tests your images and deploys to your provider of choice. Their potential was probably not directly tied to containers but tied to their position in the engineering process between commit and before deploy.


> I'm really not sure what Docker could've done differently here.

I think they should have realized orchestration was “the” thing for production much sooner. It’s not like you can’t integrate with cloud vendors on your own; there are plenty of managed service providers where you can get hybrid cloud solutions, Docker could have bet big on this.

Instead they came with swarm, which was focused too much on self-managed “on-prem”, while people really wanted something more complex, managed and with a healthy ecosystem of service providers.

Docker got stuck with being a software vendor, but they should have pivoted to being a service provider much, much sooner.


They were initially dotCloud!


Considering how entrenched Docker has been in global tech infrastructure for so many years, I automatically assumed that the company was worth billions already, but guess not. I wonder why they haven't been bought out by Microsoft or Google, if nothing else then just for the talent.


>I wonder why they haven't been bought out by Microsoft or Google, if nothing else then just for the talent.

Because they received an insane valuation years ago and probably aren't even worth break-even on their funding rounds. Google already has far more K8s knowledge internally than anyone at docker, so what would be their gain?

MS did try to buy them back in 2016, but Docker pulled a Jerry* Yang and said no [1]. I don't see why MS would bother at this point; they are also headed down the k8s path, and anything they needed from an expertise perspective they likely already received through their partnership agreement [2].

[1] https://www.sdxcentral.com/articles/news/sources-microsoft-t...

[2] https://www.docker.com/blog/docker-microsoft-partnership/

*I incorrectly said Andrew Yang initially, my apologies for my bad memory and any confusion it may have caused. Thank you Hexcles for the correction.


I'm not up on my political references; what does "pull an Andrew Yang" mean?


It has absolutely nothing to do with politics. Microsoft offered to acquire Yahoo for $44.6 billion in 2008 [1]. Jerry* Yang turned them down, claiming the offer "substantially undervalued" the company. 8 years later they sold to Verizon for $5 billion [2].

[1] https://www.cnet.com/news/yahoo-rejects-microsofts-bid/

[2] https://www.forbes.com/sites/briansolomon/2016/07/25/yahoo-s...

*I incorrectly said Andrew Yang initially, my apologies for my bad memory and any confusion it may have caused. Thank you Hexcles for the correction.


I think that's Jerry Yang.


*facepalm* You are correct, I will update. That's what I get for trying to remember names from 12 years ago.


lol, I was like "oh no, what did Andrew Yang do"... As an Asian and non-American I was a casual fan of his, but kinda knew he had no chance in the primaries. Still... maybe someday... he's young.


Ah hence the asterisk- I keep looking for a footnote!


The Alibaba and Yahoo Japan stakes were included in Microsoft’s bid


This is the challenge with taking venture dollars, your interests aren't aligned.

As a founder post Series B you probably have at least 10% of the business, that's a nice $400MM payout coming your way.

But if the VC invested at a $1B price, picks up 10%, and 80% of their return goes to their LPs, they are personally pocketing much less.

So it is advantageous for them to continue to shoot for the stars rather than sell short. Plus they have multiple bets going on at the same time while the founders only have one.

So a $4B acquisition nets the founder $400MM, while the VC firm is picking up $300MM of profit, of which 80% is going back to LPs, leaving $60MM for the partnership, perhaps split 3-7 ways, with maybe a kicker for the partner that sourced and led the investment.

So let's say that's roughly $10MM going to the VC partner.

Well you got one person staring at $400MM and another staring at $10MM.

The interests aren't quite aligned.

Also survivorship bias has us focused on companies that turned down acquisition offers and made it big, like Facebook, Google, Netflix, Snapchat, and so forth, but we don't really hear about the companies that turn down the offer and then fail to meet that acquisition price later, because that story just doesn't sell as well and the company fades into irrelevance so it isn't a piece that is picked up often.

Certainly at the time it could have been a good decision given how new the market was and the potential upside, but obviously hindsight is 20/20.


It is obvious. This is a common pattern:

1. Offer open-source 100% free product that's absolutely ground breaking, unlike anything else.

2. Get a shitload of users and free PR machine gets rolling. Network effects kick in.

3. Go to investors with your active user count in excess of millions.

4. Investors go bonkers and their eyes swell up with all the ways they can exploit these addicted-to-free user base.

5. Company has trouble monetizing the users. Users are pissed.

6. We all wonder why they couldn't make billions.

It just happened to Elasticsearch a couple of months ago when Amazon pulled the rug out from under them. Good. This should be a lesson to all the companies that want to follow this pattern. Without step 1, Docker would have had a much more difficult time getting traction and would have had to compete on level ground. So they short-circuit this competition by going the full 100% free product route.

I have zero empathy for these companies and their investors.


> I have zero empathy for these companies and their investors.

Wow, imagine being so hateful towards amazing open-source tools


being open-source is not a magical shield against criticism


Open source tools were developed by the community, and now they're being harassed for monetization.

How is that being hateful? I am looking out for the community in this sense.


They are not developed by a 'community'. The developers who build it are paid for by investors. Those investors want a huge return. It's not open source, it's bait and wait


Exactly. When you take a look at the mainstream "open-source" projects, you'll see that those projects are all developed and maintained by people who get paid to do it (e.g. Kubernetes, Firecracker, gVisor, Bottlerocket, Podman).


Bait and wait and ... But what's step 3? Doesn't seem to work super well, hmm


MySQL was acquired for $1B. Elastic is currently worth $10B. Redhat was acquired for $34B.

What's not working well?


Good points. So step 3 can be to do an exit, when the software is really popular.

I wonder what's next for Elastic. And mongodb etc. And all their users and customers


When open-source companies develop the majority of their product, haters will complain that they’re “not open enough”.

When open-source companies invite more contributions from their community, haters will complain that they’re “harassing the community for monetization”.

Simply put, some people will never be happy no matter what Docker does, and clearly you are one of them.


There's a silent majority, maybe 100x the size, that is happy with both approaches :-) Look at Actix-web for example (when the founder quit): the happy but silent people were like 500x more numerous than the somewhat angry ones.


> when Amazon swiped the rug from under them. Good.

> I have zero empathy for these companies and their investors.

Are you trolling? Or do you want to see all amazing open source tech move behind closed doors and paywalled gardens?


You're presenting a false dichotomy. I think the parent would have preferred these projects be successful and sustainable without the sleazy tactic of baiting users with free stuff, taking a mountain of VC cash, and then desperately trying to find ways of "monetizing" those users. Bleh.

F/OSS software works best when it's not a revenue source of the business that develops it. "Hey, I ran into $problem at $work and created $project to fix it! I'm giving it away so that it might solve your problem too! Hit me up on $mailing_list if you want to contribute." Red Hat is pretty much the poster child of this model. They make $0 on all the FOSS software they develop and release to the world but they dogfood it into their own products to collect dividends.


> 5. Company has trouble monetizing the users. Users are pissed.

Are we pissed?

Docker hasn't started showing me ads or spewing MOTDs asking for donations.

I haven't run into limitations that would have me purchase an upgrade.

It hasn't been bought by a proprietary company that would make me start worrying about its licencing.

It's a solid workhorse that's been at all my previous jobs and will be at all my future jobs for the foreseeable future.

Most criticisms of it boil down to "there are other container technologies".


Elastic has a market cap of over $10billion. It is literally one of the most successful tech companies ever.


> absolutely ground breaking, unlike anything else

Containers existed for decades as a concept and for years on Linux (using lxc or VirtSquare and later nspawn)


It is true that the technologies existed for a while. I played with (Free)BSD jails and Solaris Zones long before Docker. Docker however made things "trivial" with the integration from container building, to public registry and docker-compose, which can make developer lives (depending on domain) much nicer. With jails and zones it was never as convenient.


That was docker's real value add, making it easy and developer centric. Previous containerization technologies were heavily ops-centric. I come from an ops background and I remember thinking at the time that Docker was in some ways a developer workaround to barriers that ops set up to limit what they could do and protect them from themselves.


Except Docker had already raised $10M before launching. So they do not match your pattern.


Doesn't matter. $10M initial round to offer free product to get users aboard.

The key point is to get users addicted to free.

Uber did this by undercutting the entire taxi industry at a loss. Ever wondered why your rides were so cheap!? Jio did the same in India, offer free unlimited internet on their phone service and wipe out the competition.


Who cares? Why not just milk their VCs for the free handouts and then move on like we do with every other VC funded unsustainable service.


Their initial product was a platform-as-a-service (dotCloud), which wasn't free, aside from the early beta. Docker was essentially a pivot. I see no evidence that the company was founded as a nefarious scheme to get users addicted to a free product.


You just made my point; that's exactly why I've never heard of dotCloud while literally everyone knows Docker.

Docker is popular because it’s free. A lot of people would be upset if they take down their free hosting repositories.

As a user, I want to pay for stuff to sustain companies. They can still be open source.


If enough users had wanted to pay for dotCloud, they wouldn't have needed to pivot in the first place.

My understanding (as a complete outsider) is that they raised a $10+ million series A for their PaaS. The PaaS then presumably wasn't successful enough to raise a subsequent round, but one of their technologies (Docker) was, despite the monetization path for it not being clear yet. So they pivoted to focus on that.

I'm not sure what other outcome could plausibly happen in this scenario. If they stuck with the PaaS as their main focus, they would have gone out of business many years ago.


> Docker was estimated to be valued at over $1 billion, making it what is called a "unicorn company", after a $95 million fundraising round in April 2015.

https://en.wikipedia.org/wiki/Docker,_Inc.


The fact that they had a $95 million round in 2015 and then a $23 million round in 2021 leads me to believe their valuation is a lot lower now.


Mirantis acquired Docker Enterprise, last year, so at the very least they lost whatever portion of their valuation was tied to that... though I can't imagine it was that much.


I’d imagine that some of their valuation was based on the future success of docker swarm and what they would monetise around it


Was there ever a technical reason why swarm never won out in the market? I've still got a couple of multi-node swarms that work great and are WAY easier to configure than k8s. I never really understood why it didn't take off.


This is kind of reductive. You could ask the same for nomad, mesos, and lots of other things. It wasn't just swarm vs k8s


> It wasn't just swarm vs k8s

I'm not seeing anything in their comment that would imply this.


How is it reductive?

I do ask the same question for all those other systems, or the meta-question: "How is it that the bloated monstrosity of Kubernetes somehow became the de-facto container orchestration tool?"

Is this just sysadmins buying themselves job security?


On the contrary, nobody was thinking of the sysadmins (until we injected the notion of Operators rather late).

Devs chose K8s; I think the evangelism phrase was “developer dopamine”. It felt like the Rails of DIY infra, where devs could inherit an opinionated pattern for doing n>1.

There’s still decades of resentment of devs being gated by IT.


Is that so? My experience is the opposite: devs don't want to learn Kubernetes YAML, they just want to git push and have someone else take care of deployment.


Devs chose K8s, really?


As someone who doesn't want to be a sysadmin, but wants to deploy applications to the cloud, the options are fairly limited. Kubernetes handles updates and scaling, networking between services, has managed offerings from all the major cloud providers, has an enormous ecosystem surrounding it, with many tools providing out of the box support for it.

I could use docker swarm, or nomad, but I have to manage infrastructure, write my own integrations, manage the underlying hardware.

Or I can run az aks create and be off to the races


Sure, at this point it's a self-fulfilling prophecy: K8s is almost the only game in town because...it's almost the only game in town.

But how did it get to that point? How did something so big and unwieldy that even billion-dollar cloud providers can't do upgrades on it properly (*cough* looking at you, EKS) become the go-to standard for running apps in containers?


Kubernetes had necessary features first, was more stable, and had many more options that enabled other teams to hang more features off the runtime. Flexibility on networking runtime and rbac both made a huge difference.

That and, unfortunately, the cloud vendors could fully deploy k8s for free because swarm was a hybrid enterprise product.


I'd also like to know more about this, since for smaller to medium sized deployments (think 1-100 nodes, maybe running between 1-1000 containers) Docker Swarm does seem like a pretty reasonable solution, especially with tools like Portainer ( https://www.portainer.io/ ) for a web based UI to manage it.

I guess some of the reasons for the popularity of Kubernetes could be:

- Kubernetes had Google as a big name behind it, so a lot of development resources were put into it and eventually lots of learning resources became available, in addition to overall publicity; for example, I don't think anything like this exists for Docker Swarm: https://www.katacoda.com/courses/kubernetes
- in addition to the marketing and PR, it got picked up as a solution for many managed offerings by cloud hosts (managed Docker Swarm has almost none), a bit like what happened with serverless and AWS Lambda
- Kubernetes allows for CRDs, has a pretty good API and has a large ecosystem built around it, both to help manage its complexity (even distros like K3s and MicroK8s could be mentioned) and to implement additional functionality (Istio + Kiali come to mind)
- this further snowballed into turn-key offerings like Rancher and OpenShift that had financial incentives behind them: the idea of building a new distro that vendor-locks clients into a particular company's offering, resources, support etc.
- almost everyone (oftentimes incorrectly) believes that they need to be able to scale a lot, and therefore chased the hype
- FOMO further motivated a whole bunch of developers to use Kubernetes for their projects, instead of looking at alternatives like Docker Swarm or Nomad
- however, knowing Kubernetes can help one get onboarded more easily and work with deployments in many different companies (except when it doesn't); the skills carry over nicely
Of course, some of these may be my subjective views and not at all accurate.

Personally, i think something between Docker Swarm, Nomad and K3s would be the sweet spot for containerized app deployments and orchestration, but personally i just like the Docker Compose manifests more than i do Kubernetes' and it feels like the popularity of Helm (or Kustomize) supports this line of reasoning.

Ideally we wouldn't even need containers and something like FreeBSD jails with a user friendly API around it would be sufficient. But the popularity that Docker gained seems to highlight that perhaps something was missing from those older technologies.


> the popularity that Docker gained seems to highlight that perhaps something was missing from those older technologies.

I would put money on it being the Dockerfile and the developer UX around that.

I don't honestly find it any more slick to use than e.g. FreeBSD jails, but I've lived in unix for long enough that that's because I was re-using lots of knowledge I already had.

There's a comparison here to the fact that I'm perfectly comfortable writing SysV rc scripts (though BSD rc scripts are vastly more pleasant to put together) but I've watched enough people struggle their way to something that only mostly worked that I can see why for many people writing a systemd unit file instead is a vast improvement.


I think people really undervalue the base design of k8s that is responsible for the existence of CRDs and other related pluggability (even before TPR became CRDs).

Pretty much every other option was more closed and with no extensibility, plus docker swarm was plagued (at least from my PoV) with stories of instability... and I dunno about others, but I and various people I talked with were burned by running docker in production, something that k8s nicely repackaged, removing a lot of the things that were problematic.

All this went into giving some serious base beyond marketing and PR - the closest I've seen from other players is classic Rancher and Nomad, and especially the latter seems much less capable.


I'm really sad that Swarm didn't take off.

I used it extensively at a job for deploying production replicas for developers and full-integration testing (the plan was to eventually deploy prod the same way). It was SO nice to use. If you could use docker-compose, then docker-swarm was a natural next step.

I can't remember the details (it's been a few years), but the biggest hangup I had was that the default network "mesh" wasn't stable. But I was able to work around that by using a different implementation (I think it was the network mesh that came out of k8s at the time; it used etcd).


It all depends on what they got for the money. I have no idea what it's valuation is based on these numbers, but if $95M bought 15% and $23M bought about 3.5% then the valuation is about the same.

I'd expect the valuation to be lower also, but that's based on their revenue prospects, not the $ amount of the investments.


> Considering how entrenched Docker has been in global tech infrastructure for so many years

It is not entrenched at all; that's their problem. You can literally drop in Podman as a replacement, alias the command, and it will "just work" - and Podman has some significant advantages in many environments/use cases. Docker, the company or the product, has no concrete barriers to entry to protect itself.
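
Literally:

  alias docker=podman
  docker run --rm -it alpine sh   # runs under podman: daemonless, and can be rootless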


My best guess is they slowly become less valuable until IBM buys them for the name recognition


Docker has Donnie Berkholz running product as of recently. I met him when he was at Redmonk and I was incredibly impressed by how sharp he is. Given they now have him as VP of Products, this cash injection is certainly more interesting as I'd expect him to do something useful with it.


Thank god, someone's gotta pay for the free container registry all my sideprojects depend on!


Have you tried GitHub packages?


Until GitHub Packages Docker Registry stops asking for authentication to pull public images, it might as well not exist to me. Have they resolved that?


GitHub Packages has an option to mark an image as `public` or `private`. Public images can be pulled without auth (as it should be).


I don’t see the option anywhere, nor can I locate an announcement of any related change, but I tried an old public repo of mine (which was impossible to pull without auth, so it didn’t even work inside GitHub Actions) and it seems to work now. Gotta say the documentation and communication leave a lot to be desired.

Edit: I guess I’m on grandfathered (?) GitHub Packages Docker Registry, instead of the newer GitHub Container Registry (which is still in beta?).


The messaging regarding the migration to GitHub Container Registry is confusing; there seems to be some delay, and no clear guidance on the migration atm.


GitLab registry allows anon pulls from public projects.


Yes. Someone's gotta pay for the free container registry all my sideprojects depend on!


You are all over this thread being snarky, and commenting in what seems to be bad faith. Please stop, this is not reddit.


I’m sorry if it came across this way. The snarky comment highlights pretty well that we magically expect almost without a second thought that GitHub is free.

Support SourceHut and pay for hosting git if people want to break away from ostensibly free services. Or pay for Github. SourceHut is also open source.


In what way is Github not free?


Apologies in advance for the snark:

https://github.com/pricing


Interesting to see this called a Series B. Did Docker hit reset after the split?

Here's a separate thread for "Docker Series B: More Fuel To Help Dev Teams Get Ship Done" - with a bit more info on what the plan is from Scott.

Interesting to see this news on the same day that "We don't need Docker" was also on the front page of Hacker News. I think we absolutely still need Docker in 2021.

https://news.ycombinator.com/item?id=26478669


> "We don't need Docker" was also on the front page of Hacker News.

I read it as a typical "We don't need [Complex Tool] because we don't have [Complex Tool Solutions] problems". Not trivializing it, I think those articles are valuable hype-free analysis of the latest tool-of-the-day. "They don't need Docker" and "We absolutely need Docker" are non-contradictory.


There are two very different and kind of opposite "we don't need Docker" perspectives. One is that of the previous article— "we don't need docker because we don't need containerized environments since our tooling produces a single binary which somehow has no dependencies, configuration, or data files, so there's nothing to containerize."

The other is "we don't need Docker because we use tools like buildah, img, or kaniko to build OCI containers, our devs use podman, and we run this stuff in prod on a someone else's k8s PaaS that under the hood is backed by containerd."


Problem is the cargo-culting. A lot of startups don't have the complex problems

Many are anticipating scaling problems they'll never have and wasting a lot of time, effort and money in that process


Maybe, but it's such a balance. I'm finally getting into Kubernetes and after years of hearing how awful it was to get it going, I was shocked at how painless it was to stand up microk8s locally and sling Helm charts at it, get Jenkins generating agents on it, get metrics from it, etc. If I were deploying something to a cloud, I would absolutely do this approach with some k8s-as-a-service provider over rolling my own machine images or having to deal with remote controlling instances using something like Ansible.

Yes, it would be possible to get sucked down a rabbit hole with over-emphasizing scaling, clustering, whatever upfront, but IMO these tools are now mature enough that it's a reasonable workflow even if you're just deploying a single instance of a container with one statically-linked binary in it.


Yesterday was in a meeting discussing our build pipeline and we had this moment of introspection where we realized "wow, we do a completely containerized, micro-serviced app and it actually works really well for the most part". When Docker was very new I remember dealing with all manner of bizarre issues, mostly because the engineers just weren't used to how to use it yet. But if you have some decent idea of how to architect it then Docker is a huge boon IMO.

People are also totally right to question why some new fancy tool is needed when the old way works. Its best to just view all these things as tools at your disposal rather than necessities.


This 100%

Better to think of these titles as "When you might need Docker" and "When you might not need Docker" so you can consider the tradeoffs rather than interpret it as a blanket statement, Docker is good/bad.


A lot of those articles and posts are what I think of as 'exploratory': someone doesn't want / need a thing, they present their way of doing things, and we can all learn something from them... even if we don't do things like they do.

God knows how much "don't need JavaScript" gets posted...


There's also, though, things like K8S not using Docker, podman becoming popular, etc. Neither is a definitive nail, but it does erode Docker's moat a bit.


K8S uses containerd which is the official Docker runtime.


The difference being that Google has deprecated the shimming to Docker they had been doing with the “Docker” runtime to access containerd, so now it will go straight to the source by default.

Red Hat OpenShift also switched from using Docker as its runtime with OpenShift 4 in 2019, though it was in favor of CRI-O rather than containerd.


All accurate. My point is that Kube deprecating the shimming does not affect Docker's popularity or market share either way. The existence of the shim was an implementation detail, and Docker themselves have been encouraging the switch to containerd. They clearly want the Docker brand to be attached to developer-facing tools instead of a hidden piece of increasingly commoditized infrastructure.

If a critical mass of kubernetes deployments switched from containerd to cri-o, that would be more problematic for Docker, but that seems unlikely to happen. Openshift to my knowledge is the only major kubernetes distribution not based on containerd. At this stage of the adoption cycle, cri-o is unlikely to be more than a distant second to containerd.


Yes, and I did not mean to diminish that context, more so expand upon the "k8s not using Docker" side of things from the parent you were responding to.


"official Docker runtime"

Yes, though not produced by Docker and does not require Docker.


Docker recapitalized in 2019 when they sold the enterprise business to Mirantis.

So, yes Docker hit the reset button and wiped out all the existing shareholders.


They already did series-e and looped back around: https://www.crunchbase.com/funding_round/docker-series-e--a2...


As a 10+ year old company / YC company with a significant amount of market share, I was expecting Docker to be at like series D-F or even going public. Wondering if it is actually common for companies in CA to be like that...


Looks like original Docker did Series E in 2017: https://www.crunchbase.com/organization/docker/company_finan...

This is apparently "new/restructured Docker" which did Series A in 2019. From the footnote in the article: https://www.docker.com/press-release/docker-new-direction

It does seem weird to just start the letters over like that, as if it's a new hot startup.


Doing a series H might make investors think harder about why they need 8 rounds of funding over 12 years or so and still didn't manage to turn a profit.


I had a recruiter a few years ago pitch me an amazing opportunity at a series F company that he wouldn't name in the initial pitch email. I didn't see that as an advantage at all. That's 6 rounds of investors who are all going to need to get paid, and then some, before you see a penny. It also implied to me that the early growth/fun stage was over and they were either limping along or so big as to be unrecognizable as a legit startup.


That's quite interesting; let's see if it will pivot a bit for the "new" trend, like going serverless.


I always get amused by D-Wave, the 22 year-old "startup" that is now on its 19th funding round: https://www.crunchbase.com/organization/d-wave-systems/compa...


Does the Docker company even do anything to stay relevant?

* They popularized containers, but their core tool has been replaced by superior alternatives like Podman.

* They sold their enterprise registry, which would've earned them actual money.

* The consumer registry has tons of free/cheaper alternatives, like GitHub's container registry, something from GitLab, and on the enterprise side Red Hat's Quay.

* Docker Swarm is dead compared to Kubernetes.


I mostly agree here but I'm not really sure that Podman has replaced Docker. I'm also curious how you're determining that Podman is superior to Docker?


It definitely has on Red Hat.


I think podman is a good start, but not there yet. Give it a year or two.


I'm confused -- is this a liquidity event or something (or somehow to let employees cash out options)? The release doesn't mention anything about that, and yet $23M seems like a small amount of money for Docker to have to beg the equity market for, given how small that figure is compared to their revenues:

https://www.marketwatch.com/press-release/docker-monitoring-...


I've been waiting for two features from Docker:

1. Launch a container by specifying its image digest, not image ID [0] [1]. You can pull an image with a specific digest, but then it gets an ID that is unique to the image repository. Later deployments must use that different image ID. This makes deployment tooling needlessly complicated. And it breaks the security guarantees of the digest by allowing the repository to modify the image.

2. Copy a file into a container with docker-compose, without requiring Swarm [2].

Do financial problems explain their slowness? I wish they would just charge $100/year per seat for Docker for macOS and then fix the long-standing problems.

And sell a hosted tool to do trusted builds of docker images from hashed sources. Reproducible builds would be great, too.

[0] https://github.com/moby/moby/issues/16482#issuecomment-29782...

[1] https://windsock.io/explaining-docker-image-ids/#contentaddr...

[2] https://github.com/docker/compose/issues/5523


Heh, regarding your:

> 2. Copy a file into a container with docker-compose, without requiring Swarm

I'm not quite sure I see why you can't get by with "docker cp" and need compose to resolve the container.

But I was also unaware of the Docker subcommands "cp" and "commit".

I think I prefer building containers and mounting config - but I see how the two could be abused, focusing on images rather than Dockerfiles (and woe to the person that loses the carefully evolved Debian old-stable based base image that runs a mix of outdated oldstable packages and a few bits from current stable from two years ago when they were in testing, along with a custom build of node 13 and an outdated driver for a proprietary database...).

Not sure I believe it's a good idea, but now I know it's possible.

https://thenewstack.io/container-basics-how-to-commit-change...
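
For reference, the two subcommands in question look like this (container and image names are made up):

  docker cp ./app.conf mycontainer:/etc/app.conf    # copy a file into a (running) container
  docker commit mycontainer myimage:patched         # snapshot the container's filesystem as a new image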


I would pay $100/month if they would solve the slow filesystem performance. Every workaround has some problems, most often high latency or the sync simply stopping.


I hope they work on Docker Swarm. "Docker compose" for multiple machines needs work!


I'm still using Docker Swarm in production, and it's great! You write Compose files (which are succinct compared with k8s), with an optional smattering of just a few extra functions (such as for configs and secrets). You can easily specify how many replicas you want, constrain them to certain nodes with labels, use health checks for auto restarts etc.

If you can write Compose files, you can do Docker Swarm - it's so wonderfully simple!
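
A sketch of what that smattering looks like in practice (the service, image and label names are invented):

  version: "3.7"
  services:
    web:
      image: myapp:1.2.3
      healthcheck:
        test: ["CMD", "curl", "-f", "http://localhost:8000/health"]
        interval: 30s
      deploy:
        replicas: 3
        placement:
          constraints:
            - node.labels.tier == frontend

Deployed with `docker stack deploy -c docker-compose.yml mystack`, and that's about it.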

I am increasingly nervous about Swarm support staying in Docker though, and plan to at least look into Nomad for my next project.


Your sentiment is precisely why I prefer Docker Swarm. I have heard good things about Nomad. I'm not sure if it's as simple, though.


I believe that Swarm and the other enterprise-oriented pieces all went to Mirantis. Docker, the company, is now specifically oriented to developers.


This is correct. I really think the play by Mirantis is to let Docker Swarm die while maintaining contracts and selling those existing customers on k8s moving forward. I found Docker UCP very troublesome to work with.


I really love Docker Swarm; it's way, way simpler and easier to work with. So far a cluster of 3 managers and 6 workers has run flawlessly, without a single issue.

I started with just 1 manager/worker, grew from there to 3 managers/workers, and later kept the 3 managers and added worker nodes to the cluster.
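
Growing it that way really is just a join-token away (the addresses and token below are placeholders):

  # On the first manager
  docker swarm init --advertise-addr 10.0.0.1
  docker swarm join-token worker    # prints the join command for workers

  # On each new worker node
  docker swarm join --token SWMTKN-1-xxxx 10.0.0.1:2377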

It got me into Docker even though, for whatever reason, I used to hate anything Docker related.


Didn’t they drop Swarm ~1 year ago when they sold their enterprise offering? I thought they decided to instead focus on k8s.

All of this becomes so confusing.


Now there's something called Swarm mode built directly into the main Docker engine. I think that was the "replacement" for the classic Docker Swarm.


That migration was way before the acquisition. The Mirantis deprecation is about Swarm Mode.


All of this is so confusing…


So, who owns the "Docker Engine"? Mirantis or Docker Inc?


Mirantis


While it has way more bells and whistles than swarm, check out nomad if you haven't :)


Yep! Such a fantastic alternative to k8s for most cluster needs.


I'd pay for a pro version of macOS Docker Desktop that didn't peg the CPU at ~30% with no activity in the containers :)


I think that's because macOS has to run a VM to support Docker.

I figured Apple would latch onto Docker and get it running natively, but nobody over there thinks outside the Apple ecosystem. It's all like the Jackling House.

Wouldn't it be cool to say:

  FROM macos:10.13.4
  RUN xcode-build ...


lol, Docker recently consumed 22 GB of RAM on my desktop with no containers running. Windows 10 though


There are at least 10 alternatives to Docker listed in these comments. I can't imagine what someone relatively new to containerisation thinks about that. Is this the equivalent of "JavaScript fatigue" in the DevOps world?

Anyway, happy Docker user here. It changed the way I develop and distribute Python applications. Took me like 2 hours to learn enough to be productive. I'm sure there are better alternatives but Docker just covers my needs, it's well documented and easy to get started, everybody knows it and every cloud platform supports it.


I'm still a little surprised Microsoft (or one of the other big compute-infra players) hasn't bought Docker.


Docker sold its compute infra business.


I wasn't suggesting they buy Docker's (former) compute-infra business, but that Docker would be a good complementary buy for Microsoft or one of the other compute-infra players, since it is an additional tap into the developer market.


Why bother? They have their own container management infrastructure.


Inbound marketing and developer mind share. Seamless integration between docker tooling and Azure etc etc.


If they did, they would rename it as Microcubes.


I feel sorry for the investors. They could have given the money to me and they wouldn't get it back either. The time of Docker is over. Now you can run whatever container engine you want on WSL2. I doubt the macOS market is enough to earn money from. And it's still freeware.


I believe the reason it's "Series B" after like 10 VC raises is that Docker, Inc recapitalized and basically wiped out existing shareholders (former employees, etc). I'm surprised there's not been a stockholder lawsuit, since they could've presumably sold the company for something and returned some money to the prior stockholders.


Funding rounds overflow after Z. It is known.


I've tried giving Docker money MANY times, but their pre-sales support is absolutely abysmal. Maybe this will help fix that!


Somebody should just buy them out of their misery. Looks to be very cheap for the name recognition and some of the people.


I just wish Windows had built-in Docker so that I don't need to install anything like WSL. Making Docker run on bare-metal Windows would be a huge improvement to my life!

Why? WSL or a VM is stupid because it costs around 5-6 GB of RAM without doing anything.


Just install WSL standalone edition. It runs on bare metal with no virtualization. Also GUI is much better and no tracking!


Thanks for the info. Could you give me a link with more info?

From what I've tried, WSL needs a VM to run.


Well, Docker is a Linux technology. If you really need Docker that badly and can't take the overhead of a VM, then it sounds like you would be better off just running Linux like the rest of us. :)


There is a container runtime / API to abstract this. Pretty sure Windows has equivalents of namespaces and cgroups.


The docker tools themselves ARE cross platform, but 99% of docker images in the wild are based off Linux, and have ELF binaries inside. Windows does not have equivalents for every single Linux system call. This is why you need WSL.

You can build an image based on a Windows base image, and run it natively in Windows.
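
A sketch of that native path, assuming Docker is switched to Windows containers mode (the image is one of Microsoft's published Windows base images):

  # Runs natively on Windows - no WSL or Linux VM involved
  docker run --rm mcr.microsoft.com/windows/nanoserver:ltsc2019 cmd /c echo hello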


Which makes sense, containers aren't virtual machines.


I’m sure it has. Containerd will get Windows support soon. So we will have a Full Open Source Container Engine on Windows. (Remember Docker Desktop is closed source).


Aha, I think so.

But I'd need to spend some time with Windows software, too ;)


WSL is not even close to 5-6GB of RAM.

My current instance of Ubuntu 20.04 is hogging around 1200MB while running rust-analyzer and some other devtools.
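
And if it ever does balloon, the WSL2 VM can be capped explicitly via %UserProfile%\.wslconfig (the values here are just an example):

  [wsl2]
  memory=2GB
  processors=2

Run "wsl --shutdown" afterwards for the cap to take effect.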


Windows has containers, no need to use WSL.


Should have gone the SPAC route


should have gone the SPAC route

Or issued Doggercoin


How much money has lxc raised? ;-)


It is my hope that Docker will continue to be a self-correcting problem.


Does Docker make a net profit? Or do "future" potential earnings at a crazy multiple still make sense when yields are rising and inflation is expected to rise?


May our free lunches continue forever. Thanks docker!


I’m happy to see docker get a little more lifeline.


And it's still a horrible experience for advanced users; the only thing the update does is change the UI.


At first, I wanted to read the article. But after having to wait more than one minute to submit my cookie preferences, I closed the tab and left.

If anyone at Docker is reading this: Please reconsider your cookie banner implementation.


Of course it's annoying, but posts like this end up becoming annoying too, and even damaging, when they gather upvotes and mass at the top of a thread, choking out the on-topic discussion. That's why we have this rule:

"Please don't complain about website formatting, back-button breakage, and similar annoyances. They're too common to be interesting. Exception: when the author is present. Then friendly feedback might be helpful."

The upvotes are really more the problem than the comment, but please don't post such comments and then such upvotes will have no surface to stick to.

https://news.ycombinator.com/newsguidelines.html


These cookie banners are so annoying - govt messes this type of stuff up over and over. Let me control cookies on my end; I can choose not to accept cookies using my own browser controls, or delete them after 1 hour, etc.


"govt" didn't develop the specific cookie banner implementation Docker is using.


Yeah the government is far from perfect, but people also treat it as a punching bag. It was companies who used the lack of regulation in tech to abuse cookies, and our government responded with regulation. And now some sites are implementing those rules in very annoying ways. Instead of getting mad at the party who is trying to protect you in this situation, how about directing that anger at the parties who caused the problem in the first place?


I control who sets cookies on my machine. We all do. Don't like them? Block them.

What is gained, SERIOUSLY, by these stupid pop-ups? I'm serious, has anyone analyzed this? It really shows how the heavy hand of govt has ZERO cost/benefit constraint or analysis. Browsing on phones is particularly painful.

I wish we could just set an accept-all-cookies header in our browser, and govt would then let these websites stop displaying these damn notices, banners and consent boxes.

The GDPR ones (if you use European websites) are getting even crazier.


> What is gained, SERIOUSLY, by these stupid pop-ups?

I appreciate the ability to allow "required" cookies, but reject all other cookies.

I agree that I would absolutely prefer an HTTP header for cookie preferences instead of pop-ups. But the new cookie popups add some value to me, in letting me allow session authentication cookies, and reject all others.

An outright block on all local cookies tends to break authentication for many sites.


Why not have a setting or two in the Firefox/Chrome/Safari menu:

  - reject all cookies
  - allow only required cookies
  - allow all cookies
And never have to fall for 100s of different dark patterns by people who have spent dozens of hours coming up with designs that basically trick people into clicking whichever button is highlighted (usually the "accept all cookies" one), just so they can browse some content?


See "do not track" for how that goes. Remember, people that do actively make the choice to design such a popup to trick you, they do not intend to use the solution that respects your interests best. And at the same time, it makes sense that legislation refrains from requiring specific technologies.


Well, that would require some kind of interoperability between the client and the website (like an HTTP header that sends cookie events).

I absolutely would prefer that to the world we have now. I'm all on board, you don't have to convince me it's a good idea.

But, that doesn't exist today. I do prefer having the stupid annoying popup that gives me the option to allow only required cookies to having no choice at all.

The new GDPR compliant cookie popups give me that option. It's a step up above not having the option at all.


> What is gained, SERIOUSLY, by these stupid pop-ups?

Disclosure


I disagree. It's the over-regulation that calls for these pain-in-the-ass implementations.


Downvote all you want, but you're in denial. GDPR and the like are nothing more than "privacy theater". I work in this industry and know it very well. Cookie opt outs or forced opt-ins on publisher pages aren't helping anyone with anything. The whole thing is just a farce so that they can enforce this against bigger tech companies when they want to. They should just tax them outright and save us all the trouble.


No kidding. The first sign is how intrusive it all is. The second sign is how ineffective it is at anything web/crime related that normal people actually care about.

On HN it's like: go ahead and let my cable company and cell phone company track, sell, and target me on my browsing history (wildly intrusive), but the random website selling pet necklaces has to stay on top of all the popups it needs to shove at its users (who will all say OK) just so it can show its page.

If you can't share the data, you buy up the other companies into groups and then you're just using it "yourself", etc.


Malicious compliance.


Google is facing a $5B lawsuit over Incognito mode. That someone goes overboard with this crap is not unreasonable.


yeah this is Docker's legal team.


It's actually not about the cookies (a website is allowed to set technical cookies without a banner) but about getting your consent to track you and to use and sell your data.


Exactly. But trackers try to hide this and make it seem harmless.


GitHub has no cookie banner[0], how come docker.com needs one?

[0]: https://github.blog/2020-12-17-no-cookie-for-you/


Per my comment on the HN discussion at the time, that was based on a very dubious interpretation of the law. They are getting around needing the popup by only using "necessary" cookies, which doesn't need consent, but then they turn around and use the cookies for unnecessary things (like analytics) that therefore do require a consent popup, but they don't ask for it.

Analogy would be like:

Law: you can't store someone's picture or personal data without their consent, unless it's necessary for the transaction.

Most companies: <nag you for consent to store your picture>

GitHub: We authenticate you by your face, so it's necessary to collect that, so we don't need to get your consent for it. Then, once we have it, we do whatever the fudge we want with it.

https://news.ycombinator.com/item?id=25457903


They don't need consent for cookies. They need consent for tracking.


Heads up - most users do NOT care about tracking, and click yes to these popups. Do folks on HN not get this?

This is all posturing. If you want to reduce tracking, use a browser that reduces the tracking. Seriously, just use total cookie protection or something on firefox.


> most users do NOT care about tracking, and click yes to these popups

As a corollary, some users DO care about tracking and click "no" to these popups.


Which ends up doing almost nothing in terms of actual tracking.


There is a cookie banner filter list for uBlock Origin :)


Docker's implementation is instant if you accept, but takes an eternity if you only want essential cookies. That seems in violation of the ePrivacy directive which according to the official website [1] requires that you "Make it as easy for users to withdraw their consent as it was for them to give their consent in the first place."

I guess it's for the courts to decide if requiring the same number of clicks but letting you wait for an eternity is equally easy, but I doubt it.

1: https://gdpr.eu/cookies/


The problem is that it takes so long because they make a bunch of requests to the opt-out endpoints of different services, which takes time. Of course, it is questionable why those requests are essential, but the point is that there is a reason behind this madness.


But why do they block user interaction while doing that? You can call those in the background.

And why call an opt-out endpoint at all, after all there has to be a mechanism that prevents setting the cookie before the user sees the cookie banner. Just continue using that mechanism (e.g. gtag's consent default denied).


They are aware of it; I've contacted them multiple times about it. They just don't care.


In situations like these I can only say a massive thank you to the uMatrix + uBlock Origin author!


TrustArc is incredibly obnoxious


Is this kind of artificial delay even legal under the GDPR? Does anyone know if there have been any lawsuits against TrustArc and co over this stuff? After waiting forever I was greeted with: "Some opt-outs failed. Opt-out requests responded with error or timeout. Please try again". How is this okay?


It isn’t (GDPR regulates how consent can be obtained and shenanigans like these are an obvious example of bad faith) but the entities supposed to enforce the GDPR are absolutely incompetent and don’t care.


Especially for a corporate site (that is, not a click-farm that relies on tracking ads), this is unacceptable.

"TrustArc"? I'd trust the site more if it had a sensible cookie policy.


NoScript + uBlock Origin to the rescue!


For this reason, I find myself often using archive.is to browse text-only versions of websites, including twitter links.

https://archive.is/iGdbt


I use an addon for Firefox which gets rid of all cookie banners [0]. No issues on docker.com either.

[0] https://addons.mozilla.org/de/firefox/addon/i-dont-care-abou...


It's perfectly readable even if you don't click on anything? And it doesn't set any cookies until you agree.


And here I wasn't asked at all about my preferences! I wonder why our experience was so different.


My guess: you aren't in the EU and the top commenter is.


I've checked; mine was blocked by my adblocker.


Docker (or anyone for that matter) should not use TrustArc.

TrustArc is the most evil dark pattern I have ever encountered. Opting out takes >10 clicks, and then it displays a fake progress scanner for over 30 seconds to punish you for opting out (pro tip: just accepting all is instant!).

GDPR is good; having a choice not to be tracked is good. But the pathetic way that websites try to fool you into handing over your data should be punished hard by the EU.


That's some decent patience you've got there.

I give websites 5 seconds, tops.


To muggles it can be explained like this: computers send letters to each other. Getting a webpage is sending a letter that says "can I please GET the document you call /mypage?", to which the other party replies "OK (200); by the way, next time you write, please include this token "<cookie>". Here is your document."

When I send the next letter, I (or my user agent) can choose to send along the cookie, or not. The server does not force me. This is why an EU cookie law regulating HTTP messages makes no sense.
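
The same letter exchange, replayed with curl (the URL and cookie are made up):

  # The server offers a cookie in its reply headers
  curl -i https://example.com/mypage

  # The client decides whether to send it back on the next request
  curl https://example.com/mypage -H 'Cookie: token=abc123'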


Meh, the onus is on the web pages not to track all kinds of shit. Don't blame the law for exposing it.

Also, GDPR consent isn't that particular about implementation. Cookies aren't something special that needs consent; it's their usage.


Podman


What about podman?


Can't help but see this as bad for everything; to me it feels like "curl" or "ssh" raising $23M. I would love for the people who work on these great tools to make lots of money, but what this means is that "now Docker has to figure out how to squeeze money out of this particular (admittedly good) tool in the chain" -- and the tool usually suffers.


Docker was created by a VC-backed startup in 2013. So they have had an incentive to make money from the very start. At no point was Docker not financed by VC money.


That doesn't really address anything I said? Regardless of how it got started -- it simply feels like the type of tool that doesn't fit well with a high-profit model because it is so backend/developer oriented.

I've said it before, if you're trying to be profitable, that means there must be some part of it that is, "if you don't pay, you don't get it." What is that thing for Docker (a generally very open-sourcey thing), and will it be worth it?


This is exactly right. Docker is like lxc/lxd with some extra stuff (much of which I personally dislike).

It's not rocket science, and podman is obviously a drop-in replacement.
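
"Drop-in" in a fairly literal sense; the CLI is intentionally compatible (the image name is just an example):

  # Podman mirrors the Docker CLI closely enough that many people just do:
  alias docker=podman
  docker run --rm docker.io/library/alpine echo hello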

The ssh example is good though - so much depends on ssh, yet how much do we invest in it?


Yeah, this is abjectly terrifying to me. A signal that I need to begin looking at alternatives to Docker.


Docker popularized containers, but it does nothing special. Podman is the closest alternative, but a lot of engines can run images from any OCI registry like Docker Hub.

Other engines you might wanna look at: rkt, LXC with an OCI template, etc.

Kubernetes with CRI-O runs OCI containers as well.



