
So you move from vendor lock-in with the cloud provider to vendor lock-in with an expensive, proprietary* IBM k8s distribution that has its own strange, nonstandard opinions about workflows and that you have to manage yourself?

Don't get me wrong, I appreciate RHAT's code contributions very much; they have done a lot for k8s! But running OKD on one's own is a bad idea, while paying for IBM support makes you as much a hostage as anything Google will do to you. Better to just stick with a distribution that has better community support and wait for RHAT's useful innovations to be merged upstream (while avoiding the pitfalls of their failed experiments...)

* Yes, it's open source, but the community version (OKD) isn't really supported, nor is it widely used, so if you're serious about running this you're doing so for the enterprise support, and you're going to be writing those checks.




Thanks for the edits (and acknowledging our contributions). I wasn't sure if you were just trolling or not before, so I didn't want to engage.

Your concern is valid, and I agree with you that OKD is not supported enough. I have my own theories as to why, but I will keep my criticism "in the family" (but do know there are people who want to see OKD be a first-class citizen, and know we are falling short right now). We had some challenges supporting OKD 4.x because the masters in 4.x now require Red Hat CoreOS (and it is highly recommended for worker nodes too), but RHCOS was not yet freely available. This is obviously a big problem. Now that Fedora CoreOS is out, there is a freely distributable host OS on which to build OKD, so it will be better supported and more usable. FWIW, I have a personal goal to have a production-ish OKD cluster running for myself by the end of the year.

I'll admit I am a little offended at being called a "proprietary IBM K8s distribution," but I don't think you meant to be offensive. IBM has nothing to do with OpenShift, beyond the fact that they are becoming customers of it. Every bit of our OpenShift code is open source and available. You are right that it's not in a production-usable state (although there are people using it), but it's a lot better than you'll get from other vendors. We are at least trying to get it to a usable state, unlike many of them. We are strapped for resources like everyone else, and K8s moves a mile a minute and requires significant effort to stay ahead of. This space is still really young and really hot, and I am confident we'll get the open source into a good, usable state, much like Fedora and CentOS are. I also don't think OpenShift is really all that expensive considering the enormous value it provides. The value really does shine at scale, and may not be there for smaller shops.

I don't blame you for waiting; I probably would too. Our current offering is made for enterprise scale, so it isn't tailored to everyone. I've heard OpenShift Online has gotten better, but haven't tried it myself. Eventually I plan to run all my personal apps on OKD (I have maybe a dozen, mostly with the only users being me and my family), but until then I've been using podman pods with systemd, which will be trivial to port to OKD once it's in a good state.
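
If anyone is curious what that setup looks like, it's roughly the following (a sketch with a made-up pod called "blog", not my actual unit files):

    # create a pod and run a container inside it
    podman pod create --name blog -p 8080:80
    podman run -d --pod blog --name blog-web nginx:alpine

    # let podman write the systemd units, then hand them to systemd
    podman generate systemd --new --files --name blog
    mkdir -p ~/.config/systemd/user
    mv pod-blog.service container-blog-web.service ~/.config/systemd/user/
    systemctl --user daemon-reload
    systemctl --user enable --now pod-blog.service

Since podman can also emit the equivalent K8s YAML (podman generate kube blog), the eventual move to OKD should be mostly mechanical.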


Yes, I realize I will come across as being overly negative here, and I apologize for this.

It's not that OpenShift is bad per se; I just don't imagine it solves many of the problems an org that is fretting about lock-in or GCP pricing will have. Such an org is probably cost-sensitive and looking for flexibility, but OpenShift is expensive, and if you adopt its differentiating features you are de facto locking yourself in. And if you do not leverage those features out of a desire to avoid lock-in, you are effectively paying a whole lot just for k8s support...

And I really should say: for certain orgs (especially bigcos) this may well be worth it; I just don't think it is a good option for anybody worried primarily about avoiding vendor lock-in and keeping costs in check.


Honestly, the fact that it requires RHEL or CentOS was enough to make it not feasible for us. I wish that would change, since I can't think of any reason the distribution should affect OpenShift.


There are several reasons, the biggest being that the nodes themselves are managed by OpenShift operators. If you run (as cluster admin) `oc get clusteroperators` you'll see plenty that are for the hosts, such as an upgrader and a tuner. If the operators had to be distro-agnostic it would be a support nightmare, and we wouldn't be able to do it. With RHCOS (an immutable OS) we also have enough guarantees to safely upgrade systems without human intervention. Can you imagine doing that in an enterprise environment while trying to support multiple distributions? I can't.
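
To make that concrete, on a 4.x cluster the output looks roughly like this (heavily abridged, with placeholder versions; exact names and columns vary a bit by release). The machine-config operator is the upgrader and node-tuning is the tuner I mentioned:

    $ oc get clusteroperators
    NAME             VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
    dns              4.x       True        False         False      3d
    machine-config   4.x       True        False         False      3d
    node-tuning      4.x       True        False         False      3d
    ...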


Can you describe what kind of tuning the RHEL tuning tools do that isn't available using the normal kernel constructs? Last I checked, tuna and others did everything you could do in Ubuntu, just without needing to know the guts of the system.

Again, I think the idea of OS is great, but you've lost us, and likely other big customers, because of that restriction. Having old kernels is just not an option for some people.


I'm not informed enough to tell you what the tuning tools do, so I'll dodge that question. But "[h]aving old kernels is just not an option for some people" is exactly the type of problem this solves. You literally don't have to know or care what kernel your node runs, because it doesn't matter! The OS is a very thin layer underneath K8s, a layer which is entirely managed by applications running as pods (supervised by an operator) on the system. Whatever apps/daemons/services you need to run move into pods on OpenShift. If you need to manage the node itself, there is an API for it. If you truly need underlying access, then this is not for you, but you'd be amazed at how many people (myself included) started out balking at this, thinking "no way, for compliance we need <tool>," and after re-thinking the system realized they really don't. By "complicating" the system with immutable layers, we actually simplify it. It was much like learning functional programming for me: "complicating" programming by taking things away (global variables, side effects, etc.) actually simplified it and reduced bugs by a huge margin.

If you are like me and are old school and think "huh, yeah that makes me nervous" I completely understand that, but we've seen some serious success with it. I'm a skeptical person, and telling me I can't SSH to my node freaks me out a bit, but I'm becoming a convert.
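
To be clear, "no SSH" doesn't mean "no shell ever." When you genuinely need to poke at a host, the supported path goes through the API instead, roughly like this:

    # start a debug pod on the node, then switch into the host filesystem
    oc debug node/<node-name>
    chroot /host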

I would also note that if you buy OpenShift you get the infrastructure nodes (masters, plus some workers for running OpenShift operator pods) for free (typically, but I'm not a salesperson, so don't hold me to that if I've misspoken :-P), so you aren't paying for the super locked-in OS. I suppose you do have to pay for RHEL 8 or RHCOS on the worker nodes running your pods, and we don't support other distros (because we expect a very specific SELinux config and CRI-O (container runtime) config, among other things), so I guess there's some dependence there, although I recommend RHCOS for all your nodes and then just using the Machine API if you need it.
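
To give a feel for what "use the Machine API" means in practice: the machines are just cluster objects you can inspect and scale like anything else, e.g. (the machineset name below is a placeholder):

    # machines and machinesets live in the openshift-machine-api namespace
    oc get machines -n openshift-machine-api
    oc get machinesets -n openshift-machine-api

    # grow or shrink a pool of workers
    oc scale machineset <machineset-name> -n openshift-machine-api --replicas=3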


BTW, I would never run OpenShift after the CoreOS debacle. That was a really sketchy move and still is.

Yes, RH did a lot for k8s, but killing off a working distribution without a direct migration path, other than "start again," will make your customers angry.

Also, I think the OpenShift terminology is way too much, and OpenShift should be a much thinner layer on top of k8s.


I agree, the CoreOS thing went down grossly. It was a technical nightmare, though. They deeply merged CoreOS with Fedora/RHEL and created a hybrid animal. Creating an upgrade path would have been an insane challenge, and in the end the advice would have been to rebuild anyway to avoid unforeseen issues. They could have left the CoreOS repos and such up, though, and given a longer transition period.


I work for Microsoft and I quite like OpenShift (although we have AKS as the default "vanilla" managed k8s service, you can also run OpenShift on Azure).

Sure it's opinionated, but at this point, which flavor of k8s isn't? Even k3s (which I play around with on https://github.com/rcarmo/azure-k3s-cluster) has its quirks.

Everyone who's targeting the enterprise market has a "value-added" brew of k8s at this point, so kudos to RH for the engineering effort :)



