
I can run a Kubernetes cluster with thousands of Linux images and have them communicate with each other, hot-swapping nodes in and out. Such a thing is not out of reach of common folks if you change your mental model of what a computer is. "The network is the computer"



That is an apples-to-oranges comparison. The Z-series already changes your mental model of what a computer is, if you are not yet familiar with it. A Kubernetes cluster is not a computer, it is a cluster of computers.

The end result (computations being done) may be the same, but the way you get there is radically different between those two setups, and you're going to have to do a lot more work on your application on a cluster versus a very large computer with some awesome hardware capabilities thrown in. Not cheap, but for some applications well worth it.


> A Kubernetes cluster is not a computer, it is a cluster of computers.

That statement is true, but the thing it implies isn't. A cluster of computers on a common backplane can certainly present itself as a single (NUMA) computer, with one process scheduler, one filesystem, etc. Most HPC systems are built this way.
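
For a concrete illustration of the single-system-image idea: on such a machine the OS simply exposes the extra nodes as NUMA domains. A minimal sketch in Python, assuming Linux's standard sysfs layout:

    from pathlib import Path

    # Each NUMA node shows up under /sys/devices/system/node;
    # a single kernel schedules across all of them as one machine.
    for node in sorted(Path("/sys/devices/system/node").glob("node[0-9]*")):
        cpus = (node / "cpulist").read_text().strip()
        print(f"{node.name}: CPUs {cpus}")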

Likewise, when your workloads are themselves VMs, there isn't much of a difference between having a Z-series machine as a hypervisor and, say, a vSphere cluster as a hypervisor. Either way, you get a pool of CPU, a pool of RAM, a pool of storage, etc. And either way, you get VM workloads with transparent migration on node hotplug/node failure, etc.


> A Kubernetes cluster is not a computer, it is a cluster of computers.

Well, when you tot up all of the channel processors and LPARs and so on...

A mainframe isn't a computer, it is a cluster of computers.


But that comes at the expense of complicated software. Sure, once you've gone through the effort of procuring all that hardware, configuring it to run Kubernetes, and reconfiguring your software to run in containers... you're good to go.

Mainframes allow you to easily scale your software vertically (but at the expense of complicated hardware). That might seem silly in today's world, but a lot of the software running on Z was written decades ago and the risk of rewriting it is extremely high.


How... does it work then?

Was the programming model for mainframes made to be scalable from the beginning?


Coding for a mainframe in the early days was almost regimented in many companies: lots of dotting i's and crossing t's, code sign-off, lots of QA and checks.

One example would be a bit of COBOL I did, with its associated test program: it all worked and ticked all the boxes, except there was a single spelling mistake in one comment line; actually, I'd missed the full stop at the end of the last sentence. So I had to make that change and retest it all from scratch. How many today would just leave it, or not fully test every instance and associated program, documenting all tests and results against expected results? Yes, we still have industries that would do that, like airlines or finance systems, but back then that was the prevailing standard for all code.

Mind you, source code control today no longer consists of a separate group of auditors physically comparing outputs against previous runs on a lightbox. So there has been much progress, though equally there is lots of automation in which a single flaw can cascade.

Now, as for scalability: mainframes rode what we now call Moore's law even harder than we do today, and IIRC the likes of IBM effectively promised their customers a solid path for tomorrow. And remember, what we'd now call long-term support was and is the mainframe staple, and for them "long term" means a lifetime. You pay for that level of service, but you always have. This enables lots of legacy, well-tested, proven code to carry on being used today.

Equally relevant anecdote: in the 80's I was working for a large company that had, IIRC, a DPS88T (Honeywell Bull), which had a maximum of 4 CPU clusters available. The company was hitting max usage, and I identified a program I could optimise to recover nearly a whole CPU's worth of processing for other use. I also mentioned that despite only officially supporting 4 CPU clusters, it was possible to attach a 5th. My mistake, as the next thing I knew they were onto the supplier demanding a 5th CPU. For them, money solved the problem, and while a cheaper option was available, it would entail change; when it comes to changing code that just works as needed, companies are always reluctant when they can carry on just adding more power.

And the mainframe suppliers make their money supplying more power, year after year, all in a heavily fault-tolerant hardware design that goes beyond just having ECC memory and thinking life is good. Mainframe hardware is well worth looking into, as the redundancy, legacy handling, and robustness are at a whole level many don't appreciate.


The vast majority of mainframe software, especially older software, was batch-processing oriented - often millions of discrete financial transactions - which can easily be scaled horizontally across multiple CPUs or processing engines.
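
To illustrate why that shape scales so easily: each transaction is independent, so the batch partitions cleanly across processors. A hypothetical Python sketch (the record fields and "settle" step are made up for illustration):

    from multiprocessing import Pool

    def settle(txn):
        # Stand-in for real settlement logic; each record is independent.
        return txn["account"], txn["amount"]

    if __name__ == "__main__":
        transactions = [{"account": i % 100, "amount": 1.0}
                        for i in range(1_000_000)]
        with Pool() as pool:  # one worker per CPU by default
            results = pool.map(settle, transactions, chunksize=10_000)
        print(len(results), "transactions settled")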


You can also run Kubernetes on a mainframe now, with zLinux and the hundreds of precompiled s390x (the IBM CPU architecture) Docker images: https://hub.docker.com/u/ibmcom/.
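
For anyone curious what that looks like in practice: in a mixed-architecture cluster you pin workloads to the mainframe nodes via the well-known kubernetes.io/arch node label. A minimal sketch using the Python Kubernetes client (the pod name and image choice are illustrative, not a specific ibmcom image):

    from kubernetes import client, config

    config.load_kube_config()  # assumes a kubeconfig for the cluster

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="s390x-demo"),  # hypothetical name
        spec=client.V1PodSpec(
            node_selector={"kubernetes.io/arch": "s390x"},  # zLinux nodes only
            containers=[client.V1Container(name="app", image="busybox")],
            restart_policy="Never",
        ),
    )
    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)

busybox is a multi-arch image with an s390x build, but any of the ibmcom images would do.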

This is attractive if you happen to already be a large company locked into mainframe contracts. Sure, you don't need a mainframe to do this, but it's also nice not to have to throw away your mainframe and buy a bunch of commodity hardware.

Options are good for everybody. :-)


Well, it gets murky if you start generalising that way. The equivalent of Kubernetes in the mainframe world would be Parallel Sysplex - a cluster of mainframe servers that coordinate with each other and take over upon failure or overload of other participants in the cluster.

A single System z is vertically integrated and scaled. Now, I was talking about a single operating system that can handle hotplugging several of its CPUs out and back in without any kind of service degradation.
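
For comparison, commodity Linux does expose a (far more manual) CPU hotplug knob; a minimal sketch, assuming root and a hot-removable cpu1:

    from pathlib import Path
    import time

    online = Path("/sys/devices/system/cpu/cpu1/online")
    online.write_text("0")  # offline cpu1; the scheduler migrates its tasks away
    time.sleep(1)
    online.write_text("1")  # bring it back

The difference is that on System z the hardware and firmware drive this transparently (spare CPUs swapped in on failure), whereas here it's on you to orchestrate it.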

Sure, I can throw in a few x86 servers and achieve similar convenience using, say, a load balancer, floating IPs, shared storage, and whatever else is needed for a full switchover. You see what I'm getting at: complexity that is on YOUR plate.

I'm not denying that Kubernetes gives common folks a taste of such resilience; I'm just saying that IBM System z's engineered solution is on a different level, one that Kubernetes simply isn't aimed at.

As a side rant: Kubernetes is a magic layer for most of its users, glorious until they hit their first serious problem and have to scratch their heads and look at their complex black box differently. With mainframes, the banks just call IBM :-)


For starters, the software running on a mainframe can include persistent storage.



