If nobody but a business can afford a computing platform, it is doomed long-term. Everything was on mainframes until PCs were good enough, and now 'everything' is on a PC, on-prem or in the cloud. Even the serverless things run on someone's PC somewhere. With the exception of the bleeding edge of high-speed computing over large amounts of data, or old legacy banks that couldn't support a 10-character password if their lives depended on it, x86 and amd64 rule the world.
IBM does not get to boss us around anymore with industry-specific terms that it refuses to replace with standard industry jargon.
> have you ever tried to run a Windows 3 app on Windows 10
Yes. It is called Cardfile and it runs fine, even though Windows 10 tries its best to never work with anything that isn't a telemetry-laden UWP social-sharing app. No expensive mainframe needed.
(Windows experts will note that Cardfile has 32-bit versions and not just 16-bit versions, but hop back even one version of Windows and it can run 16-bit stuff as well, so while my example wasn't perfect, my point stands.)
> Everything was on mainframes until PCs were good enough, and now 'everything' is on a PC, on-prem or in the cloud
100% agree with this - as a business, why would you want to pay the IBM support contract costs? As a developer, why would you want to waste your time learning about this sort of tech (I can't think of a riskier ecosystem to invest your time in learning, in terms of the chance of the skillset becoming obsolete)?
I've yet to hear about any legitimate use cases that couldn't be accomplished using x86/amd64 VMs/machines. Monzo has clearly shown it's not required in banking (and frankly, has shown how slow and generally useless the other legacy banks have been when it comes to innovating on top of these mainframe-based platforms).
Almost all of what cloud providers offer can be accomplished using plain old VMs, yet people find it useful to pay for higher-level abstractions. The whole serverless thing is arguably a return to the mainframe model: managed black-box platform that takes care of replication, failover, storage, etc. and lets you write code that's relatively naive about distributed systems.
A mainframe is just a distributed system in a box, presenting similarly simplified APIs.
> why would you want to waste your time learning about this sort of tech
Mainframes provide the largest single-image machines available today. A single z15 can have ~190 cores and 40 terabytes of memory and can be networked into a single-image group of sixteen machines to form a ~3000 core, 640 TB beast.
So, in essence, it lets you program, today, your laptop of 2050. That's a pretty cool superpower.
Remember when 8 CPU 64-bit RISC machines running Unix were cutting-edge? There is one in my pocket right now.
The one in your pocket has a touch interface with a web browser and a completely different OS and development environment. As a system, it has very little in common with some mid-2000s UltraSPARC derivative.
The same will apply to a hypothetical 2050 laptop. There will be Some Hardware but the software around it will be almost unrecognisably different - assuming we still have laptops in 2050, which I doubt.
Well... it has less in common with the SPARCstation or the O2 I used in the 2000s than with the Linux box sitting to my left, but it still has a multi-tasking, multi-user OS and I can still get a terminal shell on it.
If, in 1990, I had been asked to say what my laptop would look like, I'd have said it'd use a microkernel OS like QNX and have an intelligent assistant not unlike the one in Sun's Starfire film.
You can get an IBM z15 without z/OS (a LinuxONE) and run Linux on it, benefiting from its vast IO. And it probably costs about the same as the Superdome.
> As a developer, why would you want to waste your time learning about this sort of tech (I can't think of a riskier ecosystem to invest your time in learning, in terms of the chance of the skillset becoming obsolete)?
You realize that mainframes aren't just for running 40-year-old COBOL applications, right?
Pretty much any code you might write, including that written in the hippest, trendiest, fad-of-the-week language, will run on top of this "risky ecosystem".
True for any computer, so not a good argument. I interned in the early 90s at a mainframe shop, and everything felt archaic even back then. Creating a file (sorry, 'dataset') is as complex as partitioning a disk on other systems. The punch card still lurked under a lot of parts of the system: 'Of course JCL does not have loop constructs; how would that ever work with a stack of cards?' I've stayed away from mainframes ever since.
Sure, you can run lots of Linux VMs on a mainframe. But now you need the skills for both --very different-- systems.
Some time ago, I was involved in a project to replace a mainframe and one important reason for it was that they could not find anyone willing to replace the retiring mainframe operators. Learn about Linux and you can work in fields ranging from tiny embedded devices to high performance computing. Learn about MVS and you are stuck in a shrinking niche.
> You realize that mainframes aren't just for running 40-year-old COBOL applications, right?
> Pretty much any code you might write, including that written in the hippest, trendiest, fad-of-the-week language, will run on top of this "risky ecosystem".
Nobody has presented any compelling arguments as to why a business would want to do this. Why would I ever want to learn about z/OS as a developer? And if we're not using z/OS, and we're using Linux on IBM Z instead (which doesn't work without virtualization), why would I not just deploy to VMs/Kubernetes/other x86-64 or ARM based machines? It'd certainly save on the licensing costs!
IMHO Linux on IBM Z cannot exploit its hardware capabilities as well as z/OS can. I think that's pretty much the reason to go with z/OS (aside from the existence of legacy applications).
I'm glad other people are aware of Monzo - I'm really looking forward to their deployment in North America. I can't wait to start programming my bank account and not have to have my bank mislead me about what "Bill pay" is (their online banking site won't tell me if a payment was sent through ACH or if it was a physically mailed cheque, wtf!)
> Monzo has clearly shown it's not required in banking
It's not required, but it's much easier to scale up than out. There is a point where managing the complexities of a distributed infrastructure of unreliable systems surpasses its benefits, and it starts to become interesting to consolidate your workloads inside a single, extremely reliable environment.
Indeed, scaling out is more complex than scaling up. Especially when you need to scale a lot.
But maybe the major mainframe-using industry is banking. Processing many transactions does require scale, but not so much that the complexities of scaling out (a distributed Linux-based system) are worse than the cost of scaling up (renting a mainframe).
I'm experiencing this on a daily basis; you "only" need hundreds of servers (not thousands or more) to process banking data (let's say 100-150 transactions per second on average, with peaks of 500).
It's not just scaling - it's the reliability and availability features too. While you can get a very powerful 8-socket x86 with 48 TB of RAM and tons of IO from Lenovo's website, if a CPU or memory DIMM goes bad you'll need to stop it for maintenance.
One of my older coworkers claims that one of the most impressive things he's seen in his career was an IBM mobile tech demo (mainframe in a semi truck) where the technician conducting the demo would reach into the machine and start randomly pulling out CPUs and memory, and then plugging them back in. The display would show the available resources and throughput of whatever computation they were running for the demo decrease as hardware was removed and then increase as it was added back, and the whole thing happened seamlessly without impacting the compute job.
I think CPU and memory hotplug is par for the course these days on many server-class platforms. In fact memory hotplug would be quite useful on mobile hardware since it would allow the OS to power down RAM modules automatically when that memory is not being used, and add them back as needed.
I can power down sockets and cores on my machine, but I'm pretty sure I can't physically pull out a CPU without a decent chance of having the magical smoke escape the machine.
I had to come back to reply to this because I also agree that modern development has way too much churn.
You can choose boring technology like a sqlite backend accessed over the web via PHP and cgi without needing to learn an entirely different paradigm like the shift required for learning mainframes.
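To make that concrete, here's a rough sketch of how small such a "boring" stack can be - written in Python with the standard-library sqlite3 and http.server instead of PHP/CGI purely for illustration; the `notes` table and `app.db` path are made up:

```python
# Minimal sketch of the "boring technology" idea: a web endpoint backed by
# SQLite, using only the Python standard library. (The comment above uses
# PHP/CGI; the stack is swapped here purely for illustration.)
import json
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer

DB_PATH = "app.db"  # hypothetical database file

def init_db() -> None:
    with sqlite3.connect(DB_PATH) as conn:
        conn.execute("CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)")

class NotesHandler(BaseHTTPRequestHandler):
    def do_GET(self) -> None:
        with sqlite3.connect(DB_PATH) as conn:
            rows = conn.execute("SELECT id, body FROM notes").fetchall()
        payload = json.dumps([{"id": r[0], "body": r[1]} for r in rows]).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    init_db()
    HTTPServer(("127.0.0.1", 8000), NotesHandler).serve_forever()
```

The point isn't the language; it's that the whole stack is one file, one database, and nothing that will be deprecated next quarter.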
If we are going to think of history as a wheel and try to learn from that history, I think you have to factor minicomputers into your timeline. They were a gateway to PCs, which led to laptops and smartphones.
Administratively, you could argue that AWS is a time-shared mainframe. Or, perhaps, a time-share company in possession of a handful of mainframes. That opens the door for IBM to walk through and claim that they were here the whole time. Frankly, I'm not sure we want to be there, for reasons you already referenced.
My suspicion and, dare I say, proposal, is that Bryan Cantrill's group is currently trying to reinvent the minicomputer with open-source software and 95% COTS hardware.
Yep. The article is pretty comical too - somehow, being able to write a C or Java app that runs on the mainframe, and having mobile access to the mainframe terminal, means that the platform is modern!
And quite big business at that. Everybody starts their tiny startup with AWS and Heroku, because it literally costs nothing - and if you eventually scale up, you won't switch stacks in such a radical way.
These guys [1] trained a model to generate music. They used a 512-GPU cluster for 4 weeks; the dataset is one million songs. Now imagine what it would take to generate video. 10 years from now, everyone will want to generate something.
I think the data should be measured in units of time. Like, how long it takes your software to touch every entry just once. This explains how the same amount of data may be peanuts or Big Data worthy of a Hadoop cluster, depending on how bloated and wasteful your software is.
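As a back-of-envelope version of that idea (the sizes and throughput figures below are illustrative assumptions, not benchmarks):

```python
# "Measure data in units of time": how long a single pass over the data takes
# at a given effective throughput. Numbers are illustrative assumptions only.
def single_pass_seconds(data_bytes: float, bytes_per_second: float) -> float:
    return data_bytes / bytes_per_second

TB = 1e12
# 1 TB streamed from local NVMe (~2 GB/s) vs. a wasteful pipeline that only
# manages ~50 MB/s per core after (de)serialization overhead.
print(single_pass_seconds(1 * TB, 2e9) / 60)    # ~8 minutes: fits one machine
print(single_pass_seconds(1 * TB, 5e7) / 3600)  # ~5.5 hours: starts to feel "Big"
```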
You should expand your thinking to also consider big data apps such as GIS software and anything that processes multi layer remote sensing data. Many pixels, Many Layers. Sizes start in the GB.
This was just a way to have a basic, rough estimate; of course the actual "feel" of data depends on the computations you're doing on it. That said, it's usually the case that the longer you can keep the data on a single machine, the better off you are. Distributing your computing incurs steep performance costs (that you'll have to pay for in infrastructure) and vastly increases complexity of the software.
Indeed. The binary portability of Windows is tremendous, and the source portability goes even further: at my previous job we built the same MFC codebase to run on a tiny obsolete WinCE4 device that also ran on developer's desktops under Windows 10.
Mainframe will continue to be obscure until there is a downscaled option. The Z-series equivalent of Raspberry Pi.
A little reflection: the Cloud (PaaS & SaaS) is nothing other than Mainframe 2.0.
Same stuff, different time.
- Mainframes provide HA through hardware redundancy; the Cloud does the same using distributed consensus and virtualization.
- Mainframes offer a way to scale your infrastructure through vertical scaling; the Cloud does the same through horizontal scaling.
- Mainframes jail their users in a proprietary software stack (COBOL & IBM); the Cloud does exactly the same through proprietary APIs & services (SQS, Lambda, Aurora).
- Mainframes lock you in financially to a hardware provider (support contracts); the Cloud locks you in financially to a service provider (bandwidth, $/h).
- Mainframes were provided by big corps in a near-duopoly situation. Same for Cloud providers today.
Yes, and you've got the major difference here: accessibility.
- Where with mainframes you had to be part of a mid-sized corporation or a university to be able to touch one, with the Cloud anyone worldwide with a laptop and $15 in their wallet can learn how to use it.
That's a big difference and it (partially) explains the exponential development of the Cloud.
However, in this world nothing is free, and we traded this accessibility for our confidentiality:
- Where IBM had pretty little control over the usage of the hardware they sold you, Amazon now physically holds both your data and your software, and could technically know the color of your CEO's underwear if they wanted to.
It's possible you see it this way because you've grown into your position before there was a cloud, or perhaps you had more free money to spend on things (pocket money or so on).
When I was younger, I was poor. The kind of poor I was (not starving, but no disposable income of any kind and certainly no personal money) is enough to kill the premise of learning a cloud provider dead.
What I mean is: if you need a credit card, it's probably not going to work. There's no way I could attach a credit card to something, there's no way I could spend $5/mo on learning, I didn't have $5 to my name, much less a credit card. And the risk of overspending even if you do have some spare income is fucking -scary- as a person who is poor. The idea that I could not have money for food because I forgot to delete something is stifling.
I learned nearly everything I know on 10 year old hardware that I picked up for free or near free. That is accessibility.
As it stands today I'm pretty flush with cash, but I -still- worry about unexpected cloud costs. Mostly because I work with it every day and know concretely how easy it is for the costs to balloon.
So, no, I think it's the same level of accessibility as mainframes, you can't work against an "old" cloud provider that you picked up for almost nothing. You have to put your money where your mouth is, and experimentation has a financial cost.
Only people who can help you foot the bill (university, employer) or who can clean down environments properly (tutorial labs) are going to make it even remotely more accessible, but it still won't be as accessible even then.
Accessibility of running things at home is why many of us are able to do this job. I think the majority of people my age spun up a PHP/MySQL pairing and started messing with things; it was easy, and it was "free" if you already had a PC, and it could be exposed to the internet... you learned a lot, incrementally.
This is the fundamental reason why certain tech prevails for so long.
> So, no, I think it's the same level of accessibility as mainframes, you can't work against an "old" cloud provider that you picked up for almost nothing. You have to put your money where your mouth is, and experimentation has a financial cost.
I don't agree at all with this. You cannot simply buy an old mainframe. And even if you did, transport costs alone would be higher than the cost of a free(!!!) cloud account. Plus you likely wouldn't have access to the required documentation.
You're basically comparing old PC hardware with cloud accessibility, which is something else entirely from mainframe computing (I can tell from first hand experience as I worked with mainframes for many years).
> Only people who can help you foot the bill (university, employer) or who can clean down environments properly (tutorial labs) are going to make it even remotely more accessible, but it still won't be as accessible even then.
WTH are you even talking about? You can limit the potential costs of cloud computing by sticking to the free offers (every major provider has them). I'd argue that if you can foot the bill for buying, transporting, and powering a used mainframe, you can easily afford a few bucks extra if you accidentally went over the free limit with a cloud service.
The only point I agree with you on is the credit card requirement. The rest is ramblings about a completely different topic - i.e. not related to mainframes at all.
I am equating the "accessibility" of cloud with the "accessibility" of mainframes in relation to PC hardware.
I don't dispute that mainframes are inaccessible.
I am asserting that our lurch towards cloud-hosted solutions with no on-prem replacement is the same lurch towards inaccessibility that stifled the innovation and adoption of mainframes.
EDIT: also, Spanner and co. often don't have free tiers, you're pretty limited in the number of cloud services you can learn, and the danger of accidental charges is inherent to the platform. The new budgeting features (which are new, btw) are good, though you have to be aware of them and know how to configure them.
Even with cloud-specific technologies, you still have the option to use locally running alternatives for development, with a little abstraction.
Unless your core business specifically targets the capabilities of the chosen cloud-specific tech, you can get away with developing on a supplemental solution and rely on CI plus staging environments to uncover and fix the edge cases that you miss this way.
This is especially true with low-level stuff like databases and queues. I've worked like this for years and never had to worry much about it.
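A minimal sketch of what that "little abstraction" can look like - all class and function names here are hypothetical, and the cloud-backed implementation is only hinted at in a comment:

```python
# Sketch of "a little abstraction" over a cloud-specific store: the application
# codes against a tiny interface, production wires in the managed service,
# local development and CI wire in SQLite. All names are hypothetical.
from typing import Optional, Protocol
import sqlite3

class KeyValueStore(Protocol):
    def put(self, key: str, value: str) -> None: ...
    def get(self, key: str) -> Optional[str]: ...

class SqliteStore:
    """Local stand-in used on developer laptops and in CI."""
    def __init__(self, path: str = ":memory:") -> None:
        self._conn = sqlite3.connect(path)
        self._conn.execute("CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)")

    def put(self, key: str, value: str) -> None:
        self._conn.execute("INSERT OR REPLACE INTO kv VALUES (?, ?)", (key, value))
        self._conn.commit()

    def get(self, key: str) -> Optional[str]:
        row = self._conn.execute("SELECT v FROM kv WHERE k = ?", (key,)).fetchone()
        return row[0] if row else None

# In production, a CloudTableStore implementing the same interface would wrap
# the provider's SDK; staging then surfaces any behavioural edge cases.
def register_user(store: KeyValueStore, user_id: str, email: str) -> None:
    store.put(f"user:{user_id}", email)
```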
If you're going to run alternatives to cloud services and call that good enough, then the same is also true for mainframe development.
Meaning, I can't run DynamoDB locally but PostgreSQL is close enough for most tasks, so sure I can kinda run that workload locally, but I won't learn any implementation-specific issues about how it would behave in AWS.
Similarly, I might have a hard time running DB2 for z/OS, but I can also run it on Windows or Linux, and in this case I'm far closer to the actual development environment. The same is true for most of the dev tools and services commonly found in Z-series environments.
I would argue the commenter above you is flat-out wrong: for most core AWS and Azure services, it's far easier to put together a home, single-desktop-PC-based dev environment for mainframe than it would be for cloud if you don't have the ability to actually use the cloud directly.
Agreed - containers and VMs have made this approach even simpler these days.
Mainframes on the other hand are next to impossible to get in hardware and pretty much useless when emulated (what good is Hercules if you still can't get z/OS or VSE for it?).
If you're assuming 'for free', then yes I agree. A full z/OS cluster on a laptop is well within the realms of feasibility using (non-free) zPDT however.
I was saying 6 years ago that virtualization seemed like a sloppy and expensive way to return to the mainframe model. Probably many others saw the trend as well.
These parallels are rooted in centralization, not the technology of mainframes. The centralization-decentralization pendulum has swung back and forth a few times now. Let's hope the next swing is towards decentralized distributed apps under the control of users, instead of this "edge" push that's still centrally controlled.
Don't say this out loud. I said this in 2011 and was mocked and treated like a kid. I just showed the equivalent products on the mainframe and the respective off-host products. The only difference: off-host had many software options, while mainframes had only a few vendors (CA/BMC or IBM).
By the way, how many of you are planning on moving from Node.js to Deno?
On the mainframe there is a language, Rexx, with no changes in the last 10 years.
Every organization I've encountered with mainframe dependencies has them not because they are a strategic asset but because the economics and risks of removing the mainframe from the system dwarf the expensive costs of the hardware itself.
The biggest lesson the mainframe has to teach software engineers is not around vertical scaling, uptime, backward compatibility or IO throughput, but how software quickly becomes a liability and a risk rather than an asset or an advantage when it is not developed in a thoughtful disciplined way.
And if they get someone smart enough to do the migration without sinking the business, they won't be repeating mistakes of the past and replacing their entrenched mainframe systems with more mainframes. Unless they are paying IBM to do it, and even IBM will likely be selling something sexier like a private cloud.
> the economics and risks of removing the mainframe from the system dwarf the expensive costs of the hardware itself.
Also factor in the unknown risks and costs that will add up in the new solution. There are abundant horror stories of migrations that went bad and had to either be aborted and reversed, or that cost much more than originally expected, to the point of making the move a bad financial decision.
> when it is not developed in a thoughtful disciplined way
Most organizations don't allocate the budget for that and, if they did, they'd think the hardware is not expensive at all. Any hardware.
All the commenters trying to shoot down the article with their fervour for modern apps deployed onto cloud infrastructure are failing to see the irony.
The shiny app they built a few years ago and deployed successfully into a company's systems will be maintained for a few years. Then some people will leave.
Then documentation (if there was any) will be lost.
A few rookie developers will be brought in and will try to learn the system and add a few widgets or features on.
Then in 20 years time, the app will be like the COBOL programs running on dusty mainframes.
Most people believe that their software won't be used for more than a couple of years at most, especially young people who (seem to) think companies just rewrite everything to the latest and greatest (currently js/react; tomorrow something else) every few years. So they are not thinking about 2 years let alone 20 years.
As someone who is maintaining products written (by me) 15 to 25 years ago, I know that I was absolutely wrong about that assumption. A company will not touch software if it works, no matter what the latest or greatest is and no matter if it looks like crap. I shudder to think, with JS projects breaking between minor versions of libraries, what happens if there is an issue 20 years from now... Egotistical really to use 'the latest and the greatest' and then just let 'the next person' live the pain of supporting that. You can say whatever about php and java but 20 year old code works fine with very minor changes; I have Node/JS code from 2 years ago that doesn't work anymore at all after security updates.
Well that's a false dichotomy if I've ever heard one. All those years of running in production will do nothing for you when the business asks for tweaks.
> You can say whatever about php and java but 20 year old code works fine with very minor changes; I have Node/JS code from 2 years ago that doesn't work anymore at all after security updates.
Doesn't that also depend on any libraries used? Java and the core framework have been stable but the rest of the ecosystem not so much.
I do think we have a stability crisis in "modern" tooling. If you're a company that wants to upgrade your old COBOL code to something that will run for another 30 years with little to no maintenance, then there are very few options. C, C++, maybe Java and PHP are the only mainstream options that have shown any long-term stability, and even with those you have to be very careful about dependencies.
This! Other examples of good software that may last forever are tar, groff/man, FAT-fs, vi/m, Emacs, etc.
There are lots of perfectly working 'little' programs.
The comments here are a wonderful insight into the SV and HN mindset. Developers want to develop.
Take the classic unicorn triad of finance/sales/development, and make a plan to exit as close to the peak of the revenue curve as possible. After all, the instantaneous value of peak income at the inflection point is pretty high. If all goes well (and it almost never does), you could be earning billions of dollars per second, at least if you ignore the burn rate.
It turns out that some 99% of the money to be made is under the curve between the initial peak and the end of the long tail -- and the end is not yet in sight for many mainframe applications. They're busy earning billions of dollars a year.
This doesn't really matter. Legacy apps need to be modernized but their owners can't afford/aren't willing to. It's not the mainframe holding these apps back... the problem is much much worse.
Also, mainframes are the most expensive way to run modern code so the fact that it's possible is not interesting.
> Legacy apps need to be modernized but their owners can't afford/aren't willing to. It's not the mainframe holding these apps back... the problem is much much worse.
I've been thinking about this a lot. I don't think many businesses have the margin to pay for the vast cost of a rewrite off of legacy. So maybe it's simply an evolution of business. Old businesses die and young businesses take their place with more efficient technology.
Unfortunately, it does not happen in industries that are heavily regulated, and probably where we need it most.
In a business context, the outcomes that your software enable are the things that matter. If your legacy platform still works and your business can still accomplish the things it wants to accomplish, why add a huge expensive distraction?
IBM still makes mainframes. You can still buy OpenVMS systems. There is no reason to believe that you will wake up one day and your technology will no longer exist. Don't hire people from Stanford or MIT. Recruit at 2nd tier schools and get them a year or two before you need them. They can learn all they need on the job, and they cost less, too.
Mainframes are not slow. IBM's POWER architecture has some downsides but it is no slouch in transactions/second.
> payments taking over a day to go through
That's due to the protocol. https://en.wikipedia.org/wiki/ACH_Network "when a real-time transfer is required, a wire transfer using a system such as the Federal Reserve's Fedwire is employed instead".
Consider that when you swipe your credit card and get payment approved in a store, that’s a round trip to the mainframe back at Visa HQ. Or when you withdraw cash from an ATM. Mainframes are perfectly capable of near-real-time operations at massive scale. The problems you mention are not technological.
> I've been thinking about this a lot. I don't think many businesses have the margin to pay for the vast cost of a rewrite off of legacy. So maybe it's simply an evolution of business. Old businesses die and young businesses take their place with more efficient technology.
The worst part is that a clean, entire rewrite isn't necessary: while everyone's familiar with a gradual refactor (module by module), fewer people seem to be aware of how much better a proxied gradual rewrite is: a single module/area of the old system is rewritten in the new system, but the old system's exposed interfaces remain the same and simply proxy or pass through to the new system (to support existing clients and use cases), while new clients can access the new system directly.
It's an approach I used for a rewrite of an old ASP.NET WebForms application to ASP.NET Core (famously, the two are very incompatible: old WebForms *.aspx markup and stateful Page classes (don't forget `__VIEWSTATE`!) simply cannot be refactored to work in ASP.NET Core; the only way forward is a total rewrite). But when an old .aspx is replaced, there's an `IHttpHandler` that takes its place and proxies the request to the new ASP.NET Core system, and end-users are none the wiser. And by "proxy" I don't necessarily mean an HTTP reverse proxy - 90% of the time the new .NET Core code was running in-proc and invoked as a normal library reference, because it targeted .NET Standard and so ran on .NET Framework 4.x instead of only the .NET Core runtime.
Now I know I can't compare a line-of-business web application to a billion-dollar IBM-powered banking/insurance/public-sector system - but I believe the same principles apply. Provided the original application _can_ be decoupled internally (thus allowing that proxying in the first place), a gradual in-situ rewrite should be doable.
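Stripped of the ASP.NET specifics, the front-door proxying looks roughly like this - a hedged sketch with hypothetical route names and handlers, not the actual implementation:

```python
# Sketch of a proxied gradual rewrite: the legacy front door keeps its public
# interface and forwards any route that has already been rewritten to the new
# implementation, in-process. All names below are hypothetical.
from typing import Callable, Dict

Handler = Callable[[dict], dict]

def legacy_orders_page(request: dict) -> dict:
    return {"body": "old stateful page", "system": "legacy"}

def new_orders_endpoint(request: dict) -> dict:
    return {"body": "rewritten module", "system": "new"}

# Routes migrate one at a time; existing clients never see the cut-over.
REWRITTEN: Dict[str, Handler] = {"/orders": new_orders_endpoint}
LEGACY: Dict[str, Handler] = {"/orders": legacy_orders_page,
                              "/invoices": lambda r: {"body": "old", "system": "legacy"}}

def front_door(path: str, request: dict) -> dict:
    handler = REWRITTEN.get(path) or LEGACY[path]
    return handler(request)

print(front_door("/orders", {}))    # served by the new system
print(front_door("/invoices", {}))  # still served by the legacy code
```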
I think that last part is exactly the problem. Many legacy systems are poorly modularized and so large with such complex interactions that it's hard to figure out where to start cutting.
Given all the attention COBOL has gotten lately, I still haven’t seen any discussion of the characteristics that make it relatively “safe”, secure, and fast. These are: all memory is statically allocated (no dynamic memory allocation), and there are no user-defined functions and no stack. Of course I’m referring to the 85 standard here; later versions added these things, but 85 is very common on mainframes (my understanding, please correct me if wrong).
These two things disallow entire classes of exploits and errors.
Edit to add: rather than putting out these puff pieces, IBM needs to figure out how to get mainframe access for developers; only then will they see usage increase.
While not addressing the point about writing programs without heap allocation: a lot of programs can be compiled to execute using only mov instructions: https://github.com/xoreaxeaxeax/movfuscator
OK, so how do you call malloc or syscalls with mov instructions? Do you just (assuming you turned on an executable stack) use mov to write the corresponding non-mov instructions necessary to do those things to memory, then mov the address of these instructions to the IP register? I'm struggling a little here.
These are classified as "libc things". When using mov you can still access libc like any other program. Go study movfuscator (which I linked again) if you would like a full understanding[0]. When you have to interface C with a not-very-Unixy system like WebAssembly or these mainframes, you have to link to a compatible libc that remaps everything appropriately.
Malloc isn't particularly more special than any libc function. It simply supports the fantasy that the memory is unlimited in scale. It was always of a static size, you were just handing off the chore of dividing it up. You can usefully implement dynamic sizing in languages with static memory allocation by pre-allocating large arrays and then maintaining handles and liveness flags. This is, in fact, how every console game used to do it until available memory sizes got into the hundreds of megabytes. Fragmentation and the subsequent crashes from OOM presented too much of a concern before then.
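A rough sketch of that pool-and-handles pattern, built around a hypothetical `ParticlePool`; shown in Python only to keep the examples consistent - the technique really pays off in languages with manual memory management, where the pool is a genuinely static array:

```python
# Pre-allocated pool pattern: a fixed-size set of slots, integer handles
# instead of pointers, and liveness flags. Conceptual sketch only.
class ParticlePool:
    def __init__(self, capacity: int) -> None:
        self.capacity = capacity
        self.alive = [False] * capacity   # liveness flags
        self.x = [0.0] * capacity         # pre-allocated component arrays
        self.y = [0.0] * capacity

    def allocate(self) -> int:
        """Return a handle to a free slot, or -1 if the pool is exhausted."""
        for handle in range(self.capacity):
            if not self.alive[handle]:
                self.alive[handle] = True
                self.x[handle] = self.y[handle] = 0.0
                return handle
        return -1  # out of memory: fail loudly instead of fragmenting a heap

    def release(self, handle: int) -> None:
        self.alive[handle] = False

pool = ParticlePool(capacity=1024)
h = pool.allocate()
pool.x[h], pool.y[h] = 3.0, 4.0
pool.release(h)
```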
Ah I thought the idea was to make the whole program use nothing but mov instructions. Still cool though. That's an interesting point too about console games, it seems to make a lot of sense if the whole machine can be dedicated to running just your program
No, it's not. A mainframe is awesome for certain classes of problems, but I don't generally think of those classes as modern. Yes, you can run ML on a mainframe, but its performance isn't going to scale as broadly as throwing a thousand ephemeral virtual hosts at the problem (why virtual? because I'm not running ML 24/7 and don't want to pay for the cluster while everyone is at home asleep). Sure, maybe it can handle 12 million web requests per second - I'm skeptical about the complexity of those requests but let's roll with it - but now the computing resource is more robust than the network connections you have plugged into it. Are the pipes into your data center truly more reliable than a multi-AZ, multi-region AWS deployment?
To put it bluntly, mainframes are wonderful solutions to problems that I don't personally find interesting. (Note: personally. It's not like I don't think there's a legitimate need for the financial systems that mainframes power, it's just that I'm glad I'm not the one who has to run them.) The problems I enjoy involve horizontal scaling at levels that a mainframe just isn't going to reach, and those are the kinds of problems that most people would describe as "modern". And as I don't think mainframes are a good solution to what I consider to be modern problems, neither do I consider them to be a modern platform.
Again, not that this makes them less valuable for the problems they do solve. I don't want to run a stock exchange with its gazillion transactions (as in the database and financial kind) per second on a cluster of x86 VMs.
Edit: But you want to make them modern, make it easier to play with the damn things. I can get almost any other kind of computing tech into my house for under $1,000, but it's freaking impossible to experiment on a mainframe without selling a kidney or signing a contract in blood. If your platform's "learn more" link goes to a form read by a salesperson, you've already lost me. If I have to pull out a credit card to tinker with it, you've already lost me. Contrast with AWS, for example, which practically throws free resources at you to get you interested and invested in the platform. Where are my free IBM resources that get me a mainframe login and access to developer-useful documentation? As it stands today, if I want to learn how to operate Big Iron, I pretty much have to take a job doing it full-time. Make it easy for me to poke at one at night and on weekends if you want me to tell my boss about this cool thing you're trying to sell.
I do mainframes (I manipulate SMF data and other performance data, for those in the know) and I think there are many interesting problems in the domain of business applications, actually.
Another is RAD. The modern trend is to use Turing-complete languages such as Java, while in the past a mix of application-oriented languages (COBOL on the MF) and system-oriented languages (assembler on the MF) was used.
I think there is some value in having a high-level language that is more domain specific for certain applications. For example like SAS, but SAS is old. Yet I don't see anything (at least anything open source or popular enough) that would be able to replace it. SQL can do some things, but it is also getting a bit old.
I will not comment much on the "modern" thing, I think the hardware is actually plenty modern (for example, IIRC z/Series processors were the first that had hardware transactional memory, and they have amazing hardware tracing capabilities), and as far as the software - I find that the older software is usually more performant, for whatever reason. There is also difference between Unix and mainframe philosophy about how to approach stuff (Unix is more permissive, mainframe is more tight), which I think has its merits as well.
> Unix is more permissive, mainframe is more tight
That sounds interesting, any examples? The usual complaint about Unix is lack of fine-grain control over filesystem, but in practice this does not seem to be a problem, there are ACLs, selinux, etc. Is the situation better on mainframes?
Yes, in general security is controlled through ACLs on the mainframe. Execution of system programs is also limited. But I think it goes deeper, basically lots of resources are limited or require additional security access, so if something fails, the damage it can cause (due to overuse of resources) is limited.
I don't know if there is anything you can't do on Linux, except perhaps pervasive encryption, which allows some file operations (such as backups) to run on encrypted files that the user (under which the backup or restore runs) cannot read.
There are also some really old things that would be considered unsafe today, but IBM is gradually fixing and deprecating them. This is pretty much because, historically, z/OS was quite customizable, and a lot of user-space programs (utilities and middleware, not applications) had pretty intimate hooks into the operating system.
Also the auditing (not only in security) is excellent. The visibility into what the mainframe system is doing is unmatched on any other platform, I think.
Even today z/OS has a lot of "exits" that are, literally, addresses in system binaries where you can place the address of your own routine (which is linked directly into the OS) that will get called when the system performs a certain action.
Anyone who spent more time emulating MVS (and possibly, with a weird accent, emulating z/OS ;) ) has probably encountered things like the assembly routine that you can link into every MVS version since 3.8 (all the way to recent z/OS) which hijacks tape mount requests before they are sent to the operator, enabling you to connect whatever kind of tape switching you want - in this case, doing special Hercules magic to change the tape image based on one of the parameters.
Except that mainframe-like platforms are all about vertical as opposed to horizontal scaling, particularly for transactional workloads where this kind of scaling is highly relevant. You can't horizontally scale a database (beyond trivial sharding of logically-independent partitions, perhaps with some further dependency on rarely-changing data that can thus be replicated/cached locally) without severe tradeoffs in both latency and reliability.
That's pretty much what I said. Conversely, you can't vertically scale a database (beyond throwing ever more CPUs, RAM, and RAIDed SSDs) without severe tradeoffs in electric bills and support contracts, and while still not getting the advantages of geographic redundancy. For instance, notice that none of the FAANGs are serving their websites off mainframes. Cost-per-server isn't the only reason for that.
My point wasn't that mainframes aren't useful - they are! - but that they're not good solutions for a lot of the new problems people are trying to solve lately. For instance, a huge portion of fun challenges are perfectly fine with eventually consistent solutions that optimize for availability, where one single piece of reliable hardware behind a couple of redundant peering connections simply can't compete with 1,000 reasonably reliable nodes distributed globally.
> For instance, notice that none of the FAANGs are serving their websites off mainframes
Obviously, nobody cares if the data that makes up your FB/Twitter timeline or Google search results page is less than completely reliable. The new problems people are trying to solve with horizontal scaling and distributed computing may well be "fun" for users and developers alike, but fun is quite different from generally worthwhile.
I didn't say they were unreliable. I said they were eventually consistent. ACID is a wonderful model for lots of data problems, but wholly inappropriate for lots of things where availability matters more than perfect consistency before it's reconciliation time. For instance, suppose you're reading data from a million sensors in a particle collider. You absolutely don't want to model that as a million `BEGIN; INSERT INTO readings (sensor, value) VALUES ('particle_489285', 83); COMMIT;` queries. It's probably more appropriate to shard those sensors to N servers that log the readings to local journals in realtime, then upload them to a central data warehouse later after the experiment is finished. Reliability is extremely important there, but realtime ACID-style consistency would add nothing of value.
Financial transactions aren't the only "real" work out there, and there are lots of "fun" projects that are deadly serious.
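A minimal sketch of that journal-locally, reconcile-later approach - the file name and the bulk loader are hypothetical stand-ins, and you'd run one such process per shard:

```python
# Each shard appends readings to an append-only local journal; a separate step
# bulk-loads the journal into the warehouse after the run. Sketch only.
import json, time

JOURNAL_PATH = "readings.jsonl"  # local, per-shard journal (hypothetical)

def record_reading(sensor: str, value: float) -> None:
    # Append-only write; no transaction, no round trip to a central database.
    with open(JOURNAL_PATH, "a") as journal:
        journal.write(json.dumps({"t": time.time(), "sensor": sensor, "value": value}) + "\n")

def upload_to_warehouse(journal_path: str) -> int:
    # After the experiment: read the whole journal and bulk-load it in one pass.
    with open(journal_path) as journal:
        batch = [json.loads(line) for line in journal]
    # warehouse.bulk_insert(batch)  # hypothetical bulk loader
    return len(batch)

record_reading("particle_489285", 83.0)
print(upload_to_warehouse(JOURNAL_PATH), "readings staged for the warehouse")
```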
Altavista started on a DEC mainframe. Google's architecture (and Amazon's) was forced by a powerful constraint: back then they didn't have enough money.
Given the size of the web back then, running Altavista on a mainframe made sense. But how many exabytes can you stuff into any small collection of units today if you wanted to?
Even today, reading the specs for the unit mentioned in the linked article, I’d have a hard time imagining running the Amazon.com website off a single mainframe. The shopping cart? Sure. The funny matching like “pjama wih ft” showing me “pajamas with feet”? I’d hate to be the one who had to power a million of those per second.
Usually, yes. With IBM, on JES2, yes. But IBM also had JES3, which is probably the oldest Kubernetes-like software around; it was specifically designed to spread tasks across multiple mainframes. It wasn't very popular, as most orgs found it much easier to buy a more powerful mainframe.
I find it interesting that the problems you describe as "modern" seem to have had the solution before the problem. I mean it's probably because we had the solution of scalable parallel computing available via the cloud that the problems started to be solved. Are the problems themselves modern?
I’ve worked at a few shops doing bleeding edge stuff, so that probably colors my perceptions a lot, but you’re probably onto something. And that’s kind of the history of computing, isn’t it? Weather prediction used to look a lot like “we got a telegraph from a city 200 miles west that it was raining today, so maybe we’ll get some rain tomorrow?” to “hey, these computers are getting pretty good at math. Think we could model a cold front?”
I remember playing with neural networks on my home Amiga way back when, and that was a fun toy. Now we say “here’s a petabyte of epidemic records. See anything unexpected?”
Or one time someone really wished they could do math even faster than the GPU would allow, and happened to glance at their graphics card, and now we have Dogecoin.
I think our industry has a great record of figuring out new ways to use things that was wholly unlike what the inventors had expected, and that’s probably what makes predicting our future directions so difficult. Like, we have ideas about problems we’d like to solve with quantum computers, but once they’re widely available I guarantee someone will realize they can also be used to make better Spotify playlists or such.
>Nevertheless, many older applications need to be modernized, for example, to create a modern web or GUI interface instead of the ubiquitous green screen interface common for older mainframe applications.
Why do they "need" this? What's wrong with a TUI? Must every single piece of software have a "modern web" interface?
If the option is between a TUI that's instant, and a React/Vue/whatever app that takes 15 seconds to load and breaks the back button... maybe the TUI ain't so bad?
Yeah, I found that argument to be strange as well. Not everything needs to have a super dynamic UI.
That said, I do think that companies are running into serious issues trying to maintain and integrate legacy mainframe systems without fully or properly considering the maintenance costs.
Take the US airlines. My understanding is that these airlines are largely running on legacy booking and inventory systems that have been duct-taped together over the course of decades due to mergers and acquisitions, etc. Several large airlines have suffered major outages that were blamed on internal IT failures, sometimes resulting in grounded planes.
I’d love to hear more detail around this problem from those who have the context, but my understanding is that these companies have many old systems that are costing a fortune to maintain and becoming less and less stable. I’d bet that you could find some pretty serious architecture flaws without digging too deeply into those systems, and I’d argue that those would be issues worth addressing above a UI.
Problem is that management can see and grasp a new UI far easier than architecture changes.
Major airlines have had huge chunks of their IT systems outsourced to cloud services for longer than the term "grid computing" has existed. Which isn't that surprising when you consider that the Cloud is literally the old mainframe model.
The issues you've heard are often related to intersection points between different systems handling different areas, for example the main passenger-handling system is usually outsourced to Amadeus (especially in EU it seems) or Sabre, then you might have onboarding software that can sometimes introduce you to mainframes most people never heard of (I know that at least one airline used Unisys). Some have in-house components that integrate with the rest. There's integration with airport baggage handling. There's plane monitoring and maintenance (which might include "cloud" services from vendors of the plane and separately of the engine), etc. etc.
Some of those systems are very new - KLM/AF have recently made a large enough development on z/TPF that they actually call it a new system. I know that I was VERY impressed with how fast a trained clerk worked using it, getting me a new flight after a volcano grounded everyone, adding the frequent flyer program, etc. along the way. You know all those stories about expert Vim users? A typical trained user of one of those systems is at least as fast as them.
Quite often, the finger for failures is pointed (often for good reason) at outsourcing companies. Interestingly, it tends to not be Amadeus or Sabre that are pointed at, but well known companies like TCS, Wipro, Infosys...
The first thing the vast majority of users (virtually all) do when they see an old, ugly GUI is hit the back button.
Decent SPAs don’t take 15 seconds to load and they don’t break the back button. Stop spreading FUD.
It’s like the HN myth of how there’s supposedly all these old devs writing COBOL for the government and in reality the government can’t find any developers to maintain their ancient apps
> It’s like the HN myth of how there’s supposedly all these old devs writing COBOL for the government and in reality the government can’t find any developers to maintain their ancient apps
Exactly. I worked at a private, multi-billion-dollar organization a few years ago. There were plenty of "old devs" working on their COBOL software, but they also got paid well into six figures to do so. And they hired recent grads to work on those same systems. New grads typically started around 60k, more if you negotiated better. Whereas these gov't jobs only pay around 35k - 75k for the same work, even for someone with years of experience.
> The first thing the vast majority of users (virtually all) do when they see an old, ugly GUI is hit the back button.
These aren't public-facing apps you have to entice people to use; it's part of their job, and if someone hits the back button and refuses to use it, they're fired. Modern, user-friendly versions typically have half the functionality and take longer to do anything; they're universally reviled by people who took the time to learn the old green-screen versions.
> Decent SPAs don’t take 15 seconds to load and they don’t break the back button. Stop spreading FUD.
Decent SPAs are few and far between; they take extra work to get functionality that was free before SPAs.
The kind of users that are going to use the mainframe are not going to hit the back button.
Also, in mainframe-using orgs there's often this weird idea of giving people training. Sometimes it's over a month of training, where you have to sign a loyalty contract saying that you're going to work at least X months or repay them before you're allowed to take a job with a competitor.
I've heard of real, major issues in USA where organizations that were willing to train someone from zero just could not get them over the fact that the terminal was block-based instead of character based...
In terms of cost comparisons, I don't have any raw data that I can share - so, apologies and feel free to take the number I stated with a pinch of salt.
In terms of security, z-series can be configured to meet the common criteria EAL 5 and beyond. Which is pretty d*mn secure and I don't believe can be matched by commodity hardware.
In terms of transactions per second, which is one of the most important metrics here, the z-series can cope with up to 1.1 million per second. I believe the chunkiest single AWS instance maxes out at 10,000 tps.
Then there's reliability... a lot of these mainframe beasts have years, if not decades of up-time in a lot of cases.
On the downside, developing on them is clunky and there's a severe shortage of expertise. College graduates barely even know mainframes exist.
As part of my job over the past 2 years I’ve been migrating a data warehouse with data being loaded from Mainframe into BigQuery. I’ve learned a lot and gained respect for the platform.
One of Google’s differentiating products is the TPU, to train neural networks in hardware. Meanwhile, the mainframe has specialized hardware for TLS, gzip compression and encrypted storage.
Can anyone explain how Mainframes differ from commodity servers or a high-performance computing cluster with low-latency networking? Is there really a practical difference? I get that they are useful for handling banking transactions, but I've never gotten a good explanation of why a collection of networked servers is poor for this.
A cluster of x86 servers can be made to be as or more reliable than a mainframe and provide equivalent or superior performance.
The difficulty is that doing this requires building a full vertical around the idea, and it's so hard to sell new big scary ideas to the business. So, the entire space is now fractured into 10000 vendors trying to sell you various nuanced aspects of the big picture. Combine with the cloud hysteria, and there's virtually zero chance you get to even experiment with this idea unless you take it up as a hobby and/or build a company around it.
To do this right for commodity x86 - that is, make it as or more reliable than a z15 mainframe with equivalent performance - you have to do it on-prem and in a datacenter designed around the topology of the cluster. You cannot run something like this inside AWS/Azure/et al. and expect it to perform well. You need direct fiber links between nodes without switches in the middle. Minimum of 5 nodes for N+2 redundancy. N+1 paths between every node. 2N+1 power/cooling. There is no compromise you can make on any aspect of such a solution if you want to meet or beat IBM's insane numbers. On top of this, you would have to develop a software platform that is directly aware of your physical topology in order to properly leverage it. This implies writing new business software against your new platform. This is risky business. No reasonable shop would sign up for that ride unless you had 2-3 success stories and living, breathing clients to testify to the same effect.
> A cluster of x86 servers can be made to be as or more reliable than a mainframe and provide equivalent or superior performance.
For some carefully chosen, possibly hypothetical values of cluster, x86, server, reliable, and performance.
To go from a hypothetical to a concrete would require an investment of time and money of sufficient magnitude to make the purchase/lease of an extant mainframe a compelling choice.
IOW, if my grandmother had wheels, she'd be a horse trailer.
> To go from a hypothetical to a concrete would require an investment of time and money of sufficient magnitude
As another commenter mentioned, Oxide Computer seems to be going for something not unlike this. People might compare their system-design target to the minicomputer (or the "supermini") as opposed to the mainframe, but that's just semantics at this point.
Mainframes are designed and tuned for throughput in IO-heavy workloads, not compute performance. Reliability is also a key concern - in a typical system, every critical component might be triply redundant and use voting logic throughout to detect system faults.
Low-latency networking is obviously relevant to anyone who might want to achieve similar levels of throughput+reliability by scaling out towards datacenter-scale computing. But modern mainframes are typically single systems - perhaps with a secondary system in an off-site data center for automated failover, but nothing much beyond that.
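A toy illustration of the voting idea - purely conceptual; in the real hardware this happens in silicon, per instruction, not in application software:

```python
# Run the same computation on redundant units and take the majority result,
# flagging the outlier. Conceptual sketch of triple-redundancy voting only.
from collections import Counter

def vote(results):
    """Return (majority_value, disagreeing_indices) for redundant results."""
    majority, _ = Counter(results).most_common(1)[0]
    faulty = [i for i, r in enumerate(results) if r != majority]
    return majority, faulty

value, faulty_units = vote([42, 42, 41])  # unit 2 produced a different answer
print(value, faulty_units)                # -> 42 [2]
```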
Mainframe hardware is amazing. You can do many sophisticated configurations, and it is quite fault-tolerant. The kicker is the price. If you're a large multi-billion-dollar corporation, it might make sense to buy a few of them. For the rest of us, the price is way too high.
As for COBOL, it will be running long after we are dead. It's a testament to COBOL's resilience.
That being said, I am not a mainframe guy and I don't plan to be. The cloud, Linux, and containers do what I need, and quickly. (Mainframers can add their snark here.)
I seriously doubt a 4-socket, 80-thread, 8 terabyte x86 box from Lenovo is that much cheaper than the lowest end of the z15 spectrum.
A 4-socket Dell with 8 TB of RAM will cost you almost 400 thousand dollars and, in order to get five nines out of it, you'd better buy a bunch of them. You also won't be able to match the single-thread performance of the z15 CPUs or its almost a gigabyte of L4 cache.
An unscheduled reboot of a reasonably sized server doesn't happen in 60 ms. I don't think 60 ms is even enough to properly power up all components of the BMC.
We are looking into, at best, many minutes of unscheduled downtime. Depending on your business model, that lost revenue could have paid for a lot of metal in the first place.
Except that the mainframe reliability figures apply to a sysplex (think cluster), not a single machine. And the single-machine features, such as memory mirroring, are available elsewhere.
So, here's the thing: for mainframes we only have marketing data; the actual performance figures are secret. Which looks a bit suspicious, don't you think?
As for "custom-built" - nope, not such thing. You can choose the configuration, but it's all pre-made, nothing custom there.
Not just to a sysplex - the correct comparison is not "le random Dell server with a RAIM configuration", but HPE NonStop and/or a carefully husbanded VMware FT configuration - and that's for a single mainframe cage.
For the last decade or so IBM has been selling pretty much all mainframes with hot-swap FT CPU books, and the standard purchase model is very cloudy - you lease them (so it's all OPEX instead of CAPEX), and in fact IBM will happily sell you "on-demand, usage-based pricing" for that lease. It's just that you can order that machine to be in your datacenter, but many clients simply use IBM's lesser-known cloud offering.
I have been involved in a slow-rolling project at work to migrate a fair number of administrative screens and their data (written in the late 80s/early 90s) off of the existing z/OS mainframe, primarily because of the costs (management doesn't want to pay IBM anymore for the limited set of things it's being used for, and finding devs is expensive too). The project has already run way past the original estimate, no major surprise.
Our new APIs and web UIs are nice and all and offer a number of advantages for today's users/admins, but sometimes it feels they take more babysitting and fuss (updating deps, etc.) than the old mainframe code did - I'm not confident that an Angular SPA running for 20+ years would still work the same, so we have tried to take that into consideration while designing replacements.
I'm not complaining about one or the other but rather I feel lucky to have experienced tradeoffs and learned about how some of these problems were solved 20-30 years ago, while I was still in grade school.
Not everything needs to be an Angular SPA, surely? It seems obvious that choosing a well-understood server side platform would make way more sense for something like this.
You're absolutely correct - we have a healthy mix of traditional server-side web applications and a small number of SPAs only where it makes sense (usually complex forms with lots of feedback/conditionals). I just used Angular as an (admittedly) easy target. These days, I've been trying my best to avoid SPA for enterprise stuff if possible
It seems like we should distinguish between hardware and the platform that software runs on.
I'm wondering if the same software can be deployed to mainframes and non-mainframes easily. Is the hardware sufficiently abstracted away so that it's like running a Docker file?
I do realize that virtual machines were invented in the mainframe era, but I'm wondering how it's done nowadays.
Technically COBOL can be compiled for most platforms (or auto-converted to Java or whatever) but I suspect many of these legacy systems are built on proprietary IBM databases or middleware that IBM is milking by not porting it. You could emulate the whole thing in Hercules but you still have to pay whatever IBM asks for z/OS and such.
I am reading a lot of these comments, but I can't help but think most people aren't exactly attacking the idea of the mainframe at all.
They simply don't like IBM. Or mainframes from IBM.
Personally, I love those ideals: seven nines of reliability, vertical scaling to the max rather than extreme horizontal scaling (conceptually simpler). I just wish they were more accessible.
Working with a mainframe every day, my experience is this: while IBM has enabled the mainframe to run modern software, many mainframe developers cling to the old development languages and techniques.
So, while the mainframe CAN be a modern platform, it may not be used as such.
If this were true, we'd see a massive uptick in the sales of mainframes.
Mainframes don't scale, and neither do the apps you build on them, unless you have very deep engineering pockets. Most places won't invest in that because of the high upfront cost.
There is nothing inherent in the mainframe hardware or software ecosystem that causes it not to scale. In fact, "modern" scalable software architecture with containers, microservices and pervasive automation closely matches how things have been done in mainframe environments for 30+ years.
This fact obviously isn't lost on IBM, because they even market scaled-down mainframe hardware as a "kubernetes cluster in a box".
Are there any meaningful benchmark results for mainframe hardware available anywhere? You’d expect there to be, as for any other competitive platform, unless IBM’s license makes it impossible to share them?
I think they should provide an entry point, at least for trainees and people who just want to try it. Or maybe it's another security measure: only insiders can join, behind the barrier of a million-dollar investment.
Mainframes are amazing machines, but I’m not sure how one would bring these 9s to the end-user application.
Can you have it globally distributed? Cross datacenters? Blue/green deployments? Canary?
I saw a couple of real-world cases where banks went out of service due to an electricity outage; they could not switch the mainframes to another datacenter.
If you put a fault-tolerant multi-AZ app on AWS, the spinning disks or SSDs may not have the 9s of a mainframe, but the end users won’t notice if one stops working.
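For what it's worth, that "users don't notice a dead replica" behaviour doesn't require anything exotic. A minimal client-side failover sketch in Python (the replica URLs are made up for illustration):

    import urllib.request
    import urllib.error

    # Hypothetical replicas in two availability zones.
    REPLICAS = [
        "https://app-az1.example.com/health",
        "https://app-az2.example.com/health",
    ]

    def first_healthy(replicas, timeout=2):
        # Try each replica in order; callers only ever see a failure
        # in the (much less likely) case where every replica is down.
        for url in replicas:
            try:
                with urllib.request.urlopen(url, timeout=timeout) as resp:
                    if resp.status == 200:
                        return url
            except (urllib.error.URLError, OSError):
                continue
        raise RuntimeError("no healthy replica")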
It doesn’t matter if it’s a JS app running in a container or a mainframe, if the power is knocking your service out you’re not doing it right.
The company I work for runs a few mainframes. The DCs have 2 circuits for power, batteries, and diesel generators. I’ve seen them run the tests on the generators.
Our “main” mainframe doesn’t run 9s; it runs 100% of the time, or someone from IBM is getting on a plane...
Read a few pages into this where they talk about x86 cores.
A consultant trying to convince an audience that his obsolete area of expertise is still relevant to more than 100 people on Earth. Good luck, I can relate.
Oh man, we not only get to use COBOL on mainframes, we get to use modern languages like Java and C too! When your idea of modern is languages created in 1995 and 1972, you know you're out of touch.
Let's face it, everybody knows that mainframes are dying. IBM knows it, these Enterprise System Media guys know it (whoever they are), and most importantly, every one of those big Fortune 500 companies stuck with COBOL apps knows it. How many new projects are started each year that target mainframes? Or, alternatively, how many companies that don't have any mainframes are thinking about purchasing some for a new service? I'm guessing close to zero, or maybe actually zero.
The insane premium (in cost, in difficulty to learn, in everything) that you're paying for those bazillion 9s of reliability isn't worth it. Anyone who actually needs that kind of reliability has the technical knowledge to build it themselves on x86_64. According to his talking points, big tech should love mainframes and use them exclusively, but in fact it's the opposite: the most savvy and powerful tech companies use commodity x86_64, while the not-so-savvy are stuck with their 1980s mainframes. In Emanuel Derman's "My Life as a Quant", he talks about developing a new bond analytics platform at Goldman Sachs. His team decided to build it on Unix (though Genera was floated as an idea as well), because it was the modern choice and mainframes were legacy. And this was in the early 80s!
This article is so unenthusiastic, it's kind of sad. It's like the author knows what he's saying is total bullshit. I predict that in the next 10 years IBM is going to stop selling hardware (I'm not even sure why they haven't already, but then again, the company isn't exactly a paragon of good management) and just release some virtualization package that can run all the legacy crap on x86_64. Or maybe this has already happened; finding any information on what an IBM product actually does is quite difficult. I get the impression that no one actually calls IBM to inquire about a product. Instead, I'm sure some middle-management guy sells out his company for a $60 steak and some wine.
"everybody knows that mainframes are dying" Please name a single modern machine which can reliably handle billions of financial transactions on December 31 every year, when everybody is shopping? I know you can handle this with cloud, but it's not so reliable and you need many servers... On the other hand, you have a single machine with decades of up-time...
It is not impossible to build a system that can handle this on x86_64, it's more a question of software architecture rather than hardware architecture - modern computers are insanely powerful.
That software architecture will be extremely hard to pull off if you really want that kind of stability (distributed systems are HARD). Only the most hardcore engineering organizations can manage it (i.e. not banks or airlines). Meanwhile, IBM is selling an out-of-the-box solution that you just need to buy and plug in.
It's less about it being hard (banks and airlines can pay for lots of software engineers if they need to) and more about it not being necessarily appropriate to every workload. Distributed systems must be designed assuming that any single node might fail at any time; even communication among nodes cannot be assumed to be reliable, and every communication step introduces latency. When a random processing error means that some airplane might fall out of the sky or some banking transaction might go unaccounted for, these problems become very relevant.
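To make "any node might fail at any time" concrete, here is a small Python sketch of the retry-with-timeout discipline that every remote call in such a design ends up needing; the call_remote function and its failure rate are hypothetical, and retrying is only safe because the operation is assumed to be idempotent:

    import random
    import time

    class RemoteError(Exception):
        pass

    def call_remote(payload):
        # Stand-in for a network call: it can fail outright, and in a real
        # system it could also succeed while the acknowledgement is lost.
        if random.random() < 0.2:
            raise RemoteError("node unreachable")
        return {"ok": True, "payload": payload}

    def reliable_call(payload, retries=3, backoff=0.1):
        # Retrying is only safe for idempotent operations; otherwise a lost
        # acknowledgement can double-apply a transaction, which is exactly
        # the kind of error a bank or an airline cannot tolerate.
        for attempt in range(retries):
            try:
                return call_remote(payload)
            except RemoteError:
                time.sleep(backoff * (2 ** attempt))  # exponential backoff
        raise RemoteError("all retries exhausted")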
The point is that it's unnecessary. The cost of such a reliable machine is far too high. It's better to have a bunch of cheap machines and program some failover system.
It's like the actor model. It's better for an actor to just fail instead of trying to recover; just make sure another actor can take its place and that failures are isolated from each other. By using this method, you avoid the enormous cost it takes to make a single component very reliable. It's the 80/20 problem.
Let's say we have a service that, whenever it handles a request, has an 80% chance of succeeding and a 20% chance of dying. The easy way to lower the chance of failing is to just have two services handle the same request: if one fails, the other might still succeed. Using this strategy, we've lowered our failure rate from 20% to 4% with very little additional work. The cost of making a single service have a 4% failure rate (instead of 20%) is far higher than that of a service with a 20% failure rate.
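A quick sketch of that arithmetic in Python, assuming the replicas fail independently (the 20% figure is just the hypothetical number above):

    # Probability that all n independent replicas fail a request,
    # given each one fails with probability p_fail.
    def system_failure_rate(p_fail: float, n_replicas: int) -> float:
        return p_fail ** n_replicas

    for n in (1, 2, 3):
        print(n, f"{system_failure_rate(0.20, n):.1%}")
    # 1 20.0%
    # 2 4.0%
    # 3 0.8%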
This is the perfect analogy, imo, for mainframes. So much effort and cost has gone into making these things super reliable, but it turns out we don't actually need a super-reliable single system. For example, NYSE ditched its mainframes a long time ago, because using x86_64 and Linux to achieve the reliability needed is much cheaper and easier. And you would be hard-pressed to find a company that needs as much reliability as a securities exchange. If mainframes aren't worth it for NYSE (the mainframe apologists would say NYSE is exactly the kind of company that needs mainframes the most), then they probably aren't worth it for anyone at all.
I would say developing in Java that will ultimately run on a mainframe is worse than developing in Java for another platform (because deploying to your development WebSphere is slow and not very reliable). The other problem is that Java is the most 'modern' programming language available there. Nobody wants to write a web app in COBOL or C, but Java isn't that good at it either (even without WebSphere, Java doesn't deploy as fast as Python, Ruby, PHP, or Go). The only misconception is that it's 20 years out of date instead of 50. The best reason to stay well clear of it is cost: not just increased development cost, but the machines themselves are expensive. For a developer, though, it can be good - mainframe orgs tend to be stable and pay above average.