Hacker News
IBM ported Go to s390x mainframes (github.com/linux-on-ibm-z)
220 points by pythonist on Jan 11, 2016 | 120 comments



They have a lot more software in the works, such as Cassandra, MongoDB and Spark [1]. It's a push to have good software support for the IBM LinuxONE machine [2], essentially an IBM z Systems (z13) mainframe that will only run Linux on KVM (i.e. without expensive z/OS or z/VM 'legacy').

I'm not sure about the costs, but it might make for a compelling platform as a 'private cloud' solution. A single mainframe is a lot easier to maintain than racks full of x86 boxes. Apparently one LinuxONE Emperor machine can run up to 8,000 Linux VMs [3] (not sure with what load).

[1] https://github.com/linux-on-ibm-z

[2] http://www-03.ibm.com/systems/z/os/linux/linux-one.html

[3] http://www.zdnet.com/article/linuxone-ibms-new-linux-mainfra...


up to 8,000 Linux VMs (not sure with what load)

Traditionally mainframes were superior for loads that were IO bound rather than CPU bound. I suspect that this is still the case.


The mainframe hardware isn't anything special (I've said this before, I still have access to a z114). You can probably host that many lxc containers on a decent x86 server too. Not that you probably want to.

The z114 CPU doesn't keep up with anything > midrange x86, and its IO capabilities are limited by the InfiniBand links it uses to communicate with the IO drawers. A pretty basic dual socket E5 v3 has more PCIe bandwidth available than you can get out of the z114. It appears that is the case for larger machines as well (aka there are E7 configurations with significantly more bandwidth than the z13). And yes, I've heard all about SCs and all that. But what most people fail to realize is that most storage adapters have some form of ARM/etc CPU onboard that does most of what the SC does on a mainframe, and people have been fighting the CPU vs offload engine battle for years for network adapters.

So, a lot of the mainframe advantage is in zOS/zTPF and not the hardware. Running Linux, with its inefficient block layer, misses the point.

Heck, you're probably better off running Linux on POWER8.


A company I worked for about 10 years ago had an IBM s/390 mainframe for about a month. Our manager at that time had previously worked for IBM and was trying to move the company over to hardware managed by IBM.

The biggest selling point was redundancy in the hardware being able to handle CPU/RAM/Disk/PSU failures without taking down the Linux VMs running on it.

Our workload was network bound, not much CPU or Disk IO load.

We ended up using generic Intel PCs (some custom built, some Dell and some IBM Netfinities). We had more downtime with the IBM-managed Netfinities than with the custom-built rack mount servers.

Back then we also had some Sun UltraSparc E3500 & E6500s, but we found that the Java VM (from Sun!) was much slower on the UltraSparc CPUs than on Intel CPUs.

I find it interesting that we had much better results from generic no-brand rack mount servers than from most of the expensive "enterprise" class servers.

Note: This was over 10 years ago.

Anyone have more recent experience with servers in a self-managed DC or co-location? It seems most companies/developers now use cloud/vps/hosted servers.


It's really not possible to make a blanket judgement. A lot of what high end systems offer boils down to brand name, software compatibility, "one throat to choke", and vertical integration. While the first two are mostly meaningless, the last two can have profound network effects. IBM has been able to realize nice vertical integration across most of their current systems, and they have some world class people doing CPUs, compilers, OSes, and application stacks. If you have a problem with the mainframe you have one company to deal with... that could be very good or very bad (somewhere brand name, aka reputation, does matter). In consumer terms, it's kind of like Mac vs PC.

It's also not so easy to judge x86 as low end. Stratus and HP make some really high end fault tolerant/instruction retry x86 servers that could be considered mainframes IMHO.

Low end x86 servers are "good enough" for a large portion of what people use servers for, and a massive amount of work has gone into making distributed systems that can scale and handle faults semi-predictably. That said, I'd much prefer my bank account to reside on an IBM mainframe SysPlex for the time being.

Sun struggled in the CPU market, as well as with indecision around x86, for a long time; it's one of the primary reasons they are now part of Oracle. Those E3500/6500 systems are very high quality and would probably still be running fine today, if performance isn't the only measure of worth.

I've long been a fan of IBM's POWER system p or whatever they want to call it depending on the day of the week. It's positioned somewhat in the middle of x86 and mainframe, but you generally get more powerful systems than x86 with mainframe inspired reliability. Still, these fill a much narrower niche than cheap x86 boxes.


I agree with what you said. I was just relating the experience I had.

We also had two large (8U) Compaq ProLiant servers that we used to run Oracle DB. We didn't have any problems with them, and only replaced them with newer, faster Dell servers later. The redundant PSUs and hot-swap drives were the main reason we ran our database on them (one master, the other a standby slave). Those servers were later used for our beta site (used for staging/development).

We later ported the backend Java service to C++, which most likely would have run fine on the UltraSparcs, but I think we had already sold them or returned them to the vendor. I really wished Sun's UltraSparc T1 & T2 CPUs had taken off. I think they could have been really good CPUs, but it was hard to compete with x86 because of all the software that was already compiled/optimized for the x86 architecture.

For us it was important to be able to fix problems quickly. Having to call in a service tech to fix a hardware problem just increased downtime. Also, we just had really bad luck with the IBM Netfinities: all four servers had a bad motherboard, which failed in each server at a different time. After that we didn't have any problems with them.

Personally I prefer to deal with generic hardware that I can service myself when a problem happens. But for some companies, it is better to have managed hardware with service contracts.


A few years ago our management had a healthy portion of kool-aid and demanded that we move a few systems onto Linux zOS containers.

It was awful and we discovered why a few days in -- running our workload on a few VMs on my crappy corporate laptop in VMWare smoked the mainframe.

I guess it's better now, but it seems more like an Oracle license laundering scheme than something anyone would choose to run.


Total agreement. Linux on the mainframe is really silly beyond utility purposes, whoever the business people are that convinced senior IBM management otherwise really pulled a fast one.

POWER8 is designed for UNIX and, adjusting for certain parameters, goes beyond what x86 servers can do in the same footprint and reliability. The price is approachable for people who need the reliability these kinds of systems can provide. IBM's systems have grown a lot in common over the years, with AS/400 (System i) and RS/6000 (System p) melding together, and POWER8 kit has a lot of mainframe-inspired RAS features.


Can you do an AMA, or umm, tell us more?

+1 Insightful


Even with high CPU loads, due to license costs for expensive proprietary software, the advantage can shift towards mainframes when dedicated Application Assist Processors (zAAP) are employed, because usually only the generic CPUs have to be licensed. By offloading work to the 'license-free' zAAPs, overall throughput can be greatly increased with equal license costs.
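
A rough sketch of that licensing arithmetic in Go; every price and core count here is invented for illustration, not real IBM or ISV pricing:

    // Hypothetical zAAP offload economics; all numbers are made up.
    package main

    import "fmt"

    func main() {
        const perCoreLicense = 100000.0 // hypothetical software license, $/core/year
        const coresNeeded = 8.0         // total capacity the workload needs

        // Without zAAPs: all 8 cores are general CPs and all must be licensed.
        fmt.Printf("8 general CPs: $%.0f\n", coresNeeded*perCoreLicense)

        // With zAAPs: suppose half the work is eligible (e.g. Java) and moves
        // to zAAPs; only the remaining general CPs carry license charges,
        // for the same total throughput.
        fmt.Printf("4 CPs + 4 zAAPs: $%.0f\n", 4*perCoreLicense)
    }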

Of course, this scenario isn't what LinuxONE boxes are made for; they are more focused towards open source deployments in the enterprise, with low license costs (usually only support contract costs).


I used to work at a place that did this. The only issue we ran into was that memory was HELLA expensive on the z Series boxes. It was to the point that it was cheaper to buy an HP server with 1 TiB of RAM than to get 16 GB of RAM for the zAAPs.

That and getting software to run on it was almost impossible. In the end we ended up moving everything to HP servers for a fraction of the cost.


When deployments like this happen, where does the z series box then go? Back to IBM (because it was on lease?) Is the z series modular enough that it's easy for ibm to reconfigure to lease to someone else?


If it was a lease through something like IBM Global Services they will take it back and possibly redeploy it. Otherwise, they can kind of get lost in the noise, either ending up at auction or going straight to an e-waste disposal service. They are usually parted out at that point to sell spare parts, because the licensing makes it nearly impossible to use one of these w/o involving and paying IBM, usually monthly. Otherwise they are crushed for precious metal recovery. Kind of a sad end for such works of art.

The Linux dichotomy IBM created is pretty stupid. I don't know why you'd want to run a tire fire on such expensive HW; sure, maybe utility VMs, but not as the raison d'être. Instead they should be a lot more willing to provide cheap z/VM, z/OS, z/TPF licenses for smaller shops to build up an ISV, admin, and porting community. But they charge thousands of dollars for an emulator when Hercules is freely available and ADCD is quite restricted.

I recently bought an older one for the cost of a soda https://twitter.com/kevinbowling1/status/685240077161598976. I would warn against doing this for a number of reasons, namely FICON storage and licensing, but if you are determined it is a lot of fun.


That's cool, you're lucky with the z800; most of the other mainframes have "exotic" (aka 3-phase 277/480) power requirements. We had to pull power in our existing computer lab to run our MF. So, while it's possible to order most mainframe equipment for single phase 208, it doesn't appear to be the default choice for the kinds of places that buy mainframes.

I settled for Hercules at home; it's a hell of a lot easier to manage, and is pretty fast given that it's a fairly naive emulator (aka no JIT/tracing/etc). I think if the TurboHercules guys (or someone else) added a decent JIT and solved the licensing problem, IBM would be in a world of hurt. Although maybe not; the few remaining customers aren't the kind to run their bank/airline/etc on a 10k piece of hardware and an open source emulator. Of course, I've been wondering for a few years what percentage of machines are being sold to development shops to support the limited number of real customers. That is why we purchased ours: to support real customers, not to run any actual work on it. Of course I've also heard that most of the business class machines are sold with tiny capacities, because they are simply hot fail-over targets. The assumption is that if something goes wrong at the primary site, someone calls up IBM and gets them to boost the capacity on the idle machine as the workload is shifting over.


Yeah IBM really created a false dichotomy. Production users want the mainframe architecture and reliability. Developers are fine with emulation and create demand.

I've got to imagine they are barely breaking even on a minimal config baby class system. The yield on an MCM has to be minimal. You can find videos of how labor intensive the build of a frame is, let alone the engineering. I wouldn't even be surprised if they were loss leaders.

And that raises the question: why not grow the user base with aggressive placement, training, and development? Another $1 billion investment in Linux has no appreciable effect for most mainframe users, and there's no way there is an ROI. Imagine that injection on this small, loyal platform of users.


Yep, either IBM says "ok thanks, we'll move this hardware somewhere else", or (the other odd thing about mainframes) they ship them out fully loaded and make it so they can turn on the hardware you want if you need it.

Yeah, it is a weird environment. The i/o on them is nice though, but everything else is so wackadoo I never want to touch it.


Good for you haha. How much power do one of those things use, btw?


The z13 uses 12.9 kW with 3 CPC drawers (~96 CPUs), up to 24.7 kW in a fully loaded, maximum-power 4-CPC (141 CPU) configuration.

See: https://books.google.nl/books?id=Do1WCAAAQBAJ&pg=PA78&lpg=PA...


Appreciate it.


This is the baby class z800; it's 3 kW max, which isn't too bad for a full cab.


Yeah, that's like 6 gaming rigs when I built them. Not bad at all.


I don't know how they decommission a zSystem.

But from what the mainframe guys at work told me, the hardware between two z boxes is pretty much the same, and it's kind of licensed by load or component usage.

So if you want to downgrade or upgrade memory/CPUs/etc. (to a certain degree), it's just a matter of having an IBM tech to enter the right license in the console.


Interesting.

Never worked on mainframes myself, but have worked on Unix (and Linux) a lot. Had read some years ago that mainframes had some powerful features that were often not available on Unixes. One such feature that was available on Unix is HP's PRM (Process Resource Manager), which, IIRC (from my time at an HP joint venture), could ensure allocation of a guaranteed minimum percentage of system resources (such as RAM and CPU time) to a specific process (say a payroll job running at the end of the month). I remember being impressed by the concept at the time.

There was also HP MC/Service Guard, but didn't have a good idea what it did, just remember the name.

P.S. Had also tried out rtprio and prioc(n)tl on some Unix boxes, maybe one was an HP one, the other a SVR4. They allowed assigning (quite) higher priorities to specified processes, IIRC.

P.P.S. Both those product names could have been changed by now, or been merged into bigger products, as sometimes happens.

Edited to say that HP's PRM _was_ an example of a mainframe-like feature.


Another one I like very much is how OS/400 works.

Written originally in PL/I and Assembly, it contains a kernel level JIT.

All user space applications target a general purpose abstract Assembly and are AOT-compiled to native either at installation time or on demand.

Other mainframes had similar approaches to code portability.

Sound familiar?

Other cool things are the catalog-based filesystem and OO Assembly.


The JIT scheme worked well for future-proofing for decades. That means it's an approach others should consider emulating if they're not already. The only thing I couldn't figure out, due to IBM terminology, is whether the compiled output is actual microcode or just CPU instructions. Or whether they both create an optimized microcode implementation and use a JIT to move things to it.

Integrating DB-like functionality into the system was ahead of its time. The original, System/38, had capability security in the hardware, enforcing an object-based view of memory with permissible operations on different types of objects: processes, I/O, arrays, stacks, and so on. The security implications of that mean I'd recommend one to a business in a heartbeat if they hadn't ditched that for POWER.

https://homes.cs.washington.edu/~levy/capabook/Chapter8.pdf

All in all, System/38 was a very forward-thinking and practical design that continues to prove itself year after year. If one doesn't exist, I'd like to see a Lessons Learned report on that project and how the team decided what would be in it.


OS/400 doesn't run on s390/zEnterprise. It runs on pSeries (POWER8), which also runs AIX and Linux.

OS/400/IBM i (or whatever it's called this week) is actually probably one of the cooler OSes still around. It should be mandatory study in any comp sci/comp eng OS class, but sadly it's not, and because of that a lot of wheels are being reinvented.


> Sound familiar?

Java JIT doesn't write machine code to disk. OS/400 does, and then it re-uses the machine code until it no longer exists (you move to new hardware) or the bytecode is newer (you change the program and recompile), at which point it compiles the bytecode to machine code and stores it to disk again.
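
A minimal Go sketch of that reuse rule, regenerating the native code only when the portable program is newer; the file names and the "translate" step are hypothetical stand-ins (the real system works on program objects, not loose files):

    // Reuse stored machine code unless the portable "bytecode" is newer.
    package main

    import (
        "fmt"
        "os"
    )

    // needsRecompile reports whether the cached native code is missing or
    // older than the portable program it was generated from.
    func needsRecompile(bytecode, native string) (bool, error) {
        bc, err := os.Stat(bytecode)
        if err != nil {
            return false, err
        }
        nat, err := os.Stat(native)
        if os.IsNotExist(err) {
            return true, nil // no cached machine code yet
        }
        if err != nil {
            return false, err
        }
        return bc.ModTime().After(nat.ModTime()), nil
    }

    func main() {
        if stale, err := needsRecompile("prog.timi", "prog.native"); err != nil {
            fmt.Println("error:", err)
        } else if stale {
            fmt.Println("translate prog.timi -> prog.native and store it")
        } else {
            fmt.Println("reuse the stored machine code")
        }
    }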


Actually it does.

You just need to look at the correct JVM. Apparently people keep forgetting Java is like C and C++: there are plenty of options to choose from, and OpenJDK only plays the role of reference implementation.

If I am not mistaken IBM J9 is one of those JVMs.


Kernel level JIT? Didn't know, sounds cool.

>Sound familiar?

I suppose you mean like Java and Python bytecode running on their VMs (of the language, not OS kind).

I've sometimes thought, as well as heard others say, that some of the tech advancements of earlier decades haven't been "re-invented" :) yet ...


Actually I was thinking more about some Oberon versions that used a JIT, and the way .NET is integrated into Windows, especially on mobile since version 7.

Also the way Java gets used on Android and other embedded platforms.

Or the new deployment model for iDevices.


Ah, Oberon. Don't know much about it. But do somewhat remember an interesting feature of it from a BYTE or other computer mag article: You could write a subroutine once and then have it available in the whole operating system - something like that. My description might not be right but I remember thinking the feature was powerful at the time.


Yes, that is correct.

But the original idea was from Mesa/Cedar at Xerox PARC.


Didn't know, thanks. Also saw that you just posted a link to an article about Mesa on HN. Viewed it briefly. It says the Alto file system was written in "BPCL". Should that be BCPL, a precursor of C? Had read a book on it, by Martin Richards, IIRC, long ago. Good read. I think it had only one type - machine words. Fun to implement higher level stuff on such a base, though tedious by today's HLL standards.


Just checked out Martin Richards' web site:

http://www.cl.cam.ac.uk/~mr10/

(Richards invented BCPL.)

He has an interpreter for BCPL. The page says:

BCPL, an interpretive implementation of the BCPL language and system, including many demonstration programs. Click on BCPL.html to obtain a copy of the current version. This version can be installed easily on most machines running Linux, Windows and MAC OSX. In particular, it is easy to install this version on the Raspberry Pi machine. See the Young Person's Guide to BCPL Programming on the Raspberry Pi (bcpl4raspi.pdf) for details.

His Wikipedia page:

https://en.wikipedia.org/wiki/Martin_Richards_(computer_scie...

says:

He was awarded the IEEE Computer Society's Computer Pioneer Award in 2003 for "pioneering system software portability through the programming language BCPL".[9]


https://en.wikipedia.org/wiki/BCPL

C followed B which followed BCPL.


I think he's referring to things like Apple's "bitcode".

http://lowlevelbits.org/bitcode-demystified/


Bitcode isn't JIT (yet); it simply allows Apple to compile binaries targeted at new devices or with newer compilers, without the developer having to do anything.


This has not been my experience running Linux on mainframes.

The 8000 VM tests I've seen picked apart are typically "a webserver" type load, not meaningful work.


I know it's not on a mainframe, but if you think 8,000 VMs is a lot (which it is!), Ron Minnich ran 1,000,000 Linux VMs on the Thunderbird supercomputing cluster, with 4,480 Dell-supplied nodes.

https://share.sandia.gov/news/resources/news_releases/sandia...


But still. If I understand this correctly, 8,000 VMs is on one of these mainframes. That's one machine versus a supercomputing cluster of 4,480 nodes.


yes, that's why I qualified it and gave the numbers, so it gives a kind of measure of the relative power of these things.

~5,000 Dell nodes = 1,000,000 VMs = 200 VMs per node

1 mainframe = 8,000 VMs

so 1 mainframe ≈ 40 Dell nodes


> I'm not sure about the costs, but it might make for a compelling platform as a 'private cloud' solution.

My assumption is that they will port Cloud Foundry and sell it under a name like "BlueMix/Z".

Disclaimer: I work for Pivotal, the major donor of engineering time to Cloud Foundry. I believe IBM is the second biggest donor of engineering time.


Hmm https://github.com/linux-on-ibm-z has got all kinds of cool goodies: Docker, Cassandra, Kubernetes, Spark



What is the deal with z / z/OS? Is it a non-unix OS? I can't find a ton of info in my naïve googling.


System z is IBM's mainframe architecture (the hardware).

z/OS is the flagship operating system. It is at its heart a batch-oriented system, but it does have a Unix subsystem (called USS for "Unix System Services", formerly known as MVS OpenEdition or OMVS) that is in fact certified against a rather old version of the Single Unix Specification.

So in a way, z/OS is in fact a Unix system, but that is kind of like saying that Windows is a Unix due to Cygwin or Microsoft's Services for Unix. Mostly, it is very non-Unix-like.

Interestingly, IBM also offers a number of other operating systems for System z, including VM (although that system apparently does little but host virtual machines), TPF (a "real-time" transaction processing system) and zVSE (formerly known as DOS/360, another, smaller batch-oriented system). And, of course, Linux is available from a number of distributions. Oh, and just for good measure, Open Solaris was ported to run inside VM, although I am not sure what became of that or if anyone is actually using it.


What is using it like? Command line? Would it be familiar-ish to someone with Unix / DOS experience or totally foreign?


All command line, and totally foreign on a level that makes switching from Windows to Linux look trivial by comparison.

(which coincidentally, is why I want to explore it)

Here's the official IBM "for dummies" book: https://www.redbooks.ibm.com/redbooks/pdfs/sg246366.pdf


It uses the 3270 terminal (xterm has a 3270 mode I've used); the interface is predominantly ISPF, which is essentially menu driven. You can browse/search the filesystem, edit files, submit jobs (everything is a job, i.e. JCL, Job Control Language: compile, run programs, etc.) and browse job spools (the stdout from a job).

Google will show you images of ISPF; browsing the JCL manuals will give you nightmares.


My first job was writing JCL decks - it's not that bad.


It is not terrifying per se, just ... very different.

I have only had brief contact with JCL, but I remember having to figure out how to get a command line into JCL that was more than 80 characters in length. It took me two days to figure that out, and none of the old-timers in the team had ever had to do that. When I did find the solution, it was surprisingly simple, but finding it took me a lot of time.


Wait, is it a UNIX system, or does it have a UNIX subsystem? Those are different. I know it has the subsystem. I'm trying to recall when someone said it was a UNIX. Stuff I read on it is more MVS than UNIX. Example with some data:

https://en.wikipedia.org/wiki/Z/OS


It has a Unix subsystem. It is a lineal descendant of OSes which predate Unix by quite a bit, and has grown by accretion since then.

It's POSIX certified. Linux isn't. That should tell people quite a bit about POSIX, but it never seems to...


POSIX isn't actually about operating systems. POSIX (1003.1) is about source-level compatibility for 2 languages: C99 and shell scripts.

If your C programs use only C99+POSIX facilities, and your shell programs use only POSIX features and utilities, and they work on a platform, then that platform is eligible to be POSIX certified. POSIX doesn't care about the kernel or syscalls or anything like that. It cares about libc, libm, libl, and a couple of file paths.


Actually to a certain extent, I think one could call POSIX the actual C runtime library.

The C runtime + POSIX calls (I know POSIX didn't exist back then) are what defined C when it was still UNIX-only, but ANSI didn't want to make the language standard that big.


Linux is just the kernel; distributions would have to certify, and I don't see anyone going for it. There is at least one POSIX-certified distribution: Inspur K-UX.

The equivalent in Linux land is LSB, which many distributions certify to.


Lol at the last line. No doubt.


OMVS mapped a hierarchical filesystem onto the dataset-based mainframe filesystem. It's one hell of a kludge, IMO. It's kind of like how Cygwin attempts to emulate stuff Windows doesn't have, like fork. The mainframe has the concept of a started task, which is kinda similar, but not quite.


It's an architecture called z/Architecture, used for z Systems CPUs such as the z13 CPUs in IBM mainframes (the z13, same name). The 'z' in 'IBM z Systems' stands for 'zero downtime'.

[1] https://en.wikipedia.org/wiki/Z/Architecture

[2] https://en.wikipedia.org/wiki/IBM_z13_(microprocessor)


I thought it came from a play on the naming convention. I don't have the source but it went something like this:

System/360 - The system for the 60's

System/370 - The system for the 70's

System/390 - The system for the 90's

System/Z - The system for the 2000s, which sounds like a 'Z' at the end to some.


System/360 was meant to be a good all-around system, not optimized specifically for business or science, as previous mainframes were, but good for both. 360 degrees in a circle, you know.

Notice how it skips System/380. IBM was going to replace the mainframe line with its Future Systems project, which would have single-level store (everything is RAM, everything persists, page cache takes care of moving stuff in and out of physical disk drives) and be so tightly-integrated nobody would be able to clone it, as the plug-compatible vendors had been able to clone parts of the System/360 and /370 systems. The only real result of this was the AS/400 midrange systems, now the i Series, I think.


Re 360

I remember reading that it converged two separate segments into one. 360 degrees makes sense.

Re System/380

Always wondered why they skipped 380. Thanks for the enlightening details on that. The anti-cloning angle is amusing.


>The 'z' in 'IBM z Systems' stands for 'zero downtime'.

Maybe also meant to imply, the ultimate? :) Heh. Just guessing.


"z" is the marketing term for all of IBM's mainframe products. There are several different operating systems that can run on the mainframes.

This port only applies to those running Linux on their mainframe, which isn't terribly common. Most would be running zOS (https://en.wikipedia.org/wiki/Z/OS) or TPF. Their VM hypervisor is also commonly used.


The AS/400 line uses i, not z.


The AS/400 isn't a mainframe. It sits in the same space as VAX/VMS and the HP3000 line.

Edit: The common term for these, during their heyday, was "minicomputer" or "midrange".


Sure, but I still see it as a mainframe, even though you could sit on them (I actually used to do that on a dead one).


z/OS is the rebrand of OS/360, which predates Unix and has been running Western civilization for about half a century.


It's more of a follow-on to MVS, actually; OS/360 is a couple steps back.


Is it possible for a mere mortal with an equivalent amount of cash to get ahold of Z/OS to play around with (for use on the Hercules emulator or somesuch)?

I've looked around with no success, so I'm guessing not, but any clarification would be nice.


My college roommate was the maintainer of Hercules, the mainframe emulator.

See: http://www.hercules-390.eu/hercfaq.html

2.01 Can it run z/OS, z/VM, z/VSE?

Yes. Hercules is a software implementation of z/Architecture, and so it is capable of running z/OS, z/VM, and z/VSE. Hercules also implements ESA/390 (including SIE) and so it can run OS/390, VM/ESA, and VSE/ESA, as well as older versions of these operating systems such as MVS/ESA, MVS/XA, MVS/SP, MVS/SE, VM/SP, VSE/SP, and DOS/VSE.

But (and this is a big but), these operating systems are all IBM Licensed Program Products, whose conditions of use generally restrict their usage to specific IBM machine serial numbers. So you cannot just copy these systems from work and run them on your PC, as this would almost certainly be a violation of your company's licensing agreement with IBM.


For science and education, Fair Use terms may be applied generously.


IBM's official product to run z/OS on an intel box is here:

http://www-03.ibm.com/software/products/en/ratideveandtesten...

It's just under $5k USD for a 12 month single user license:

https://www-112.ibm.com/software/howtobuy/buyingtools/paexpr...


You can try MVS, a z/OS precursor, for free:

http://www.bsp-gmbh.com/turnkey/


It is nice for fun or digital archaeology, but the last freely available version of MVS was released ca. 1978... But it is nice to get a feeling of what the system is like.


Ouch... Guess that answers it definitively then. Shame that if someone wants to learn, they have to resort to copyright infringement...


Seems like a good startup opportunity -- z/OS as a service. Buy the $5000 license and then rent it out to users for $10 a month. Maybe the terms don't allow it? Fuck them. Disruption, baby!


There were places around that rented time. There was also a place offering free accounts; I don't even remember the name now. There's one hell of a learning curve, which makes it really hard to do anything other than poke around when you don't know the params to code in JCL to compile a program. My first gig was mainframe COBOL; the company put everyone through a 3-month training program, and that's the only reason I know anything about it.


Odd conclusion.


How so? There are no legal resources for acquiring Z/OS as a student without paying absurd (for a student) prices.

You could take a class for cheaper, but formal classes are a poor substitute for interacting with the system on your own terms. (Ask any programmer)


You seem to reason that anything that is not available on your terms is fair game for piracy.

IBM can do whatever they like with their IP.


I believe the point is that if you want to play with it, your only choice is piracy, not that you must pirate it.

Since, in practice, nobody's going to pirate it either, that means there's no way to learn about it on your own. That's shutting down a very big free training mechanism for IBM, though it's probably too late to do anything about that now.


Oh thanks for pointing that out.


Karunamon didn't imply it to be fair game; he stated that it's too expensive for most people to afford for learning, which seems to be a simple fact. He also stated that it's a shame, which can mean either that it's a shame that you can't learn, or a shame that you have to infringe, and if taken to mean the latter, that makes it seem like you're actually in violent agreement with him.


IBM used to run the mainframe challenge, in which they let you play with one. It was more a PR exercise than a competition - http://www-05.ibm.com/employment/uk/graduate-programmes/main...


It's primarily a talent identification exercise. Helps them find the next generation of z/OS developers that they could potentially hire to replace the ones who are retiring.


> Is it possible for a mere mortal with an equivalent amount of cash to get ahold of Z/OS to play around with

As a former IBMer, believe me, you don't want to. We all dreaded the "such and such doesn't work... when running on Z" defect that would roll in every now and then.


He he he, I had to install zOS a few years ago. You think fixing a defect on zVM/zLinux is a PITA? Just try installing zOS. Ugh, makes me shiver thinking about it.

Apparently, just cloning a working system and applying PTF's is the common method of doing it. Otherwise you can pay IBM to build an image and install it (can't remember what they call it) but basically they generally install it the same way.


Oh man, I can only imagine. As for fixing a defect on zLinux, that was usually something that made me breathe a slight sigh of relief. It meant I would be doing something on a somewhat sane and modern system that behaved roughly like something I'm used to.


They used to provide free Linux partitions:

http://www.eetimes.com/document.asp?doc_id=1200613

This was still available in 2006.


I think this is awesome? I'm not entirely sure though.


Write once, compile and run anywhere. Go is gaining traction. IBM doesn't want to lose a sale to an organization that has adopted Go.


IBM wants to be able to run Docker on zLinux. No Go, no Docker.


lol, last time I talked to IBM, Linux was Linux was Linux.

Shouldn't Go stuff simply run on their LinuxVMs on Z?


Processor architecture is not processor architecture is not processor architecture. The Go compiler produces native binaries.
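
For what it's worth, once a Go toolchain has s390x support (this port was eventually merged upstream), producing a mainframe binary is just a cross-compile. A minimal sketch, assuming such a toolchain:

    // Build natively, then for linux/s390x:
    //
    //    go build hello.go
    //    GOOS=linux GOARCH=s390x go build hello.go
    package main

    import (
        "fmt"
        "runtime"
    )

    func main() {
        // Prints "linux/s390x" when the cross-compiled binary runs on the mainframe.
        fmt.Printf("%s/%s\n", runtime.GOOS, runtime.GOARCH)
    }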


>Processor architecture is not processor architecture is not processor architecture.

Interestingly, I've come across multiple computer people who didn't know this basic fact: hardware engineers, sysadmins and devs who were surprised when I said a binary/EXE for one processor architecture (and hence machine instruction set) cannot run on another architecture (except for special cases that may exist nowadays, like maybe Apple's handling of PowerPC vs Intel CPUs, and even then it is probably due to special extra steps being taken).

Edited to add hardware engineer category.


Apple used to support a "universal binary" format[1] to run a "single" binary on both PPC and Intel processors (as well as 32 bit vs 64 bit). This meant that both the PPC and Intel code was inside the same file, but a specific section of the file was executed based on what architecture the computer was using.

Of course, modern Macs are 64 bit Intel only, so this isn't really necessary anymore unless a developer needs to support older platforms.

[1] https://en.wikipedia.org/wiki/Universal_binary


Rosetta was cooler. I have an old Mac Pro running some process that was never upgraded.

https://en.m.wikipedia.org/wiki/Rosetta_(software)


Cool, that universal binary concept is what I vaguely remembered and meant, thanks. Will check that link.


They were always talking about their "bare metal" virtualization; I thought they could virtualize CPUs too.


You can run virtual machines in z/VM, but you can also partition the machine at the firmware level. (Partitions are called "Logical Partitions" or LPARs.) Maybe that was what they meant by "bare metal virtualization".


Virtualization and emulation are not the same thing. You can have one without the other.

(Traditionally, virtualization precluded emulation: The hypervisor simply multiplexed hardware, and the guests got what looked like raw access to the real hardware, with no way of "seeing" the hypervisor at all. Very secure, very simple, and you could run a hypervisor as a guest with guests under it, recursively.)


Emulation is never going to be fast.


Sure it is. The s390x instruction set is microcode emulation on top of a POWER-derived microarchitecture, just as modern x86 chips are microcode on an underlying microarch.


Completely wrong. s390x runs on its own dedicated hardware, it is not emulated on POWER. POWER and s390x are completely separate.

source: worked at IBM on their compilers team.


Different processor architecture?


zSeries machines are awesome. Costly, but awesome nevertheless.


Maybe for support of Open Ledger since OBC is written in Go:

https://www.ibm.com/developerworks/community/blogs/gcuomo/en...


It's for Docker.


"Note that we do not accept pull requests"

Strange.


That text is unmodified from upstream's README.

It's a little unfortunate that GitHub provides so little room to add your own README to a fork, short of modifying upstream's README in the source code itself. You get a sentence at the top, which is about it.


I've seen people create their own README and move the upstream one to README.upstream or similar. Seemed sensible to me.


But unfortunately that change will then be included in all pull requests, unless you create a clean branch for every change you want to get into upstream.


GitHub should allow you to choose which file to use as the readme, just like they allow you to choose the default branch.


Or "README.forkname" or "README.user-forkname"...


They're not doing the actual development on GitHub, just mirroring it there because, to many developers, GitHub is git.

Not all that strange.

(My guess, anyway.)


The Go team does use GitHub Issues.




Why?



