A company I worked for about 10 years ago had an IBM s/390 mainframe for about a month. Our manager at that time had previously worked for IBM and was trying to move the company over to hardware managed by IBM.
The biggest selling point was hardware redundancy: it could handle CPU/RAM/Disk/PSU failures without taking down the Linux VMs running on it.
Our workload was network bound, not much CPU or Disk IO load.
We ended up using generic Intel PCs (some custom built, some Dell, and some IBM Netfinities). We had more downtime with the IBM-managed Netfinities than with the custom-built rack mount servers.
Back then we also had some Sun UltraSparc E3500 & E6500s, but we found that the Java VM (from Sun!) was much slower on the UltraSparc CPUs than on Intel CPUs.
I find it interesting that we had much better results from generic no-name rack mount servers than from most of the expensive "enterprise"-class servers.
Note: This was over 10 years ago.
Anyone have more recent experience with servers in a self-managed DC or co-location? It seems most companies/developers now use cloud/vps/hosted servers.
It's really not possible to make a blanket judgement. A lot of high-end systems boil down to brand name, software compatibility, "one throat to choke", and vertical integration. While the first two are mostly meaningless, the last two can have profound network effects. IBM has been able to achieve nice vertical integration across most of their current systems, and they have some world-class people doing CPUs, compilers, OSes, and application stacks. If you have a problem with the mainframe, you have one company to deal with... that could be very good or very bad (this is where brand name, a.k.a. reputation, does matter). In consumer terms, it's kind of like Mac vs PC.
It's also not so easy to write off x86 as low end. Stratus and HP make some really high-end fault-tolerant/instruction-retry x86 servers that could be considered mainframes IMHO.
Low-end x86 servers are "good enough" for a large portion of what people use servers for, and a massive amount of work has gone into making distributed systems that can scale and handle faults semi-predictably. That said, I'd much prefer my bank account to reside on an IBM mainframe Sysplex for the time being.
Sun struggled in the CPU market, and with indecision around x86, for a long time; it's one of the primary reasons they are now part of Oracle. Those E3500/6500 systems are very high quality and would probably still be running fine today, if performance isn't the only measure of worth.
I've long been a fan of IBM's POWER System p, or whatever they want to call it depending on the day of the week. It's positioned somewhere between x86 and mainframe, but you generally get more powerful systems than x86 with mainframe-inspired reliability. Still, these fill a much narrower niche than cheap x86 boxes.
I agree with what you said. I was just relating the experience I had.
We also had two large (8U) Compaq ProLiant servers that we used to run Oracle DB. We didn't have any problems with them, and only replaced them later with newer, faster Dell servers. The redundant PSUs and hot-swap drives were the main reason we ran our database on them (one master, the other a standby slave). Those servers were later used for our beta site (for staging/development).
We later ported the backend Java service to C++, which most likely would have run fine on the UltraSparcs, but I think we had already sold them or returned them to the vendor. I really wish Sun's UltraSparc T1 & T2 CPUs had taken off. I think they could have been really good CPUs, but it was hard to compete with x86 because of all the software that was already compiled/optimized for the x86 architecture.
For us it was important to be able to fix problems quickly; having to call in a service tech to fix a hardware problem just increased downtime. We also had really bad luck with the IBM Netfinities: all four servers had bad motherboards, each of which failed at a different time. After that we didn't have any problems with them.
Personally I prefer to deal with generic hardware that I can service myself when a problem happens. But for some companies, it is better to have managed hardware with service contracts.