> My dad was an IT director, and he chuckles when I talk to him about "new and exciting paradigms", which he of course sees as coming full circle to what they had in the '70s and '80s :)
As someone with 20+ years in IT, I agree - a lot of these "new and exciting paradigms" are not new at all.
My personal favourite: how many large multi-nationals are now building in-house clouds?
WTF is the difference between an "in-house cloud" and a shared-use datacenter from the 1990s?
The interface to the shared-use datacenter, if you're lucky, is a spreadsheet that declares the static resources you own, plus a remote-hands guy who can tackle things beyond the capabilities of your remote KVM. If you need more capacity, you work with the datacenter folks to order physical machines that might show up in a few months.
The interface to the in-house cloud is an API. In most instances, developers are completely abstracted away from the physical infrastructure and don't need to take a lock on some human in the datacenter to get their work done.
I still think that's a bit of rewriting history. E.g., VMware enabled fast self-provisioned VMs and nobody called them cloud. Heck, in 1999ish or so, while at university and way before I was an IBMer, I could sign up for some IBM development program as a student, get an account, and provision Linux VMs on a mainframe.
Not to say your scenario isn't valid and real, but I live that scenario every day today with an in-house cloud and virtualization too: it takes months of approvals and solutioning and security assessment and network engineering and procurement and costing and whatnot... to deploy a Windows VM.
> VMware enabled fast self-provisioned VMs and nobody called them cloud
Because it’s missing all the glue. I need to self-service the VMs, the database, the load balancer, the DNS records, the certs, and hook them all up so I can receive production traffic, all via an API, so that I could theoretically do it all in Terraform.
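To make the "glue" point concrete, here's a rough Terraform sketch of wiring two of those pieces together declaratively, using the real AWS provider; the AMI, zone ID, and hostname are placeholders:

```hcl
# A VM and a DNS record pointing at it, both self-serviced via API.
# ami-12345678, Z123EXAMPLE, and app.example.com are placeholders.
resource "aws_instance" "app" {
  ami           = "ami-12345678"
  instance_type = "t3.small"
}

resource "aws_route53_record" "app" {
  zone_id = "Z123EXAMPLE"
  name    = "app.example.com"
  type    = "A"
  ttl     = 300
  records = [aws_instance.app.public_ip]
}
```

The same pattern extends to the database, load balancer, and certs; the point is that no human in the datacenter sits between you and production traffic.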
> WTF is the difference between an "in-house cloud" and a shared-use datacenter from the 1990s?
An in-house cloud sounds like the mainframe installed in the raised-floor computer room at the school district office I worked at in the late 90s; of course, the 12-foot-long Unisys mainframe was replaced with a Unisys 4U Pentium Pro box pretending to be a mainframe, and then there was a lot of extra floor space.
If you're running your own 'cloud' in a (shared) colo, I dunno that that's really in-house. I guess it's still 'private cloud' though.
The only possible value over the last iteration is if they cleaned house and the new team manages to be more permissive than the old one. I'd much rather have on-prem, but the sad fact is that for cloud I just need a signed check. For on-prem I also need buy-in from other divisions before I can even start experimenting with a new service.
I still have the exact same problem with respect to never having exactly the ratio of CPU to memory that would make my app happy.
I had a couple of telco customers that ran their own internal cloud, and I’d say the main difference is that it’s way more expensive than the public cloud, heaps less flexible, and when you need compute it has to go through an approval process.
I mean … sometimes I shake my head in wonder, and other times I just shake my head.
An in-house cloud will just be a bunch of commodity servers running a hypervisor that gives you an API that allows you to automate the provisioning of infrastructure.
I am guessing that in the 80s you weren't writing Infrastructure as Code to define exactly what resources you needed for your software, having it all set up automatically, and so on.
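For what it's worth, "defining exactly what resources you need" in Infrastructure as Code can even include the CPU:memory ratio complained about upthread; a hypothetical sketch using GCE custom machine types (names and zone are placeholders):

```hcl
# Declare the exact shape of machine you want: custom-4-8192 means
# 4 vCPUs and 8192 MB of RAM, rather than picking a fixed size.
resource "google_compute_instance" "app" {
  name         = "app-1"
  machine_type = "custom-4-8192"
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-12"
    }
  }

  network_interface {
    network = "default"
  }
}
```

Nobody was checking a file like this into version control in the 80s and having the hardware show up configured a minute later.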