Will this even be an issue when we have CPUs with hundreds/thousands of cores that can just sandbox processes to their own set of cores/cache with exclusive unshared memory?
I think this idea could be taken further: just build physical machines with lower capacity (RAM, cores), rather than filling data-centers with top-spec hardware then dividing them up with virtualisation. On the face of it at least, this seems like an idea worth taking seriously. With the right form-factor, I imagine it shouldn't even have much of an impact on space efficiency or power efficiency. Perhaps the CPU companies just aren't interested in making such hardware?
And "lower capacity" isn't even that low any more, except in comparison with top-of-the-line. Think of a Raspberry Pi or basically any cellphone's main logic board. My Motorola G7, which I got for something like $150 new, has a Snapdragon 632 processor with a 1.8 GHz octa-core CPU and an Adreno 506 GPU, 4 GB of RAM, and 64 GB of internal storage. A Pi 4, for under $100, has a quad-core Cortex-A72 (ARM v8) 64-bit SoC @ 1.5 GHz and up to 8 GB of RAM. Those specs far outclass most budget VMs and are more than adequate for the vast majority of workloads. All either is missing is a proper storage port (i.e. not an SD card but something like SATA or M.2), but otherwise, how many Raspberry Pis could fit in a 1U enclosure? Even being generous and giving half the volume to disks, dual power supplies, and cooling, it's still quite a few.
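For fun, here's a back-of-envelope version of that "quite a few" claim. All the dimensions below are rough assumptions I'm plugging in (approximate Pi 4 board size, a typical 1U chassis interior), not measured values, so treat the result as order-of-magnitude only:

```python
# Rough estimate: how many Raspberry Pi 4 boards fit in a 1U chassis?
# All dimensions in mm and are assumptions, not measured values.

PI_L, PI_W = 85, 56                  # Pi 4 board footprint is ~85 x 56 mm
CHASSIS_L, CHASSIS_W = 430, 480      # assumed usable 1U interior footprint

# Per the comment above: give half the space to disks, dual PSUs, cooling.
usable_area = (CHASSIS_L * CHASSIS_W) / 2

# 1U leaves ~44 mm of height, so assume a single flat layer of boards
# with room above for airflow and cabling.
boards = int(usable_area // (PI_L * PI_W))
print(boards)  # ~21 boards with these assumptions
```

So even with generous margins, on the order of twenty independent machines per rack unit, which is the point of the comparison with budget VMs.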
Yes, there are definitely workloads that will benefit from better hardware, e.g. video transcoding or pure number crunching, but I would contend that most websites, databases, CI, &c. could be done on something like a Pi replacing a VM or three (belonging to the same customer).
Wouldn't that presumably cause energy costs to skyrocket, because of all the overhead of going from multitenant machines to dedicated ones? Even if the capex is comparable, I'd imagine it would be hard to get the opex to be competitive.