I know it's arm64, but you can hack a Mac Studio and put Linux on it. Putting Linux on Apple Silicon makes it go so much faster than macOS if you're doing command-line development work. With x86 there really isn't anything you can buy, as far as I know, that'll go as fast as Apple Silicon for CPU inference, because x86 RAM isn't integrated into the CPU package the way Apple's unified memory is, so the memory bandwidth is much lower. For example, a high-end x86 system with 5000+ MT/s DDR5 gets you maybe ~15 tokens per second at CPU inference, while a high-end Mac Studio does ~36 tokens per second. Not that it matters much if you're planning to put a high-end Nvidia card in your x86 box, though. Nvidia GPUs go very fast, and so does Apple Metal.
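For a rough sense of why bandwidth dominates here: token generation is approximately bandwidth-bound, since every generated token has to stream the whole (quantized) model through the memory bus once. A back-of-envelope sketch, where the bandwidth and model-size numbers are illustrative assumptions rather than measurements:

```python
# Rough bandwidth-bound ceiling on CPU token generation speed.
# Assumption: each generated token reads every model weight once,
# so tokens/s <= memory bandwidth / model size in memory.

def tokens_per_second(bandwidth_gb_s: float, model_size_gb: float) -> float:
    return bandwidth_gb_s / model_size_gb

model_q4_gb = 4.0  # ~7B parameters at 4-bit quantization (illustrative)

# Dual-channel DDR5-5600 on a high-end x86 desktop: ~90 GB/s theoretical peak
print(tokens_per_second(90, model_q4_gb))    # ~22 tok/s upper bound

# Mac Studio (M2 Ultra): ~800 GB/s unified memory
print(tokens_per_second(800, model_q4_gb))   # ~200 tok/s upper bound
```

Real numbers land well below these ceilings (compute limits, cache behavior, sampling overhead), but the ratio between them is what produces the kind of gap quoted above.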
If you really have an unlimited budget, unconditional love for Intel and x86, and don't care about ludicrous power draw at all, Intel has a silly Sapphire Rapids Xeon Max part with 64 GiB of ~1 TB/s HBM on package.
It goes really fast (the same order of magnitude of bandwidth as an A100) if your model fits entirely in that HBM.
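To gauge what "fits entirely" means for 64 GiB of HBM, a quick size check; the quantization sizes below are approximate assumptions and ignore KV-cache/activation overhead:

```python
# Which model sizes fit in the Xeon Max's 64 GiB of HBM?
# Rule of thumb: bytes ~= parameters * bits_per_weight / 8.

HBM_GIB = 64

def model_gib(params_billion: float, bits_per_weight: float) -> float:
    return params_billion * 1e9 * bits_per_weight / 8 / 2**30

for params, bits in [(7, 16), (13, 16), (70, 4), (70, 16)]:
    size = model_gib(params, bits)
    fits = "fits" if size < HBM_GIB else "does not fit"
    print(f"{params}B @ {bits}-bit: ~{size:.0f} GiB -> {fits}")
# 7B fp16 (~13 GiB) and 70B at 4-bit (~33 GiB) fit; 70B fp16 (~130 GiB) does not.
```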
What would be an x86 alternative in that price range (if any)? Xeons with HBM are more expensive, IIRC.