Intel makes lots of contributions across the stack, from the OS (Linux and glibc) to compilers, including their own (gcc, icc, ispc, etc.). Their problem isn't ability; it's that Intel is poorly managed and internal groups are constantly fighting with each other.
Also, compiler support for CPUs is overrated. Heavy compiler investment was attempted with Itanium and debunked; giant out-of-order (OoO) CPUs like Intel's or Apple's M1 barely care about code quality, and the compilers have very little tuning for individual models.
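To put that last claim in concrete terms: per-model "tuning" in gcc/clang mostly comes down to an -mtune flag, and it barely changes what you get. Rough sketch only, the file and function names are made up:

    /* sum.c: illustration only */
    float sum(const float *a, int n) {
        float s = 0.0f;
        for (int i = 0; i < n; i++)
            s += a[i];
        return s;
    }
    /* Compare: gcc -O3 -mtune=skylake sum.c
            vs. gcc -O3 -mtune=generic sum.c
       -mtune mostly nudges instruction scheduling and some cost tables,
       and a big out-of-order core hides most of that anyway. */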
> Intel makes lots of contributions across the stack, from the OS (Linux and glibc) to compilers, including their own (gcc, icc, ispc, etc.). Their problem isn't ability; it's that Intel is poorly managed and internal groups are constantly fighting with each other.
I wasn't just talking about Intel but about separate CPU and compiler vendors in general. Intel contributes a ton of open source, but even if they were perfectly organized, everything still happens on different schedules and takes time before it's generally available: patches have to land in something like Linux or gcc, then you wait possibly years for Red Hat to ship a release using the new version, and so on. Certain users (e.g. game or scientific developers) might jump on a new compiler or feature faster, of course, but that's far from a given, and it means Intel's chips aren't going to get the across-the-board excellent scores that Apple is showing.
> Also, compiler support for CPUs is overrated. Heavy compiler investment was attempted with Itanium and debunked; giant out-of-order (OoO) CPUs like Intel's or Apple's M1 barely care about code quality, and the compilers have very little tuning for individual models.
This isn't entirely wrong, but it's definitely not complete. Itanium failed because the brilliant compilers never materialized and it was barely faster even with hand-tuned code, especially once you adjusted for cost, but that doesn't mean compiler support doesn't matter at all. I've definitely seen significant improvements from CPU family-specific tuning and, more importantly, when new features are added (e.g. SIMD, dedicated crypto instructions, etc.), a compiler or library which knows how to use them can see huge improvements on specific benchmarks. That's more what I had in mind, since those are a great example of where Apple's integration shines: when they have a task like "Make H.265 video cheap on a phone" or "Use ML to analyze a video stream", they can profile the whole stack, decide where it makes sense to add hardware acceleration, update the compiler toolchain and the higher-level libraries (e.g. Accelerate.framework), and ship the entire thing at a time of their choosing. AMD/Intel/Qualcomm and maybe nVidia have to get Microsoft/Linux and maybe someone like Adobe on board to get the same thing done.
That isn't a certain win — Apple can't work on everything at once and they certainly make mistakes — but it's hard to match unless they do screw up.
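To make the library half of that concrete, here's roughly the sort of thing I mean: the scalar loop is whatever the compiler gives you, while the one-line Accelerate call gets whatever vector path Apple has tuned for the chip you're running on. Sketch only, macOS/iOS-specific, built with "clang dot.c -framework Accelerate":

    /* dot.c: the same dot product two ways (illustration only) */
    #include <Accelerate/Accelerate.h>

    /* Plain C: the compiler may or may not vectorize this well. */
    float dot_scalar(const float *a, const float *b, int n) {
        float s = 0.0f;
        for (int i = 0; i < n; i++)
            s += a[i] * b[i];
        return s;
    }

    /* Accelerate: Apple routes this to whatever SIMD path they have
       tuned for the current hardware, and updates it when the chip changes. */
    float dot_accelerate(const float *a, const float *b, vDSP_Length n) {
        float s;
        vDSP_dotpr(a, 1, b, 1, &s, n);
        return s;
    }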
> Itanium failed because the brilliant compilers never materialized and it was barely faster even with hand-tuned code, especially once you adjusted for cost, but that doesn't mean compiler support doesn't matter at all.
What you said is true for libraries; I just don't think it's true for compiler optimizations. Even Apple's clang doesn't have new optimizations that kick in on their own; there are certainly new features, but they're usually intrinsics and other things that have to be adopted by hand. Apple thought this would happen automatically (it's part of what bitcode was sold as enabling), but in practice it hasn't.
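A concrete example of the "adopted by hand" part: no compiler will rewrite a generic table-driven CRC loop to use the hardware crc32 instruction for you (among other things, the polynomial has to match), so someone has to go write the intrinsic version. Using the x86/SSE4.2 flavour here just because it's the one I remember offhand; the ARM story with arm_acle.h is analogous. Illustration only, built with "gcc -O2 -msse4.2 crc.c":

    #include <nmmintrin.h>   /* SSE4.2 intrinsics */
    #include <stddef.h>
    #include <stdint.h>

    /* CRC-32C, one byte at a time, via the dedicated instruction.
       This only exists because somebody adopted the intrinsic by hand;
       the optimizer would never get here from a lookup-table loop. */
    uint32_t crc32c_hw(const uint8_t *buf, size_t len) {
        uint32_t crc = 0xFFFFFFFFu;
        for (size_t i = 0; i < len; i++)
            crc = _mm_crc32_u8(crc, buf[i]);
        return crc ^ 0xFFFFFFFFu;
    }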