Well, if I was writing something which had really serious failure consequences (space, aviation, crypto, bitcoin wallets, nuclear systems, etc) I think I would go to great lengths to minimise the unnecessary complexity, remove entire bug classes, etc.
Now, a compiler is not a simple thing, and one which performs optimisation passes is doing something complicated to code one may already be reasoning about in complicated ways.
I've encountered a number of compiler bugs, but usually in the sense of the compiler crashing or doing something violently wrong. I've never encountered one doing something very subtly wrong, but I consider it likely that that could be going on and that I have no way to test for it. The only way would be to have dual implementations producing matching output, and even that would depend on the test vectors, and could still suffer a common-mode error. So, for me, it would be very useful to be able to remove the compiler from the list of possibly buggy components. In fact you could say that this is the first C compiler ever; the other compilers have been for some close cousin of C.
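To make the dual-implementation idea concrete, here is a toy sketch (not compiler testing itself, and not anything from the thread): two independently written CRC-32 implementations are run against the same random test vectors and compared. The function names and parameters are my own invention for illustration; the caveats from above apply directly, since agreement only covers the vectors tried, and a common-mode error would pass undetected.

```python
import random
import zlib

# Independent implementation #1: bitwise CRC-32 (reflected, polynomial 0xEDB88320).
def crc32_bitwise(data: bytes) -> int:
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0xEDB88320 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

# Implementation #2 is the platform's zlib.crc32, standing in for the
# "other compiler" in a differential test.
def differential_test(trials: int = 1000, seed: int = 42):
    rng = random.Random(seed)
    for _ in range(trials):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(64)))
        if crc32_bitwise(data) != zlib.crc32(data):
            return data  # divergence: at least one implementation is wrong
    return None  # no divergence on these vectors; common-mode errors remain possible
```

The standard check value for CRC-32, `crc32_bitwise(b"123456789") == 0xCBF43926`, gives an independent anchor beyond mere agreement between the two implementations.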
So, I don't really think your argument holds. Testing is notoriously hard to get right, and testing which simultaneously accounts for complex compiler interactions seems implausible. Especially compared to 'use a verified compiler and stop worrying about whether the compiler got it wrong'.
Not exactly, but if we look at it that way, how is it wrong?
I write buggy software; what do you write?
With the approach I take to writing software, the methods by which I flush out my own bugs also catch compiler issues in the same stroke.
If I didn't write buggy software, I could ditch testing, right?
Except, oops, not without a proven toolchain.