I’d expect most vendors do, at least in their closed-source drivers.
You could also check in the Mesa project whether this is implemented, but it’s definitely possible to do.
Shader compilers mostly use LLVM, even though compile time is a constraint (shaders get compiled while the application is running). If the pattern is common enough, it’s definitely easy to match (it’s just two intrinsics, after all), meaning you can do it cheaply in InstCombine, which you’re going to be running anyway.
For some reason, I feel like this is harder to implement than you expect. The way to find out would be to collect a bunch of examples of people doing this optimization in shader code, look at the generated IR compared to the optimal version, and figure out a set of rules to detect the bad versions and transform them into the good one. Keep in mind that in the example, the addition operators could be replaced with logical OR operators, so there are definitely multiple variations that need to be detected and corrected.