It doesn't matter very much, in theory. Figuring out instruction boundaries in x86 code in parallel is rather involved, but after code has been seen once the boundaries can be marked with an extra bit in the L1 instruction cache, and in loops you're mostly hitting the decoded-uop cache anyway. The stricter memory-ordering model of x86 (TSO) versus ARM's weaker model probably has a lot of implications for how the cache hierarchy is designed, but I couldn't speculate in detail.
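To make the memory-model point concrete, here's a standard "message passing" litmus test in C++ (my own sketch, not from any cited source). On x86's TSO, ordinary stores and loads already stay in program order between these two threads, so the release/acquire pair compiles to plain mov instructions; on ARM's weaker model the same pair has to emit stlr/ldar (or barriers), because the hardware is otherwise free to reorder.

    // Message-passing litmus test: writer publishes data, then a ready flag;
    // reader spins on the flag, then reads the data.
    #include <atomic>
    #include <thread>
    #include <cassert>

    std::atomic<int> data{0};
    std::atomic<int> ready{0};

    void writer() {
        data.store(42, std::memory_order_relaxed);
        // On x86 this release store is just a normal store; TSO gives the ordering for free.
        ready.store(1, std::memory_order_release);
    }

    void reader() {
        // On ARM this acquire load becomes ldar (or a load + barrier) to forbid reordering.
        while (ready.load(std::memory_order_acquire) == 0) { /* spin */ }
        assert(data.load(std::memory_order_relaxed) == 42);  // must always hold
    }

    int main() {
        std::thread a(writer), b(reader);
        a.join(); b.join();
    }

The point isn't the atomics API itself, just that the guarantee the programmer relies on is enforced by the x86 memory system at all times, which is one of the places the memory model leaks into cache and store-buffer design.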
However, x86 has a ton of cruft in the instruction set which has to be implemented and kept working despite whatever microarchitectural changes happen. You have to worry about how your Spectre mitigations interact with call gates that haven't been used much since the 286, for instance. That's a lot of extra design work and even more extra verification work that has to happen with each new x86 core, which I think is most of the advantage that ARM has.
I don't know what sort of conflict of interest you might have on the matter. I have none. Just interested in being a good citizen and minimizing the environmental footprint of digital tech.
There may be some semantic hair-splitting over how one controls for different design features so that it's a proper "like-for-like" comparison. But for the longest time the narrative has been that ARM/RISC-style design is favorable for power consumption. "The use of ARM based systems has shown to be a good choice when power efficiency is needed without losing performance." [1]
Not an expert in this stuff, and I would stand corrected if, e.g., both this paper and the related narrative turn out to be a devious plot against x86.