It's a mobile CPU with many silicon advantages (the widest decoder in the industry, memory closer to the CPU, the deepest re-order buffer of any CPU, and much more) plus a sane ISA and an optimized OS. So yeah, you're seeing the benefit of Apple's integration. That's why even the Anandtech page calls that graph "absurd": it seems unreal, but it's real.
Gotta be frank because it's not getting through: you're jumping way ahead here. Every time one of these threads comes up, there's an ever-increasing # of people who vaguely remember reading a story about Apple GeekBench numbers and therefore assume this one is credible too - I used to be one of those people. This has been going on regularly for 3-4 years now, and your interlocutor, as well as other comments on this article, is correct - comparing x86 versus ARM on GeekBench is nonsensical because of the way GeekBench eliminates thermal concerns and the impact of sustained load. Your iPhone can't magically do video editing or compile code faster than an i5.
My comment, and this specific thread, isn't even related to GeekBench. The graph I linked used SPEC instead of GB5. The gigantic architectural deep dive over on Anandtech even includes a discussion on the strengths and limits of both benchmarks, and how they make sense based on further micro-architecture testing.
The reason that graph doesn't include the A14 Firestorm -> M1 jump was simply timing. We know the thermal envelopes of the M1 and the cooling designs. We now have clock info thanks to GB5. So yes, the data is pretty solid. No one's saying that the iPhone beats the Mac (or a PC) at performance when you consider the whole system. Just that the CPU architecture can and will deliver higher performance given the M1's clocks, thermals and cooling. Remember that the A14/M1 CPUs are faster even at lower clock speeds.
That's comparing a hardware encoder to a software one, unfortunately, as the replies note.
It's unfortunately drowned out on Google by the CPU throttling scandal, but it's well known in AR dev (and if you get to talk to an Apple engineer away from the stage lights at WWDC) that you have to proactively choose to tune performance, or you'll get killed after a minute or two by thermal throttling.
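For what it's worth, "proactively tune" in practice mostly means watching the thermal state yourself and shedding work before the OS clamps you. A minimal sketch, assuming Foundation's ProcessInfo.thermalState API; the class name and the specific responses are placeholders, not anything Apple prescribes:

    import Foundation

    // Minimal sketch: watch the system thermal state and shed work before the
    // OS starts throttling. The responses below are placeholders.
    final class ThermalBudget {
        private var observer: NSObjectProtocol?

        func start() {
            observer = NotificationCenter.default.addObserver(
                forName: ProcessInfo.thermalStateDidChangeNotification,
                object: nil,
                queue: .main
            ) { [weak self] _ in
                self?.apply(ProcessInfo.processInfo.thermalState)
            }
            apply(ProcessInfo.processInfo.thermalState)
        }

        private func apply(_ state: ProcessInfo.ThermalState) {
            switch state {
            case .nominal:
                print("full workload")     // e.g. full-resolution AR rendering
            case .fair:
                print("dial back")         // drop optional effects, cap frame rate
            case .serious, .critical:
                print("minimum workload")  // lowest resolution, pause extras
            @unknown default:
                print("minimum workload")
            }
        }

        deinit {
            if let observer = observer {
                NotificationCenter.default.removeObserver(observer)
            }
        }
    }

The point is that the app decides what to shed; if you wait until .serious or .critical, the throttling has already hit.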
This raises the question of just why the Mac is doing software encoding. I think the hardware it's running on should have two compatible hardware encoders, one on the CPU and one on the GPU. Is the software being used incapable of hardware encoding? Does it default to software encoding because of its higher quality per bit? Was it configured to use software encoding (whether ignorantly or deliberately)?
Video encoding is generally done on CPUs because they can run more complicated video encoding algorithms with multiple passes. This generally results in smaller video files at the same quality. As you increase the compute intensity of the video encoder you get diminishing returns: a 30% lower bitrate might need 10x as much CPU time. That tweet says more about the type of encoder and the chosen encoder settings than about the hardware.
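To make the hardware-vs-software split concrete: on macOS an app has to tell VideoToolbox whether it wants the hardware encoder, which is exactly the kind of choice those numbers hinge on. A rough sketch, assuming H.264 and arbitrary dimensions; the function name is mine, and whether the tool in that tweet even goes through VideoToolbox (rather than shipping its own software encoder like x264) is an open question:

    import CoreMedia
    import VideoToolbox

    // Sketch: create a compression session that either insists on the hardware
    // H.264 encoder or disables it so a software encoder is used instead.
    // The specification keys are macOS-side; dimensions and codec are placeholders.
    func makeSession(useHardware: Bool) -> VTCompressionSession? {
        let spec: [String: Any] = [
            kVTVideoEncoderSpecification_EnableHardwareAcceleratedVideoEncoder as String: useHardware,
            kVTVideoEncoderSpecification_RequireHardwareAcceleratedVideoEncoder as String: useHardware,
        ]

        var session: VTCompressionSession?
        let status = VTCompressionSessionCreate(
            allocator: nil,              // default allocator
            width: 1920,
            height: 1080,
            codecType: kCMVideoCodecType_H264,
            encoderSpecification: spec as CFDictionary,
            imageBufferAttributes: nil,
            compressedDataAllocator: nil,
            outputCallback: nil,         // frames can be delivered via an output handler later
            refcon: nil,
            compressionSessionOut: &session
        )
        return status == 0 ? session : nil   // 0 == noErr
    }

If the hardware session can't be created, or the tool simply ships its own CPU encoder, everything lands on the CPU cores, which is the parent comment's point.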
Imagine going on a hike and climbing an exponential slope like 2^x. You go up to a height of 2^4, come back down, and repeat this three times, so you have hiked 12 km (3 × 4) in total. Then there is an athlete who goes up to a height of 2^8. He says he has hiked 8 km, and you laugh at him because of how sweaty he is despite having walked a shorter distance than you. In reality, 3 × 2^4 (48) is nowhere near 2^8 (256). The athlete put in a lot more effort than you.