Latency is almost always measured in frames: the renderer is one frame behind the compositor, which is one frame behind scanout.
If each step takes 4 msec (240 Hz) instead of 16.7 msec (60 Hz), and you're the same number of steps behind, the latency is reduced by (16.7 - 4) * nsteps; with 3 steps, that's 50 msec - 12 msec = 38 msec.
Now that's assuming your system can actually keep up and render and composite frames that fast, which is where the "brute force" comes into play.
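To make the arithmetic concrete, here's a minimal sketch of that calculation (assuming the 3-stage pipeline above; pipeline_latency_ms is just an illustrative name):

    def pipeline_latency_ms(refresh_hz, stages=3):
        # Each pipeline stage (render, composite, scanout) adds one frame period.
        return stages * 1000.0 / refresh_hz

    print(pipeline_latency_ms(60))    # 50.0 ms
    print(pipeline_latency_ms(240))   # 12.5 ms, saving ~38 ms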
Why is that delay coupled to the monitor that's plugged into the computer? Could we ask the computer to produce graphics at 240 Hz and then take every 4th frame?
Then we have "average" delay in ms rather than frames.
(Asking as someone who doesn't know this area well)
You could do that in theory, but it would be wasteful to render frames that are never displayed. If you can modify the software, it's better to modify it as the article advocates, stripping the latency without the waste.
Only if you know how long it will take to render a frame.
If you render "as fast as possible" (up to a max of xyz Hz), you get better worst-case performance than if you delay compositing: a render time of 2 physical frames results in only 1 missed frame, whereas with delayed compositing a render time of just 1.26 physical frames can result in 2 missed frames.
Of course, nothing is simple in the real world; the flip side is that rendering the extra frames creates waste heat, which slows down modern processors with insufficient cooling.
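Here's a toy model of that worst case (the 0.75-frame compositor delay is my assumption, chosen to reproduce the 1.26-frame number above; times are in units of the display's frame period):

    import math

    def missed_frames(render_time, start_delay=0.0):
        # A frame ready at time t (in frame periods) makes the vsync at
        # ceil(t); anything past the first vsync counts as a miss.
        ready = start_delay + render_time
        return max(0, math.ceil(ready) - 1)

    # Render as fast as possible: start right at the vsync.
    print(missed_frames(2.0))          # 1 missed frame
    # Delayed compositing: start ~0.75 of a period late to cut latency.
    print(missed_frames(1.26, 0.75))   # 2 missed frames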