Overhead only "loosely" relates to better performance, and the v-table dispatch that the visitor pattern is built upon cannot be considered 'zero overhead'.
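To make the dispatch cost concrete, here is a minimal sketch of the double dispatch the visitor pattern relies on (the type and method names are mine, not from the article): every node traversed costs at least two virtual calls, one through `accept` and one through the visitor method.

```java
// Hypothetical minimal visitor: two virtual calls per visited node.
interface Node {
    <R> R accept(Visitor<R> v);   // first virtual call: dispatch on the node type
}

interface Visitor<R> {
    R visitLeaf(Leaf leaf);       // second virtual call: dispatch on the visitor type
    R visitBranch(Branch branch);
}

final class Leaf implements Node {
    final int value;
    Leaf(int value) { this.value = value; }
    public <R> R accept(Visitor<R> v) { return v.visitLeaf(this); }
}

final class Branch implements Node {
    final Node left, right;
    Branch(Node left, Node right) { this.left = left; this.right = right; }
    public <R> R accept(Visitor<R> v) { return v.visitBranch(this); }
}

// Example visitor that sums the leaves of a tree.
final class SumVisitor implements Visitor<Integer> {
    public Integer visitLeaf(Leaf leaf) { return leaf.value; }
    public Integer visitBranch(Branch b) {
        return b.left.accept(this) + b.right.accept(this);
    }
}
```

Whether that dispatch cost matters in practice depends on the workload (and on whether the JIT can devirtualize the calls), which is exactly why it shouldn't be called 'zero overhead' without measurement.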
It's a trade-off, and the article shows no proof of improved performance from reducing the 'overhead'. For example, object allocation in the JVM is really cheap (a pointer bump in the TLAB plus a check and a perfectly predicted jump), reusing objects from a recycling list/buffer is similarly cheap, etc. In the end the post is about performance, and my point was that the multiple mentions of 'zero overhead' are incorrect.
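For illustration, a hedged sketch of the "reuse objects from a recycling buffer" idea mentioned above; the names and sizes are illustrative, not from the article, and whether this actually beats plain allocation would need a benchmark (e.g. JMH), which the article does not provide.

```java
// Hypothetical mutable value object that can be reused instead of reallocated.
final class Point {
    double x, y;
    Point set(double x, double y) { this.x = x; this.y = y; return this; }
}

// Hypothetical fixed-size ring of pre-allocated objects handed out in rotation.
final class PointPool {
    private final Point[] ring;
    private int next;

    PointPool(int capacity) {
        ring = new Point[capacity];
        for (int i = 0; i < capacity; i++) ring[i] = new Point();
    }

    // Hands out the next slot; the caller must be done with it before the ring wraps around.
    Point acquire(double x, double y) {
        Point p = ring[next];
        next = (next + 1) % ring.length;
        return p.set(x, y);
    }
}

// Compare against plain allocation, which on HotSpot is typically a TLAB
// pointer bump plus a well-predicted branch:
//   Point p = new Point().set(x, y);
```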
Ah yes. So triggering a GC in the middle of your duplicated tree-on-heap traversal will totally not risk any pipeline stalls. I'm sure that N vtable calls per node are totally worse than re-enumerating the objects on the heap N times.
When I talk about how the OO community's anti-intellectual toxicity tries to buzzword-bingo architecture concerns away, Visitor is my go-to example. Because that's sure what it seems like to me.
You will 100% end up with pipeline stalls with a sufficiently large, random input anyway. The complaint about vtables is not substantial in this case, and I welcome you to prove otherwise.