I guess they're just saying that LLVM's control-flow graph is implemented as individually heap-allocated objects for the nodes, with pointers for the edges. (I haven't read the LLVM code, but that sounds plausible.)
Even if those allocations are fast on Linux/macOS, I wonder whether that representation has other downsides, e.g. performance lost to cache misses when walking the graph. Could you do better with a bump allocator instead of malloc? Then again, maybe graph algorithms are just inherently cache-unfriendly, no matter the representation.
This guy builds compilers. I guess he added some limitations to his model so that his control flow can be represented as a DAG. I have a compiler that represents its control flow as a DAG too.
Control flow is not a DAG.