Compilers like LLVM and GCC use heuristics, not human intervention. For inlining, a common heuristic is the size of the function: we inline (even recursive functions) until the function grows larger than a certain threshold. The threshold can be specified by a human (e.g. GCC's -finline-limit), but I believe that is rarely done.
This isn't compiling though; it's normalising. A heuristic like function size is useful when optimising a Turing-complete language, since (a) we have to rely on some heuristic, and (b) smaller binaries are generally more efficient, all else being equal, so we should avoid a size blow-up regardless of what we're optimising for (speed, size, memory, etc.).
In the case of normalising a non-Turing-complete language, we (a) don't need any heuristics, since beta-reduction is a complete and correct strategy, and (b) things like the size of a function tell us nothing about whether we've reached a normal form. In fact, I would imagine that the normal forms of real Dhall programs are generally much bigger than the programs themselves, since one of the main reasons to use a language like Dhall is to reduce repetition. Also, your heuristic is heavily dependent on the evaluation order. Say we have a program like this:
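Something along these lines, say (a hypothetical sketch rather than a real-world program; big-thing stands in for some arbitrarily large, expensive-to-normalise expression, and List/map and List/replicate are the usual Prelude functions):

```
-- Hypothetical sketch: big-thing stands in for an arbitrarily large expression
let List/map       = https://prelude.dhall-lang.org/List/map
let List/replicate = https://prelude.dhall-lang.org/List/replicate

let big-thing : Natural = 6 * 7   -- imagine something enormous here instead

-- A function that ignores its (potentially huge) argument entirely
let discard = λ(_ : Natural) → 0

in  List/map Natural Natural discard (List/replicate 1000000 Natural big-thing)
```

With a normal-order-style strategy, discard throws each copy of big-thing away before it is ever normalised, so the intermediate expressions stay small and a size heuristic never notices anything.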
On the other hand, an evaluation strategy like call-by-value will first evaluate big-thing, resulting in some arbitrarily large value (which may cause your heuristic to halt); then it will create 1000000 duplicates of that value (again causing a size-based heuristic to halt); and only then will it evaluate the functions and discard the big, duplicated expressions:
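The steps would look something like this (schematically; this is a sketch of the reduction order for the program above, not real Dhall output):

```
  List/map … discard (List/replicate 1000000 Natural big-thing)
⇒ List/map … discard (List/replicate 1000000 Natural <huge value>)    -- big-thing evaluated first
⇒ List/map … discard [ <huge value>, <huge value>, … ]                -- 1000000 copies of it created
⇒ [ 0, 0, … ]                                                         -- the copies are only discarded at the end
```

A size-based cutoff would give up somewhere in the middle of that, even though the final result is tiny.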