I think LLMs will have a hard time replacing traditional diff algorithms. You want your diff to be reliable and predictable, and it must never omit relevant changes. Reliability of that kind is not really a strength of LLMs.
I think a system that parses the code and uses verifiable rules to hide invariant changes would have a much easier time being adopted. I may be biased since I work on SemanticDiff (think difftastic but for VS Code and GitHub) and have implemented such rules myself. There have been several times where I've thought about implementing a new rule, only to find out that some obscure edge case would not be handled correctly. So I don't see how LLMs could handle these cases correctly in the near future.
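To illustrate the rule-based idea: here's a minimal sketch (not how SemanticDiff actually works) using Python's built-in `ast` module. Comparing the parsed trees of two versions hides invariant changes like whitespace or redundant parentheses, while a real semantic change still shows up:

```python
import ast

# Two versions of the same function: only formatting differs
# (extra whitespace, redundant parentheses).
before = "def add(a, b):\n    return a + b\n"
reformatted = "def add(a,  b):\n    return (a + b)\n"

# A version with a genuine semantic change.
changed = "def add(a, b):\n    return a - b\n"

def same_semantics(src_a, src_b):
    # ast.dump omits line/column info by default, so two sources
    # that parse to the same tree compare equal.
    return ast.dump(ast.parse(src_a)) == ast.dump(ast.parse(src_b))

print(same_semantics(before, reformatted))  # True: invariant change, can be hidden
print(same_semantics(before, changed))      # False: real change, must be shown
```

The hard part, as noted above, is the edge cases: deciding which tree differences are truly invariant depends on language semantics (evaluation order, operator overloading, etc.), which is exactly where hand-written verifiable rules earn their keep.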
In a world where you can pass a whole git repo as part of the context window of a model, we see potential not only to reduce hallucination but also to tailor diffing to a company's code style.