Yes. It's handy for checking whether there's any obvious bias in your data. I don't know how that works for LLMs, though; you'd probably need some fancy tricks to visualize that.
Thank you. Some variation of Circuit Tracing[1] is what I was thinking of. It would be useful to visualize each token's effect on inference in real time, tweak the input parameters, individual tensor weights, etc., and watch the inference collapse to a different result. (Does the output of LLMs need to be next-token based? Would larger, pre-trained/synthesized tokens with known and expected output allow for faster inference, or is that simply tool calling?)
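To make the "tweak a weight, watch the result flip" idea concrete, here's a minimal sketch with a toy one-layer model over a fake 3-token vocabulary. Everything here (the embedding, weights, and vocab) is made up for illustration; a real LLM is a much deeper composition of the same kind of function, but the principle of perturbing a single weight and observing the argmax change is the same.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def next_token_distribution(embedding, weights):
    # One linear layer: logits[i] = dot(embedding, weights[i]).
    logits = [sum(e * w for e, w in zip(embedding, row)) for row in weights]
    return softmax(logits)

# Toy vocabulary and parameters, invented for this example.
vocab = ["cat", "dog", "fish"]
embedding = [1.0, 0.5]           # fake input embedding
weights = [[0.9, 0.2],           # one row per vocab token
           [0.8, 0.3],
           [0.1, 0.2]]

before = next_token_distribution(embedding, weights)
print(vocab[before.index(max(before))])  # → cat

# Perturb a single weight and the predicted next token flips.
weights[1][0] += 0.5
after = next_token_distribution(embedding, weights)
print(vocab[after.index(max(after))])    # → dog
```

A real-time visualizer would do essentially this at scale: re-run the forward pass after each perturbation and render how the output distribution shifts.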
I understand LLMs and ML models are not linear functions, but they are still functions (no?), and some complex higher-level math can be grasped intuitively when visualized.