Depends on what language you're using. In math notation, given `y = x * x`, you can work backwards from `y = 4` to figure out the value of x, whereas in, say, JavaScript, `y = x * x` means exactly "compute y as the value of x times itself" and nothing more. For illustration, we could also compute the square of x in a different imperative form, e.g. as a loop over additions.
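To make that concrete, here's a rough sketch: both versions spell out *how* to produce the square, and neither lets you run the relationship backwards from y to x.

```js
const x = 2;

// Direct form: "compute y as x times itself", and only that.
let y = x * x;

// Same square as a loop over additions (assuming a non-negative integer x):
// add x to an accumulator, x times.
let ySum = 0;
for (let i = 0; i < x; i++) {
  ySum += x;
}

console.log(y, ySum); // 4 4 -- knowing y === 4 tells the program nothing about x
```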
Similarly, in mathematical notation, `f(g(x))` can be a way of expressing the existence of some sort of law, e.g. maybe f and g commute under composition. That means that if code were written as such in a 5th-gen language[1], the underlying engine would be free to recompile `f(g(x))` into `g(f(x))`, assuming the commutative property holds and the performance is better. By contrast, in an imperative language, `f(g(x))` generally compiles to that exact order of operations (unless you have a mythical sufficiently smart compiler).
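As a toy illustration (these particular f and g are made up for the example): incrementing by constants commutes under composition, so an engine that *knew* the law held could pick whichever order is cheaper, but a JavaScript engine must evaluate exactly the order written.

```js
// f and g happen to satisfy f(g(x)) === g(f(x)) for all x.
const f = x => x + 1;
const g = x => x + 2;

console.log(f(g(10))); // 13
console.log(g(f(10))); // 13 -- same value, but JS still runs f-after-g as written
```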
I can see an argument about JIT compilers being smart in some cases, but the philosophical distinction between imperative and declarative paradigms is that with a declarative style, the compiler can transparently swap out units of arbitrary complexity. For example, given some CSS rules, a browser engine can decide to paint the screen buffer however it wants, be it top-to-bottom, edge-to-center, layer-over-layer, etc., regardless of how the CSS was originally expressed.
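Here's a very loose sketch of that idea, with the "screen" shrunk to a tiny buffer and the layer objects standing in for CSS rules; the strategy inside `render` is just one of many an engine could legally choose.

```js
// Declarative input: layers with z-indices. The final buffer is fully
// determined by these rules, not by whatever order we iterate in.
const layers = [
  { z: 2, color: 'T' }, // tooltip
  { z: 0, color: 'B' }, // background
  { z: 1, color: 'S' }, // sidebar
];

// One possible strategy: paint back-to-front in z order. An engine could
// instead paint front-to-back with occlusion tests, or tile the buffer and
// paint regions in parallel; the declared result is the same either way.
function render(layers) {
  const buffer = new Array(3).fill(' ');
  for (const layer of [...layers].sort((a, b) => a.z - b.z)) {
    buffer.fill(layer.color); // each layer covers the whole buffer; topmost wins
  }
  return buffer.join('');
}

console.log(render(layers)); // "TTT"
```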