Good read!
Does this new proof have an effect on cryptography, given that it's often desirable for cryptographic functions to be highly sensitive (flip one bit of the input and you get a very different output)?
Nah. Computing the sensitivity of a given function has always been easy enough: flip bits and watch the output. This result is about how sensitivity relates to the other complexity measures studied in the field.
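A minimal brute-force sketch of "flip bits and watch the output" in Python (the majority function is just an illustrative stand-in, not anything from the article):

```python
# Brute-force sensitivity: flip each bit and see whether the output changes.
from itertools import product

def majority(bits):
    """Boolean function: 1 if more than half the bits are set."""
    return int(sum(bits) > len(bits) / 2)

def local_sensitivity(f, x):
    """Count the single-bit flips of x that change f's output."""
    base = f(x)
    return sum(f(x[:i] + (1 - x[i],) + x[i + 1:]) != base for i in range(len(x)))

def sensitivity(f, n):
    """Sensitivity of f: the maximum local sensitivity over all 2^n inputs."""
    return max(local_sensitivity(f, x) for x in product((0, 1), repeat=n))

print(local_sensitivity(majority, (1, 1, 0, 0, 1)))  # 3: flipping any of the three 1s flips the majority
print(sensitivity(majority, 5))                      # 3
```

Note the catch: the sensitivity of the whole function needs a loop over all 2^n inputs, which is part of why relating sensitivity to other measures is more interesting than computing it point by point.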
It is definitely a design goal, but I would be very surprised if SHA-2 did not have two inputs that differ by only a single bit yet produce the same output.
Finding those inputs is essentially impossible, but for a true 'random oracle' they are likely to exist.
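A toy illustration of why such pairs are expected to exist (a sketch, not a statement about real SHA-2 security): truncate SHA-256 to 16 bits so the search becomes feasible, then look for two inputs one bit apart with the same truncated hash.

```python
# Toy sketch: find two inputs differing in one bit whose *truncated*
# hashes collide. With a 16-bit output there are only 2^16 values, so
# roughly 65k hash calls suffice on average; with the full 256-bit
# output the same search would take on the order of 2^256 work, which
# is why such pairs are believed to exist but are infeasible to find.
import hashlib

def truncated_hash(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()[:2]  # first 16 bits only

def find_one_bit_collision():
    counter = 0
    while True:
        msg = counter.to_bytes(8, "big")
        h = truncated_hash(msg)
        for byte in range(len(msg)):
            for bit in range(8):
                flipped = bytearray(msg)
                flipped[byte] ^= 1 << bit
                if truncated_hash(bytes(flipped)) == h:
                    return msg, bytes(flipped)
        counter += 1

a, b = find_one_bit_collision()
print(a.hex(), b.hex(), truncated_hash(a).hex())
```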
Note: correct me if I'm wrong, but to be relevant to this theorem and its definition of sensitivity, the two strings would have to be inputs to the same circuit and therefore the same length, which is much harder than finding hash collisions in general.
And I thought there were ways to construct hash functions so that all inputs of the same length have different outputs?
I doubt any study of hash functions restricts itself to a tiny subset of the (infinitely many) possible inputs. The security of hash functions is always studied under the assumption of arbitrary inputs.
The number of single-bit flips that would each, on its own, change the output.
From the article: "If, say, there are seven different lies you could have told that would have each separately flipped the outcome, then for your loan profile, the sensitivity of the Boolean function is seven."
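To make the article's example concrete, here is a hypothetical loan rule (invented here for illustration, not taken from the article): approve a profile when at least 7 of 10 positive attributes are present. A profile with exactly 7 positives sits right at the threshold, so each of its seven positive bits can separately flip the outcome:

```python
# Hypothetical loan rule for illustration: approve when at least
# 7 of 10 positive attributes are present. For a profile with
# exactly 7 positives, flipping any one of those 7 bits changes
# the decision, i.e. the sensitivity at this profile is 7.

def approve(bits):
    return int(sum(bits) >= 7)

profile = (1, 1, 1, 1, 1, 1, 1, 0, 0, 0)
flips = sum(
    approve(profile[:i] + (1 - profile[i],) + profile[i + 1:]) != approve(profile)
    for i in range(len(profile))
)
print(flips)  # 7
```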