It is definitely a design goal, but I would be very surprised if SHA-2 did not have two inputs that differ by only a single bit yet have the same output.
Finding those inputs is essentially impossible, but for a true 'random oracle' they are likely to exist.
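Rough back-of-the-envelope sketch (the 512-bit input length below is just an arbitrary illustration): treating the hash as a random oracle with a 256-bit digest, the expected number of one-bit-apart colliding pairs is already astronomical once the inputs are longer than the digest.

    import math

    # Expected number of colliding pairs (x, y) that differ in exactly one bit,
    # among all inputs of a fixed length L bits, if the hash behaves like a
    # random oracle with a d-bit digest.
    #
    #   pairs    = 2**L * L / 2      (each input has L one-bit neighbors)
    #   E[count] = pairs * 2**-d     (linearity of expectation)
    #
    # Computed in log2 to avoid huge intermediate numbers.
    def log2_expected_one_bit_collisions(L: int, d: int = 256) -> float:
        return (L - 1) + math.log2(L) - d

    print(log2_expected_one_bit_collisions(512))   # ~264, i.e. ~2**264 expected pairs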
Note: correct me if I'm wrong, but to be relevant to this theorem and definition of sensitivity, the two strings would have to be inputs to the same circuit and therefore the same length, which makes finding them much harder than finding hash collisions in general.
And I thought there were ways to construct hash functions so that all inputs of the same length have distinct outputs?
I doubt any study of hash functions restricts itself to a very tiny subset of the infinitely many possible inputs. The security of hash functions is always studied under the assumption of arbitrary inputs.
The number of single-bit flips that would each, on its own, change the output.
From the article: "If, say, there are seven different lies you could have told that would have each separately flipped the outcome, then for your loan profile, the sensitivity of the Boolean function is seven."
Take all possible input strings
Given any input string, the sensitivity at that string is how many individual bits you could flip, one at a time, that would cause the output to flip.
Now take the /maximum/ of those per-input sensitivities over all input strings (see the sketch below).
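A minimal sketch of that definition in Python (maj3 here is just a toy example of mine, not anything from the article):

    from itertools import product

    def input_sensitivity(f, x):
        # Number of single bits of x whose flip changes f's output.
        return sum(f(x[:i] + (1 - x[i],) + x[i+1:]) != f(x) for i in range(len(x)))

    def sensitivity(f, n):
        # Sensitivity of a Boolean function on n bits: the max over all inputs.
        return max(input_sensitivity(f, x) for x in product((0, 1), repeat=n))

    maj3 = lambda x: int(sum(x) >= 2)          # majority of 3 bits
    print(input_sensitivity(maj3, (1, 1, 0)))  # 2: flipping either 1 flips the majority
    print(sensitivity(maj3, 3))                # 2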
For a hash function, say, you'd want the minimum instead, or something like a 1st percentile.
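Something like this, as a sketch (same helper as above; the 4-bit toy function is just a stand-in, not a real hash):

    from itertools import product
    from statistics import quantiles

    def input_sensitivity(f, x):
        # Same helper as above: bits of x whose individual flip changes f(x).
        return sum(f(x[:i] + (1 - x[i],) + x[i+1:]) != f(x) for i in range(len(x)))

    toy = lambda x: (x[0] & x[1]) ^ x[2] ^ (x[1] & x[3])   # toy stand-in, not a real hash

    profile = [input_sensitivity(toy, x) for x in product((0, 1), repeat=4)]
    print(max(profile))                  # the standard (worst-case) sensitivity
    print(min(profile))                  # what you'd want to be large for avalanche-style goals
    print(quantiles(profile, n=100)[0])  # roughly the 1st percentile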