> [Huang] was able to prove that in any collection of more than half the points in an n-dimensional cube, there will be some point that is connected to at least √n of the other points — and the sensitivity conjecture instantly followed from this result
Well, the eigenvalues (of the adjacency matrix) used in the proof correspond to eigenvectors, which span fixed (invariant) linear subspaces of the corresponding linear map. So there is some "lexical" connection. But apart from that I'm not sure I share your sense that this is "some generalization of a fixed point theorem". Can you give us more insight?
> Other measures involve looking for the simplest way to write the Boolean function as a mathematical expression, or calculating how many answers the banker would have to show a boss to prove they had made the right loan decision. There’s even a quantum physics version of query complexity in which the banker can ask a “superposition” of several questions at the same time. Figuring out how this measure relates to other complexity measures has helped researchers understand the limitations of quantum algorithms.
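The "show a boss" measure is certificate complexity, and for tiny functions you can compute it by brute force. A minimal sketch in Python (the "loan rule" below is my own hypothetical, not from the article):

    from itertools import combinations

    def completions(x, S, n):
        # All n-bit inputs (as ints) that agree with x on the coordinates in S.
        free = [i for i in range(n) if i not in S]
        for m in range(1 << len(free)):
            y = x
            for j, i in enumerate(free):
                y = (y & ~(1 << i)) | (((m >> j) & 1) << i)
            yield y

    def certificate_size(f, x, n):
        # Fewest bits of x the banker must reveal so that f's value is forced:
        # the smallest S such that f is constant on all inputs agreeing on S.
        for k in range(n + 1):
            for S in combinations(range(n), k):
                if len({f(y) for y in completions(x, S, n)}) == 1:
                    return k

    # Hypothetical loan rule: approve iff bit 0 (steady income) is set and
    # at least one of bits 1, 2 (savings, guarantor) is set.
    f = lambda x: int(bool(x & 0b001) and bool(x & 0b110))
    print(certificate_size(f, 0b011, 3))  # 2: revealing bits 0 and 1 suffices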
I don’t exactly follow the above either, but I believe "fixed point" is meant in a different sense here: a function f for which there exists a point x0 with x0 = f(x0).
What a fascinating article and kudos to Huang for solving it. I’m excited to see what benefits this has in the future for computing. Not many articles do I read all the way through (I’ve the attention span of a fly these days) but this one I did.
Good read!
Does this new proof have an effect on cryptography given that it's often desirable to have highly sensitive cryptographic functions (you flip one bit in the input and get a very different output)?
Nah. Computing sensitivity has always been easy enough: flip bits and watch the output. This is about how it relates, in a general way, to the other complexity measures studied in the field.
It is definitely a design goal, but I would be very surprised if SHA-2 did not have two inputs that differ by only a single bit yet have the same output.
Finding those inputs is essentially impossible, but for a true 'random oracle' they are likely to exist.
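A back-of-envelope count (my own numbers, assuming a 256-bit output and, say, 512-bit inputs) supports both halves of that:

    # Any fixed pair of distinct inputs to an ideal 256-bit random oracle
    # collides with probability 2**-256.  There are 2**512 * 512 / 2 = 2**520
    # pairs of 512-bit inputs differing in exactly one bit, so the expected
    # number of single-bit-flip collisions is:
    expected = 2**520 // 2**256
    print(expected == 2**264)  # True: ~2**264 such pairs should exist,
    # yet far beyond the ~2**128 evaluations anyone could ever perform.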
Note: correct me if I’m wrong, but to be relevant to this theorem and definition of sensitivity, the two strings would have to be for the same circuit and therefore be the same length, which is much harder than finding hash collisions in general.
And I thought they had ways to construct hash functions so that all the inputs of the same length have a different output?
I doubt any study of hash functions restricts itself to a tiny subset of the (infinitely many) possible inputs. The security of hash functions is always studied under the assumption of arbitrary inputs.
The number of single-bit flips that would change the output on their own.
From the article: "If, say, there are seven different lies you could have told that would have each separately flipped the outcome, then for your loan profile, the sensitivity of the Boolean function is seven."
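That per-input count is easy to compute directly; a minimal sketch (parity is my own choice of example, not the article's):

    def sensitivity_at(f, x, n):
        # Number of single-bit flips of the n-bit input x that change f(x).
        return sum(f(x) != f(x ^ (1 << i)) for i in range(n))

    # Parity flips whenever any single bit flips, so it is maximally sensitive.
    parity = lambda x: bin(x).count('1') % 2
    print(sensitivity_at(parity, 0b011, 3))  # 3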
>the new proof is so simple that one researcher summed it up in a single tweet.
Didn't look that simple to me! Reminds me of Andrew Ng showing his students the simple one-liner to solve the cocktail party problem in Octave. There's a lot represented in that one line of code!
That is the solution! See? It's so simple it seems like a formulation of the problem itself!
The solution is to think of the inputs (e.g. '001') as the corners of an n-dimensional cube, where n is the length of the input.
So, for example, a Boolean function on 5 bits ('01010') would require a 5-dimensional cube. From there, you check whether moving from one input ('00001') to an adjacent input ('00011') causes a flip in the output. If it does, you label that edge "red", and if it doesn't, you label it "blue". Then you merely find the vertex touching the highest number of red edges, and that count is the sensitivity.
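A minimal sketch of that cube walk (my own code): count the red edges at each vertex and take the maximum.

    def max_sensitivity(f, n):
        # An edge (x, x ^ bit) of the n-cube is 'red' if f flips across it;
        # the sensitivity of f is the most red edges meeting at any one vertex.
        return max(
            sum(f(x) != f(x ^ (1 << i)) for i in range(n))  # red edges at x
            for x in range(1 << n)
        )

    # OR of 5 bits: the all-zeros corner meets 5 red edges, so s(f) = 5.
    print(max_sensitivity(lambda x: int(x != 0), 5))  # 5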
The article is quite clear that this formulation of the problem has been around for a while and did not lead to a proof until Huang had his special insight.
> Huang knew, as did the broader research community, that the sensitivity conjecture could be settled if mathematicians could prove an easily stated conjecture about collections of points on cubes of different dimensions.
> In 1992, Craig Gotsman, now of the New Jersey Institute of Technology, and Nati Linial of Hebrew University figured out that proving the sensitivity conjecture can be reduced to answering a simple question about cubes of different dimensions: If you choose any collection of more than half the corners of a cube and color them red, is there always some red point that is connected to many other red points?
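For very small n you can confirm the cube statement exhaustively; a brute-force sketch (my own code, checking the √n bound Huang ultimately proved):

    from itertools import combinations
    from math import sqrt

    def check_cube(n):
        # Every set of 2**(n-1)+1 vertices of the n-cube should contain a
        # vertex with at least sqrt(n) neighbors inside the set.
        for red in combinations(range(1 << n), (1 << (n - 1)) + 1):
            red_set = set(red)
            best = max(sum((v ^ (1 << i)) in red_set for i in range(n))
                       for v in red)
            if best < sqrt(n):
                return False
        return True

    print(check_cube(3), check_cube(4))  # True True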
An equivalent way to think of it is as the adjacency matrix of a graph in which each vertex is an n-bit string connected to the n strings that differ from it in exactly one bit. The n-cube is the natural picture because the bits act as spatial coordinates: every vertex of the cube is a bit string, and every edge is a single-bit flip.
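On the eigenvalue side, the heart of the proof (as I read the paper linked elsewhere in the thread, arXiv:1907.00847) is a signed version of this adjacency matrix B_n satisfying B_n^2 = n*I, so its eigenvalues are exactly ±√n. A quick numerical check:

    import numpy as np

    def signed_cube_matrix(n):
        # Recursive signed adjacency matrix of the n-cube:
        # B_1 = [[0,1],[1,0]],  B_n = [[B_{n-1}, I], [I, -B_{n-1}]].
        B = np.array([[0.0, 1.0], [1.0, 0.0]])
        for _ in range(n - 1):
            I = np.eye(len(B))
            B = np.block([[B, I], [I, -B]])
        return B

    for n in range(1, 6):
        vals = np.linalg.eigvalsh(signed_cube_matrix(n))
        print(n, np.unique(np.round(vals, 8)))  # [-sqrt(n), sqrt(n)]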
Can someone explain this in a slightly less dumbed-down way. Having trouble following all the wooly explanations.
So far I get that we have a function that maps a string of bits to a single bit. The sensitivity at a given input is the number of bits that, when flipped individually, change the output; the sensitivity of the function is the maximum of that count over all possible inputs. (A likelihood summed across inputs would be a different measure, closer to average sensitivity.)
The history of computer science is daunting. Unlike physics, though, it's a history of incremental innovations that culminated in what we have today. Thus, if you go back in time far enough, things eventually become manageable.
A good primer on the history of CS is probably Code: The Hidden Language of Computer Hardware and Software, by Charles Petzold. I've put together some notes on it here: http://alvaroduran.com/code.
Is sensitivity weighted by stage for multi-stage logic chains? E.g., for the example graphic, we could argue that the output of the OR into the AND is an input, or in the bank loan example it might ask "Are you married?" and, if you answer yes, ask how much your partner makes, making that a second-stage input to the final Boolean calculation. Is that taken into consideration when calculating sensitivity?
No. To compute sensitivity we consider the entire Boolean function abstractly, without any reference to a particular implementation. There are other, equivalent implementations of the same function with different arrangements (and numbers) of gates, so this wouldn't necessarily make sense to include. There are measures of complexity that do (at least sort of) take the gate-level implementation into account, and they typically look at the circuit with the fewest gates, or the shortest paths, that computes the function.
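A tiny demonstration of that point (my own example): two gate arrangements computing the same function share one truth table, so any measure defined on the function alone, sensitivity included, cannot tell them apart.

    # Two implementations of the same 3-bit function:
    f1 = lambda a, b, c: (a and b) or (a and c)   # two ANDs feeding an OR
    f2 = lambda a, b, c: a and (b or c)           # factored: one OR, one AND

    def truth_table(f, n=3):
        return [f(*(((x >> i) & 1) for i in range(n))) for x in range(1 << n)]

    # Identical tables, hence identical sensitivity: the layout never enters.
    assert truth_table(f1) == truth_table(f2)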
Usually with things like this, the result is already fairly well established, or close enough at least, within the scale any real-world application would require.
You can think of it like the four color theorem. A beautiful theoretical result (though a far less beautiful proof), but the only practical significance is that now cartographers know they'll never need that extra crayon.
Yes famous conjectures usually already have a wealth of "downstream" results predicated on their truth value. But sometimes the proof technique is itself novel and that can carry over to other fields.
I think the most interesting part of the paper isn't simply the proof of the conjecture but also the sqrt(n) bound. I doubt many downstream results were predicated on that particular bound.
>Huang’s result is even stronger than necessary to prove the sensitivity conjecture, and this power should yield new insights about complexity measures.
If I interpret this correctly it's a tighter bound than the original conjecture, so it should allow better optimizations.
Well graph coloring in a more general sense is used for things like register allocation in compilers. So any proofs or increase in theoretical knowledge in the area could lead to improvements in that area.
Large Boolean decision functions often arise in implementing logic circuits and compilers. I'm not an expert, but I imagine that an upper bound relating sensitivity to other metrics may yield insights into optimizations here. Since these problems are often intractable due to NP-completeness, any bounding functions that can serve as heuristics can enable more daring optimizations.
Most of the time the solution of the problem doesn't really matter. The useful bits are the "technologies" that you discovered in order to solve the problem.
> Most importantly, though, Huang’s result lays to rest nagging worries about whether sensitivity might be some strange outlier in the world of complexity measures, Servedio said. “I think a lot of people slept easier that night, after hearing about this.”
I'm studying universal needs for life to thrive. My intuition tells me this proof may imply some things about the question "Are all your needs met?"
I'll have to reread the paper a couple more times, but I think Boolean sensitivity could be related to the security one has around any given need. There may be further implications around how to assess one's strategies for meeting needs, as those would be the individual inputs to the Boolean function of "Is the need for _____ security met?" This could help provide a theoretical framework for designing systems oriented around well-being.
I wouldn't worry about a single comment. It's just that we're trying for a higher signal/noise ratio than is the default on internet forums, so I'd like to persuade you to be more thoughtful when you post. It's in your interest to do so, because preserving signal/noise is the one thing that can keep HN intellectually interesting for everyone in the community.
By the way, it isn't that people here (including us!) have no sense of humor, it's that internet humor tends to grow like weeds, and everyone overestimates how funny their jokes are. scott_s expressed this well a long time ago: https://news.ycombinator.com/item?id=7609289.
It's an important and interesting but unusually simple mathematical topic that can be explained quite well without dumbing it down, leading to a "surprisingly good" article compared to science popularization in general.
The closing quote:
> [Huang] was able to prove that in any collection of more than half the points in an n-dimensional cube, there will be some point that is connected to at least √n of the other points — and the sensitivity conjecture instantly followed from this result
The actual paper: https://arxiv.org/abs/1907.00847
Another blog post: https://www.scottaaronson.com/blog/?p=4229
An older HN post: https://news.ycombinator.com/item?id=20338281