"Can NP-complete problems be solved efficiently in the physical universe? I survey proposals
including soap bubbles, protein folding, quantum computing, quantum advice, quantum adiabatic
algorithms, quantum-mechanical nonlinearities, hidden variables, relativistic time dilation,
analog computing, Malament-Hogarth spacetimes, quantum gravity, closed timelike curves, and
“anthropic computing.” The section on soap bubbles even includes some “experimental” results.
While I do not believe that any of the proposals will let us solve NP-complete problems
efficiently, I argue that by studying them, we can learn something not only about computation
but also about physics."
Using physical space and particle interactions for solving computational problems seems like such an elegant hack. Proteins fold and chemistry just works in 3D space, yet we have trouble computing these things. I wonder why we don't do more computing on the physical fabric of the universe itself. It seems like an optimally efficient tool for problem solving.
Are there any other approaches along this same vein? It's incredibly fascinating.
And if you want to go simpler, there's always mechanical analog computers: https://youtu.be/s1i-dnAH9Y4 (it's long but well worth a watch IMO)
Oh, and obviously anything involving a scale model, especially for fluid dynamics (so wind tunnels, wave tanks etc), is doing the same thing in spirit.
That analysis of spaghetti sort is incorrect once you scale N beyond the size of the hand. It will take time linear in the number of straws to move your hand across the tips to find the tallest one, and this holds for any size of hand: no matter how large the hand is, there is an upper limit to the number of straws it can hover over in parallel.
One of the big points of complexity analysis is that you might see certain behavior at small scales, but what we are really interested in is asymptotic complexity. This is like saying that selecting the lowest value from an unsorted array is faster than from a tree if the array has so few entries that it fits into the L1 cache (while the tree's randomly scattered memory means walking its left edge could take much longer), essentially making the array O(1) in memory accesses while the tree is O(log n). A better analysis of spaghetti sort should find it to be O(n^2), if the algorithm really continues to function at all (and it isn't clear to me that it scales correctly).
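To make that concrete, here's a toy sketch (my own illustration, assuming the hand can only cover a constant number of tips at once, so each extraction degenerates into a linear scan):

    # Spaghetti sort with a bounded "hand": every pass scans all remaining
    # straws to find the tallest, so it degenerates into selection sort,
    # i.e. O(n^2) comparisons overall.
    def spaghetti_sort(lengths):
        remaining = list(lengths)
        result = []
        while remaining:
            tallest = max(remaining)   # O(n) sweep across the straw tips
            remaining.remove(tallest)  # O(n) to pull that straw out
            result.append(tallest)
        return result                  # tallest first, n passes -> O(n^2)

    print(spaghetti_sort([3, 1, 4, 1, 5, 9, 2, 6]))  # [9, 6, 5, 4, 3, 2, 1, 1]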
(It is also worth noting that if you want a limited-purpose linear-time sorting algorithm based on the same "let's change the notion of comparison" idea, we even have working ones that function on real-world computers: radix sort is insanely epic, and with a little assistance from insertion-sort fix-up passes I have used it to sort Unicode strings in seemingly impossibly fast times.)
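For anyone curious, here's a minimal LSD radix sort sketch for non-negative integers (just an illustration of the bucketing idea; the Unicode-string version with insertion-sort fix-up passes would be considerably more involved):

    # Least-significant-digit radix sort for non-negative integers: pairwise
    # comparison is replaced by bucketing on one digit at a time, giving
    # roughly O(k * n) work for k digit positions.
    def radix_sort(nums, base=10):
        if not nums:
            return []
        result, place, max_val = list(nums), 1, max(nums)
        while place <= max_val:
            buckets = [[] for _ in range(base)]
            for x in result:
                buckets[(x // place) % base].append(x)
            result = [x for bucket in buckets for x in bucket]
            place *= base
        return result

    print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
    # [2, 24, 45, 66, 75, 90, 170, 802]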
I'm reminded of Bead Sort [1], which is an amusing physical sorting algorithm for integers represented in unary. The sorting can be done in O(sqrt(n)) time (time taken for the beads to fall under gravity; n is input size in unary), but that doesn't include the time needed to load the input and read the result.
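Here's a toy simulation of the idea (gravity replaced by per-column bead counting; non-negative integers only, and of course the physical O(sqrt(n)) drop time doesn't survive the translation to code):

    # Bead sort: each number is a row of beads in unary; letting the beads
    # "fall" is equivalent to counting the beads that land on each rod and
    # then reading the rows back out from the bottom up.
    def bead_sort(nums):
        if not nums:
            return []
        # counts[c] = beads resting on rod c = how many inputs exceed c
        counts = [sum(1 for x in nums if x > c) for c in range(max(nums))]
        # row r from the bottom holds one bead per rod carrying more than r beads
        return [sum(1 for c in counts if c > r) for r in range(len(nums))]

    print(bead_sort([5, 3, 1, 7, 4, 1, 1, 3]))  # [7, 5, 4, 3, 3, 1, 1, 1]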
Before electronic computers existed, engineers built mechanical analog devices called differential analysers [1] to solve differential equations by integration, a line of work begun in 1876 by the brothers James Thomson and Lord Kelvin. Famously, cheap differential analysers were built out of Meccano starting in 1934 [2]. After scientists gained access to electronic computers, these old analysers were repeatedly handed down through lower-budget institutions, schools and eventually museums.
This old US Navy training film on mechanical fire control calculators (called "computers" back then, used to calculate firing solutions on warships) is pretty fascinating:
https://www.youtube.com/watch?v=s1i-dnAH9Y4
Also like sleep sort, something else -- some other real resource -- still has to bear the asymptotic cost of the algorithm. In this case, AIUI, you have to pay in "light distance traveled", which goes up exponentially with the problem size.
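For comparison, a toy sleep sort, just to make the "some other resource pays" point concrete: the values come out in order, but the wall-clock time grows with the largest value rather than with the number of comparisons.

    # Sleep sort: each value sleeps in proportion to itself before being
    # appended, so the ordering is paid for in wall-clock time, not in
    # comparisons. Small non-negative numbers only; scheduler jitter can
    # swap values that are too close together.
    import threading, time

    def sleep_sort(nums, scale=0.05):
        result, lock = [], threading.Lock()

        def worker(x):
            time.sleep(x * scale)
            with lock:
                result.append(x)

        threads = [threading.Thread(target=worker, args=(x,)) for x in nums]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return result

    print(sleep_sort([3, 1, 4, 1, 5, 9, 2, 6]))  # usually [1, 1, 2, 3, 4, 5, 6, 9]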
But going to physical processes could change the asymptotic cost by adding new primitives.
There is the Computational Complexity-Theoretic Church–Turing Thesis, which hypothesizes that any physical model of computation can be efficiently (e.g. with polynomial overhead) simulated by a Turing machine. This has never been proven, but it has led to correct physical predictions so far.
We have also demonstrated polynomial improvements that can be had by moving away from a Turing machine. For example, the use of random-access memory offers an asymptotic improvement for many algorithms relative to the linear-access memory of a Turing machine. Additionally, Grover's algorithm allows an exponential speedup in a wide range of problems on a quantum computer; and neither of these approaches requires shifting the savings onto some other resource.
RAM machines are an abstraction; there's no such thing as constant-time access to a random address in any physical store. At best you're looking at O(n^0.5), where n is the number of bits in the store. Still a speedup, but not nearly as much as predicted by using a RAM machine.
Grover's algorithm doesn't offer an exponential speedup over classical algorithms: its complexity is O(n^0.5) queries, where n is the size of the list being searched, versus O(n) queries classically.
You should be able to get memory access down to O(n^(1/3)) if you move into 3D.
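To put rough numbers on the 2D-vs-3D scaling (illustrative assumptions only: one bit per 10 nm of spacing, signals at light speed):

    # Worst-case signal-travel time to reach a random bit in a store of n bits:
    # the distance scales as n^(1/2) for bits laid out in a plane and as
    # n^(1/3) for bits packed into a volume. Pitch and geometry are made up.
    C = 3.0e8      # speed of light, m/s
    PITCH = 1e-8   # assumed 10 nm spacing between bits (illustrative)

    def access_time(n_bits, dims):
        side = n_bits ** (1.0 / dims)   # bits along one edge of the array
        return side * PITCH / C         # time for light to cross that edge

    for n in (2**30, 2**40, 2**50):
        print(f"n = 2^{n.bit_length() - 1}: "
              f"2D ~ {access_time(n, 2):.1e} s, 3D ~ {access_time(n, 3):.1e} s")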
For Grover's algorithm, it depends on how you parameterize the problem. For example, consider brute-forcing a key. We generally parameterize this problem by the size of the key in bits, in which case we are looking at O(2^n) queries. Grover's algorithm lets us reduce this to O(2^(n/2)) queries, which is an exponential speedup.
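To make the parameterization point concrete, here's the query-count arithmetic for brute-forcing an n-bit key (numbers only, no actual search):

    # Classical exhaustive key search needs ~2^n oracle queries; Grover needs
    # ~2^(n/2). The ratio itself grows as 2^(n/2), hence "exponential speedup"
    # when the problem is parameterized by key length n.
    import math

    def grover_queries(key_bits):
        return math.isqrt(2 ** key_bits)   # ~sqrt(2^n) = 2^(n/2)

    for n in (32, 64, 128):
        classical = 2 ** n
        grover = grover_queries(n)
        print(f"{n}-bit key: classical ~ {classical:.1e} queries, "
              f"Grover ~ {grover:.1e}, speedup ~ {classical / grover:.1e}")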
Random access just assumes you can instantly jump to a memory location. In any physical instantiation, you're still limited in how quickly you can access by the speed of light.
(It's true that you can pack the memory into a 3D volume and thus get access times that scale as the cube root of the memory size, but you can do the same thing with a multi-dimensional TM.)
The wiffle-ball-graph data structure is O(n) for insertion, sure (each new ball has to be tied to a bunch of existing balls and finding them is probably O(n) at best), so building an n-node wiffle-graph is O(n^2), but shortest path queries on the data structure are O(n) at any time (again because you have to go find the two nodes of interest, then spread them apart by at most n units of distance).
You can also sort all the nodes by distance from any given node in O(n) time by holding the chosen wiffle ball and hanging the graph over the edge of an O(n) tall tower.
Interestingly, the physical-space algorithms proposed in child comments are only two-dimensional (spaghetti sort, bead sort). Granted, they do take a one-dimensional problem and solve it in a higher dimension.
Would be fascinated to see something even higher, for example, traveling salesman problem solved in 3D space.
> I wonder why we don't do more computing on the physical fabric of the universe itself. It seems like an optimally efficient tool for problem solving. Are there any other approaches along this same vein?
you literally just described the origin of quantum computing.
> I wonder why we don't do more computing on the physical fabric of the universe itself. It seems like an optimally efficient tool for problem solving.
I think in many cases it is much simpler to perform simulations rather than run the actual physical experiment.
As I understand it, it's exploiting how light signals move very fast and thus you can sweep through many solutions. But, as they seem to note in "weaknesses", you're effectively offloading an exponential amount of work to the light, and so haven't done anything for the big-O behavior; you're just making a (potentially) less scarce resource applicable to the problem, no different (economically) from finding cheaper ways to build computers.
For reference, here's the highly amusing and interesting paper he's written on this:
"NP-complete Problems and Physical Reality" http://www.scottaaronson.com/papers/npcomplete.pdf
Abstract:
"Can NP-complete problems be solved efficiently in the physical universe? I survey proposals including soap bubbles, protein folding, quantum computing, quantum advice, quantum adiabatic algorithms, quantum-mechanical nonlinearities, hidden variables, relativistic time dilation, analog computing, Malament-Hogarth spacetimes, quantum gravity, closed timelike curves, and “anthropic computing.” The section on soap bubbles even includes some “experimental” results. While I do not believe that any of the proposals will let us solve NP-complete problems efficiently, I argue that by studying them, we can learn something not only about computation but also about physics."