Optical solutions to NP-Complete problems (ubbcluj.ro)
97 points by adamnemecek on Aug 26, 2016 | 39 comments



For those who may not be aware -- as always in these situations: Scott Aaronson said it first!

For reference, here's the highly amusing and interesting paper he's written on this:

"NP-complete Problems and Physical Reality" http://www.scottaaronson.com/papers/npcomplete.pdf

Abstract:

"Can NP-complete problems be solved efficiently in the physical universe? I survey proposals including soap bubbles, protein folding, quantum computing, quantum advice, quantum adiabatic algorithms, quantum-mechanical nonlinearities, hidden variables, relativistic time dilation, analog computing, Malament-Hogarth spacetimes, quantum gravity, closed timelike curves, and “anthropic computing.” The section on soap bubbles even includes some “experimental” results. While I do not believe that any of the proposals will let us solve NP-complete problems efficiently, I argue that by studying them, we can learn something not only about computation but also about physics."


A favorite example of mine, using photons to solve Traveling Salesman in quadratic time: https://www.osapublishing.org/oe/abstract.cfm?id=140598

Spoiler: the number of photons required scales up so quickly that the phenomenon cannot be observed for sufficiently high N.


Using physical space and particle interactions for solving computational problems seems like such an elegant hack. Proteins fold and chemistry just works in 3D space, yet we have trouble computing these things. I wonder why we don't do more computing on the physical fabric of the universe itself. It seems like an optimally efficient tool for problem solving.

Are there any other approaches in the same vein? It's incredibly fascinating.


Well, there's spaghetti sort: https://en.wikipedia.org/wiki/Spaghetti_sort

Also things like using slime mold to solve the traveling salesman problem: http://phys.org/news/2013-03-blob-salesman.html

And if you want to go simpler, there's always mechanical analog computers: https://youtu.be/s1i-dnAH9Y4 (it's long but well worth a watch IMO)

Oh, and obviously anything involving a scale model, especially for fluid dynamics (so wind tunnels, wave tanks etc), is doing the same thing in spirit.


That analysis of spaghetti sort is incorrect once you scale N beyond the size of the hand. It will take time linear in the number of straws to move your hand across the tips to find the largest one, and this is true for any hand size: no matter how large the hand is, there is an upper limit to the number of straws it can hover over in parallel.

One of the big points of complexity analysis is that at small scales you might see certain behavior, but what we are really interested in is asymptotic complexity. This is like saying that selecting the lowest value from an unsorted array is faster than a tree if the array has so few entries that it fits into the L1 cache (while the tree's randomly scattered memory means walking its left edge could take much longer; essentially making the array O(1) in memory accesses while the tree is O(log n)). A better analysis of spaghetti sort should find it to be O(n^2), if the algorithm really continues to function at all (and it isn't clear to me that it scales correctly).

(It is also worth noting that if you want a limited-purpose linear-time sorting algorithm based on the same "let's change the notion of comparison" trick, we even have working ones that function on real-world computers: radix sort is insanely effective, and with a little assistance from insertion-sort fix-up passes I have used it to sort Unicode strings in seemingly impossibly fast times.)
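For concreteness, here is a minimal LSD radix sort in Python (my own toy example, not the Unicode-sorting code mentioned above): each pass is a stable bucket pass on one 8-bit digit, so the total work is O(d * n) for d digit passes rather than O(n log n) comparisons.

    # Minimal LSD radix sort sketch for non-negative integers, base 256.
    # Each pass is a stable bucket pass on one 8-bit digit.
    def radix_sort(nums, base=256):
        if not nums:
            return nums
        shift = 0
        while (max(nums) >> shift) > 0:
            buckets = [[] for _ in range(base)]
            for x in nums:
                buckets[(x >> shift) & (base - 1)].append(x)
            nums = [x for bucket in buckets for x in bucket]   # stable concatenation
            shift += 8                                         # move to the next 8-bit digit
        return nums

    print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))   # [2, 24, 45, 66, 75, 90, 170, 802]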


Slime molds aren't guaranteed to find optimal solutions


You'll have to break out "Moldy Carlo" simulations for that!


I'm reminded of Bead Sort [1], which is an amusing physical sorting algorithm for integers represented in unary. The sorting can be done in O(sqrt(n)) time (time taken for the beads to fall under gravity; n is input size in unary), but that doesn't include the time needed to load the input and read the result.

[1] https://en.wikipedia.org/wiki/Bead_sort
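As an illustration (my own sketch, not from the linked article), here is a direct simulation of the bead "abacus" in Python. Note the simulation itself does O(n * max) work; the O(sqrt(n)) figure refers to the physical fall time of the beads, not this loop.

    # Direct simulation of bead sort for non-negative integers.
    def bead_sort(nums):
        if not nums:
            return []
        width = max(nums)
        # counts[c] = beads resting on rod c after gravity = how many inputs exceed c
        counts = [sum(1 for x in nums if x > c) for c in range(width)]
        # Row i from the bottom keeps a bead on rod c exactly when counts[c] > i,
        # so its value is the number of such rods.
        rows = [sum(1 for c in range(width) if counts[c] > i) for i in range(len(nums))]
        return rows[::-1]   # bottom rows hold the largest values; reverse for ascending order

    print(bead_sort([5, 3, 1, 7, 4, 1, 1, 20]))   # [1, 1, 1, 3, 4, 5, 7, 20]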


Before computers had the kind of processing power they do now, engineers would build analog circuits to solve differential equations.

A modern-day quick link:

http://www.dummies.com/education/science/science-electronics...
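As a toy illustration (my own sketch, not from the linked page): the classic analog-computer patch for a damped oscillator x'' = -2*zeta*omega*x' - omega^2*x wires a summing amplifier into two integrators in a feedback loop. Stepping that loop numerically looks like this.

    import math

    # Each "integrator" here is just forward-Euler accumulation standing in for
    # the op-amp integrator stages of the analog patch.
    omega, zeta, dt = 2 * math.pi, 0.1, 1e-4
    x, v = 1.0, 0.0                                   # start displaced, at rest
    for _ in range(int(1.0 / dt)):                    # run for one second
        a = -2 * zeta * omega * v - omega ** 2 * x    # summing amplifier output
        v += a * dt                                   # first integrator: a -> v
        x += v * dt                                   # second integrator: v -> x
    print(round(x, 3))                                # a decayed oscillation after ~1 s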


Before electronic computers existed, engineers built mechanical analog devices called differential analysers [1] to compute derivatives and integrals, invented in 1876 by brothers James Thomson and Lord Kelvin. Famously, cheap differential analysers were built out of Meccano starting in 1934 [2]. After scientists gained access to electronic computers, these old analysers were repeatedly handed down through lower-budget institutions, schools and eventually museums.

[1] https://en.wikipedia.org/wiki/Differential_analyser [2] http://amg.nzfmm.co.nz/differential_analyser_explained.html


This old US Navy training film on mechanical fire control calculators (called "computers" back then, used to calculate firing solutions on warships) is pretty fascinating: https://www.youtube.com/watch?v=s1i-dnAH9Y4


(nowadays it's usually the other way around, of course ;)


Also like sleep sort, something else -- some other real resource -- still has to bear the asymptotic cost of the algorithm. In this case, AIUI, you have to pay in "light distance traveled", which goes up exponentially with the problem size.


But going to physical processes could change the asymptotic cost by adding new primitives.

There is the Computational Complexity-Theoretic Church–Turing Thesis, which hypothesizes that any physical model of computation can be efficiently (e.g. in polynomial time) simulated by a Turing machine. It has never been proven, but it has led to correct physical predictions so far.

We have also demonstrated polynomial improvements that can be had by moving away from a Turing machine. For example, the use of random access memory offers an asymptotic improvement for many algorithms relative to the linear-access tape of a Turing machine. Additionally, Grover's algorithm allows an exponential speedup in a wide range of problems on a quantum computer; and neither of these approaches requires shifting the savings onto some other resource.


Note: nitpicking follows :)

RAM machines are an abstraction; there's no such thing as constant-time access to a random address in any physical store. At best you're looking at O(n^0.5), where n is the number of bits in the store. Still a speedup, but not nearly as much as predicted by using a RAM machine.

Grover's algorithm doesn't offer an exponential speedup over classical algorithms: its complexity is O(n^0.5) queries, where n is the size of the list being searched, versus O(n) queries classically.


More nitpicks:

You should be able to get memory down to O(n^1/3) if you move into 3D.

For Grover's algorithm, it depends on how you parameterize the problem. For example, consider brute-forcing a key. We generally parameterize this problem by the size of the key in bits, in which case we are looking at O(2^n) queries. Grover's algorithm lets us reduce this to O(2^(n/2)) queries, which is an exponential speedup as a function of n.
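Spelled out (my addition, just restating the arithmetic): with a search space of N = 2^n keys,

    \sqrt{N} = \sqrt{2^{n}} = 2^{n/2}

so Grover's quadratic speedup in N shows up as an exponential gap, 2^n versus 2^(n/2), once the problem is measured by the key length n.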


Random access just assumes you can instantly jump to a memory location. In any physical instantiation, you're still limited in how quickly you can access memory by the speed of light.

(It's true that you can pack the memory into a 3D volume and thus have a cubic improvement in access time, but you can do the same thing with a multi-dimensional TM.)


Someone has described using a physical network to solve the shortest-path problem in O(N) time: https://www.reddit.com/r/compsci/comments/a1sqb/help_shortes...


That doesn't seem right to me. You'd have to tie a string between every single possible pair of nodes, so it's O(N^2).


The wiffle-ball-graph data structure is O(n) for insertion, sure (each new ball has to be tied to a bunch of existing balls and finding them is probably O(n) at best), so building an n-node wiffle-graph is O(n^2), but shortest path queries on the data structure are O(n) at any time (again because you have to go find the two nodes of interest, then spread them apart by at most n units of distance).

You can also sort all the nodes by distance from any given node in O(n) time by holding the chosen wiffle ball and hanging the graph over the edge of an O(n) tall tower.
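If you want to play with the idea without string and wiffle balls, here is a toy simulation (my own sketch): pulling two balls apart stops exactly when some chain of strings goes taut, and repeatedly enforcing the constraint dist[v] <= dist[u] + length(u, v) is just Bellman-Ford relaxation, so the taut separation equals the shortest-path distance.

    # Toy simulation of the string-and-balls model: the separation you can
    # achieve between two balls is limited by every chain of strings between
    # them, so relaxing each string constraint until nothing changes
    # (Bellman-Ford style) yields the shortest-path distance.
    def taut_distance(strings, source, target, num_balls):
        INF = float("inf")
        dist = [INF] * num_balls
        dist[source] = 0.0
        for _ in range(num_balls - 1):            # enough passes for every chain to go taut
            for u, v, length in strings:
                if dist[u] + length < dist[v]:    # a string constrains both of its endpoints
                    dist[v] = dist[u] + length
                if dist[v] + length < dist[u]:
                    dist[u] = dist[v] + length
        return dist[target]

    strings = [(0, 1, 1.0), (1, 2, 2.0), (0, 2, 5.0), (2, 3, 1.0)]   # (ball, ball, string length)
    print(taut_distance(strings, 0, 3, 4))   # 4.0 via 0-1-2-3, not 6.0 via 0-2-3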



This is what D-Wave really does when it comes down to it (if you ignore all the back and forth on whether it is or isn't a quantum computer).

Also, there's the use of passing laser light through lenses to implement Fourier transforms.
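For anyone curious what that looks like, here is a toy numerical stand-in (my own sketch, not tied to any of the linked projects): in the Fraunhofer/far-field regime, a lens maps the field in its front focal plane to its 2D Fourier transform in the back focal plane, so a rectangular slit produces the familiar sinc^2 intensity pattern.

    import numpy as np

    # The "aperture" is a rectangular slit; the far-field intensity is the
    # squared magnitude of its 2D Fourier transform -- the part a lens computes
    # "for free" in optics.
    n = 512
    aperture = np.zeros((n, n))
    aperture[n//2 - 5 : n//2 + 5, n//2 - 40 : n//2 + 40] = 1.0   # rectangular slit

    far_field = np.fft.fftshift(np.fft.fft2(aperture))
    intensity = np.abs(far_field) ** 2

    print(intensity[n//2, n//2 - 3 : n//2 + 4].round(1))   # central slice of the pattern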


This paper describes an optical implementation of some transforms used in deep learning:

http://arxiv.org/abs/1511.05946

These guys use optical computing for big data analysis and pattern matching

http://optalysys.com/


Interestingly, the physical-space algorithms proposed in child comments are only two-dimensional (spaghetti sort, bead sort). Granted, they do take a one-dimensional problem and solve it in a higher dimension.

I'd be fascinated to see something even higher-dimensional, for example the traveling salesman problem solved in 3D space.


>I wonder why we don't do more computing on the physical fabric of the universe itself. It seems like an optimally efficient tool for problem solving. Are there any other approaches along this same vein?

you literally just described the origin of quantum computing.


>I wonder why we don't do more computing on the physical fabric of the universe itself.

Could you explain how a regular old computer is not doing exactly that? The last time I checked, it had quite a bit of physics going on.


I'm reminded of a recent front page submission, this large-scale physical model of the Mississippi River that was built for use in flood planning.

https://news.ycombinator.com/item?id=12163592


> I wonder why we don't do more computing on the physical fabric of the universe itself. It seems like an optimally efficient tool for problem solving.

I think in many cases it is much simpler to perform simulations rather than run the actual physical experiment.


Amusingly, physics people were doing Fourier transforms using optical techniques (lenses) well over a hundred years ago.


interesting - got a link?



As I understand it, it's exploiting how light signals move very fast and thus you can sweep through many solutions. But, as they seem to note in "weaknesses", you're effectively offloading an exponential amount of work to the light, and so haven't done anything for the big-O behavior; you're just making a (potentially) less scarce resource applicable to the problem, no different (economically) from finding cheaper ways to build computers.


You could say the same about fiber optics vs copper wire. Increasing bandwidth can drive revolutionary change.


This is like sleep sort: https://rosettacode.org/wiki/Sorting_algorithms/Sleep_sort. Same principles apply. Time delays are used to achieve the result.
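A minimal version of the idea (a sketch in the spirit of the Rosetta Code page, not a serious sorter): each value sleeps in proportion to its size, so the asymptotic cost is simply moved into wall-clock time.

    import threading, time

    # Sleep sort sketch: each value sleeps proportionally to its size, so smaller
    # values get appended first.  The "work" has been offloaded onto elapsed time.
    def sleep_sort(nums):
        result = []
        def worker(x):
            time.sleep(0.01 * x)
            result.append(x)       # list.append is atomic enough for this toy
        threads = [threading.Thread(target=worker, args=(x,)) for x in nums]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return result

    print(sleep_sort([7, 2, 9, 4, 1]))   # almost always [1, 2, 4, 7, 9]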

I thought this would have used the quantum/wavelike properties of light but I was wrong.



I meant to reply to this but put it under a different comment by accident: https://news.ycombinator.com/item?id=12364273


Love this. This is interesting too - optical computing for machine learning

http://www.lighton.io/

I think we'll see much more physical computing in the future for specific things


What do they do for the nonlinear parts? I.e. the gates.

Is it all like Fourier transform where all the work is linear; or do they have to convert it to electricity and then back to light?

If they do that, then do they gain anything over purely electronic solutions?


Reminds me of the infamous "sleep sort".



