
Really. How so? (No sarcasm, I'm curious).



Divide and conquer approaches show that complexity is a matter of perspective. From the right point of view, many problems become trivial, or at least tractable.

The standard example is quicksort. A sorted order encodes on the order of N^2 pairwise relations, yet quicksort establishes it with only N * log N comparisons (on average).
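A minimal sketch of the divide-and-conquer idea (this list-building version is written for clarity, not the in-place partitioning used in practice):

```python
def quicksort(xs):
    # Partition around a pivot, then recurse on each side.
    # Each level of recursion does O(N) work, and there are
    # O(log N) levels on average -> O(N log N) overall.
    if len(xs) <= 1:
        return xs
    pivot = xs[len(xs) // 2]
    less = [x for x in xs if x < pivot]
    equal = [x for x in xs if x == pivot]
    greater = [x for x in xs if x > pivot]
    return quicksort(less) + equal + quicksort(greater)
```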

Matrix exponentiation is another simple example. Take M^N, where M is a square matrix and N is an integer. Directly calculating that takes a lot of arithmetic. So instead you calculate M^2, M^4 = (M^2)^2, M^8 = (M^4)^2, and so forth. Then M^N is simply the product of at most log(N) of those binary powers. (Since iterated linear systems can be written as matrix math, this lets you easily calculate their Nth iteration. Want the trillionth Fibonacci number? No problem.)
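Here is a sketch of the square-and-multiply trick for 2x2 matrices, applied to Fibonacci (the function names are mine; the standard fact used is that [[1,1],[1,0]]^N holds F(N) in its off-diagonal entry):

```python
def mat_mul(a, b):
    # 2x2 matrix product, tuples of tuples.
    return ((a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]),
            (a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]))

def mat_pow(m, n):
    # Square-and-multiply: only O(log n) matrix products.
    result = ((1, 0), (0, 1))  # identity
    while n:
        if n & 1:               # this binary power contributes
            result = mat_mul(result, m)
        m = mat_mul(m, m)       # M, M^2, M^4, M^8, ...
        n >>= 1
    return result

def fib(n):
    # [[1,1],[1,0]]^n has F(n) in its top-right entry.
    return mat_pow(((1, 1), (1, 0)), n)[0][1]
```

With arbitrary-precision integers, fib(10**12) needs only about 40 matrix squarings, though each multiplication of the huge intermediate numbers still costs time.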

The fast Fourier transform (FFT) is a much less obvious example. It computes an N-element frequency spectrum of N data samples in O(N * log N) time. This is impressive because every input contributes to every output, so the naive algorithm takes O(N^2), which is intractable for real-time signal processing.
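The divide-and-conquer structure is visible in a recursive radix-2 Cooley-Tukey sketch (assumes the input length is a power of two; real libraries handle general lengths and are far more optimized):

```python
import cmath

def fft(xs):
    # Split into even- and odd-indexed halves, transform each,
    # then combine with twiddle factors: T(N) = 2*T(N/2) + O(N).
    n = len(xs)
    if n == 1:
        return list(xs)
    evens = fft(xs[0::2])
    odds = fft(xs[1::2])
    twiddles = [cmath.exp(-2j * cmath.pi * k / n) for k in range(n // 2)]
    return ([evens[k] + twiddles[k] * odds[k] for k in range(n // 2)] +
            [evens[k] - twiddles[k] * odds[k] for k in range(n // 2)])
```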

Even less obvious are the various algorithms based on the FFT. For example, the primary school method for multiplying two N-digit integers requires N^2 operations, which is intractable for large N. However, multiplication is equivalent to the digit-wise convolution of the integers followed by carrying the digit overflows, and there is an FFT theorem for convolving in O(N * log N) time. So you can multiply in O(N * log N) time! See http://en.wikipedia.org/wiki/Sch%C3%B6nhage%E2%80%93Strassen...
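The convolve-then-carry decomposition can be sketched like this. For clarity the convolution below is the naive O(N^2) double loop; substituting an FFT-based convolution for that loop is exactly what yields the O(N * log N) bound (Schönhage-Strassen adds further refinements on top):

```python
def multiply_by_convolution(a, b):
    # Multiply non-negative integers as: convolve digits, then carry.
    da = [int(d) for d in str(a)][::-1]  # least significant digit first
    db = [int(d) for d in str(b)][::-1]
    conv = [0] * (len(da) + len(db) - 1)
    for i, x in enumerate(da):           # naive convolution; an FFT
        for j, y in enumerate(db):       # does this step in O(N log N)
            conv[i + j] += x * y
    # Carry the digit overflows to get a proper base-10 number.
    result, carry = 0, 0
    for k, c in enumerate(conv):
        total = c + carry
        result += (total % 10) * 10 ** k
        carry = total // 10
    return result + carry * 10 ** len(conv)
```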



