I learned Hamming codes in school this summer (digital systems). It was probably the single most magical thing (even more than flip-flops! Loops make memory!) I was exposed to in that class.
I remember upon learning it someone went "wow, who came up with that". The prof answered dryly "some genius named Hamming".
Sort of a probabilistic hash, where you trade space for accuracy. But it's also like a memory function - it can remember whether it has (probably) seen a piece of data before.
What I love about bloom filters is that when checking to see if something is in a bloom filter, you can only get false positives, not false negatives. That property is just awesome to me.
Also the fact that you can use more bits per element and reduce the error rate for false positives. Using around 3 bytes per element can bring the error rate down to about 0.001%.
Also, the fact that you can use multiple hash functions to reduce the error rate even further is pretty nice. This is, of course, without incurring the overhead of extra storage space...
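For anyone who hasn't played with one, here is a minimal sketch in Python (the hashing scheme, sizes and k below are just illustrative). With roughly 24 bits per element and the optimal k of about 17 hash functions, the false-positive rate does work out to roughly 0.001%, as mentioned above.

    import hashlib

    class BloomFilter:
        """Toy Bloom filter: k hash functions over an m-bit array.
        A 'probably yes / definitely no' membership test."""
        def __init__(self, m_bits=8 * 1024, k=7):
            self.m, self.k = m_bits, k
            self.bits = bytearray(m_bits // 8)

        def _positions(self, item):
            for i in range(self.k):
                h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
                yield int(h, 16) % self.m

        def add(self, item):
            for p in self._positions(item):
                self.bits[p // 8] |= 1 << (p % 8)

        def might_contain(self, item):
            return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

    bf = BloomFilter()
    bf.add("hacker")
    print(bf.might_contain("hacker"))   # True: never a false negative
    print(bf.might_contain("news"))     # almost certainly False; a True here would be a false positive

add() and might_contain() visit the same k bit positions, which is why a "no" is always definite and only a "yes" can be wrong.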
I love that they're fast enough to use in hardware. Some of the newer schemes for hardware-accelerated transactional memory use Bloom filters to probabilistically detect when two transactions are using the same memory addresses. This is so much easier and more efficient than previous approaches, which had to use content-addressable memories and expensive broadcast operations.
Singular value decomposition. It continually amazes me how many problems -- from noise reduction to Netflix prediction -- SVD can be usefully applied to.
I still do not completely understand it. I see the math, but I see it as symbols that I can memorize, rather than something intuitive (like when I see a moving cursor through a state tree during a binary search for instance).
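For the noise-reduction case at least, there is an intuition you can poke at directly: keep only the largest singular values and you keep the dominant structure while throwing away most of the noise. A tiny numpy sketch (the rank-2 matrix and noise level are made up for illustration):

    import numpy as np

    rng = np.random.default_rng(1)
    # A "clean" rank-2 matrix plus noise -- stand-in for a noisy data matrix.
    clean = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 50))
    noisy = clean + 0.1 * rng.normal(size=clean.shape)

    U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
    k = 2                                     # keep only the strongest components
    denoised = U[:, :k] * s[:k] @ Vt[:k, :]   # best rank-k approximation

    print(np.linalg.norm(noisy - clean))      # error before truncation
    print(np.linalg.norm(denoised - clean))   # noticeably smaller after truncation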
Compressed Sensing is exciting - using a sequence of low-res images to obtain a higher-resolution sample. It was discussed in Wired (http://www.wired.com/magazine/2010/02/ff_algorithm/all/1) together with a compelling example (minimizing the time a young patient spends in an MRI machine).
That sounds far too good to be true. I guess if you don't care about the fine details then it's a decent technique, but to take the MRI example, what if the thing that was wrong was only visible in those small details?
The Wired article gave the wrong impression. It gave an example of inpainting (the Obama picture) that is NOT compressed sensing. Compressed sensing, on the other hand, can be applied directly to MRI because the MRI machine actually picks up all the spatial information: it samples randomly in the Fourier domain, thereby having access to all the spatial information needed to fully reconstruct an image. It used to be that the reconstruction algorithms were not that good (they relied on SVD/least squares). Candes, Tao, Romberg and Donoho then published papers showing that the reconstruction could be done in a totally different way AND that it was exact. With these new reconstruction algorithms, MRI data is acquired in a much more efficient manner than it was four years ago, because of compressed sensing.
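The reconstruction step itself fits in a few lines if you are willing to cheat: the sketch below uses a random Gaussian measurement matrix and an off-the-shelf LP solver for the L1 minimization, instead of the Fourier-domain samples and specialized solvers real MRI work uses, but the "few random measurements, exact recovery of a sparse signal" effect is the same.

    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    n, m, k = 128, 60, 5                       # signal length, measurements, sparsity

    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)

    A = rng.normal(size=(m, n)) / np.sqrt(m)   # random measurement matrix
    b = A @ x_true                             # only m << n linear measurements

    # Basis pursuit: minimize ||x||_1 subject to A x = b,
    # written as an LP over variables [x, t] with -t <= x <= t.
    c = np.concatenate([np.zeros(n), np.ones(n)])
    A_ub = np.block([[np.eye(n), -np.eye(n)], [-np.eye(n), -np.eye(n)]])
    b_ub = np.zeros(2 * n)
    A_eq = np.hstack([A, np.zeros((m, n))])

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
                  bounds=[(None, None)] * n + [(0, None)] * n, method="highs")
    x_hat = res.x[:n]
    print(np.max(np.abs(x_hat - x_true)))      # typically ~1e-9: exact recovery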
There are other things which sound too good to be true and yet are true (the sampling theorem for exact reconstruction of band-limited signals is pretty unintuitive IMO, and even more "magic").
Filling in missing information with a reasonable guess is very different from having accurate information. Looking at a low-res impressionist painting, you are going to fill in details based on reasonable estimates of the real world, not what is actually there.
For me, it's boosting. Basically, if you make the assumption that your weak classifier can achieve X% correct on any random distribution over a data set, then you can create an ensemble of weak learners that together get Y% correct on the same set, where Y > X.
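A from-scratch AdaBoost-style sketch with decision stumps makes the effect easy to see (the dataset, thresholds and round count below are arbitrary; this is a toy, not a tuned implementation):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(-2, 2, size=(600, 2))
    y = np.where(X[:, 0]**2 + X[:, 1]**2 < 2, 1, -1)       # label: inside the disk?

    def stump_predict(X, f, t, s):
        return s * np.where(X[:, f] > t, 1, -1)

    def best_stump(X, y, w):
        """Weak learner: single-feature threshold with lowest weighted error."""
        best = (1.0, 0, 0.0, 1)
        for f in range(X.shape[1]):
            for t in np.linspace(-2, 2, 41):
                for s in (1, -1):
                    err = w[stump_predict(X, f, t, s) != y].sum()
                    if err < best[0]:
                        best = (err, f, t, s)
        return best

    w = np.full(len(y), 1 / len(y))
    ensemble = []
    for _ in range(60):                                     # boosting rounds
        err, f, t, s = best_stump(X, y, w)
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
        w *= np.exp(-alpha * y * stump_predict(X, f, t, s)) # up-weight the mistakes
        w /= w.sum()
        ensemble.append((alpha, f, t, s))

    F = sum(a * stump_predict(X, f, t, s) for a, f, t, s in ensemble)
    print("single stump:", (stump_predict(X, *ensemble[0][1:]) == y).mean())
    print("ensemble:    ", (np.sign(F) == y).mean())

The best single stump can only draw one axis-aligned line, so it tops out around 60% here; the weighted vote of many stumps carves out the disk and does far better.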
Please don't post links through bit.ly and other URL indirection sites. It hurts the web by making it much more difficult to follow the link in the case that an intermediary goes out of business. Hacker News and your post will probably be around in a few years, but will bit.ly? (What's their business model, exactly...?)
Not to mention the real URL shows that the link is going to Stanford whereas I have no clue whether the shortened link is going to Stanford, an XXX site, or rick rolling me.
It's been discussed here before, and no doubt it'll be cited numerous times in the comments of that reddit post, but it was the first time I'd ever seen such trickery. When I got into low level DSP programming I saw many more examples of clever hacks, but this is the one which sticks in my mind above all of those.
My mind is blown by the algorithm for matching with mismatches which I present in the first chapter of my thesis. It shouldn't be, given that I discovered this algorithm -- but somehow "I take the Fourier Transform of the FreeBSD kernel" sounds more like the punch line to a joke than the first step in an algorithm.
At least to me, it seems hard to imagine that multiplying two N-bit integers could be done with less than O(N^2) operations. Schönhage-Strassen lets you do it using O(N log N log log N) operations!
This is the one that blew my mind. "If we treat the numbers as polynomials, then the multiplication can be reduced to a Fourier transform!" is also one of those things that sounds like a punchline. Each step of the algorithm just sounds stupider and stupider, and slower and slower... and then you get to the last step, where all the multiplications by exponentials of complex numbers get replaced by bitshifts. Suddenly the entire algorithm collapses in on itself to reveal something efficient.
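A toy version of that collapse, in Python: treat the decimal digits as polynomial coefficients, convolve them with an FFT, then propagate carries. (This uses floating-point FFTs for brevity; Schönhage-Strassen proper works with an exact number-theoretic transform modulo 2^k + 1, which is where the bitshift trick comes in.)

    import numpy as np

    def fft_multiply(a, b):
        """Multiply nonnegative integers by convolving their digit 'polynomials'."""
        da = [int(c) for c in str(a)][::-1]        # least-significant digit first
        db = [int(c) for c in str(b)][::-1]
        n = 1
        while n < len(da) + len(db):
            n *= 2
        fa, fb = np.fft.fft(da, n), np.fft.fft(db, n)
        coef = np.rint(np.fft.ifft(fa * fb).real).astype(np.int64)   # convolution
        carry, digits = 0, []
        for c in coef:                             # propagate carries, base 10
            carry, d = divmod(int(c) + carry, 10)
            digits.append(d)
        while carry:
            carry, d = divmod(carry, 10)
            digits.append(d)
        while len(digits) > 1 and digits[-1] == 0:
            digits.pop()
        return int("".join(map(str, digits[::-1])))

    assert fft_multiply(123456789, 987654321) == 123456789 * 987654321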
High-dimensional work is bloody hard, and this algorithm works amazingly well. I've spoken with Lenstra (one of them) and he's amazingly insightful on these things. He helped to crystallise my understanding of why high-dimensional spheres should be thought of as "spiky," rather than "round."
Please do. My guess would be similar to Zaak's: the n-dimensional volume of an n-dimensional hypersphere approaches 0 as n approaches infinity. (http://www.mathreference.com/ca-int,hsp.html). A hypersphere with radius 1 centered at the origin has to get out to 1 along each axis (e.g. x=1, y=z=0 for a 3 dimensional sphere), and in order for that not to contribute much volume, it has to be a narrow spike rather than a gradual curve.
Not at all. The expected distance between the center of an n-ball and a random point in the ball is n/(n+1) times the radius.
Now, if you look at an n-sphere with respect to orthogonal axes, you find that moving along an axis you get out as far as (1, 0, ... 0) and moving "away" from the axes you only get to (1/sqrt(n), 1/sqrt(n), ... 1/sqrt(n)); but this isn't due to the sphere being spiky -- rather, it's because orthogonal axes are spiky.
Agreed, but while the notation's simple, the iterative effect is hard to follow -
it's like a spinning rod, angle doubled and length squared each iteration, with a rod of fixed length and angle added to it each iteration (if the spinner is facing away, it gets shorter, but if towards, it gets longer). Iterate til it goes over 2. If it doesn't then it's in the Set (or maybe you need to iterate more...). [that's my current understanding]
It's pretty clear to me that I can't visualize what it will do over a few iterations (apart from simple cases) - and I sure can't see how it would produce self-similar (ie fractal) shapes. Agreed that that's amazing.
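The whole escape-time test for a single point c really is just a couple of lines, which is part of what makes the resulting shapes so surprising (200 iterations is an arbitrary cutoff; points that survive are only "probably" in the set):

    # z -> z*z + c squares the length and doubles the angle of the "spinning
    # rod", then adds the fixed rod c.
    def in_mandelbrot(c, max_iter=200):
        z = 0j
        for _ in range(max_iter):
            z = z * z + c           # square-and-double, then add c
            if abs(z) > 2:          # once past 2 it escapes to infinity
                return False
        return True                 # survived max_iter: probably in the set

    print(in_mandelbrot(-1 + 0j))   # True: -1 cycles forever
    print(in_mandelbrot(1 + 0j))    # False: 1 escapes quickly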
It lets you search a completely unstructured N-item search space using the square root (!) of N queries, not the N queries you'd think were necessary.
Also, the algorithm is so simple that once you know it's possible, and provided you're very comfortable with basic quantum mechanics, it's almost trivial to find the algorithm.
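You can even watch the amplitudes do it by simulating them classically with numpy (only feasible for small N, of course; the point of a quantum computer is that it doesn't have to store all N amplitudes). The N and marked item below are made up for the demo.

    import numpy as np

    N, marked = 1 << 10, 123                  # 1024-item unstructured search space
    amps = np.full(N, 1 / np.sqrt(N))         # uniform superposition

    for _ in range(int(round(np.pi / 4 * np.sqrt(N)))):   # ~25 iterations, ~sqrt(N)
        amps[marked] *= -1                    # oracle: flip the marked amplitude
        amps = 2 * amps.mean() - amps         # diffusion: invert about the mean

    print(np.argmax(amps**2))                 # 123
    print((amps**2)[marked])                  # ~0.999: almost all the probability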
Ok, not quite an algorithm, but the PCP theorem (http://en.wikipedia.org/wiki/PCP_theorem) is easily the most mind-blowing result in comp sci in the past 20 years.
PCPs are true magic. I once learned how to construct a PCP-proof with verification using just 7 bits, but it still feels like magic to me. The 3-bit variants are just crazy.
The Burrows-Wheeler transform used in bzip2 is truly mind-blowing.
Many algorithms are ordinary genius (eventually you would have come up with it because the problem space dictates the solution), but this one is extraordinary genius (mind-blowingly original and non-obvious). Every time I see it, I'm amazed that it works.
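A naive sketch of the transform and its inverse (the sorting is done the slow, obvious way; real implementations use suffix arrays, and "$" is assumed not to occur in the input):

    def bwt(s):
        """Burrows-Wheeler transform: last column of the sorted rotations."""
        s = s + "$"                            # sentinel, assumed absent from s
        rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
        return "".join(r[-1] for r in rotations)

    def inverse_bwt(last):
        """Rebuild the sorted rotation table one column at a time."""
        table = [""] * len(last)
        for _ in range(len(last)):
            table = sorted(last[i] + table[i] for i in range(len(last)))
        return next(r for r in table if r.endswith("$"))[:-1]

    print(bwt("banana"))                       # annb$aa -- like letters clump together
    print(inverse_bwt(bwt("banana")))          # banana

That clumping is the whole point: bzip2 follows the transform with move-to-front and Huffman coding, which love long runs of the same symbol.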
Welzl's algorithm for minimum enclosing disk does it for me. Pick a point, recursively see if it is in the disk without the point - if not, it is part of the definition of the minimum enclosing disk (forced to be on the boundary). Randomized, simple, clean and expected O(n).
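A direct transcription of that recursive idea (toy code: no care taken with collinear/degenerate cases, and the random shuffle is what gives the expected O(n)):

    import math, random

    def circle_from(R):
        """Smallest disk with all of R (at most 3 points) on its boundary."""
        if not R:
            return ((0.0, 0.0), 0.0)
        if len(R) == 1:
            return (R[0], 0.0)
        if len(R) == 2:
            (ax, ay), (bx, by) = R
            c = ((ax + bx) / 2, (ay + by) / 2)
            return (c, math.dist(c, R[0]))
        (ax, ay), (bx, by), (cx, cy) = R        # circumcircle of three points
        d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
        ux = ((ax*ax + ay*ay) * (by - cy) + (bx*bx + by*by) * (cy - ay)
              + (cx*cx + cy*cy) * (ay - by)) / d
        uy = ((ax*ax + ay*ay) * (cx - bx) + (bx*bx + by*by) * (ax - cx)
              + (cx*cx + cy*cy) * (bx - ax)) / d
        return ((ux, uy), math.dist((ux, uy), R[0]))

    def inside(disk, p, eps=1e-9):
        c, r = disk
        return math.dist(c, p) <= r + eps

    def welzl(P, R=()):
        """Min enclosing disk of P, with the points of R forced onto the boundary."""
        if not P or len(R) == 3:
            return circle_from(R)
        p, rest = P[0], P[1:]
        d = welzl(rest, R)
        if inside(d, p):                        # p is already covered: done
            return d
        return welzl(rest, R + (p,))            # otherwise p must lie on the boundary

    pts = [(random.random(), random.random()) for _ in range(300)]
    random.shuffle(pts)
    disk = welzl(pts)
    assert all(inside(disk, p) for p in pts)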
CORDIC (http://en.wikipedia.org/wiki/Cordic), used to efficiently calculate trig functions with only very basic hardware requirements (add, sub, shift, and table lookup).
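The structure is easy to show in floating point (real implementations work in fixed point, so the multiplications by 2^-i below really are shifts; convergence holds for angles up to about +/- 1.74 rad, so reduce to that range first):

    import math

    ITER = 32
    ANGLES = [math.atan(2.0 ** -i) for i in range(ITER)]      # the lookup table
    GAIN = 1.0
    for i in range(ITER):
        GAIN /= math.sqrt(1 + 2.0 ** (-2 * i))                # constant scale factor

    def cordic_sincos(theta):
        """Rotation-mode CORDIC: rotate (1, 0) toward theta by +/- atan(2^-i) steps."""
        x, y, z = 1.0, 0.0, theta
        for i in range(ITER):
            d = 1.0 if z >= 0 else -1.0                       # which way to rotate
            x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
            z -= d * ANGLES[i]
        return y * GAIN, x * GAIN                             # (sin, cos)

    print(cordic_sincos(0.5))
    print(math.sin(0.5), math.cos(0.5))                       # agree to ~1e-9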
. . . relatedly, from Jim Blinn, from Marvin Minsky -- drawing a circle (near enough), or doing incremental rotation, using just integer add, sub, and two shifts:
N = 128 * 3.14159          ' about 2*pi / (1/64): steps for one full circle
X = 1000
Y = 0
MOVE(X,Y)
FOR I = 1 TO N
  X = X - (Y >> 6)         ' note: the freshly updated X feeds the next line --
  Y = Y + (X >> 6)         ' that "bug" is exactly what keeps the orbit from spiralling
  DRAW(X,Y)
NEXT I
Some of the sorting algorithms. As somebody learning them, it's almost entirely opaque how somebody could come up with those.
It seems like they must have sprung forth whole cloth to the inventor in the shower... it's almost impossible to have iteratively developed some of them, because even small changes in the algorithms produce terrible results. I remember thinking over and over again, "how the hell could somebody come up with this?"
Searching, in comparison, looks very engineered, very studied: something that most people could come up with given need, motivation and time.
The Genetic Algorithm. It totally blew my mind when I read about its successful application to evolving satellite antennas for JPL, and the automated design of an analog-to-digital signal converter that relies on a component that isn't actually connected to anything else.
Poul-Henning Kamp (creator of Varnish) recently published the B-heap algorithm. It's a cache-friendly version of the binary heap storage we all learned as freshmen.
PHK points out that mapping node n to nodes 2n and 2n+1 will cause cache misses (and often page faults) as we vertically descend the tree. So instead, rearrange the mapping so that the child nodes are often very near their parent node. That way, heap comparisons are usually performed in the same virtual page, which cuts down on the number of disk accesses the operating system must perform.
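A back-of-the-envelope illustration of the problem (not PHK's actual layout -- just counting how a classic root-to-leaf descent scatters across pages):

    # Classic implicit heap: children of node n live at 2n and 2n+1, so the
    # index doubles at every level of a descent.  Once the heap is bigger
    # than a page, nearly every comparison lands on a fresh page.
    PAGE = 4096 // 8                     # entries per 4 KB page, 8-byte entries
    node, pages = 1, set()
    while node < 10**7:                  # root-to-leaf walk, always going left
        pages.add(node // PAGE)
        node *= 2
    print(len(pages))                    # 16 distinct pages for a 24-level descent

Roughly speaking, the B-heap packs each small subtree into one page instead, so most of those hops stay put.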
More of a complexity result than an algorithm, but it is a really fun result anyway. In complexity class we were given the problem to simulate an n-tape non-deterministic Turing machine with a 2-tape non-deterministic Turing machine with just a constant slow-down (which turned out to be 3). It really gave an insight into the power of non-deterministic choice.
While looking through some 360 assembler code one day, I came across this ...
XC loc1,loc2        loc1 = loc1 XOR loc2
XC loc2,loc1        loc2 = loc2 XOR loc1  (now holds the original loc1)
XC loc1,loc2        loc1 = loc1 XOR loc2  (now holds the original loc2)
It took a day of head scratching, and various notes, before I understood that it was a clever way of exchanging the contents of two storage locations, without a third intermediate location. Three consecutive exclusive-Ors.
And regarding machine learning, I like neural nets (MLPs); however, the algorithms that currently blow my mind are convolutional neural networks (CNNs) and deep belief networks trained with auto-encoders.
Watch http://www.youtube.com/watch?v=AyzOUbkUf3M from the 21-minute mark to see it in action.
http://en.wikipedia.org/wiki/Hamming(7,4)
I got interested in these codes because of the problems involved with deep space communication.
http://en.wikipedia.org/wiki/Error_detection_and_correction#...
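For anyone who wants to see the magic directly, here is Hamming(7,4) with the textbook generator and parity-check matrices: the syndrome literally spells out the position of the flipped bit (numpy is used only for the matrix products).

    import numpy as np

    # Hamming(7,4): 4 data bits -> 7-bit codeword; any single bit flip
    # can be located and corrected.  Codeword layout: p1 p2 d1 p3 d2 d3 d4.
    G = np.array([[1,1,0,1],
                  [1,0,1,1],
                  [1,0,0,0],
                  [0,1,1,1],
                  [0,1,0,0],
                  [0,0,1,0],
                  [0,0,0,1]])
    H = np.array([[1,0,1,0,1,0,1],
                  [0,1,1,0,0,1,1],
                  [0,0,0,1,1,1,1]])

    def encode(data4):
        return G @ np.array(data4) % 2

    def correct(received7):
        s = H @ received7 % 2                 # syndrome = error position in binary
        pos = s[0] + 2 * s[1] + 4 * s[2]      # 0 means "no error"
        fixed = received7.copy()
        if pos:
            fixed[pos - 1] ^= 1               # flip the offending bit back
        return fixed

    code = encode([1, 0, 1, 1])
    garbled = code.copy()
    garbled[4] ^= 1                           # corrupt one bit in transit
    assert (correct(garbled) == code).all()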