"For qubits, they can hold a value of “1” or “0” as well as both values at the same time. Described as superposition,"...
Am a bit frustrated reading that same description for years and still having no clue what it really means. Both values at the same time... huh? What is the physical property that is existing in two states at once? And how does that feature translate into speedier computation? Is it so hard to understand that it can't be explained to a layman in a bit more depth?
Would appreciate any references, since everything I've managed to find is either way too technical or too vacuous.
Think of a qubit as a 2D vector. You can express it as a(1,0)+b(0,1), where (1,0) is the "1" state and (0,1) is the "0" state. In quantum mechanics a and b are complex numbers. When you measure the qubit, the quantum state by magic collapses to (1,0) with probability |a|^2 or to (0,1) with probability |b|^2. The power of quantum computers comes from the fact that transformations of your qubits act on the superposed state, not the collapsed state.
So in principle you operate on all possible classical states of your qubits simultaneously when operating on the superposition. This magic doesn't grant you unlimited power, however, since you need to look at the state to get an answer, and there you only measure one definite state. The trick to quantum algorithms is then to make the "right" answer as probable as possible.
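If it helps to see that in code, here's a minimal NumPy sketch of the same picture (just a classical simulation, and the particular amplitudes are arbitrary choices for illustration):

  import numpy as np

  # A qubit as a 2-component complex vector: a*(1,0) + b*(0,1).
  a, b = 1 / np.sqrt(2), 1j / np.sqrt(2)      # any a, b with |a|^2 + |b|^2 = 1
  state = np.array([a, b])

  # A gate (here the Hadamard transform) acts on the whole superposition at once.
  H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
  state = H @ state

  # Measurement collapses to a definite outcome with probabilities |a|^2 and |b|^2.
  probs = np.abs(state) ** 2
  outcome = np.random.choice([0, 1], p=probs / probs.sum())
  print("measured", outcome)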
This explanation is missing a little bit of background. A classical computer's state is represented by a single number - the concatenation of all of the classical bits in its memory. The transition function in a classical computer takes a state and moves it into another state - a simple mapping.
A single qubit's state is a 2D vector of unit length. Transitions are rotations of this vector. Observing the quantum state collapses it into a classical state - a single number - by projecting it onto one of the basis axes, with probability given by the squared amplitude along that axis. If your vector is halfway between two axes, the computer has an equal chance of collapsing into either classical state when observed. As it turns out, this also means that acting on this vector is equivalent to acting on both of those classical states simultaneously. If you have the right kind of problem (http://en.wikipedia.org/wiki/BQP, for example), this lets you do a lot of computation in a hurry.
A k-qbit computer's state is actually a unit vector in 2^k dimensional complex space. This exponential blowup makes it difficult to simulate quantum computers on classical systems.
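To make the blowup concrete, here's a rough NumPy sketch (my own illustration, not anyone's production simulator) of holding and transforming a k-qubit state vector:

  import numpy as np

  k = 20                                   # number of qubits
  dim = 2 ** k                             # length of the state vector
  state = np.zeros(dim, dtype=complex)
  state[0] = 1.0                           # start in |00...0>

  # Apply a Hadamard to each qubit in turn to build the uniform superposition
  # over all 2^k classical states, without ever forming a 2^k x 2^k matrix.
  H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
  state = state.reshape([2] * k)
  for qubit in range(k):
      state = np.moveaxis(np.tensordot(H, state, axes=([1], [qubit])), 0, qubit)
  state = state.reshape(dim)

  print(state.nbytes / 1e6, "MB for", k, "qubits")   # memory doubles with every extra qubit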
Michael Nielsen has a series of Khan Academy style videos called Quantum computing for the determined: http://michaelnielsen.org/blog/quantum-computing-for-the-det.... Nielsen co-wrote the textbook on quantum computing and has made significant contributions to the field.
It's been many years since I took an introductory class in quantum computing, but basically the way computation works with quantum computers is fundamentally different. It would require relearning a lot for me to explain how it makes computation faster, but basically it's possible to devise algorithms where multiple solutions are returned as a superposition of states, which, when observed, will usually collapse to the most probable answer, which is the correct one.
The algorithm I studied was for prime factorisation. Basically it lets you calculate the prime factors of a number in polynomial time, which ultimately breaks the security of things that depend on prime factorisation being hard (slow), such as RSA public key encryption.
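For what it's worth, only the period-finding step of Shor's algorithm is quantum; the rest is classical number theory. Here's a rough Python sketch of that classical wrapper, with the period found by brute force (which is exactly the part the quantum computer replaces):

  from math import gcd
  from random import randrange

  def find_period(a, N):
      # The slow part: the order r of a modulo N. Shor's algorithm finds this
      # in polynomial time with a quantum Fourier transform; here we brute-force it.
      r, x = 1, a % N
      while x != 1:
          x = (x * a) % N
          r += 1
      return r

  def factor(N):
      while True:
          a = randrange(2, N)
          if gcd(a, N) > 1:
              return gcd(a, N)                 # got lucky: a shares a factor with N
          r = find_period(a, N)
          if r % 2 == 0 and pow(a, r // 2, N) != N - 1:
              return gcd(pow(a, r // 2, N) - 1, N)   # non-trivial factor of N

  print(factor(15))                            # prints 3 or 5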
What if you represented your computer's bits as an arrow on a compass? You also have the plane that the compass sits on.
The arrow points in one direction for 1 and in another direction for 0.
Regular computation has you doing operations on bits. You can only do one operation at a time on one bit, and you can flip bits by following a recipe.
A probabilistic computer allows you to do some special things. For example, you have a magical random source from which you can draw a magic variable. This magic variable is a random variable. It is used, for example, in the Fermat primality test (based on Fermat's little theorem).
Given a number, how do we test its primality? Naively we would have to trial-divide it by each number lower than itself (there are optimizations I am glossing over for this exercise).
What if we use this magical random variable instead? It can be a number from 0 up to some natural number.
If we have a few other algorithmic tools we can make a probabilistic search of an array.
With a random variable we can do the following operation
findingA_LV(array A, n)
begin
    repeat
        Randomly select one element out of the n elements.
    until 'a' is found
end
This will search an array A randomly and succeeds with probability 1 (this is very important... if your success probability were 0 you probably wouldn't have a working algorithm ;) )
What's the advantage? Well, instead of searching the array sequentially you randomize the search. The answer is always correct; only the running time is random. So if we are really lucky we could win very fast. If the array is SMALL then that might be even better, right? If the array has one element then we will always win, but that's not fun :(
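In runnable form, the sketch above looks something like this (plain Python, the names are mine):

  import random

  def finding_a_lv(A, target):
      # Las Vegas search: the answer is always correct, only the runtime is random.
      n = len(A)
      while True:
          i = random.randrange(n)          # randomly select one element out of n
          if A[i] == target:
              return i                     # expected about n tries if target occurs once

  print(finding_a_lv([3, 1, 4, 1, 5, 9, 2, 6], 9))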
But a more useful example is the fact that quicksort with a randomized pivot runs in expected O(n log n) time, which is FAST.
Now, let's make a quantum leap. Let's go faster than FAST. Let's go ridiculous.
Let's say that we have a new magical bit. People call it the qubit. The more of them you have, the better off you are.
A qubit can be in a superposition, a weighted combination of 0 and 1, and you can also correlate two or more qubits together magically (entanglement). Algorithmically no one cares how; on the engineering side the ability to do this is worth trillions of dollars, though. The advantage has the following feature.
If we want to factor a number we can do it faster than any known classical algorithm. By classical I mean following the rules of your standard computer.
Quantum computers allow you to BREAK the rules of classical ones. They do this by doing the following things.
Let's review the two computers we have discussed before.
The first kind of computer is a Turing machine, which handles the recursively enumerable languages/problems. It basically follows rules set by a finite control (a CPU, basically) and has a tape it can write to and move left or right along.
The second kind of computer has the magical randomness: it runs randomized algorithms. If you are feeling lucky you might get lucky, and this reduces your runtime; in the average case you hopefully win more often than you lose.
Our magical quantum computer allows for correlation of two or more "variables" (called qubits). How is this better?
In quantum physics, when you observe a particle you "collapse its wave function". This is making a measurement. It is loosely analogous to a return value in any C-like language; the difference is how it returns and what magical things happen inside the function before it does.
The magic is as follows. A qubit can be measured in something called a computational basis. What does that mean? It means you pick two basis vectors in the complex vector space; geometrically you can picture the qubit's state as a 3D arrow that can point anywhere on a sphere. They call this the Bloch sphere.
With this magic you can do magic tricks. One such trick is searching an UNSORTED database in O(sqrt(n)). This sounds ridiculous, right?
You are performing fewer operations than the size of the array... You aren't LOOKING at every entry.
This is ridiculous. How do they do this?
Instead of thinking of the database as an array think of it as a function.
We are trying to invert that function. Encode all of the elements of the database as a vector in an "n-dimensional state space"; this requires only a logarithmic number of qubits.
Given a function which returns true or false depending on whether an element is the one you're looking for, you can write an algorithm that searches it using superposition EXTREMELY fast, in O(sqrt(n)).
It works by putting all of the qubits into a uniform superposition over the whole state space, so that every element is represented at once, and then repeatedly rotating that state. Because the rotations are driven by the function that tells you whether an element matches your target, it takes about sqrt(n) of them before the target dominates. In short: prepare the uniform superposition, then rotate the state space until you get what you want.
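If you want to see that rotation happening in numbers, here's a toy state-vector simulation of Grover's search in Python (my own sketch of the idea, not this exact construction):

  import numpy as np

  n_qubits = 4
  N = 2 ** n_qubits
  marked = 11                                   # the entry the oracle recognizes

  state = np.full(N, 1 / np.sqrt(N))            # uniform superposition over all N entries

  for _ in range(int(round(np.pi / 4 * np.sqrt(N)))):
      state[marked] *= -1                        # oracle: flip the phase of the marked entry
      state = 2 * state.mean() - state           # diffusion: reflect everything about the mean

  print("P(marked) =", state[marked] ** 2)       # close to 1 after ~(pi/4)*sqrt(N) rotations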
That's why we want these things. They are just faster. They break the rules, but they are hard to build.
You need to take more math before you can understand this... start with some linear algebra to understand what a qubit is, maybe some abstract algebra to understand why fields are important, and then topology.
Michael Nielsen explains why explaining quantum computing is inherently difficult (TL;DR: our intuition works classically, and thus a complete classical model of a quantum computer would imply that a quantum computer could be simulated classically) http://michaelnielsen.org/blog/quantum-computing-for-everyon...
Super simplified summary: because the probabilities can interfere with each other in controlled ways.
Bits vs Qubits
Suppose you have 3 bits, but you don't know what state they are in. Could be 000, 001, 010, 011, 100, 101, 110, or 111. There is a probability of the state being each of those possibilities. However, the bits are actually in one of these states. Any operation you perform, like 'increment', will take one fixed input and give one fixed output.
Now suppose you have 3 qubits, but you don't know what state they are in. Could be 000, 001, 010, 011, 100, 101, 110, or 111. There is a probability of the state being each of those possibilities. However, the probability is derived from a "magnitude" or "amplitude" (http://en.wikipedia.org/wiki/Probability_amplitude) which sort of actually physically exists. The bits are not in exactly one of the states. They "really" have an amplitude/probability.
Ok, whatever, so the probabilities are "magnitudes" and "actually exist". How does that help? Well ... you can perform operations that 'rotate' the magnitudes in ways that make them interfere with each other in ways that give the right answer more and more magnitude as you iterate. Keep in mind these rotations are over a huge number of magnitudes (2^n) but their running times are polynomial in the number of qubits (n). This allows you to do things like Grover's algorithm (http://en.wikipedia.org/wiki/Grover%27s_algorithm), searching N unordered items in O(Sqrt(N)) time:
// http://tph.tuwien.ac.at/~oemer/doc/quprog/node17.html#SECTION00513300000000000000
procedure grover(int n) {
  int l=floor(log(n,2))+1;        // no. of qubits
  int m=ceil(pi/8*sqrt(2^l));     // no. of iterations
  int x;
  int i;
  qureg q[l];
  qureg f[1];
  {
    reset;
    Mix(q);                       // prepare superposition
    for i= 1 to m {               // main loop
      query(q,f,n);               // calculate C(q)
      CPhase(pi,f);               // negate |n>
      !query(q,f,n);              // undo C(q)
      diffuse(q);                 // diffusion operator
    }
    measure q,x;                  // measurement
    print "measured",x;
  } until x==n;
}
In quantum mechanics a particle is not described by a position and velocity but by a wave function that represents a probability distribution of the possible states of the particle. When a measurement is made the particle takes one of the possible states, with a probability determined by the wave function. This is what people mean when they say that a particle can be in multiple states at the same time.
The Bloch Sphere is particularly helpful in understanding superpositions in qubits and operations on qubits.
Also, for the Perl folks, I think these two modules are useful for "hands-on" learning with "qubits" (an approximation of qubits on a classical computer):
I personally favor the Quantum::Entanglement module.
The documentation on the q_logic function gives you a basic root-not gate, and it's the best way to understand the intermediate state. Here, entangle(1,0) returns an "entangled" variable, which you can think of as a single qubit in a superposition of values, but one that returns the value 0 with 100% probability and 1 with 0% probability (hence the 1, 0).
The root_not function given to q_logic as its first argument is the root-not gate. You can see that it takes the input value, and returns a superposition of 0 and 1 ($val and !$val), the superposition giving you a 50% chance of getting either a 0 or a 1.
However, if you apply root_not TWICE, you have a 100% chance of getting a 1 and a 0% chance of getting a 0. In essence, square root of not times square root of not gives you square root of not squared, or just not, and not of 0 is 1.
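Here's the same root-not arithmetic written out with a plain 2x2 matrix in NumPy (independent of the Perl module, just to show the numbers):

  import numpy as np

  ROOT_NOT = 0.5 * np.array([[1 + 1j, 1 - 1j],
                             [1 - 1j, 1 + 1j]])     # one common square root of NOT

  zero = np.array([1, 0], dtype=complex)            # definite 0, i.e. entangle(1, 0)

  once = ROOT_NOT @ zero
  print(np.round(np.abs(once) ** 2, 3))             # [0.5 0.5]: a coin flip if measured now

  twice = ROOT_NOT @ once
  print(np.round(np.abs(twice) ** 2, 3))            # [0. 1.]: two half-NOTs make one full NOT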
Again, the Bloch Sphere is invaluable in understanding the probabilities in superpositions of 1 and 0, or qubits.
>Am a bit frustrated reading that same description for years and still having no clue what it really means. Both values at the same time... huh? What is the physical property that is existing in two states at once?
I'm nitpicking just a little, but isn't discovery always about something unprecedented? Otherwise it's not that different from the rest of the scientific work people do.
As a disclosure, I'm a physicist, having worked on quantum computing during my PhD. I think it's really cool, though a long qubit lifetime alone doesn't take you far; the devil is in the details, for example good (reliable, low-noise and tunable) qubit-qubit interactions (how, in this case?) and reproducible devices (not at all in this case; every diamond will be different).
I think a discovery that improves the storage time of qubits by 6 orders of magnitude is unprecedented. Also, even if this technique has terrible qubit-qubit interaction properties, it would still be useful for storage. You could just read the qubit out, do computation in a different medium, and store the result back.
It's always a bit creepy to see this. I mean, if one is really close to this, it could mean the NSA or DARPA could have the know-how and especially the money to already have one.
Well, it's actually just the money. Money to buy know-how, researchers, facilities and whatever else they'd need. It's not only about such things, but if you see how the EFF built the DES cracker with donations, or how a group of security researchers breaks stuff with PlayStations, and then see how much money and power government organizations without any transparency have, then it's really creepy.
It's also a bit creepy how much power the bosses of such institutions potentially have when compared to something like a president. No wonder there are so many conspiracy theories.
As far as we know, it's definitely just "we got a step closer". The advances that we will need to make in our understanding of quantum systems are probably significant, but quantum computers themselves don't seem that promising as of now for one important reason - we have had a very hard time coming up with algorithms that run (asymptotically) faster on quantum computers than on classical computers. The only significant ones right now are Shor's algorithm (factoring numbers in (log n)^3 time) and Grover search (searching through N unsorted values in sqrt(N) time). So it's definitely a niche for now, but with two qualifications:
1. Search is a very important niche; it comes up everywhere and could effectively let us say "fuck it, let's brute-force it" in a lot of places where we currently use complex algorithms.
2. The knowledge that we gain while trying to build one will probably be useful somewhere.
As for "could propel". There is a set of five criteria (or 7 if you consider networking) called the DiVincenzo criteria that roughly state the properties a system needs to satisfy to be a viable quantum computer. One of these is "long decoherence times compared to gate operation times", which means how long the system of qubits can store quantum information before the noisy environment collapses it to classical information. We compare this to gate operation times because the relative times are what is important. If you have a decoherence time of 2 seconds, but it takes 3 seconds for a CNOT gate to operate, its not very useful. I don't know about the gate operation times of quantum computers based on diamonds. Even so, decoherence times are usually on the order of ms, so a 2 second time is probably a very significant achievement in that criterion. That's also been one of the difficult criteria to crack, so that adds to it. Additionally, theres no need to use this medium as the "RAM" or cache of a computer, this could potentially be used as the equivalent of a HDD on a quantum computer.
Quantum computers are used in production, though, as you say, in a complete niche. Lockheed Martin bought a quantum computer (and associated research & development) from D-Wave for use in their "most complex computations". [1]
There's no further notice of what Lockheed has done with it, but at least there's one quantum computer in private hands designed to do production work.
We always hear about the ramifications a quantum computer would have if used to crack enterprise level security. Does there exist encryption that would be troublesome even for a quantum computer to crack or are we just SOL if one of these falls into the wrong hands?
Lattice-based cryptography (https://en.wikipedia.org/wiki/Lattice-based_cryptography) is currently believed to be quantum-computer resistant (i.e. requires super-polynomial time to break even on a sufficiently large quantum computer).
There are only very few problems where quantum computers achieve an exponential speedup vs. classical computers, factoring being one example.
It's a little pricey (I think it's meant as a textbook), but DJB has a book on post-quantum cryptography‡
He also has an intro paper (which is probably more along the lines of what you're looking for)‡
Hash-based cryptography, code-based cryptography, lattice-based cryptography, and multivariate-quadratic-equations cryptography are all methods that should hold up in a post-quantum world.
I believe that quantum computers only give a square root speedup for bruteforcing symmetric encryption algorithms, so we'd just have to double the key length of those.
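In rough numbers (ignoring constants and real-world overheads): a 128-bit key that would take about 2^128 classical guesses takes on the order of 2^64 Grover iterations, roughly the classical security of a 64-bit key, while a 256-bit key brings the Grover cost back up to about 2^128.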
Asymmetric algorithms based on the hardness of factoring or calculating discrete logarithms are broken, though.
Apart from the fairly wonderful applications in computing, it will be interesting to see what it does for the enhancement of synthetic diamond technology. I am not going to invest my pension in De Beers, unless they're going large in the synthetic diamond business.
This I believe is how certain aspects of the mind exist, with information being temporarily stored and interacting with other systems. Theoretically, then, systems of the brain/mind should be able to be duplicated. This would allow for managing large amounts of information at very high speed with no heat generation. The only heat generated would be through the support systems / foundation - I suppose much like how the brain functions as a foundation for the mind.
A brain made out of a diamond? Indiana Jones anyone?
Interesting. I wasn't implying it's a quantum system, not through entanglement anyhow. I'm not sure it would need to be linked in such a way for it to play its role in overall functioning.
There's something really awesome and science-fiction-like about the idea that when we finally build super-awesome AIs (like Iain M Banks' "Minds"), who then either destroy us or become our best pals in the whole universe, their brains will be built of diamond.
If you can store values for a given time frame, with that time frame longer than the delay of the circuit, why don't we just build a latch and extend the time frame indefinitely? Or are these diamonds only settable once, so you then have to build another one?
Because when you read a qubit, you collapse the superposition and end up with a boring old 0 or 1. Which is what you want when the algorithm is completed, but during the execution of the algorithm you want to do operations on the superpositions. So you cannot read the value and feed it back because in the process, you destroy it.
If they can achieve two seconds with diamonds, you have to wonder what they will be able to achieve with graphene, another allotrope of carbon. Maybe they will be able to find metamaterials that can store information in qubits indefinitely. This is indeed an exciting development.