Quantum Computer Comes Closer to Cracking RSA Encryption (ieee.org)
123 points by mhb on March 7, 2016 | 81 comments



If I understand correctly, this quantum computer came closer to cracking RSA the same way shifting in your chair gets you closer to Jupiter.


Wernher von Braun oversaw development of some awfully small, crummy rockets https://youtube.com/watch?v=Ii7uwp1SRIM (imagine the ones even before these!) but he also oversaw the Saturn V, which lifted every person who walked on the Moon. We understand the physics; now it's a matter of engineering and funding!


Not necessarily. What if entangling each qubit is twice as hard as the previous one? There are a lot of people who suspect physics may prevent quantum computing from scaling up. See also:

http://phys.org/news/2012-02-quantum-physicist-100k-proof-sc...


But the guy who made that bet doesn't think physics prevents a quantum computer. "Many people assumed I was a QC skeptic, and was offering the prize because I hoped to spur research aimed at disproving QC. Which is actually an interesting misreading" http://www.scottaaronson.com/blog/?p=902


Yeah, I was just pointing out that there is disagreement. I think of it like time travel: nobody has proved that it is impossible, but a lot of people suspect that a "sensible" universe wouldn't allow it.


The rate of progress seems in fact astoundingly slow. If gene sequencing is outstripping Moore's law, quantum computing is underperforming it by a mile. This quantum computer did the same 3 x 5 = 15 factorization that was done ~15 years ago, and has added just one special qubit over its predecessors. Admittedly, this is a more "fair" factorization technique than the early experiments, which have been likened to loaded dice.


Well, "we" (I didn't I wasn't there) understand the physics since the 1920s.


Except that a whole lot of basic research into stuff like viable rocket fuels, etc., simply didn't exist yet that early.


A better analogy would be that it's the same way that building a glider got us closer to an A380.


Breaking news! RSA was cracked today by a quantum computer. Turns out {3, 5} were actually universal factors.


Ahh man, I needed this laugh. Thank you.


I don't even know if your comment is accurate, Yver, but I'm upvoting it anyway because it's awesome.


I love it when people tout a result as being "only 15-30 years away." Is there any large-scale human technological research project in all of human history which lasted 20 years and actually produced what was promised? There is no incremental path here; any time someone says "20 years," it means they have no idea how to do this thing.


The Human Genome Project: their goal when starting was "this will take 15 years," and they completed it in 13. The Apollo program: a goal of less than 9 years (end of the decade) when it was proposed in 1961 to land a man on the Moon and return him to Earth, completed in 1969.

If you go back quite a ways, I'm willing to bet at least some of the innovation-requiring castles, cathedrals, and pyramids of history were completed "on schedule," with it known up front that they would take many years to finish.

I suspect a lot of projects are accurately classified as 10-30 years away, given a certain level of funding and a dedicated team. Absent that, they may very well take twice as long or more. I always like to see what long-term-oriented institutions like SENS and MIRI say they would do if they had x times more money, to check that they've got a plan -- fortunately those ones do give that information.


> I suspect a lot of projects are accurately classified as 10-30 years away, given a certain level of funding and a dedicated team

Precisely. Take fusion for instance:

http://i.imgur.com/sjH5r.jpg

This is the source of the "joke" that fusion will always be 20 years away: it's been grossly underfunded ever since that prediction was made.


This is really interesting! Have you read the source report? I'm curious as to why/how the predicted required budget would fluctuate so much. For such a specialised, frontier-based project, surely you'd be aiming for consistent staffing (and thus consistent staff budgets) year on year, no?


My guess is that it fluctuates because there are a discrete number of new experiments to be constructed, and these don't cost a constant amount during their construction.


That was my guess as well. Looking at the graphs, there are about three clear peaks in the maximum, accelerated, and aggressive plans, muddled out a bit in the moderate plan.

What are these three experiments? Which of them have already been done in the existing plan? And what others might need to be added?

Also curious is the expected total cost. Eyeballing each plan's average cost and multiplying by the duration, the total costs are 6x14=84, 4x17=68, 3x22=66, and 2.5x29=72.5 billion dollars. The accelerated and aggressive plans are more expensive than the moderate plan!

I suppose there's some additional overhead costs associated with keeping the program running for 29 years instead of 17, but I would have expected all the rush orders, overtime, and extra staff to make the faster programs much more expensive.


Neither of those numbers is equal to or greater than 20. 20 is an "I don't know how to do this" number of years; 10 is building a dam. Quantum computing has already been "a thing" for longer than 20 years; careers have been made... people have retired. Today we get news of 1 bit worth of progress made since 2001. Now we can factor 30 instead of 15...


https://en.wikipedia.org/wiki/Large_Hadron_Collider#Cost

The LHC was approved for construction start in 1995. The wiki article doesn't capture it but you can bet the science behind the design of the LHC wasn't all done the year before. It didn't achieve its first real goals until 2015.

https://en.wikipedia.org/wiki/LIGO

LIGO was dreamed up in the 1960's to detect gravitational waves. Took until 2002 to get the first one built big enough to theoretically detect them, and until 2015 to finally get measurements demonstrating the existence of waves.

Real science is slow. This "20 years means I don't know" business is indicative of perhaps not appreciating the difference between basic science and applied science/engineering. I know we're used to engineering projects pumping out new awesome tech every couple years, but basic science is hard and takes a long time.


LIGO was most certainly not dreamed up in the 1960s, and the LHC was completed in 10 years.

I picked the word "technology" instead of "science" on purpose. Science isn't slow or fast; it can't be predicted at all. Technologies are a different story. If someone says "I'm going to build you a piece of technology and it will take 20 years," that means they don't know how to do it. There are no exceptions.


For most research areas I'd agree that "20 years" = "who knows". But for cryptography, people using it should keep looking carefully at how long something they encrypted will remain safe. People may be betting a lot on the assumption that something stays secret for at least 50 years. Therefore any significant shift in those probabilities is newsworthy, in my opinion.


I understand your point, but it does happen. For example, commercially available LED lighting that is more efficient than compact fluorescent: the blue LED was invented in the early nineties, and you just knew it was only a matter of time before we would get efficient white-light LEDs.

The problem is that for every person who says, "This is 20 years away," there are unrealistic people who are saying, "This is 5 years away." And there are pessimistic people who are saying, "Oh, this is just going to go on forever. They don't really know how long it's going to take."

Will you ever be able to read popular news articles and reasonably expect the predictions to be accurate? I doubt it. Are there people in the world who have a pretty good idea how long it will actually take? For a great many things, yes.


Kind of off topic, but wow, LEDs have gotten a lot cheaper very quickly. I recently moved, and replacing all the incandescent bulbs and a few halogen lights with white-light LEDs was a lot cheaper than I thought it was going to be, judging by my memory of some LEDs I bought during the last move a few years ago... and they're at least as energy efficient, if not more, for the same lumens.

I'm usually part of the crowd that thinks Kurzweil hypes his curves too much, and additionally is "curve deterministic" instead of pointing in the right research directions to make things happen sooner/better. But there really are a lot of exponentials (and, slightly less impressively, power laws) out there if you look for them, and they can shake up expectations quite a bit when you're predicting linearly all the time. The genome project usually comes to mind, since Kurzweil mentions a lot that by the halfway point they were only 1% "done," so getting the other 99% in the next 7 years seemed pretty ridiculous if you were a layman extrapolating linearly and didn't read that they actually planned for an order-of-magnitude decrease in cost (https://books.google.com/books?id=rtq1bS7g-OEC&lpg=PA529&ots...).


Completely off topic, but I replaced almost all of my traditional lights with LED lights (except for some oddly shaped ones) two years ago. It wasn't that cheap, but I haven't had to replace a single one of them yet. I figure if they last 5 years, they'll have paid for themselves, between the cost of replacement bulbs (some of my fixtures ate normal bulbs every few months), the cost of electricity, and the pain of having to drag out the ladder every time I needed to replace one.


Ha ha... Super off topic, but last year we moved back to Japan and I had to buy all new light fixtures. I decided to go with these new LED panels. I'm trying to remember the details, but I bought 3 panels each 5,600 lumens for about $100 US each. Even now I have a hard time believing that, so I just did a quick price check online and they are still about the same price. Not only that, but IIRC they get over 120 lumens per watt, are colour adjustable, and have a remote control. It just seems too good to be true. I've only ever seen these lights in Japan, which is really a shame because they are amazing. (of course they are about a meter in diameter which wouldn't suit most western homes...)

Slightly more on topic, it's the thing in my home that most makes me feel like I'm living in the future :-)


I'm OK with 20-year predictions when it comes to converting an existing invention into its commercially available, mass-produced form (like white LED lighting), or the growth of the 1995 Internet into its now-ubiquitous level.

I'm very skeptical when people make 20-year predictions from the current state of the art to something which is impossible to do today for any amount of money, such as this quantum-computing-based RSA breaker.


The Longitude Act: https://en.m.wikipedia.org/wiki/Longitude_Act

Short answer: over 114 years from 1714, Britain sought and paid rewards for accurate measures of longitude, resulting of course in the invention of 'modern' clocks.


Relevant xkcd: https://xkcd.com/678/


>Is there any large-scale human technological research project in all of human history which lasted 20 years and actually produced what was promised?

For sure. Government-promised highways, for example.

Oh, uh, wait a minute...


This is not research, but simply applying well-known facts of transport engineering.


The Manhattan Project at least comes close. It's arguable that the start of the project was not the start of the research. On the other hand the theoretical framework necessary for it was still pretty new.


Less than 3 years from generating half a watt of power in the Chicago Pile in December of 1942 to blowing up a city; 4 if you count the previous year. Nobody said, "give us 20 years and we'll make you an A-bomb." They pretty much knew how to at least figure out all the lacunae in 3 years.


Nice clickbait. Scott Aaronson on this: http://www.scottaaronson.com/blog/?p=2673


Yep. Whenever I read anything sensational about QC, I immediately head over to his blog to read about what it actually means.


Best quote from the post you link to:

"If one counts demonstrations not based on quantum computing, some people have claimed even earlier precedents for the 3 x 5 = 15 theorem."


The irony: the SSL cert is invalid for the NSA FAQ linked to in the article... https://www.iad.gov:8443/iad/library/ia-guidance/ia-solution...


Not entirely true.

- Safari: certificate is fine
- Chrome: certificate error due to SHA-1 signatures
- Firefox: unknown issuer

Why is the DoD root certificate in OS X but not in Firefox?



Because someone would have filed a bug on Bugzilla arguing for it to be removed on moralistic grounds I bet.

I'm all for having principles, but you have to accept that it'll break for your users.


Frank Hecker : "Accepting this bug. My initial comments: The last time I dealt with the issue of the DoD PKI it was essentially a DoD-internal PKI for the use of U.S. military personnel (active or retired), DoD civilian employees, DoD contractors, and (maybe) allied forces (e.g., NATO). It was not intended for the use of the general public (whether U.S. citizens or not), and I'm not aware that members of the general public would ever be in a situation where they would encounter SSL-enabled web servers, S/MIME email, or signed code that used DoD-issued certificates.

Based on that I would consider this an "intranet" CA (albeit for a very large intranet) and based on my previous "meta-policy" comments I would recommend not including this in Mozilla et.al. I'll leave this bug open for a period of public comments, and then I'll close it with "WONTFIX" unless someone can provide compelling reasons why I should do otherwise."

Seems reasonable to me.


The strong argument here is that if you are giving a lock icon but knowingly allowing decryption by a known bad actor, you are also breaking the experience of your users.

I also agree with the weaker argument that the cert in question is essentially for a local intranet, and that the DoD (for as long as it continues to exist, which I find politically disagreeable) can install the cert locally on its own resources.

If it wants to publish material for broader consumption, it can get a cert like everyone else.


Trusting the cert does not, strictly speaking, allow decryption by the certificate authority. Rather, it gives the certificate authority the ability to issue certificates for domains; if such a certificate is used to establish a connection, its holder can encrypt and decrypt traffic for that connection.

So yes, if you trust the DoD root certificate, then the DoD as well as every certificate authority in the world could in theory generate a valid certificate impersonating www.google.com or any website. With a sophisticated enough attack, they could do this just for your one visit to one particular website, in such a way that it would be difficult for anyone to realize that it's happening. However, though this is difficult to notice if you're not looking for it, it's actually really easy to notice if you are looking for it. If you use Chrome, then Chrome will report the certificates that it sees back to Google, who track what certs are issued by CAs. This is how Google noticed that Symantec issued fake certificates for Google domains in Symantec's test environment: https://googleonlinesecurity.blogspot.com/2015/10/sustaining...
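
(As a concrete illustration, here's a minimal Python sketch of how you can see which CA vouched for a given site; any root in your platform's trust store could, in principle, have been the signer. The hostname is just an example.)

    import ssl, socket

    def print_issuer(host, port=443):
        # Validate the chain against the platform's default trust store
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port)) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()   # the validated leaf certificate
                print(cert['issuer'])      # which CA vouched for this host

    print_issuer('www.google.com')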

Anyway, the practical risk of trusting a DoD certificate is pretty low. To decrypt your traffic, they'd have to man-in-the-middle your connection to a web server and replace the site's valid certificate with their own, which would leave an obvious and flagrant trail to anyone who is looking. This would very obviously play their hand, and any evidence of being attacked that way by a first-world government would be immediate worldwide news in the security community. If they did this even once, they'd need to be extremely careful not to be caught by any of the countermeasures that detect this kind of surveillance.

That kind of attack would be a one time thing, because evidence of being attacked through the DoD cert would cause all browser vendors and OSes to yank support for it. CAs have been revoked for far less justified reasons than explicitly attacking someone.

I find it much, much, much more likely that targets of interest will simply be attacked and exploited through regular known security mechanisms, such as software vulnerabilities or built-in backdoors. These things don't leave an obvious trail or smoking gun pointing back to the perpetrator. Someone MITMing your website visits with the DoD root certificate would stir up a shitstorm. "Some anonymous IP broke into my computer with a 0-day and installed a rootkit" is not particularly newsworthy by comparison. Even the recent news of backdoors in networking product codebases is, while newsworthy, not really that surprising these days. Active evidence of DoD interception of someone's network traffic, followed by evidence of CA certificate misuse, would drop like a nuclear bomb in the security community, especially if done for no extremely well-justified reason. It would be the proverbial straw that broke the camel's back in terms of government interference with Internet security, and would lead to a digital revolt even more severe than what Snowden's disclosures caused. Government technical experts will be aware of this and will use other methods, at least in any context where it could plausibly be noticed.

To be clear, I completely believe that the government is, or could be, passively conducting surveillance on virtually all electronic communications. I just don't think they'll go as far as actively intercepting and modifying a connection and inserting a fake certificate within the borders of the country, or in any normal circumstance. Maybe they would do such a thing within the private networks of North Korea, but I'd be highly skeptical of them doing it within the US, and with the DoD certificate of all things. It would be too obvious and has too poor a risk/reward payoff compared to other methods. If they were going to do this, they'd steal the private key from another CA and use it instead. Because that capability could be noticed and "burned," they'd save it for high-value targets only.

So, in all practical analysis, I think it is extremely unlikely that the DoD will attack people through their root certificate, though I concede that it's plausible. I would be interested in feedback from others about this reasoning.


Because Illuminati


I have a feeling that we're going to wake up one day and there will be a legitimate breakthrough in quantum computing, leading to RSA no longer being a legitimate standard for security.

My question to HN: Have any of you guys attempted to come up with a plan for if/when this happens? Are there any algorithms in suites like OpenSSL that are quantum-resistant? Is SSL/TLS even compatible with a post-quantum world?


There are not, not in OpenSSL. Viable post-quantum encryption is a hot research area right now. There are a number of approaches that seem promising; some of them, like hash-based, lattice-based, and code-based crypto, are actually relatively old concepts that were pushed aside because RSA is more efficient, but which regain their edge if quantum computers become viable.

https://en.wikipedia.org/wiki/Post-quantum_cryptography

The trick with all of this is that there isn't all that much cryptanalysis work of some of the more promising PQ schemes, so trying to preemptively adopt them just in case quantum computers learn to factor numbers bigger than 15 is likely to do more harm, in the short term, than good.
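
To give a flavor of the hash-based approach, here's a toy Lamport one-time signature in Python (a sketch for intuition, not production code). Its security rests only on the preimage resistance of the hash, which quantum computers are not known to break beyond a Grover-style square-root speedup:

    import os, hashlib

    def keygen():
        # 256 pairs of random secrets; the public key is their hashes
        sk = [(os.urandom(32), os.urandom(32)) for _ in range(256)]
        pk = [(hashlib.sha256(a).digest(), hashlib.sha256(b).digest())
              for a, b in sk]
        return sk, pk

    def bits(msg):
        h = hashlib.sha256(msg).digest()
        return [(h[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

    def sign(msg, sk):
        # Reveal one secret per pair, chosen by each message-hash bit
        return [sk[i][b] for i, b in enumerate(bits(msg))]

    def verify(msg, sig, pk):
        return all(hashlib.sha256(s).digest() == pk[i][b]
                   for i, (s, b) in enumerate(zip(sig, bits(msg))))

    sk, pk = keygen()
    assert verify(b"hello", sign(b"hello", sk), pk)

Each key can sign exactly one message, which is why practical schemes like XMSS build trees of many such one-time keys.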


My plan: don't rely on anything encrypted now with RSA remaining secret in 20 years, and never use it for encrypting data at rest. Hell, Schneier already increased his PGP key size to 4096 bits; I don't think it's a stretch to think 2048 could be broken classically by USA-level actors in the not-too-distant future. But since you're not using RSA for data at rest, more for transient "messages," hopefully the value of anything intercepted and stored for 20 years will be relatively low.


Here's a good talk by djb covering the big picture and some options for alternatives that are thought to be quantum-resistant: https://media.ccc.de/v/32c3-7210-pqchacks


People are trying to come up with a plan. The problem is that the plan is at a very early stage; we have a couple of impractical algorithms.

An EU research project has published a couple of recommendations on what you can do if you need post-quantum today (none of which you want to use for TLS, because the keys or signatures are too big). NIST is planning a standardization process. A draft for a stateful hash-based signature scheme (XMSS) is probably going to be an IETF document soon.

Small bits and pieces. People are working on it, but there's still a long way to go before you'll be able to use your browser to establish a quantum-safe HTTPS connection.


The PQCRYPTO project [1] published a paper a few months ago with initial recommendations [2] on which algorithms and parameters should work.

[1] https://pqcrypto.eu.org/
[2] https://pqcrypto.eu.org/docs/initial-recommendations.pdf


I'm familiar with the implications of quantum computing for factorization, and thus DEcryption, but had heard very little about quantum computing enabling ENcryption until the last paragraph of this article

("...Chuang expects to see quantum encryption methods that will inscribe sensitive data into the very states of atoms")

Very curious about the current state of this research (relative to the current state of quantum decryption). Any experts in the room?


Not an expert on quantum cryptography, but an ex-particle-physicist here. The basic idea is that, using a quantum channel (which sadly means new hardware, so not over TCP), eavesdropping becomes impossible without destroying the quantum state of the signal (guaranteed by the laws of quantum mechanics). If an eavesdropper intercepted a message, that would be detectable, and you could drop that packet. Wikipedia has a good intro: https://en.wikipedia.org/wiki/Quantum_cryptography


And yet, isn't it true that MITM attacks still work, as long as the MITM has the same hardware?


Theoretically, if you intercept in the middle, you destroy the pattern that you observe. This is a physical quantum effect, and it will happen no matter what hardware you use.

Since the intended use is key distribution, a MITM is fine as long as you can detect it reliably: you can keep sending new keys until one isn't eavesdropped upon, and then use that key.


I'm not talking about eavesdropping, I'm talking full on MITM. Cut the connection and insert a middle man. Both sides think they're communicating with their intended target, but they're communicating with you. How does quantum crypto protect you from that?


But how do you detect it reliably?


If someone intercepts the quantum key, it will modify it 25% of the time. If you randomly measure (and verify publicly with the sender) a fraction of your total key and find it unmodified, it means the rest of the key probably is too, up to a certain security factor. By starting with a longer key and measuring more of it (or doing privacy amplification, for example xor-ing multiple keys together), you can get as much security as you want. It also means the security is everlasting, meaning someone cannot retroactively break your key in 100 years using some mega-computer.
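
For the curious, here's a rough Python simulation of where that 25% comes from in a BB84-style intercept-resend attack (a simplified model assuming ideal hardware):

    import random

    def measure(bit, prep_basis, meas_basis):
        # Same basis: deterministic outcome; wrong basis: coin flip
        return bit if prep_basis == meas_basis else random.randint(0, 1)

    sifted = errors = 0
    for _ in range(100000):
        bit, a_basis = random.randint(0, 1), random.randint(0, 1)  # sender
        e_basis = random.randint(0, 1)                # eavesdropper guesses
        e_bit = measure(bit, a_basis, e_basis)        # ...measures, resends
        b_basis = random.randint(0, 1)                # receiver
        b_bit = measure(e_bit, e_basis, b_basis)
        if a_basis == b_basis:               # keep matching-basis rounds
            sifted += 1
            errors += (b_bit != bit)

    print(errors / sifted)  # ~0.25 with the eavesdropper; 0 without

Half the time the eavesdropper guesses the wrong basis, and half of those measurements flip the receiver's bit: hence 25%.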


I read elsewhere that this was completely untrue. I'm really confused on the issue.


Maybe it was for a particular implementation? Funny story: the first toy implementation of Quantum Key Distribution used a device with rotating photon polarizers. Quantum Key Distribution is completely secure, so on paper the device was too. However, you could actually hear the polarizers rotating, in a way that let you intercept the whole secret key... as long as you were not deaf!


That's a really good example of a side-channel attack.


Cryptographers are generally not interested in using quantum physics to achieve secure communications.

Cryptographers are interested in encryption schemes that use mathematical structures not amenable to any known quantum algorithm: lattices, Ring Learning With Errors, and so on.

Cryptographers are also interested in how quantum computers will scale to large sizes. It will be important to understand what the largest quantum computers that can practically be built are.


Quantum cryptography is still applied in some places. Switzerland is using quantum crypto under Lake Geneva, over optical fiber. Recently, China announced a quantum crypto satellite program.


This has never made any sense to me. What is the threat model? In almost any case where you can use QKD, it would be cheaper to send a courier with a locked/handcuffed briefcase.



Time is a cost.


We actually have a nearly-40-year-old algorithm that is not known to be vulnerable to quantum attacks: https://en.wikipedia.org/wiki/McEliece_cryptosystem Quantum computing is nothing to worry about. (Though I admit it is interesting to think about what sorts of encryption/decryption schemes are only feasible with a quantum computer.)


The real problem is that quantum computers would break the forward secrecy algorithms we're using today. So if a practical quantum computer shows up tomorrow then we can start using different cryptography tomorrow, but evildoers will be able to decrypt everything we're sending today just by storing it and waiting for quantum computers.

Which is a pretty big "oh crap" that will catch a lot of people by surprise if quantum computers ever really happen.


Here's food for thought: What would be required to make a nontrivial fraction of TLS traffic on the web post-quantum secure? How much would it cost?

What about other legacy systems?

Actual, physical quantum computing is still in its infancy, so sure, it's hard to conjure up the wherewithal to worry. But nonetheless we have a long way to go before we can say the world is ready for real quantum attacks.


Quantum crypto is a large field. One aspect is Quantum Key Distribution (QKD). It opened the door to the whole field of quantum computing (it was discovered before Shor's algorithm).

QKD allows you to distribute a one-time pad while only sharing an authentication key. It is (on paper, if you don't count experimental flaws) information-theoretically secure, meaning you can't break it or man-in-the-middle it even if you have infinite computational power (with probability 1-epsilon, epsilon being as close to 0 as we decide). In practice, most QKD systems can be hacked through hardware flaws.
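
(The one-time pad itself is the easy, classical half. A toy Python sketch of how a QKD-distributed pad would be used, with random bytes standing in for key material from the quantum channel:)

    import os

    def otp(data, pad):
        # XOR with a never-reused pad; applying it twice decrypts
        assert len(pad) >= len(data)
        return bytes(d ^ p for d, p in zip(data, pad))

    pad = os.urandom(16)   # stand-in for bits shared over the QKD link
    ct = otp(b"attack at dawn", pad)
    assert otp(ct, pad) == b"attack at dawn"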


If you want to know more about quantum computation, there is an edX course (https://www.edx.org/course/quantum-mechanics-quantum-computa...). It covers Shor's algorithm and the quantum Fourier transform that leads up to it.


I don't think this article implies we are anywhere close to feasible quantum computers but...if I worked on the assumption that sufficiently powerful quantum computers exist (thought experiment), what would be my available and usable options today? Encrypt everything I deem important enough with symmetric encryption?

I have found this: https://en.wikipedia.org/wiki/Post-quantum_cryptography

But am not sure what is considered the state of the art. Stehle-Steinfeld seems like the most promising option from my very uninformed position. Supersingular elliptic curves also seem to have nice properties but I have to admit I don't even fully comprehend them. Would love for a crypto expert to chime in :)


Somehow the submitted link got mangled:

http://spectrum.ieee.org/tech-talk/computing/hardware/encryp...

The article has a broken link to the paper abstract, which is "Realization of a scalable Shor algorithm" available here:

http://science.sciencemag.org/content/351/6277/1068

And I've yet to find a full copy of the paper anywhere, but it sure looks interesting...

Edit: Ah, here it is:

http://arxiv.org/abs/1507.08852

There's also "Compiling quantum algorithms for architectures with multi-qubit gates" which cites the above:

http://arxiv.org/abs/1601.06819


The title is also a bit clickbaity. Yes, QCs are "closer" to breaking RSA... but...

"Briefly, the new work uses Kitaev’s version of Shor’s factoring algorithm, running on an ion-trap quantum computer with five calcium ions, to prove that, with at least 90% confidence, 15 equals 3×5..... So, what’s new is that a QC has now factored 15 “scalably”: that is, with much less cheating than before." - (Scott Aaronson).



Could anyone point me in the direction of how quantum hardware is currently designed? I'd love a technical explanation.


I've always been confused by the claims of quantum computing; I'm hoping one of the clever folk here can steer me to an understanding. Doesn't the very nature of superposition and uncertainty make it impossible to determine when the process has solved the problem? It's like: when you look at the data, you change it, and if it wasn't solved, you've already forced it into a state, and if it was solved, by observing it you may change it to something else. How are these problems solved with quantum computing?


That is in part what this discovery is about: how to read out and disentangle the solution from superposition. The roots of the answer to your question are in Shor's algorithm, which is the original proposal for factoring integers in polynomial time with high probability.
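
If it helps, here's a toy classical sketch of that reduction: factoring N becomes finding the period r of a^x mod N, and the brute-force find_period below is exactly the step a quantum computer replaces with the quantum Fourier transform.

    from math import gcd
    import random

    def find_period(a, n):
        # Brute-force the order of a mod n; this is the only step
        # Shor's algorithm speeds up (via the quantum Fourier transform)
        r, x = 1, a % n
        while x != 1:
            x = (x * a) % n
            r += 1
        return r

    def factor(n):
        while True:
            a = random.randrange(2, n)
            if gcd(a, n) > 1:
                return gcd(a, n)            # lucky: a shares a factor
            r = find_period(a, n)
            if r % 2 == 0 and pow(a, r // 2, n) != n - 1:
                return gcd(pow(a, r // 2, n) - 1, n)

    print(factor(15))  # 3 or 5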


Thanks akarve, I shall read about Shor's algorithm


I'm sure we'll be the first to hear when they actually do. /s


Anyone know if this is the D-Wave quantum computer or a different one?


After reading "a quantum computer that could someday factor any number, and thereby crack the security of traditional encryption schemes." I understood, that author has no idea what he's talking about.



