Wernher von Braun oversaw development of some awfully crummy little rockets https://youtube.com/watch?v=Ii7uwp1SRIM (imagine the ones even before these!) but also oversaw the Saturn V, which lifted every person who walked on the Moon. We understand the physics; now it's a matter of engineering and funding!
Not necessarily. What if entangling each qubit is twice as hard as the previous one? There are a lot of people who suspect physics may prevent quantum computing from scaling up. See also:
But the guy who made that bet doesn't think physics prevents a quantum computer. "Many people assumed I was a QC skeptic, and was offering the prize because I hoped to spur research aimed at disproving QC. Which is actually an interesting misreading" http://www.scottaaronson.com/blog/?p=902
Yeah I was just pointing out that there is disagreement. I think of it like time travel - nobody has proved that it is impossible, but a lot of people suspect that a "sensible" universe wouldn't allow it.
the rate of progress seems in fact astoundingly slow. if gene sequencing is outstripping moore's law, quantum computing is underperforming it by a mile. this quantum computer did the same 3 x 5 = 15 factorization that was done ~15 years ago. and has added just one special q-bit over its predecessors. albeit this is a more "fair" factorization technique than the early experiments, which have been likened to loaded dice.
I love it when people tout a result as being "only 15-30 years away." Is there any large-scale technological research project in all of human history that lasted 20 years and actually produced what was promised? There is no incremental path here; any time someone says "20 years" it means they have no idea how to do this thing.
The human genome project. Their goal when starting was "this will take 15 years", they completed it in 13. The Apollo program: goal of less than 9 years (end of the decade) when it was proposed in 1961 to land a man on the moon and return him to Earth, completed in 1969.
If you go back quite a ways, I'm willing to bet at least some of the innovation-required Castles and Cathedrals and Pyramids of history were completed "on schedule" with it known up front that they would take many years to finish.
I suspect a lot of projects are accurately classified as 10-30 years away, given a certain level of funding and a dedicated team. Absent that, they may very well take twice as long or more. I always like to see what these long-term oriented institutions like SENS and MIRI say they would do if they had x times more money, to see that they've got a plan -- fortunately those ones do give that information.
This is the source of the "joke" that fusion will always be 20 years away. It's because it's been grossly under-funded ever since that prediction was made.
This is really interesting! Have you read the source report? I'm curious as to why/how the predicted required budget would fluctuate so much. For such a specialised, frontier-level project, surely you'd be aiming for consistent staffing (and thus consistent staff budgets) year on year, no?
My guess is that it fluctuates because there are a discrete number of new experiments to be constructed, and these don't cost a constant amount during their construction.
That was my guess as well. Looking at the graphs, there are about three clear peaks in the maximum, accelerated, and aggressive plans, muddled out a bit in the moderate plan.
What are these three experiments? Which of them have already been done in the existing plan? And what others might need to be added?
Also curious is the expected total cost. Eyeballing each plan's average annual cost and multiplying by the duration, the total costs are 6x14=84, 4x17=68, 3x22=66, and 2.5x29=72.5 billion dollars. The accelerated and aggressive plans actually come out cheaper than the moderate plan!
I suppose there are some additional overhead costs associated with keeping the program running for 29 years instead of 17, but I would have expected all the rush orders, overtime, and extra staff to make the faster programs much more expensive.
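For anyone who wants to redo that arithmetic, here it is in a few lines of Python. The per-year figures and the mapping to plan names are my eyeballed reading of the graphs, not official numbers:

```python
# Rough total-cost check: average annual cost ($B/yr) x program length (years).
# The mapping of numbers to plan names is my guess from the chart.
plans = {
    "maximum":     (6.0, 14),
    "accelerated": (4.0, 17),
    "aggressive":  (3.0, 22),
    "moderate":    (2.5, 29),
}
for name, (per_year, years) in plans.items():
    print(f"{name:12s} {per_year * years:.1f} $B total")
# maximum 84.0, accelerated 68.0, aggressive 66.0, moderate 72.5
```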
Neither of these numbers is equal to or greater than 20. 20 is an "I don't know how to do this" number of years. 10 is building a dam. Quantum computing has already been "a thing" for longer than 20 years; careers have been made ... people have retired. Today we get news of 1 bit worth of progress made since 2001. Now we can factor 30 instead of 15....
The LHC was approved for construction start in 1995. The wiki article doesn't capture it but you can bet the science behind the design of the LHC wasn't all done the year before. It didn't achieve its first real goals until 2015.
LIGO was dreamed up in the 1960's to detect gravitational waves. Took until 2002 to get the first one built big enough to theoretically detect them, and until 2015 to finally get measurements demonstrating the existence of waves.
Real science is slow. This "20 years means I don't know" business is indicative of perhaps not appreciating the difference between basic science and applied science/engineering. I know we're used to engineering projects pumping out new awesome tech every couple years, but basic science is hard and takes a long time.
LIGO was most certainly not dreamed up in the 1960s, and the LHC was completed in 10 years.
I picked the word "technology" instead of "science" on purpose. Science isn't slow or fast; it can't be predicted at all. Technologies; different story. Someone says I'm going to build you a piece of technology and it will take 20 years, that means they don't know how to do it. There are no exceptions.
For most research areas I'd agree, "20 years" = "who knows". But for cryptography people using it should keep looking carefully at how long something they encrypted will remain safe. People may be betting a lot on the assumption that something stays secret for at least 50 years. Therefore any significant shift in those probabilities is newsworthy, in my opinion.
I understand your point, but it does happen. For example, commercially available LED lighting that is more efficient than compact fluorescent. The blue LED was invented in the early nineties, and you just knew it was a matter of time before we would get efficient white-light LEDs.
The problem is that for every person who says, "This is 20 years away", there are unrealistic people who are saying, "This is 5 years away". And there are pessimistic people who are saying, "Oh, this is just going to go on for ever. They don't really know how long it's going to take".
Will you ever be able to read popular news articles and reasonably expect the predictions to be accurate? I doubt it. Are there people in the world who have a pretty good idea how long it will actually take? For a great many things, yes.
Kind of off topic, but wow LEDs have gotten a lot cheaper very quickly. I recently moved and replacing all the incandescent bulbs and a few halogens lights with white-light LEDs was a lot cheaper than I thought it was going to be, judging by my memory of some LEDs I bought during the last move a few years ago... and they're at least as or even more energy efficient for the same lumens. I'm usually part of the crowd that thinks Kurzweil hypes his curves too much and additionally is "curve deterministic" instead of pointing in the right research directions to make things happen sooner/better, but there really are a lot of exponentials (and slightly less impressively power laws) out there if you look for them, and they can shake up expectations quite a bit when you're predicting linearly all the time... The genome project usually comes to mind since Kurzweil mentions a lot that by the halfway point they were only 1% "done", so getting the other 99% in the next 7 years seemed pretty ridiculous if you were a layman extrapolating linearly and didn't read that they actually planned for an order of magnitude decrease in cost (https://books.google.com/books?id=rtq1bS7g-OEC&lpg=PA529&ots...).
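To make that "1% done at the halfway point" intuition concrete, here's a toy calculation assuming the rate of progress roughly doubles every year (the doubling time is my illustrative assumption, not the project's actual figures):

```python
# If progress doubles yearly, being 1% complete with ~7 years left
# still finishes on time, despite looking hopeless if you extrapolate linearly.
done = 0.01
for year in range(1, 8):
    done *= 2
    print(f"year {year}: {done:.0%} complete")
# year 7: 128% -- comfortably past the finish line
```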
Completely off topic, but I replaced almost all of my traditional lights with LED lights (except for some oddly shaped ones) two years ago. It wasn't that cheap, but I haven't had to replace a single one of them yet. I figure if they last 5 years, they'll have paid for themselves between the cost of replacement bulbs (some of my fixtures ate normal bulbs every few months), the cost of electricity, and the pain of having to drag out the ladder every time I needed to replace one.
Ha ha... Super off topic, but last year we moved back to Japan and I had to buy all new light fixtures. I decided to go with these new LED panels. I'm trying to remember the details, but I bought 3 panels each 5,600 lumens for about $100 US each. Even now I have a hard time believing that, so I just did a quick price check online and they are still about the same price. Not only that, but IIRC they get over 120 lumens per watt, are colour adjustable, and have a remote control. It just seems too good to be true. I've only ever seen these lights in Japan, which is really a shame because they are amazing. (of course they are about a meter in diameter which wouldn't suit most western homes...)
Slightly more on topic, it's the thing in my home that most makes me feel like I'm living in the future :-)
I'm ok with 20 year predictions when it comes to converting an existing invention into its commercially available mass produced form (like white LED lighting). Or the growth of the 1995 Internet into its now ubiquitous level.
I'm very skeptical when people do 20 year predictions from the current state of the art into something which is impossible to do today for any amount of money, such as this quantum computing based RSA breaker.
Short answer: over a span of 114 years starting in 1714, Britain sought and paid rewards for accurate measurement of longitude, resulting of course in the invention of 'modern' clocks.
The Manhattan Project at least comes close. It's arguable that the start of the project was not the start of the research. On the other hand the theoretical framework necessary for it was still pretty new.
Less than 3 years from generating half a watt of power in the Chicago Pile in December of 1942 to blowing up a city; 4 if you count the previous year. Nobody said, "give us 20 years and we'll make you an A-bomb." They pretty much knew how to at least figure out all the lacunae in 3 years.
Frank Hecker : "Accepting this bug. My initial comments: The last time I dealt with the issue of
the DoD PKI it was essentially a DoD-internal PKI for the use of U.S. military
personnel (active or retired), DoD civilian employees, DoD contractors, and
(maybe) allied forces (e.g., NATO). It was not intended for the use of the
general public (whether U.S. citizens or not), and I'm not aware that members of
the general public would ever be in a situation where they would encounter
SSL-enabled web servers, S/MIME email, or signed code that used DoD-issued
certificates.
Based on that I would consider this an "intranet" CA (albeit for a very large
intranet) and based on my previous "meta-policy" comments I would recommend
not including this in Mozilla et.al. I'll leave this bug open for a period of
public comments, and then I'll close it with "WONTFIX" unless someone can
provide compelling reasons why I should do otherwise."
The strong argument here is that if you are showing a lock icon but knowingly allowing decryption by a known bad actor, you are also breaking the experience of your users.
I also agree with the weaker argument that the cert in question is essentially for a local intranet, and that the DoD, for as long as it continues to exist (which I find politically disagreeable), can install the cert locally on its own resources.
If it wants to publish material for broader consumption, it can get a cert like everyone else.
Trusting the cert does not, precisely, allow decryption by the certificate authority. It rather gives the certificate authority the ability to issue certificates for domains, which if they are used to establish a connection, can encrypt and decrypt traffic for that connection.
So yes, if you trust the DoD root certificate, then the DoD as well as every certificate authority in the world could in theory generate a valid certificate impersonating www.google.com or any website. With a sophisticated enough attack, they could do this just for your one visit to one particular website, in such a way that it would be difficult for anyone to realize that it's happening. However, though this is difficult to notice if you're not looking for it, it's actually really easy to notice if you are looking for it. If you use Chrome, then Chrome will report the certificates that it sees back to Google, who track what certs are issued by CAs. This is how Google noticed that Symantec issued fake certificates for Google domains in Symantec's test environment: https://googleonlinesecurity.blogspot.com/2015/10/sustaining...
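As a crude illustration of what "looking for it" can mean in practice, here's a small Python sketch (my own example, unrelated to Chrome's reporting pipeline) that prints the issuer of the certificate a site actually presents; a DoD or otherwise unexpected CA showing up for a public website would stand out immediately:

```python
import socket
import ssl

# Connect to a site and print who issued the leaf certificate it presents.
# If a MITM were swapping in a cert chained to a different CA, the issuer
# would change. The host is just an example.
host = "www.google.com"
ctx = ssl.create_default_context()
with socket.create_connection((host, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()
        print("issuer:", cert["issuer"])
        print("subject:", cert["subject"])
```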
Anyway, the practical risk of trusting a DoD certificate is pretty low. To decrypt your traffic, they'd have to man-in-the-middle intercept your connection to a web server, and replace the site's valid certificate with their own, which would leave an obvious and flagrant trail to anyone who is looking. This would very obviously "play their hand" and anyone with evidence of being attacked that way by a first world government would be immediate worldwide news in the security community. If they did this even once, they'd need to be extremely careful not to be caught by any of the countermeasures that detect this kind of surveillance.
That kind of attack would be a one time thing, because evidence of being attacked through the DoD cert would cause all browser vendors and OSes to yank support for it. CAs have been revoked for far less justified reasons than explicitly attacking someone.
I find it much, much, much more likely that targets of interest will simply be attacked and exploited through regular known security mechanisms - such as software vulnerabilities or built-in back-doors. These things don't leave an obvious trail and smoking gun pointing back to the perpetrator. Someone MITMing your website visits with the DoD root certificate would stir up a shitstorm. "Some anonymous IP broke into my computer with a 0-day and installed a rootkit" is not particularly newsworthy by comparison. Even the recent news of backdoors in networking product codebases is, while newsworthy, not really that surprising these days. Active evidence of DoD interception of someone's network traffic followed by evidence of CA certificate misuse would drop like a nuclear bomb in the security community, especially if for no extremely well justified reason. It would be the proverbial straw that broke the camel's back in terms of government interference with Internet security, and would lead to a digital revolt even more severe than what Snowden's disclosures caused. Government technical experts will be aware of this and will use other methods, at least in any context where it could be plausibly noticed.
To be clear, I completely believe that the government is or could passively conduct surveillance on virtually all electronic communications. I just don't think they'll go as far as actively intercepting and modifying a connection, and inserting a fake certificate, within the borders of the country or in any normal circumstance. Maybe they would do such a thing within the private networks of North Korea, but I'd be highly skeptical of them doing it within the US, and with the DoD certificate of all things. It would be too obvious and has too poor of a risk/reward payoff compared to other methods. If they were going to do this, they'd steal the private key from another CA and use it instead. Because that capability could be noticed and "burned", they'd save it for high value targets only.
So, in all practical analysis, I think it is extremely unlikely that the DoD will attack people through their root certificate, though I concede that it's plausible. I would be interested in feedback from others about this reasoning.
I have a feeling that we're going to wake up one day and there will be a legitimate breakthrough in quantum computing, leading to RSA no longer being a legitimate standard for security.
My question to HN: Have any of you guys attempted to come up with a plan for if/when this happens? Are there any algorithms in suites like OpenSSL that are quantum-resistant? Is SSL/TLS even compatible with a post-quantum world?
There are not, not in OpenSSL. Viable post-quantum encryption is a hot research area right now. There are a number of approaches that seem promising; some of them, like hash-based, lattice-based, and code-based crypto, are actually relatively old concepts that were pushed aside because RSA is more efficient, but which regain their edge if quantum computers become viable.
The trick with all of this is that there isn't all that much cryptanalysis work of some of the more promising PQ schemes, so trying to preemptively adopt them just in case quantum computers learn to factor numbers bigger than 15 is likely to do more harm, in the short term, than good.
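To give a flavor of the hash-based family mentioned above, here's a minimal Lamport one-time signature sketch in Python. This is purely illustrative: real schemes like XMSS or SPHINCS layer Merkle trees and many other details on top, and a Lamport key must never be used to sign more than one message.

```python
import hashlib
import secrets

# Toy Lamport one-time signature: security rests only on the hash function,
# which is why this family is considered a post-quantum candidate.
def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def keygen():
    # 256 pairs of random secrets; the public key is their hashes.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def msg_bits(msg: bytes):
    digest = H(msg)
    return [(digest[i // 8] >> (i % 8)) & 1 for i in range(256)]

def sign(msg: bytes, sk):
    # Reveal one secret per message-digest bit. One-time use only!
    return [sk[i][b] for i, b in enumerate(msg_bits(msg))]

def verify(msg: bytes, sig, pk) -> bool:
    return all(H(sig[i]) == pk[i][b] for i, b in enumerate(msg_bits(msg)))

sk, pk = keygen()
sig = sign(b"hello", sk)
print(verify(b"hello", sig, pk))      # True
print(verify(b"tampered", sig, pk))   # False
```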
My plan: don't rely on the secrecy of anything encrypted now with RSA to remain secret in 20 years, and never use it for encrypting data-at-rest. Hell, Schneier already increased his PGP key size to 4096 bits, I don't think it's a stretch to think 2048 could be broken classically by USA-level actors in the not-too-distant future. But since you're not using RSA for data-at-rest, more for transient "messages", hopefully the value of anything intercepted and stored for 20 years will be relatively low.
Here's a good talk by djb covering the big picture and some options for alternatives that are thought to be quantum-resistant. https://media.ccc.de/v/32c3-7210-pqchacks
People try to come up with a plan. The problem is the plan is in its very early stage. We have a couple of impractical algorithms.
An EU research project has published a couple of recommendations on what you can do if you need post-quantum crypto today (none of which you want to use for TLS, because the keys or signatures are too big). NIST is planning a standardization process. A draft for a stateful hash-based signature scheme (XMSS) is probably going to become an IETF document soon.
Small bits and pieces. People are working on it, but it's still a long way until you will be able to use your browser to establish a quantum-safe https connection.
I'm familiar with the implications of quantum computing for factorization, and thus DEcryption, but had heard very little about quantum computing enabling ENcryption until the last paragraph of this article:
("...Chuang expects to see quantum encryption methods that will inscribe sensitive data into the very states of atoms")
Very curious about the current state of this research (relative to the current state of quantum decryption). Any experts in the room?
Not an expert on quantum cryptography, but an ex-particle physicist here. The basic idea is that, using a quantum channel (which sadly means new hardware, so not over TCP), eavesdropping becomes impossible without destroying the quantum state of the signal (guaranteed by the laws of quantum mechanics). If an eavesdropper intercepted a message, that would be detectable and you could drop that packet. Wikipedia has a good intro: https://en.wikipedia.org/wiki/Quantum_cryptography
Theoretically, if you intercept in the middle, you destroy the pattern that you observe. This is a physical quantum effect, and will happen no matter what hardware you use
Since the intended use is key distribution, a MITM is fine as long as you can detect it reliably: you can keep sending new keys until one isn't eavesdropped upon, and then use that key.
I'm not talking about eavesdropping, I'm talking full on MITM. Cut the connection and insert a middle man. Both sides think they're communicating with their intended target, but they're communicating with you. How does quantum crypto protect you from that?
If someone intercepts the quantum key, it will modify it 25% of the time. If you randomly measure (and verify publicly with the sender) a fraction of your total key and find it unmodified, it means the rest of the key probably is too, up to a certain security factor. By starting with a longer key and measuring more of it (or doing privacy amplification, for example xor-ing multiple keys together), you can get as much security as you want. It also means the security is everlasting, meaning someone cannot retroactively break your key in 100 years using some mega-computer.
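A toy simulation makes that 25% figure easy to see. This is my own simplified BB84-style model (classical random numbers standing in for qubits), showing that an intercept-and-resend eavesdropper corrupts about a quarter of the sifted key bits:

```python
import random

def error_rate(n=100_000, eavesdrop=True):
    """Simulate the BB84 sifted-key error rate with an intercept-resend attacker."""
    errors = kept = 0
    for _ in range(n):
        bit = random.randint(0, 1)            # Alice's raw bit
        basis_a = random.randint(0, 1)        # Alice's encoding basis
        value, basis_sent = bit, basis_a
        if eavesdrop:
            basis_e = random.randint(0, 1)    # Eve measures in a random basis
            if basis_e != basis_a:
                value = random.randint(0, 1)  # wrong basis -> random outcome
            basis_sent = basis_e              # Eve resends in her own basis
        basis_b = random.randint(0, 1)        # Bob's measurement basis
        if basis_b != basis_a:
            continue                          # position discarded during sifting
        kept += 1
        measured = value if basis_b == basis_sent else random.randint(0, 1)
        if measured != bit:
            errors += 1
    return errors / kept

print(error_rate(eavesdrop=False))  # ~0.0
print(error_rate(eavesdrop=True))   # ~0.25
```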
Maybe it was for a particular implementation? Funny story: the first toy implementation of Quantum Key Distribution used a device with rotating photon polarizers. Quantum Key Distribution is completely secure, so on paper the device was too. However, you could actually hear the polarizers rotating, in a way that let you recover the whole secret key... as long as you were not deaf!
Cryptographers are generally not interested in using quantum physics to achieve secure communications.
Cryptographers are interested in encryption schemes that use mathematical structures that are not amenable to any known quantum algorithm: lattices, Ring Learning With Errors, and so on.
Cryptographers are also interested in how Quantum Computers will scale to large sizes. It will be important to understand what the largest quantum computers that can practically be built are.
Quantum cryptography is still deployed in some places. Switzerland is running quantum crypto under Lake Geneva over optical fiber. Recently, China announced a quantum crypto satellite program.
This has never made any sense to me. What is the threat model? In almost any case where you can use QKD, it would be cheaper to send a courier with locked/handcuffed brief case.
We actually have an algorithm, nearly 40 years old, that is not known to be vulnerable to quantum attacks: https://en.wikipedia.org/wiki/McEliece_cryptosystem Quantum computing is nothing to worry about. (Though I admit it is interesting to think about what sorts of encryption/decryption schemes are only feasible with a quantum computer.)
The real problem is that quantum computers would break the forward secrecy algorithms we're using today. So if a practical quantum computer shows up tomorrow then we can start using different cryptography tomorrow, but evildoers will be able to decrypt everything we're sending today just by storing it and waiting for quantum computers.
Which is a pretty big "oh crap" that will catch a lot of people by surprise if quantum computers ever really happen.
Here's food for thought: What would be required to make a nontrivial fraction of TLS traffic on the web post-quantum secure? How much would it cost?
What about other legacy systems?
Actual, physical quantum computing is still in its infancy, so sure, it's hard to conjure up the wherewithal to worry. But nonetheless we have a long way to go before we can say the world is ready for real quantum attacks.
Quantum crypto is a large field. One aspect is Quantum Key Distribution (QKD). It opened the door to the whole field of quantum computing (it was discovered before Shor's algorithm).
QKD allows you to distribute a one-time pad while only sharing an authentication key. It is (on paper, if you don't count experimental flaws) information-theoretically secure, meaning you can't break it or man-in-the-middle it even if you have infinite computational power (with probability 1-epsilon, epsilon being as close to 0 as we decide). In practice, most QKD systems can be hacked through hardware flaws.
I don't think this article implies we are anywhere close to feasible quantum computers but...if I worked on the assumption that sufficiently powerful quantum computers exist (thought experiment), what would be my available and usable options today? Encrypt everything I deem important enough with symmetric encryption?
But I'm not sure what is considered the state of the art. Stehle-Steinfeld seems like the most promising option from my very uninformed position. Supersingular elliptic curves also seem to have nice properties, but I have to admit I don't fully comprehend them.
Would love for a crypto expert to chime in :)
The title is also a bit clickbaity. Yes, QC are "closer" to breaking RSA... but...
"Briefly, the new work uses Kitaev’s version of Shor’s factoring algorithm, running on an ion-trap quantum computer with five calcium ions, to prove that, with at least 90% confidence, 15 equals 3×5..... So, what’s new is that a QC has now factored 15 “scalably”: that is, with much less cheating than before." - (Scott Aaronson).
I've always been confused by the claims of quantum computing, I'm hoping one of the clever folk here can steer me to an understanding.
Doesn't the very nature of superposition and uncertainty make it impossible to determine when the process has solved the problem?
It's like: when you look at the data, you change it. If it wasn't solved, you've already forced it into a state, and if it was solved, by observing it you may change it to something else.
How are these problems solved with quantum computing?
that is in part what this discovery is about: how to read out and disentangle the solution from superposition. the roots of the answer to your question are in "shor's algorithm," which is the original proposal for factoring integers in polynomial time with high probability.
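For context, here's a small Python sketch of the classical part of Shor's algorithm applied to N = 15 (my own illustration: the period finding, which is the only step that actually needs a quantum computer, is simply brute-forced here):

```python
from math import gcd

def find_period(a, n):
    # This is the step a quantum computer does in polynomial time via the
    # quantum Fourier transform; here it's brute-forced, fine for n = 15.
    x, r = a % n, 1
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_factor(n, a=7):
    if gcd(a, n) != 1:
        return gcd(a, n), n // gcd(a, n)   # lucky guess, no period needed
    r = find_period(a, n)
    if r % 2:
        return None                        # odd period: retry with another a
    p = gcd(pow(a, r // 2) - 1, n)
    q = gcd(pow(a, r // 2) + 1, n)
    return p, q

print(shor_factor(15))  # (3, 5)
```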
After reading "a quantum computer that could someday factor any number, and thereby crack the security of traditional encryption schemes." I understood, that author has no idea what he's talking about.