A QUIC update on Google’s experimental transport (chromium.org)
231 points by jplevine on April 17, 2015 | 123 comments



Google really gets how standardization works. They innovate and once the innovation has proven its value they offer the technology for standardization.

I've previously seen companies, like Sun, completely fail at this, e.g. the many Java specifications created by standards bodies. Sun tried to do it right by shipping a reference implementation with each spec, but the reference implementations were rarely used for real work, so they proved only that the spec could be built, not that the spec elegantly solved real problems.


> Google really gets how standardization works. They innovate and once the innovation has proven its value they offer the technology for standardization.

I wouldn't necessarily say "innovate" or "offer," but they do understand the process. You can make pretty much anything a "standard" with a bit of money and time (isn't Microsoft Office's XML format a "standard"?), but adoption is always an issue. However, Google controls a popular web server (Google search) and client (Google Chrome), so for web-things, they can create "widespread adoption" for whatever they standardize.


> However, Google controls a popular web server (Google search) and client (Google Chrome), so for web-things, they can create "widespread adoption" for whatever they standardize.

Google's innovation is to make HTTP faster over a slow, unreliable network (e.g. a wireless device). They solved a real-world problem, proved it using their own users, and now are going to standardize it. Their innovation is driving their standardization efforts.

If Google didn't solve a real-world problem, then even with their platform they couldn't drive widespread adoption. Their innovations (SPDY and now QUIC) solve real-world problems, so adoption will become widespread.

MSFT with Office XML was solving a political problem, not a real-world problem. I.e. Office was taking a hit because DOC/XLS were proprietary formats, and governments were concerned about archiving documents in a proprietary format and were therefore threatening to move to open standards (i.e. OSS office suites). MSFT fought back by pushing through a standard document format to give their sales staff a rebuttal to customers threatening to move to an open standard. I.e. the 'standard' only has traction due to MSFT's monopoly on Office and serves no real benefit to anyone except MSFT's sales force.


I have OOXML on my poorly written wishlist, along with a proposal to withdraw it from standardization without breaking compatibility.


That's not really the complete picture. People want to use the Google standards you mention because they solve a real problem and are proven in production. What problem does Office Open XML solve, apart from not being Open Document XML? You use it because of external factors, not because it is an elegant solution to a problem.


I think Open Document XML solves a problem--it's just not an immediate problem. It won't make saving and emailing around a document easier, but it will make interoperability (I'm talking even between two versions of Word--not just between word processors) simpler. Even if it's not supported you have some recourse for extracting data. Tragedy of the commons.

Google has an advantage because there's an obvious win over the old standard and they offer a big enough buy-in. GIF's use cases, for example, are generally better served by PNG. But adoption has been slow because the benefits aren't good enough and wide support took a while to get implemented.


GIF these days is almost always used as a moving-picture format, which is not something standard PNG does. The number of actual GIFs being passed around these days that would be better off as PNGs is virtually zero.


That's kind of my point. Animated PNGs support 24-bit color and transparency--which GIFs do not. The carrot of transparency and more colors wasn't enough to replace GIF. It looked like the LZW patent scare would be enough, but since that expired in 2003 nobody has been motivated to fully implement the spec or create content exclusively as PNGs.


AFAIK the PNG format does not provide for animation. You'd need to use another format such as MNG or APNG. Am I mistaken?


Sure - what's interesting is that they use this power to eat the first-mover cost for changes that actually are beneficial for everyone to adopt. For example, they got a critical mass of HTTP/2 deployed, and that has made it an immediate win for everyone else to implement it (non-Google web servers because Chrome, and non-Google browsers because Google Everything). This is similar to the effect Apple is having in making the USB-C connector happen.

The difference with Office XML is that a) there's not a clear benefit to the community as a whole to adopting the new standard, and b) they don't seem to have made any great effort to encourage their competitors to implement it.


Wrapping a binary blob and then saying <xml binary=start> ~~~~ <xml binary=end> doesn't make something XML. It's still MS proprietariness wrapped in an open transport.


Honest question: Do you think Hangouts will eventually be open sourced?

It's really messed up that all our non-XMPP (a massive majority) messaging goes over nonstandard protocols. All our daily communications are behind walled gardens; one of the sorriest states in tech today.

Hangouts going open source is one of the few ways this could change in a reasonable time frame.


Why would it be? They had federation on, but shut it down. It's getting less open, not more.


Didn't say I was hopeful. But it is in line with "innovate first, then offer for standardization".


Plus Hangouts launched with an NPAPI plugin that supported Firefox and IE, but now Hangouts is a Chrome-only extension:

http://www.google.com/hangouts/


> Plus Hangouts launched with an NPAPI plugin that supported Firefox and IE, but now Hangouts is a Chrome-only extension

huh? Not to further derail this thread into irrelevant topics, but open up gmail in Firefox. If you aren't still holding on to the old google chat, that's hangouts right there. It takes like 2 seconds to verify this.


I think there's a higher chance of Google adopting end-to-end encryption for its texts, voice and video calls than of making it open source.

In other words: very unlikely.


> Today, roughly half of all requests from Chrome to Google servers are served over QUIC and we’re continuing to ramp up QUIC traffic

Google says they will eventually submit this to IETF and produce a reference implementation, but it is interesting how a company was able to quietly move a large user base from open protocols to a proprietary protocol.


You can't design protocols without implementation experience. Looks like they're going the same route they went with SPDY, and that has worked really well.


There is something about the amount of usage, though.

50% of all communication between Chrome and Google sites is now through a path that is not standardized, nor on track to standardization, and is just special to the combination of Google's browser plus Google's sites. That sets off warning lights for some people, and for good reason.

I totally get that to experiment with a new protocol, you need real-world data. Definitely. So if say 1% or 5% of that communication were non-standard, I really wouldn't have much of an issue. But when the "experimentation" is 50%, it's on the verge of being the normal path; it doesn't seem like experimentation. Perhaps they could continue the experiment and go from 50% to 60% or 80% - there isn't much of a difference at that point. In fact, if the new protocol is better, it would be almost irrational not to - if 50% is considered ok ethically, and moving to 60% saves costs, then why wouldn't you?

I'm not saying that there is something wrong here. It's not even seriously worrying, given Google's positive track record on SPDY. Still, it's very close to the edge of being worrying. That worry of course is that they expand beyond 50%, and are slow to standardize or never do so - in which case things would clearly be wrong. Again, Google has a good reputation here, given SPDY. Still, I'm surprised Google feels ok to move half of all communication to a non-standard protocol, apparently unconcerned about that worrying anyone.


What exactly is worrying? Sorry, but it seems like you've written a lot but said very little.


Probably because they've sped up communication between the browser they provide and their website.

The logic behind it is benevolent and reasonable, but the short-term effect is that Chrome users get an incentive to use GWS/Gmail/Drive rather than Yahoo/Dropbox, and GWS/GMail/Yahoo users get an incentive to use Chrome rather than Firefox/Safari/IE.

If Chrome intentionally started rendering Yahoo slower, that would be blatantly anti-competitive. This is, in effect, just about the same thing, only with practical reasons behind it (much like there were practical reasons behind integrating IE into Windows).


> If Chrome intentionally started rendering Yahoo slower, that would be blatantly anti-competitive. This is, in effect, just about the same thing, only with practical reasons behind it (much like there were practical reasons behind integrating IE into Windows).

This seems to be a poor analogy.

One of the main draws of capitalism is to encourage companies to compete by offering quality products. One of the main drawbacks is that companies are also able to compete by sabotaging the ability of other companies to offer quality products.

The second is the reason for anti-competitive laws. Sabotaging Yahoo falls into this category. Offering a better product than what's already on the market - especially without impeding the quality of the old product - falls squarely into the first.


It is easily arguable that Google is preventing Yahoo from achieving the same result by abusing its control of the Web client. Lines are blurrier than you paint them in your text.


Yahoo could implement QUIC on their servers if they wanted to - there's at least a prototype server in the Chromium source tree.


But even if they did - how would they convince Chrome to use their new QUIC endpoint if - as is apparently the case - QUIC is only enabled for a select, hardcoded list of servers?


How so? Not helping competition is very different from actively working against them.


As the top-level poster to this chain said,

> it is interesting how a company was able to quietly move a large user base from open protocols to a proprietary protocol.

Some of us may believe Google is doing so for good reasons, some of us might not be sure - but that is all beside the point.

The point is that this is a massive show of power. And it has been applied quietly - no one (outside of Google) knew about this massive change in activity until this blogpost.

In any hands, that amount of power should be worrying.


> And it has been applied quietly - no one (outside of Google) knew about this massive change in activity until this blogpost

Only if you weren't paying attention. They've been discussing testing it on Google's servers like they did SPDY for a long time now. The first announcement I can find that they were switching some Google traffic over to it was almost two years ago[1], and if you're on blink-dev or chromium-dev (or proto-quic, if you're serious about it) you'd have gotten periodic updates on the topic. YouTube videos about it[2] (with discussion on HN[3]), etc etc.

[1] https://groups.google.com/a/chromium.org/forum/#!topic/chrom...

[2] https://www.youtube.com/watch?v=hQZ-0mXFmk8

[3] https://news.ycombinator.com/item?id=7227255


But this could be said about any massive (tech) company with billions of users and a pervasive presence in the mainstream.

I don't think this is a cause for concern but rather a victory for progress and efficiency that a company can finally do such large scale testing and experimentation to move us all to a better standard.


Well, what is an example of a company with similar power?

No one else has a web browser with market share anywhere near.

And no other browser vendor has any major websites. Microsoft has Bing and MSN etc. sites, but those are fairly small compared to google.com, google docs, google maps, etc. etc.

Facebook would be a company with a website that has massive reach. But Facebook doesn't control a browser. If it did have a major browser, it would be as concerning as Google is.

It's the combination of major browser + major websites that allowed Google to divert a massive part of internet usage from a standard protocol to a non-standard one. No one else can do that today.


As I said, I don't think it's likely or an actual cause for worry. But what this comes close to is the worrying situation where a majority of people using the web are doing so over a non-standard protocol. That's completely antithetical to the idea of an open web.

Right now, 50% of Chrome users (the #1 browser), on Google websites (some of the #1 websites), are in that state.


Antithetical? The web is still open.

This is only Google sites visited by Chrome. It's not like you can't visit these Google sites with normal HTTP with other browsers, nor does Chrome use QUIC on the rest of the web. If they walled themselves in then I could see a cause for concern but right now, even 50% of the traffic between Google sites and Chrome is still nowhere near the majority of internet traffic in any sense.

Because the web is open and massive is precisely the reason why changes like these will not happen overnight but potentially take decades. The amount of old legacy stuff on the web, including protocols, implementations, security holes, IPv4, etc. that seems like it'll never get upgraded is far more worrying to me.


The Chromium implementation of QUIC is released as Open Source, so I'm not sure how "proprietary" the protocol actually is.


There's two different distinctions in software for which "proprietary" is commonly used:

1. proprietary vs. Free / Open Source (code released under an F/OSS license), and

2. proprietary vs. Open Standards (an implementation of a standard governed by an independent standards body and freely implementable.)

QUIC is not proprietary but F/OSS on the first axis, and currently proprietary rather than based on an open standard on the second axis, with a stated intent of becoming the basis for open standards work in the future.

There is, I think, a pretty good case that this is a good way to get to new open standards that are actually useful for solving real world problems.


As I said later, all things that get submitted to IETF for standardization are going to fall into "proprietary on the second axis", because they want running code, not design-by-committee.

So this definition of proprietary does not seem particularly useful ....


Almost all software fails 2 because hardly any software is an implementation of an open standard. That doesn't seem like a useful definition.


> Almost all software fails 2 because hardly any software is an implementation of an open standard.

Very few applications are only an implementation of an open standard, but things like the QUIC implementation or other communications protocol implementations aren't applications, but lower level components.


Moreover, I think it abuses the meaning of "proprietary" to claim QUIC is proprietary. There is no exclusivity, nor secrecy, nor any other element of control by one party here.


Who other than Google currently has a vote on what constitutes the definition of QUIC? Merely being open to suggestions for changes isn't a relinquishment of control, and protocols can't be forked the way software can.


"Who other than Google currently has a vote on what constitutes the definition of QUIC? "

Anyone who wants to discuss it on the mailing list and submit code.


Wouldn't that be offering a suggestion to Google that they could accept or reject at their sole discretion? QUIC isn't defined by any organization or process that isn't completely governed by Google. Nobody outside Google can cast any actual vote with any kind of binding power, just persuasive influence.


Believe it or not, Chromium/etc (the ones we are referring to here) are open source projects with a lot of Google committers, not Google projects that accept things or not depending on whims.

In fact, the project has a ton of non-Google, non-drive-by committers (250+ IIRC, it's been a while since I looked).


Is that even the relevant authority to be considering? Since QUIC is not formally specified yet and only exists as a de facto standard with little historical stability so far, isn't it primarily defined by the most prevalent implementations—Google's official client and servers—not the current state of the project's version control repository?

I do admit that things aren't as cut-and-dried for protocols and specifications as for actual implementations, where rights and ownership are pretty clearly defined, but surely you can see that there is a distinction that can be drawn here? QUIC is expected to become an open standard (or die), but it's not there yet. Though it may be further along on the "open" aspect than the "standard" one.


By that reasoning, most open source software is "proprietary". If you submit code to sqlite then the author can accept or reject the code at his sole discretion.


Software and protocols aren't the same thing. An open vs proprietary distinction can be drawn for either, but that doesn't erase the important distinctions between the two.


In usage 2, QUIC is proprietary, following conventions older than me.


Protocols would be judged proprietary on the second axis, not software.

(Some software implements a proprietary protocol.)


When it comes to protocols and formats, using the word "proprietary" adds more confusion than clarity.

Consider:

H.264 has a spec that's written down by a standards-setting organization and not a trade secret (though behind a paywall) and has multiple independent interoperable implementations. Yet, it's "proprietary" in the sense that it's patent-encumbered. I.e. the patent holders are the proprietors.

VP8, OTOH, is Royalty Free with a Free Software canonical implementation and has other implementations, too, though their independence is debatable. Yet, VP8 is called "proprietary" by some, because the design of VP8 was under a single vendor's (Google's) control and not blessed by a standards-setting organization.

I think using the word "proprietary" as the opposite of "free as in Free Software" is fine when talking about a particular implementation, but it's better to avoid the word when talking about protocols and formats.

For protocols and formats, it's more productive to talk about:

* Royalty-free vs. encumbered

* Multiple independent interoperable implementations vs. single implementation.

* A Free Software implementation exists vs. doesn't.

* Fully specified vs. defined by a reference implementation.

* (If fully specified) Non-secret spec vs. secret spec.

* (If non-secret spec) Spec available at a freely GETtable URL vs. spec behind paywall or similar.

(A number of Googly things that are royalty-free and have a Free Software implementation fall toward the worse end of these axes on the points of having a single implementation and being defined by the quirks of that implementation.)


In a competitive multi-vendor ecosystem like the Web, public-facing protocols that are introduced and controlled by a single vendor are proprietary, regardless of whether you can look at the source code. NaCl and Pepper, for example, are proprietary, even though they have open-source implementations.

The distinction between open-source-but-proprietary and open-standard is important for many reasons. One of the most important is that open-source-but-proprietary protocols, if they catch on, end up devolving into bug-for-bug compatibility with a giant pile of C++.


I don't understand this negativity towards QUIC/Dart/NaCL/Pepper etc. which are exemplary open-source efforts.

By your definition Mozilla's (your employer's) asm.js and Rust are also proprietary.

Somehow I doubt that you jump on every thread about asm.js or Rust to point out how proprietary they are or how they are implemented as a giant pile of C++. Double standards.

There has been plenty of research and work, even in standards bodies like the IETF, trying to implement a better TCP/IP-like protocol.

They all went nowhere because at this point in time, you can't just have some guys in a room design a new transmission protocol and have it taken seriously by anyone that matters (Google/Apple/Microsoft/Mozilla).

Google is following the only realistic route: implement something, test it at a scale large enough to conclusively show an improvement and then standardize it.

This is exactly how HTTP/2 happened.

We should be cheering them on instead of spreading FUD because it doesn't live up to your impossible standard of non-proprietariness.


I'm not entirely sure it's helpful to lump all of those together; there is at least some difference in kind.

I think dragonwriter's sibling comment to yours is pretty apt here. It's hard to tell the difference between something that will be submitted to standards bodies any day now and something that really will be submitted to standards bodies any day now. At a certain point (with e.g. Pepper) the statute of limitations runs out and you have to assume it's just going to be an open-source but proprietary API.

Of course, whether or not overloading "proprietary" is useful is a different discussion. Mostly it seems these conversations eventually just devolve into arguments over definitions of the word for no real insight.


> Somehow I doubt that you jump on every thread about asm.js or Rust to point out how proprietary they are or how they are implemented as a giant pile of C++. Double standards.

asm.js isn't a new protocol, and so isn't proprietary according to that definition. It's a subset of JavaScript (to be specific, ES6, including extensions such as Math.imul). You can implement asm.js by simply implementing the open, multi-vendor ES6 standard. In fact, that's exactly what some projects, like JavaScriptCore, are doing.

Rust isn't relevant, as it's not intended to be added to the Web. Adding <script type="text/rust"> to the browser would be a terrible idea for numerous reasons. Nobody has proposed it.


Plenty of IETF standardization efforts can be described as "a subset of Javascript" or even just "a bunch of Javascript APIs". WebCrypto, for instance, fits that bill. What makes QUIC so different from WebCrypto?


QUIC and Web Crypto are both things that need to be standardized, so I don't know what the implication is or how to respond to that statement.

I do think there is a big difference between "a subset of JavaScript" and "a bunch of JavaScript APIs" from a standardization point of view. All engines have been implementing special optimizations for JavaScript subsets ever since the JS performance wars started. Nobody thinks we need to standardize polymorphic inline caches, for example, even though the set of JS code that is PIC-friendly is different from the set of JS code that is heavily polymorphic (and this distinction would be easy to describe formally if anyone cared to). asm.js is just an optimization writ large: the reason why it's not a protocol is that any conforming JavaScript implementation is also an asm.js implementation.

I think people are reading a lot more into my posts than was intended. I'm not calling out QUIC specifically, since I'm not involved with the details of its standardization anyway. The point is simply that open source doesn't automatically mean non-proprietary.


Oh, sure. QUIC is a proprietary protocol. An IETF-standardized QUIC would not be.


> Rust isn't relevant, as it's not intended to be added to the Web

"Proprietary" as used here isn't limited to the Web.


Point taken, and you're right that in a certain sense Rust is proprietary, with all of the very real downsides that that entails (for example, the risk of tight coupling of programs to rustc, as hard as we try to prevent it). But I still think it's a fairly irrelevant thing to bring up, because Rust isn't targeting a large, open, multi-vendor ecosystem (as I specified in my original post). If Rust catches on, no other vendor is going to be forced to implement anything; nobody outside the current Rust community is even asking for a seat at the design table. The only real downstream dependency that the success of Rust might impact is LLVM, and we actually have maintained a pretty good relationship with the LLVM community from the start.


I think this is kind of doing a disservice to the way innovation on the web has proceeded virtually everywhere. Over and over again this is how web functionality has jumped forward. And there are plenty of examples of this happening at Mozilla -- e.g. the WebAPIs, not all of which are limited to FirefoxOS, and of those that have been submitted as working drafts, many haven't been touched or discussed since submission. Which is fine. Often an email to a mailing list isn't enough and you have to just start doing something to get the other browsers to act, and sometimes a standard has to languish for a while until everyone figures out it's needed after all.

Things like the "Championed" proposals model for TC39 and (as DannyBee notes) IETF's approach to standardization are direct acknowledgements of this. In a way the Extensible Web approach is also a direct acknowledgment, insofar as it says that innovation happens at the edges, so browsers should provide minimum flexible building blocks and get out of the way. asm.js is a great example of using those building blocks (though it should be noted that as asm.js catches on, other browsers are forced to spend time on it, explicit directive prologue handling or no). Network protocols, on the other hand, are something that can't be built and tested from the tools browsers provide.

I think the better question for you would be, what's the better way to develop network protocols like this, then? Assuming that purpose, I can't think of anything to criticize here except maybe they should have limited testing to beta and dev users of Chrome. However, that limits your test data and normally that sort of thing is done to make sure web compatibility isn't broken in the future by changing standards, and given that browsers already negotiate protocols, I don't see an imminent danger there.


> I think the better question for you would be, what's the better way to develop network protocols like this, then? Assuming that purpose, I can't think of anything to criticize here except maybe they should have limited testing to beta and dev users of Chrome. However, that limits your test data and normally that sort of thing is done to make sure web compatibility isn't broken in the future by changing standards, and given that browsers already negotiate protocols, I don't see an imminent danger there.

As I mentioned in my reply to tptacek, I'm not intending to call out QUIC specifically here; the point is simply that open source and open standards are not equivalent. Shipping implementations is fine as long as there are effective safeguards to prevent lock-in.

What we have to make sure we avoid is something like the -webkit CSS prefix situation, where the fact that WebKit was open source did nothing to prevent the mobile Web from very nearly coming to depend on all the quirks of a big pile of C++. (That situation is also an example of standardization leading to better outcomes—remember how bad the WebKit-specific "-webkit-gradient(linear, color-stop(foo), ...)" syntax was?)


> One of the most important is that open-source-but-proprietary protocols, if they catch on, end up devolving into bug-for-bug compatibility

I think there is a distinction to be made between closed-spec protocols and protocols developed prior to being submitted for standardization. While it's hard to tell them apart before the latter is actually submitted for standardization, outside of potentially misleading forward-looking statements of intent, the latter is a reasonable way of cleaning something up, getting solid real-world feedback and proving utility before submitting it as the basis for standards work, while the former carries the problem you describe.


> One of the most important is that open-source-but-proprietary protocols, if they catch on, end up devolving into bug-for-bug compatibility with a giant pile of C++.

I love Mozilla, but this seems like an apt description of "JavaScript." (sure, they're not so much bugs as misfeatures, but..)


The existence of the weird corner case semantics in JS proves my point! Compare the strange semantics of JavaScript 1.0 with modern JavaScript—ES6. ES6 follows the open, multi-vendor standardization process, and as a result its features are extremely well-designed and interoperable.


You can't sanely standardize something that does not already exist. The IETF believes in "rough consensus and running code". That is what they standardize.

By the definitions of proprietary floated here, every open protocol standardized as IETF proposal started out as proprietary.

The only thing that wouldn't seems to be "stuff designed in the open by committee". A process that has worked so well, it brought us things like C++ and POSIX.


The IETF does not generally practice what they preach in this regard, Google's contributions being a rare bright spot. The fiasco of trying to get Curve25519 standardized for TLS is a pretty good example of the way IETF tends to work.


> You can't sanely standardize something that does not already exist. The IETF believes in "rough consensus and running code". That is what they standardize.

I fully agree, but it has to be counterbalanced with not shipping random single-vendor features to the entire Web. The proven model here, which is a policy shared by both Blink and Gecko, is developer and beta channels and feature flags.

> The only thing that wouldn't seems to be "stuff designed in the open by committee". A process that has worked so well, it brought us things like C++ and POSIX.

It also brought us things like CSS 2.1 (which everyone loves to hate, but it's much better than the nightmare of pre-CSS layout) and ES6 (which is extremely well-designed). Even the standard versions of C++ aren't really badly designed, especially if you limit yourself to C++{11,14}: there were a few notable standardization blunders, like the STL allocator API, but by and large it's hard to find things in C++11 and C++14 that were clearly mistakes at the time.

CORBA and XHTML 2.0 would be better examples, but the failure modes there were being unimplementable and the impracticality of dropping backwards compatibility respectively, both of which the developer channel/feature flag approach addresses.


" fully agree, but it has to be counterbalanced with not shipping random single-vendor features to the entire Web. The proven model here, which is a policy shared by both Blink and Gecko, is developer and beta channels and feature flags." I don't disagree, but I'm also not sure what this has to do with proprietary vs open :)

Is it unrelated, or is your argument that if they do it this way, it's somehow now "open" and not "proprietary"? Because if so, I struggle to see why that would be the case :)

"Even the standard versions of C++ aren't really badly designed, especially if you limit yourself to C++{11,14}: there were a few notable standardization blunders, like the STL allocator API, but by and large it's hard to find things in C++11 and C++14 that were clearly mistakes at the time."

I'm just going to go with "C++ committee members I have easy access to" (i.e. they sit 10 feet from me) disagree :)

Now, that's not to say it's a complete and utter disaster, but ...


> Is it unrelated, or is your argument that if they do it this way, it's somehow now "open" and not "proprietary"? Because if so, i struggle to see why that would be the case :)

Absolutely. If the goal is to create an open, multi-vendor implementation of some feature, then the right way to go about it is to (a) implement it behind a feature flag; (b) present it for standardization; (c) take feedback into account during the process, making changes as necessary; (d) ship it to stable and remove the flag once consensus emerges. Even better if multiple vendors do (a) at the same time, but it's not strictly necessary.

The reason why the flag in step (a) is so important is that it makes step (c) possible. Otherwise, there's a very real risk that content will come to depend on the quirks of your first implementation, making it impossible to take other parties' feedback into account. If you just ship to stable right away, you're running the risk of making the platform depend on a proprietary feature.

The reason why doing (a) before (b) is important is that it prevents unimplementable features and mistakes that only become apparent once implementation happens from being standardized. It also allows users of the feature, not just the folks who implement the platform, to take part in the process.

This process is really the only one I know of for popular multi-vendor platforms that both prevents proprietary features from being locked in and avoids the problems of design-by-committee. That's why both Blink and Gecko have adopted it (and Blink is definitely to be commended for following it).


I think the point made by 2 or more people here is that this non-standard feature has been shipped to half of Chrome users. That feels much closer to shipping to them all, than to shipping to say just Canary or Dev or even Beta users.

To some extent this might be a matter of appearance. Does Google have a firm rule against shipping a proprietary protocol to over 50% - is that why you stopped there? If so, that would be reassuring to hear.


The 50% number is probably just to maximize the size of their A/B test population comparing QUIC vs HTTP.


I agree, I was japing. :)


This is, after all, half the reason for making Chrome in the first place, right? All better protocols will start as proprietary protocols. To make the web better, faster, larger, yes, Google adds features to Chrome, and of course some of those are at the protocol level.

If the feature is actually an improvement, it should be on for everyone that's able to run the code as soon as possible. Ship fast and break nothing.

To address a different aspect of your comment, I do think it's very interesting how little attention we pay to the packets of data sent between software running on our personal devices and remote servers. Slap some TLS on it, and nobody even notices.

I think there's a fundamental OS-level feature, and a highly visible UI component, which is outright missing: allowing users to understand not just what programs are connecting to where, but what they are actually sending out and receiving. If it didn't have such horrendous implications and failure modes, I would love to have a highly functional deep packet MitM proxy keeping tabs on exactly what my computer is doing over the network. You know, or the NSA could publish a JSON API to access their copy?


This is - as a personal, opt-in debugging tool - something I have dreamt of for some time. However, I'm curious what horrendous implications you see there. Could you explain?


As part of telehash v3, we've separated out most of the crypto/handshake packeting into E3X, which has a lot of similarities to QUIC: https://github.com/telehash/telehash.org/blob/master/v3/e3x/...

Personally I have a much broader use case in mind for E3X than QUIC is designed for, incorporating IoT and meta-transport / end-to-end private communication channels. So, I expect they'll diverge more as they both evolve...


How is work on Telehash coming? I'm still waiting for an XMPP equivalent for the mobile age that will free us from the medieval [1] state of communication we are experiencing.

[1] https://www.schneier.com/blog/archives/2012/12/feudal_sec.ht...


MinimaLT [1], developed independently at about the same time as QUIC, also features minimal latency, but with more (and better, IMO) emphasis on security and privacy. (Though both are based on Curve25519.) QUIC has an edge with header compression and an available implementation. EDIT: and of course, forward error correction!

[1] cr.yp.to/tcpip/minimalt-20130522.pdf


I hate to be harsh because I like a lot about MinimaLT, but until MinimaLT ships code it doesn't feature anything.

I wish we were having a conversation where djb had written an amazing and performant MinimaLT implementation that we could compare against QUIC. But we're not. We're having a conversation where shipping, performant code runs one protocol and you're presenting an alternative that pretty much exists only as a PDF document.

Believe me, I looked to figure out if there was a good solution for incorporating MinimaLT into code right now and there's not. I have a project where this is relevant. I'm looking at QUIC now and I may incorporate it as an alternative transport layer. (It duplicates some of my own work though, so I'm not sure whether to strip that stuff out or just make mine flexible enough to work on top.)

(To say nothing that QUIC can be implemented without a kernel module, which is a handy side-effect of doing things over UDP. A shame that's a factor, but of course it is in realistic systems deployment.)


Not harsh at all. I agree completely and wish it wasn't so. At this rate, it might be better to focus on improving the security and privacy of QUIC.

Re. kernel module: both QUIC and MinimaLT can be implemented in user space.


I wonder if this is why I've been having weird stalls and intermittent failures using GMail the last few weeks. Every time, I try it in Firefox or Safari and it works perfectly.


I work on the QUIC team. If you file a bug with a network log, we'll take a look to see what is going on.


Possibly silly question: I was under the impression that only TCP allowed for NAT traversal; if I send a UDP packet to Google, how can Google respond without me configuring my router?


NAT traversal is easier with UDP than with TCP. Here's a good article on the topic: https://www.zerotier.com/blog/?p=226


Thanks for posting, that was a nice article but the intro to ZeroTier was even better, pretty cool software.


NAT traversal isn't necessary when you send packets out of your network, be they TCP or UDP. That's standard operation for NATs.
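A minimal sketch of that in Python (the echo host and port here are just placeholders, not a real endpoint): the outbound sendto() creates the NAT mapping, and the server's reply to our source address/port comes back through that same mapping with no router configuration.

    # Outbound UDP opens a NAT mapping; the reply rides back through it.
    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2.0)
    sock.sendto(b"ping", ("udp-echo.example.com", 9999))  # placeholder echo service
    try:
        data, addr = sock.recvfrom(2048)   # reply arrives via the same mapping
        print("reply from", addr, data)
    except socket.timeout:
        print("no reply (dropped by the server, a firewall, or a NAT timeout)")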


However, routers often have an 'allow UDP' checkbox. UDP can be globally disabled, or enabled only for certain ports. UPnP can mitigate this, but most of us have that turned off to prevent Trojan horses from opening the gates entirely.


Bigger question is, why in the world does your router disable UDP by default!?

Things like games and video streams are almost universally UDP, because it's better to forget about the data than stop everything, and go get your lost packet.


Not to mention DNS. Presumably the router would have a DNS caching server that could get around the block, but you wouldn't be able to have a computer use any other DNS server...


Interesting, I wonder if this will end up gaining enough momentum to become a standard, similarly to how SPDY ended up essentially becoming HTTP/2.


It will be interesting to see how this works out, given how difficult NAT can often make working with UDP.

It's a shame that SCTP is not more widely adopted, as I suspect it may be just as good (if not better) as a transport layer for building a new web protocol on.


It's unlikely that DTLS over SCTP would be faster than QUIC, which has been specifically designed to have TLS with a minimal number of round trips.


I wonder how they managed the zero RTT connections? How would that ever work?


You might be interested in https://docs.google.com/document/d/1g5nIXAIkN_Y-7XJW5K45IblH... ("Client handshake" section).

The key is "Conceptually, all handshakes in QUIC are 0-RTT, it’s just that some of them fail and need to be retried" (at least the first time you contact the server a 1-roundtrip handshake is required).


Crypto? You can know who your peer is with a single packet if you've already exchanged keys, and other cleverness is also possible.


At the cost of perfect forward secrecy, since then you're no longer using ephemeral keys?


QUIC has a mechanism to upgrade to ephemeral keys once the connection has started.



I'm not sure if this is related. But sometimes I have a slow home internet connection (60 kbps after I cross a threshold). At those times, I see websites loading really slowly, especially HTTPS connections crawling - but YouTube streaming, Google search and Google webcache work really fast! In fact I've been waiting for a normal website to load for a few minutes on my PC, and the whole time YouTube was streaming on another mobile without any interruptions.

Does UDP mess up other traffic?


The first image really confused me with the 'Receiver' looking like a server and the 'Sender' looking like a laptop.


50% and nobody noticed. Can't wait for another marginal latency win that makes the software stack more complex.


> Software Stack more complex

Does QUIC make things more complex? You are replacing TCP + TLS with roughly TLS over UDP with some reliability features built in. TLS and TCP are already crazy complex (behold the state diagram for closing a TCP connection! [CS undergrad heads explode]). Plus, people have already built a number of pseudo-TCP protocols that run over UDP.

QUIC + their kind-of-TLS-lite protocol is certainly newer and less well known. That may make things a little harder. But ARP is complex. IP is complex. TCP is complex. Wireshark and others largely abstract this away. I'm excited by the speed, and by the hopefully reduced attack surface of these potentially simpler protocols.


Isn't this an argument that nothing in TCP/IP should change, and that we should still be pretending that there is a point to the URG pointer?


I took it more as an argument about the "and nobody noticed" aspect.


I think the win here is for content providers more so than end users. Might not be a large overhead to the users, but I'm sure it saves Google tons of bandwidth over their hundreds of millions of users.


That's a very pessimistic view. Technology improvements in general benefit everyone, eventually. The invention of the plane didn't benefit the masses until much later, when scaling brought the costs down.

I for one welcome what this would do for me on high-latency high-loss connections (read: poor cell phone coverage). I just need Apple to buy into this ...


I had to do a double-take there. Pessimistic? I'm fully supportive of this!

But realistically speaking, it's going to benefit content providers more than the average browser. I'm talking about magnitude of impact here. I did not say that it would not impact the average user, but the magnitude of the impact is nowhere near as much as the boon to Google.

And your metaphor makes absolutely no sense since apparently large swaths of Chrome users already have this in their hands today!


Chrome on iOS doesn't support QUIC.


I assume they aren't counting the transit time of the first SYN equivalent? Are they saying it traverses the network infinitely fast? Because it doesn't.


Google should investigate (or perhaps just buy outright) a low level communications technology stack from one of the HFT firms - they've already mastered low-latency networking, they just have no incentive to share this knowledge with the outside world.


I think you'd be a bit disappointed with what HFT firms do.

They are limited in what they can do because they have to talk to the exchange, so it's still TCP/IP for order sending, with either FIX or a binary protocol like ITCH/OUCH on top.

As far as their networking stack goes, if they are ultra-low-latency HFT then they'll use FPGAs and Arista-brand switches or InfiniBand hardware.

The only big customization that most HFT firms do is move the networking stack into userland, but that's a well-known area. I'm not aware of any HFT firms that write their own networking stack from the ground up, though I'm sure there are a few :)

Not much that they do is transferable to everyday computing because most, I'd say 90%, of the performance comes from custom hardware and not the software.

Or put another way, Google already has more than enough talent to optimize their QUIC protocol; buying an HFT firm wouldn't do much for them, as the HFT speed comes from areas that most people setting up servers won't want to touch.


I think Google's solving a pretty different problem, low-latency communications over a quite heterogeneous network, where they don't control the lower-level infrastructure. HFT firms typically have a narrow range of configurations they have to work over and control the setup of their pipe; they aren't trying to ship technology that will work over every random person's DSL line and funky NAT setup.


Those communication stacks are not suitable for general-purpose use; they sacrifice everything, including usability, robustness, portability, and a hundred other factors in favor of latency.

For example, such stacks often put the entire communication stack in userspace, with hardcoded knowledge of how to talk to a specific hardware networking stack, and no ability to cooperate with other clients on the same hardware.


There are plenty of vendors that provide good UDP-based solutions, for example TIBCO. In my opinion multicast is not used widely enough, partially because everybody thinks that TCP/IP pub/sub is good enough.

Financial incentives made HFTs and the like go farther than average software companies - just look at the microwave networks.


Multicast isn't used widely for a lot of reasons, but the most important of them is that it simply doesn't scale across routing domains.


First 3 upvotes, then lots of downvotes - what's wrong with my comment? Why is it so bad to advise buying companies that might have the edge over Google? I actually value the comments because they pointed out that most HFTs probably do not have it.


There is a higher than usual amount of downvoting in this discussion, with several decent comments getting buried. I'm not sure what's going on.


Wasn't the point of QUIC that it's basically encrypted UDP? I'm not seeing that great of a performance improvement here - 1 second shaved off the loading of the top 1% slowest sites. Are those sites that load in 1 minute? Then 1 second isn't that great.

However, if the promise is to be an always-encrypted Transport layer (kind of like how CurveCP [1] wanted to be - over TCP though) with small performance gains - or in other words no performance drawbacks - then I'm all for it.

I'm just getting the feeling Google is promoting it the wrong way. Shouldn't they be saying "hey, we're going to encrypt the Transport layer by default now!" ? Or am I misunderstanding the purpose of QUIC?

[1] - http://curvecp.org/


The first diagram, if I'm interpreting it correctly, shows two whole round trip times shaved off compared to TCP + TLS, and one compared to plain TCP (which is basically no longer acceptable). For a newly visited site, that becomes one and zero.

The 100ms ping time in the diagram may be pretty high for connections to Google, with its large number of geographically distributed servers, but for J. Random Site with only one server... it's about right for US coast-to-coast pings, and international pings are of course significantly higher. [1] states that users will subconsciously prefer a website if it loads a mere 250ms faster than its competitors. If two websites are on the other coast, have been visited before, and are using TLS, one of them can get most of the way to that number (200ms) simply by adopting QUIC! Now, I'm a Japanophile and sometimes visit Japanese websites, and my ping time to Japan is about 200ms[2]; double that is 400ms, which is the delay that the same article says causes people to search less; not sure this is a terribly important use case, but I know I'll be happier if my connections load faster.

Latency is more important than people think.
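To put rough numbers on it, using the figures above (2 round trips saved versus TCP + TLS for a previously visited site; the RTT values are just the examples from this comment):

    # Back-of-the-envelope handshake savings for a repeat visit.
    rtts_saved_vs_tcp_tls = 2
    for rtt_ms in (100, 200):        # ~coast-to-coast vs. ~US-to-Japan
        saved = rtts_saved_vs_tcp_tls * rtt_ms
        print(f"RTT {rtt_ms} ms: ~{saved} ms sooner to first byte")
    # 100 ms RTT -> ~200 ms saved (most of the 250 ms preference threshold from [1]);
    # 200 ms RTT -> ~400 ms saved (the delay said to make people search less).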

[1] http://www.nytimes.com/2012/03/01/technology/impatient-web-u...

[2] http://www.cloudping.info


> Packet sequence numbers are never reused when retransmitting a packet. This avoids ambiguity about which packets have been received and avoids dreaded retransmission timeouts.

How does this work?

As a total guess, I assume the client gets a stream of packets, buffers them all up, and waits for some threshold before re-requesting any missing sequence numbers. When that missing packet comes back in (all while the stream continued) with its new number, it puts it in place, pushes the data up to the application and clears its buffer. The client probably sends "I'm good up to sequence n" every once in a while so the server can clear its retransmit buffer.

That's pretty cool. Treat it as a lossy stream, rather than a "OH CRAP EVERYBODY STOP EVERYTHING, FRED FELL DOWN!". With this, FRED IS DED!
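A toy version of that bookkeeping (purely illustrative of the guess above, not QUIC's actual implementation): the receiver acks the packet numbers it actually saw, but reassembles data by stream offset, so a retransmission sent under a new packet number still fills the original hole.

    # Toy receiver: packet numbers are for acking only; reassembly is by offset.
    received = {}            # stream offset -> payload bytes
    seen_packets = set()     # packet numbers to acknowledge

    def on_packet(pkt_num, offset, payload):
        seen_packets.add(pkt_num)                # ack whatever number actually arrived
        received.setdefault(offset, payload)     # a retransmit fills the same offset

    def drain(next_offset):
        """Return contiguous data starting at next_offset, plus the new offset."""
        out = b""
        while next_offset in received:
            chunk = received.pop(next_offset)
            out += chunk
            next_offset += len(chunk)
        return out, next_offset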


(Tedious disclaimer: my opinion, not my employer's. Not representing anybody else. I work at Google and have some involvement in this project.)

Others have discussed the technical aspects of what QUIC is achieving, but you can understand its purpose fairly easily by saying "QUIC" out loud ;)

If that's not clear enough, it stands for "Quick UDP Internet Connections", which I think makes it fairly clear what it achieves. You can read more about it in the FAQ: https://docs.google.com/a/chromium.org/document/d/1lmL9EF6qK...

Note that the blog post doesn't say "1% slowest sites", it says "1% slowest connections" - that's the mobile and satellite users. Think about how many seconds it takes to load google.com on your phone when your signal isn't great. How does taking a second off that sound to you?


QUIC was never intended to be encrypted UDP, although plenty of people had that misinterpretation. (DTLS is already encrypted UDP.) QUIC is a replacement for TCP and TLS.


So then it's an "always-encrypted" TCP? Or is the "always-encrypted" part wrong? Is it going to be like HTTP/2 where it still has an option for plain-text? (hopefully not)


You got the stats wrong. It's always the same site, specifically some unnamed Google property (they used Search and YouTube as other examples, so it could be one of them).

Google is saying that, for clients connecting to the same site, the slowest 1% of those clients saw a 1 second improvement in page load time by using QUIC instead of TCP (presumably it's SPDY + QUIC against SPDY + TCP, as they say at the end of the article). That's pretty good.

It was 1 second shaved


Not sure what you are getting at, s/sites/clients/. The definition of 'pretty good' depends on how slow that 1% of clients are, as the parent said, 1 second out of 60 (or even 30 or 10) would not really be anything of note.



