The Status of HTTP/3 (infoq.com)
159 points by feross on Jan 13, 2020 | 120 comments



This article omits that Caddy has had long-running experimental support for HTTP/3 for a couple of years now: https://caddyserver.com/docs/json/apps/http/servers/experime...


Caddy's strange official binary licensing did them no favours. The licensing makes more sense now, but they are off people's radar.


This might change with Caddy 2.0.


It changed months ago. It's all Apache licensed.


I think they meant that it'll be back on people's radars once there is a big release.


One thing that drives me away from Caddy is that many features don't compose well with each other. I understand this might be a sacrifice for a simpler user interface, but composing declarative-style configurations is always a mess compared to imperative ones like nginx.conf


Thanks for the feedback.

(To clarify, nginx's config is also declarative... well, mostly: they got in trouble for mixing declarative and imperative [1].)

You'd be surprised what we've been able to accomplish with upgrades to Caddy's configuration in Caddy 2. Its config is technically declarative, but the underlying JSON structure [2] implies a procedural flow, enabling you to compose certain behaviors in a way that almost feels imperative. When Caddy 2 release candidates go out in a couple months, try it out with both the Caddyfile (soon to be majorly improved in beta 13) and the underlying JSON and you'll see what I mean.
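To give a flavor of what that JSON looks like, here's a rough, abbreviated sketch (handler names follow the docs [2], but this is illustrative rather than a complete config, and details may differ between betas). Routes are matched and handled in order, which is where the procedural feel comes from:

    {
      "apps": {
        "http": {
          "servers": {
            "srv0": {
              "listen": [":443"],
              "routes": [
                {
                  "match": [{"path": ["/api/*"]}],
                  "handle": [{"handler": "reverse_proxy", "upstreams": [{"dial": "127.0.0.1:9000"}]}]
                },
                {"handle": [{"handler": "static_response", "body": "fallback route"}]}
              ]
            }
          }
        }
      }
    }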

Oh, but if you want to still use your nginx config, go ahead (WIP) [3]. :)

[1]: https://www.nginx.com/resources/wiki/start/topics/depth/ifis...

[2]: https://caddyserver.com/docs/json/

[3]: https://github.com/caddyserver/nginx-adapter


Thank you for the quick reply! Currently our routing needs lots of flexibility, and we are wondering if it's possible to have something similar to OpenResty-style [0] config where builtin directives are Lua functions, and users can handle requests by dispatching them in Lua with a function call.

[0]: https://openresty.org/en/


We have a handler-scripting solution in the works that may be about 2x faster than nginx+Lua (from initial early benchmarks). Can you file an issue to request specifically what you need? https://github.com/caddyserver/caddy/issues/new


I created a container image of Discourse using nginx with the cloudflare patch to enable HTTP/3[1] and for some reason the same config that works fine in HTTP/2 loses the content-type header on Google Chrome. It works just fine in Firefox...

1. https://github.com/cloudflare/quiche/tree/master/extras/ngin...


Because QUIC is UDP-based, Chrome first runs a race with TCP, just in case you're sitting behind some device that blocks UDP.

I wonder how much bandwidth this will waste globally.


Most global bandwidth is from video streaming, so most of the data isn't in the first couple packets.

So the number would probably be surprisingly large but also a very small fraction of overall traffic.


I wonder if Happy Eyeballs means that there are 4 races in total for systems under dual-stack...


Or, a more interesting measure: how much more full will all our NAT connection-state tables be because of this?


> I wonder how much bandwidth this will waste globally.

One or two packets? A handful of bytes per initial connection. The Connection ID mechanism and QPACK are likely to save a lot more data than is lost on initial TCP races.


I could have worded it better. I understand it's small in any single area.

Just interesting to contemplate the total effect when you have a userbase like Chrome's.


Or how many "visitor" statistics will it skew?


Isn't that like any device behind NAT?


No. Most NATs handle UDP sessions trivially as long as one end of the connection is not behind a NAT itself. Tricks like UDP hole punching are necessary only between two endpoints that are both behind NATs.


And that assumes the protocol can cope with changing source ports. I've seen some UDP protocols that only work if the source port is not translated.


Most NAT still allows UDP hole-punching (STUN/TURN/ICE) unless specifically blocked (which a depressing number still do).


Yes. NAT, firewalls with admins that don't know what QUIC is, various corporate MITM boxes, etc.


Facebook has also open-sourced an implementation of HTTP/3 (https://github.com/facebook/proxygen#quic-and-http3) and IETF QUIC (https://github.com/facebookincubator/mvfst).


> with almost 300,000 services using it across the world

The Shodan search given scans headers from all requests, so the vast majority of results are sites using Google Fonts, Maps, etc.


According to W3Techs, 2.4% of the top 10m sites and 7.1% of the top 1k sites support HTTP/3: https://w3techs.com/technologies/breakdown/ce-http3/ranking


I hate to be a curmudgeon, but I can't help but think that designing a new service over UDP isn't the best idea. DNS has been fighting off wave after wave of attack vectors, some that realistically cannot even be fixed. Making it immune to these vectors is going to look a lot like a slapped together TCP over UDP...


> DNS has been fighting off wave after wave of attack vectors, some that realistically cannot even be fixed. Making it immune to these vectors is going to look a lot like a slapped together TCP over UDP...

Unlikely, since TCP + TLS implemented over UDP is a better description of QUIC.

Neither TCP nor UDP was designed for a world where you have to assume the internet is a battleground pitting hackers, police forces, and nation states against each other, and against you. In such an environment it's not at all surprising to hear that Comcast attacked its own customers who were using BitTorrent or competing video streaming services by sending them fake TCP resets.

But TLS, on the other hand: that's exactly the environment TLS is designed for. They are not going to find it so easy to screw with QUIC, because TLS is baked into its DNA.


It is a slapped-together TCP over UDP. They would rather have built it directly over IP, but protocol ossification has made that impossible.

Don't think of it being UDP as anything more than "it happens to go through existing infrastructure without much, if any, work".


It's fun to see how we've kind of reinvented good old DNS as soon as we use HTTP/3 for DoH: it runs on UDP, but with HTTP in between (and encrypted, of course). We could have done that with DNS itself, but somehow it never gained traction.


The difference is that DoH started with something nearly every network was guaranteed to allow: HTTPS traffic on TCP 443. Trying to wait for every network to allow DNS on 31218 or whatever wouldn't have worked. Even with a massive amount of the web guaranteed to move to HTTP/3, there will be places that still don't allow UDP 443 five years from now.

If something in network protocol design ever seems dumb, the answer probably lies with ossification. Hopefully the design of HTTP/3 ends that.


You don't need to wait. Just offer your service on that port and you are done. Why build protocols with shitty networks or shitty admins in mind?

You can use most ports just fine. Multiplayer games show this every day. If those don't work in corporate networks (I include "public" wlan in this) that's fine too. It's not up to you what should be allowed or not in a company setting. ISPs don't block ports just for fun.

And if you are big enough like Google or Facebook, just offer a service only on that port. If it doesn't work, show a warning that the customer should complain to the network owner.


It is basically TCP 2.0 implemented on top of UDP. What concerns do you have with this?


Well, I have a lot, but I don't want this to turn into a dissertation about why I don't like UDP for most things.

The biggest red flag to me is that it is trivial to make the server send sizable packets to clients who didn't ask for them nor want them. Spoofing is not a solved problem, no matter how many times people scream at providers to block it before it gets out. Why isn't there a minimal ACK process up front? I mean, this thing is designed such that it's vulnerable to reflection attacks on -day 1-.


UDP is a 4-page RFC that is pretty much a definition of IP with ports. You can easily imagine TCP as a bunch of algorithms on top of UDP. Knowing that, I'm not sure what you don't like about UDP.

Amplification attacks and spoofing are taken care of by QUIC, so these are not problems at the moment.


That's a little unfair. When someone tells you something relies on UDP, nobody in their right mind assumes that this person implemented TCP features on top. UDP is simple and quick, and the assumption is that there was a reason they chose that route.

AFAICT, QUIC loosely defines a way to do client address validation, but it isn't required! It's worded as 'CAN' and 'MAY' in places, and places outside the spec basically hint that 'oh no! it adds another round trip'. This makes it feel totally bolted on. Why wasn't it required up front? If someone came up to me with this idea of QUIC, the -first- thing I'd tell them is to solve this problem definitively first. The idea seems sound to me, but not if you let everyone turn it off...


> When someone tells you something relies on UDP, nobody in their right mind assumes that this person implemented TCP features on top.

AFAIK it's rather common in multiplayer games to do TCP-like features over UDP, as they usually need the UDP features but also need reliable transport for some data, and it's rather tricky to do both UDP and TCP at the same time.

See for example https://fabiensanglard.net/quakeSource/quakeSourceNetWork.ph... or https://gafferongames.com/post/reliability_ordering_and_cong...


Isn’t the DNS issue unrelated to the protocol of UDP vs TCP but rather how DNS service communicate with each other... triggers an amplification of one request into 100s or maybe 1000s... not really a tcp vs udp issue?


There are two issues at play there. What you're talking about is called amplification: you get responses far larger than the query. So you send a small query for, say, TXT, and the server may reply with KBs of TXT records.

However, that isn't useful by itself. Where it's useful is for attacking a third party. To do this, you spoof the source IP of your query to be the IP of the victim. This is called reflection. See, in TCP, every connection starts with a tiny three-way handshake. So if you spoof the IP, the target would just get an unknown SYN-ACK, which is tiny. It wouldn't respond, and nothing further happens. In UDP, there is no such handshake, so the target gets the entire response.

So in short, DNS amplification attacks are only useful with reflection, which doesn't work over TCP.


QUIC addresses this; like TCP it's a connection-based protocol, just implemented in user space on top of UDP. Until the client's address is validated, the amount of data a server may send is capped at a small multiple (three times, in the drafts) of what it has received from the client, so it can't be used as an amplifier.


QUIC has address validation: basically, if you talk back and forth with a remote party, you can get yourself a token which proves you're on-path (you might really be 10.20.30.40, we can't tell; but we can tell you can intercept packets for 10.20.30.40, because you received this token we sent there, so maybe you're an adversary at their ISP). You can use that token in future to prove you're still on-path for 10.20.30.40, which an off-path attacker wouldn't be able to do because they can't get that token.

So this lets you prevent amplification. If somebody asks a question with a long answer but doesn't provide the token to prove they're on-path, you make them go around again first. A legitimate user sees one round trip wasted to get a token, but attacks don't work.
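To make the token idea concrete, here's a conceptual sketch in Go. This is not the actual QUIC wire format or any library's API, just an illustration of binding a token to the claimed address with an HMAC (the key handling, token layout and lifetime are made up for the example):

    package main

    import (
        "crypto/hmac"
        "crypto/sha256"
        "encoding/binary"
        "fmt"
        "time"
    )

    // Illustrative server-local secret; a real server would rotate this.
    var key = []byte("server-local secret")

    // issueToken binds the claimed source address to a timestamp, so only a host
    // that can actually receive packets at that address can echo the token back.
    func issueToken(addr string) []byte {
        ts := make([]byte, 8)
        binary.BigEndian.PutUint64(ts, uint64(time.Now().Unix()))
        mac := hmac.New(sha256.New, key)
        mac.Write([]byte(addr))
        mac.Write(ts)
        return append(ts, mac.Sum(nil)...)
    }

    // validateToken checks the MAC and freshness; until this succeeds the server
    // keeps its responses small, so spoofed sources can't be used for amplification.
    func validateToken(addr string, token []byte) bool {
        if len(token) < 8+sha256.Size {
            return false
        }
        ts, sig := token[:8], token[8:]
        age := time.Since(time.Unix(int64(binary.BigEndian.Uint64(ts)), 0))
        if age < 0 || age > time.Minute {
            return false
        }
        mac := hmac.New(sha256.New, key)
        mac.Write([]byte(addr))
        mac.Write(ts)
        return hmac.Equal(sig, mac.Sum(nil))
    }

    func main() {
        tok := issueToken("10.20.30.40")
        fmt.Println(validateToken("10.20.30.40", tok)) // true: on-path client echoes its token
        fmt.Println(validateToken("192.0.2.1", tok))   // false: token was issued for a different address
    }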


I was specifically answering the user's question regarding DNS attacks.

Address validation in QUIC is optional, per the RFC.


Too bad no major internet company wants to implement a new version of TCP. We can’t even get IPv6 done after 15 years.


It's a common misconception that routers handle TCP. They strictly handle only the IP headers (and lower-level headers).

The TCP protocol is implemented only by endpoints, at least in principle.

It's the "security appliances", also known as "middleboxes" that are the problem. Think web proxies, antimalware scanners, firewalls, and inline IDS systems.

These things are the bane of the Internet, because they ossify protocols, blocking any further development.


Although what a consumer considers a "router" is actually a middlebox doing a bunch of other things, and it does care. (CG-NAT in provider networks is probably another example of a common problematic middlebox.)


“A new version of TCP” is pretty much what QUIC (basis of HTTP/3) is. It’s just tunneled over UDP because existing Internet infrastructure likes to drop anything that’s not TCP or UDP.


That's the problem being cited. The best option is to do a real update of TCP at layer 4, but nobody wants to put in the work and investment to do so.


Depends what you mean by "real". You may know this already, but the only difference between UDP and raw IP is the UDP header, consisting of 4 fields in 8 bytes: source port, destination port, length, and checksum. That's it; there's no other protocol overhead. Thus, from a pure technical perspective there would be basically no advantage to running QUIC directly over IP instead of over UDP. The only advantage is from a human perspective, that it's a little more elegant to put QUIC on the same layer as TCP.
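To make that concrete, here's the entire header as a Go sketch (the struct and the Marshal helper are illustrative; the field layout itself is from RFC 768):

    package main

    import (
        "encoding/binary"
        "fmt"
    )

    // UDPHeader is essentially the whole protocol: four 16-bit fields, 8 bytes total.
    type UDPHeader struct {
        SrcPort  uint16 // source port
        DstPort  uint16 // destination port
        Length   uint16 // header + payload length in bytes
        Checksum uint16 // checksum over a pseudo-header and the payload
    }

    // Marshal writes the header in network byte order, as it appears on the wire.
    func (h UDPHeader) Marshal() []byte {
        b := make([]byte, 8)
        binary.BigEndian.PutUint16(b[0:2], h.SrcPort)
        binary.BigEndian.PutUint16(b[2:4], h.DstPort)
        binary.BigEndian.PutUint16(b[4:6], h.Length)
        binary.BigEndian.PutUint16(b[6:8], h.Checksum)
        return b
    }

    func main() {
        h := UDPHeader{SrcPort: 54321, DstPort: 443, Length: 8 + 1200} // e.g. wrapping a 1200-byte QUIC packet
        fmt.Printf("% x\n", h.Marshal())
    }

Everything else QUIC needs it defines inside the UDP payload.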

In exchange... among other things, it would break all existing NAT implementations, since NAT is based on port numbers and existing devices wouldn't know where to find the port number in the new protocol. So everyone behind a home router would be unable to use the new protocol until they upgraded their router firmware – which of course most 'normal people' never do, so realistically you're waiting years until they get a new router.

Not only is that a gigantic practical disadvantage, it also feels rather inelegant itself. After all, routers shouldn't need to know the details of the transport protocol just to route packets. If it weren't for NAT they wouldn't have to, which is probably why port numbers aren't part of IP itself. NAT sucks. But NAT isn't going away; even on IPv6 some people insist on using it. By tunneling QUIC inside UDP, we at least regain the elegance of separating what routers need to know (IP + UDP) from the real "transport protocol" (QUIC).


Except we already have those layer 4 replacements (i.e. SCTP covers a lot of the same ground), and they've never managed to get out of the niches they are in. How would you suggest "a major internet company" motivate their support better?


Implementing a new version of TCP would be nearly impossible because of all the equipment already out in the field.

Changing the transport protocol is far harder than changing the IP protocol or the layer-2 medium.


Why doesn't the IP protocol require new hardware if it's a lower layer?


It is about the same. It is called TCP/IP for a reason, although there are more devices that deal with TCP/IP together than with just IP alone. Either way, it ain't going to happen.


IP doesn't require new hardware because it's a lower layer.

Transmission Control Protocol - TCP - is baked into the firmware of every client network interface card, and I would suppose in almost all of the switches and routers of business infrastructure.

I have no idea what data centers use. Infiniband and similar things aren't TCP, I think.


Infiniband sits at a lower level than TCP. Infiniband is often used as a replacement for Ethernet in a supercomputing cluster.

If you wish, you can run IP over Infiniband (IPoIB), but I think most people using Infiniband are running a lower-latency protocol like RDMA.

https://wiki.archlinux.org/index.php/InfiniBand#TCP/IP_(IPoI...


Is IP not handled by network devices and firmware? If it's just software then why don't we have IPv6 everywhere already?


Jiggawatts' comment [0] reminds me that transit routers don't do transmission control.

[0]: https://news.ycombinator.com/item?id=22040780

I enjoy discovering my misconceptions on this topic, as I am no longer building computer networks. Mostly harmless.


Totally agree with you. Many big corporations totally block UDP at the ISP level because there is no easy/magic solution against amplified DDoS. They usually use some internal backend for DNS and NTP, routed through another ISP.

I am afraid that businesses that will host HTTP/3 will be less secure in terms of availability.


> I am afraid that businesses that will host HTTP/3 will be less secure in terms of availability.

Presumably most HTTP/3 implementations will gracefully fall back to HTTP/2 (or 1.1) if UDP is filtered. Chrome's existing QUIC implementation already does this.

Every Google property has had QUIC enabled for quite a while now (at least 4-5 years), so if UDP blocking would cause availability issues, affected businesses would've noticed by now.
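For context, clients typically learn about HTTP/3 support from an Alt-Svc header on an ordinary response delivered over TCP, so if UDP is filtered the advertised alternative simply never gets used. A sketch of such a header (the draft version token shown is illustrative):

    Alt-Svc: h3-24=":443"; ma=86400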


In 2017 Google reported that "4.4% of clients are unable to use QUIC". See sec. 7.2 of https://static.googleusercontent.com/media/research.google.c....


Sure, but that doesn't mean those clients are unable to connect to QUIC-enabled domains. Chrome detects that the UDP packets are being lost, disables QUIC, and falls back to HTTP/2.


Many corporations also block SMTP and inbound host ports... doesn't mean the protocol is bad. Hell, many corporations install horrible proxy servers and require all outbound connections go through that proxy.


The fact that many corps block UDP doesn't have anything to do with the necessity of QUIC. QUIC sits on top of UDP because we tried with SCTP, but that's never coming.

For people who have reasonably configured UDP, it offers a benefit; the fact that some people have broken UDP is neither here nor there. HTTP/2 continues to be an option for TCP-only networks.


> I hate to be a curmudgeon, but I can't help but think that designing a new service over UDP isn't the best idea.

They tried with SCTP and DCCP.


curmudgeon (noun): a bad-tempered person, especially an old one.


Forget the narrow waist of the internet; this is the spaghetti ball of the internet.

It violates all of the encapsulation and decoupling principles you learned about as a CS undergrad.

If you have a better transport layer, fine. Make a better transport layer. But rolling the whole thing together is the best way to ensure nothing will ever get done.

HTTP/3 is the new IPv6.


> It violates all of the encapsulation and decoupling principles you learned about as a CS undergrad.

I mean, it's not like encapsulation and decoupling ever worked with the networking stack. For example, let's pause a minute and think about which layer does TLS or NAT sit in.

To quote @tptacek[0]: There is no such thing as a layering violation. You should be immediately suspicious of anyone who claims that there are such things.

> HTTP/3 is the new IPv6.

Funny you mentioned IPv6. The famous article [The world in which IPv6 was a good design][1] actually gives some good context on why QUIC is needed, and believe it or not, layering violation was explicitly mentioned.

[0]: https://news.ycombinator.com/item?id=4556125

[1]: https://apenwarr.ca/log/20170810


> which layer does TLS

Transport Layer Security? Transport layer.

> which layer does NAT

Network Address Translation? Network layer.

---

That said, the names aren't as important as the layering and independence. E.g. TLS can be used to secure any TCP traffic: SMTP, HTTP, etc.


Some might argue that TLS is on Layer 6, 7 and 4 at the same time: https://security.stackexchange.com/a/93338

Same for NAT: https://networkengineering.stackexchange.com/questions/3145/...

> the names aren't as important as the layer and independence

That's the point. Layers don't mean anything in the real world, where ossification is a thing and replacing infrastructure has a cost. There are plenty of protocols that require cross-layer coupling (aka "layering violations"), and asking for layering compliance really doesn't make anyone's life better except the layering lawyers'.


I didn't say "layers don't mean anything." I said names don't mean anything, especially the OSI layers which as your post points out never really became a thing.

TLS is an encryption layer that works with many protocols. Assigning some special number to it isn't the important part.


> TLS is an encryption layer that works with many protocols. Assigning some special number to it isn't the important part.

Yeah, but the point is TLS also doesn't work by just magically changing TCP to TLS (where would you even change that?). It works by running HTTP over TLS. The issue is also not with OSI layers, but with the fact that there are dependencies between different components that are supposed to be encapsulated from each other. In this view, HTTPS (HTTP + TLS) isn't really different from HTTP/3 (HTTPS + QUIC).


"HTTP/1.1 keep-alive connections, though, do not support sending multiple requests out at the same time, which again resulted in a bottleneck due to the growing complexity of Web pages."

Is this true?

Below we use HTTP/1.1 pipelining to send multiple requests (30) over a single TCP connection in order to print short descriptions of the last 30 infoq.com articles posted to HN.

   http11 ()
   { 
   while read x;do case $x in https://*)x1=${x#https://*/};;http://*)x1=${x#http://*/};;*)x1=${x#*/};esac;[ $x1 != $x ]||x1="";x2=${x#*//};x3=${x2%%/*};printf "GET /$x1 HTTP/1.1\r\nHost: $x3\r\nConnection: keep-alive\r\n\r\n";done|sed '/^$/d;N;$!P;$!D;$d';printf "Connection: close\r\n\r\n";
   }
   curl https://news.ycombinator.com/from?site=infoq.com|grep -o "https://www.infoq[^\"?]*"|   http11   |openssl s_client -connect www.infoq.com:443 -ign_eof -quiet 2>/dev/null|sed -n '/@id/p;/^  \"description\": \"/p'


With HTTP/1.1 pipelining, you can't reliably start sending the second request until the first response is complete. As such, you can't have multiple requests out at the same time. It's also very much linear.


"With HTTP/1.1 pipelining, you can't reliably start sending the second request until the first response is complete."

In the example, all 30 requests were sent at the same time. openssl did not wait for any responses.

This example can be repeated again and again and every time, all the responses are received, in order. It is reliable.

Not sure who "you" refers to in the above statement, however if it applies to me then that statement is incorrect. I have been using HTTP/1.1 pipelining outside the browser for decades.

HTTP/1.1 was written for HTTP clients. Browsers are just one type of client, not the only type. More than half the traffic on the internet is initiated by non-interactive clients; headless ones aside, that excludes browsers.

From RFC 2616:

user agent

The client which initiates a request. These are often browsers, editors, spiders (web-traversing robots), or other end user tools.


Ah, I seem to have misremembered; from RFC 7230:

> A client that supports persistent connections MAY "pipeline" its requests (i.e., send multiple requests without waiting for each response).

Maybe this was just yet another case where plenty of intermediaries are broken and mass-deployment has always been difficult?


Yes, that. I always set firefox to attempt pipelining until they removed it in favor of HTTP/2. It worked well.


From the article:

> HTTP/2, derived from the now deprecated SPDY protocol, introduced the concept of first-class streams embedded in the same connection.

Was this not done by SCTP?

* https://en.wikipedia.org/wiki/Stream_Control_Transmission_Pr...

It's just that (a) network boxes often block 'unknown' protocols, and (b) web servers/browsers did not bother implementing the protocol.


Yes, my impression of both the HTTP/2 and HTTP/3 efforts is that they learned a lot from how many middleboxes clobber SCTP. I've heard the UDP-based QUIC that HTTP/3 uses described before as "middlebox-safe SCTP"; though it differs in details, it attempts to accomplish much the same things, but piggybacking over UDP.


For the builder of small to medium (say, 10k to 1 million monthly users, some media but not the primary focus) websites, apps, APIs, etc., is it time to begin deploying HTTP/2 or even 3?

How would one make the decision, and what factors would influence it? What are some of the best books/essays arguing either direction?

I believe in maintaining best practices even if you can get away with sloppiness on a specific project, to be good and fast at doing things the right way, but I honestly can't tell where the new protocols fall.


You should be using public cloud and a CDN and they'll turn on protocols when they turn them on.


HTTP/2, absolutely; there are a ton of wins from some of the work they did, especially around content loading and SSL.

HTTP/3? Meh. We've started into the realm of solving google scale problems in HTTP standards that have marginal if any benefit to the 99%


> google scale problems [...] that have marginal if any benefit to the 99%

Such as connection migration over different networks? Who ever needs that?

"Sorry, call dropped. I was leaving the house and I lost connection to the Wifi..."


I’d tend to say it the other way round, actually:

Compared with HTTP/1.1, HTTP/2 has known regressions which can be catastrophic on poor-quality connections, mostly due to TCP head-of-line blocking. At Fastmail we found concrete performance problems for some users from deploying HTTP/2 so that we rolled it back for a while and tried again later after shuffling a couple of things around to mitigate the worst such problems (can’t remember what we actually did). But even so, a small fraction of our users will get a worse experience from HTTP/2 than from HTTP/1.1. (Don’t think I’m speaking against HTTP/2; overall it’s an improvement for performance over HTTP/1.1, often a big improvement. But for developing things like interactive apps, it’s helpful to understand the differences and their effects.)

HTTP/3, meanwhile, does not suffer from such problems (which is most of why they made it). The main risk is that it’s less likely to work at all—but browsers take that into consideration and fall back to HTTP over TCP (HTTP/1.1 or HTTP/2) smoothly.

Presuming you have capable server software, I’d honestly consider deploying HTTP/3 to be lower risk than deploying HTTP/2.

One should also consider the robustness of the HTTP/2 or HTTP/3 implementation. Consider such things as this variety of DoS attacks against HTTP/2 from the last six months: https://github.com/Netflix/security-bulletins/blob/master/ad.... It's improbable that these are the only major attack vectors in HTTP/2 and HTTP/3 implementations.


Slight tangent that actually shocked me when I learned about it: the OSI model that everyone keeps talking about is actually not the "official" or "real" model we implement and use. The actual protocol suite (and conceptual model) you really want to learn and work with is simply called the "Internet Protocol Suite", commonly known as "TCP/IP".

Consider this comparison between OSI and TCP/IP models from Wikipedia[1]:

> The OSI protocol suite that was specified as part of the OSI project was considered by many as too complicated and inefficient, and to a large extent unimplementable. Taking the "forklift upgrade" approach to networking, it specified eliminating all existing networking protocols and replacing them at all layers of the stack. This made implementation difficult, and was resisted by many vendors and users with significant investments in other network technologies. In addition, the protocols included so many optional features that many vendors' implementations were not interoperable.

> Although the OSI model is often still referenced, the Internet protocol suite has become the standard for networking. TCP/IP's pragmatic approach to computer networking and to independent implementations of simplified protocols made it a practical methodology. Some protocols and specifications in the OSI stack remain in use, one example being IS-IS, which was specified for OSI as ISO/IEC 10589:2002 and adapted for Internet use with TCP/IP as RFC 1142

There's a similar discussion on the TCP/IP article as well[2], highlighting the impracticality of OSI "layers" in terms of implementation.

> The IETF protocol development effort is not concerned with strict layering. Some of its protocols may not fit cleanly into the OSI model, although RFCs sometimes refer to it and often use the old OSI layer numbers. The IETF has repeatedly stated[citation needed] that Internet protocol and architecture development is not intended to be OSI-compliant. RFC 3439, addressing Internet architecture, contains a section entitled: "Layering Considered Harmful".

In practice, a short discussion about SSL/TLS is enough to point out the inadequacies of OSI, and to orient one towards conforming their conceptual model closer to TCP/IP and the relevant RFCs.

The advice I follow personally, given the "popularity" of OSI (people refer to "layers 1-7" as if they were an actual thing, and 90% of the blogs and literature use the OSI model), is to simply translate into TCP/IP lingo/concepts in your mind whenever you read or speak about it. Not everyone around you will get it, but at least you'll have a more solid foundation should you be working with the lower levels of the net stack.

For reference, RFC 1122 [3] defines the Internet Protocol Suite with 4 layers: application, transport, internet (or internetwork, the idea of routing) and link. It does not assume or require a physical layer in the spec itself; hence it works on 'anything' from USB to fiber, by way of Ethernet or Bluetooth, and PCIe/Thunderbolt, if you just follow the protocol.

[1]: https://en.wikipedia.org/wiki/OSI_model#Comparison_with_TCP/...

[2]: https://en.wikipedia.org/wiki/Internet_protocol_suite#Compar...

[3]: https://tools.ietf.org/html/rfc1122#section-1.1.3


The OSI model was always more of a GoF software "Design Pattern" for network stacks. It's certainly one way you can do it, and breaking down the conceptual functions in the way they do makes it easier to understand the whole stack by looking at each function in isolation.

But it's not "the" only way to do it, and following it doesn't guarantee a better outcome than not following it. Still, I thought it was a good way of presenting the material, but then I came across student after student insisting it was the only possible way to structure network stacks. It's not. Despite being an ISO standard, no real implementation follows it exactly. Its prime use is as an educational tool.


Can anyone now give a reasonable ETA for the HTTP/3 RFCs?

I see that the WG charter has one milestone in May 2020, and Daniel Stenberg of curl has mentioned early 2020 before. In addition, AFAIU, both Chrome and Firefox have implementations ready, though still behind flags.

Is it likely that we'll see actual deployments and a wider rollout already in 2020?


I saw a talk from Daniel about a month ago and then he said he had no idea.


From a brief look at the Rust side -

Actix uses Tokio

Quiche doesn't

Quinn does but doesn't look quite ready and I don't see any integration attempts with Actix yet.


Why are they calling it HTTP/3 and not just keeping the QUIC name?


Two reasons I can think of:

1. HTTP/2 was a 'new serialization' of the core HTTP data model. HTTP/3 is too, so since it was ratified it made sense to use HTTP/3 to keep things consistent.

2. QUIC still exists, but it's now the underlying framing/messaging protocol on top of UDP. I can imagine future internet protocols being developed on top of QUIC that don't need HTTP/3.


Yup. For example there's a draft proposal for DNS-over-QUIC: https://tools.ietf.org/id/draft-huitema-quic-dnsoquic-06.htm...


Well, why did we call it http/2 and not just keep the SPDY name?

Probably because it's easier to know what's being talked about if we keep the historical protocol name. If the submission title was "The status of QUIC" I wouldn't have remembered it had anything to do with HTTP.


You can use QUIC without HTTP/3. QUIC streams are just bidirectional streams without HTTP semantics, and can therefore carry arbitrary data.

The HTTP mapping on top of QUIC is called HTTP/3.


The IETF is reserving the QUIC name for the underlying transport protocol, which has been separated from HTTP/3. Google's QUIC has them tightly coupled, so it's just "QUIC", whereas IETF QUIC is a general transport protocol (like TCP or SCTP, but technically implemented on top of UDP) for multi-stream connections, and HTTP/3 is a HTTP application protocol on top of QUIC.


Some rather muddled answers here, but it's simple enough: it's an (upcoming) IETF standard. The name makes clear that it's a proper standard, not an experiment run by one organisation.

Something similar happened with SPDY (Google's experiment) and HTTP/2 (IETF standard based on SPDY).


That would be like calling http/1 'tcp'.


Seems more verbose than https://caniuse.com/#feat=http3

TLDR - No browsers support it (without flag / config changes) yet.


In the age of evergreen browsers, it's possible to go from 0% to 85% in just a few weeks time. Upgrading web servers will be the real bottleneck.


nginx will get H3 this year, so I think we'll see a massive uptick in adoption.


Nice! It might be a while for haproxy [1], if ever, and I've not been on Apache's IRC in a long time, so I have no idea if they are considering adding it to core.

[1] - https://github.com/haproxy/haproxy/issues/62


Do you know where I can watch the status on that?



This guy has clearly never heard of mobile Safari / iOS


I'll start worrying about HTTP/3 the same time I start worrying about IPv6, never.


All this to support more ads per page.

All this is mostly only a win if you have a huge number of little assets from different sources: ads, trackers, icon buttons, malware, etc. If it's all coming from one source, HTTP/2 is good enough. If it's mostly one big file, HTTP/1 is good enough.


You are wrong. If you understand how browsers behave when they query a page that contains different resources from one source, and how head-of-line blocking works, you understand why so much effort was put into this protocol.

(Any introduction to QUIC will explain these if you are interested.)


So, as a guy who is hellbent on not losing their ad blocking, are you telling me that QUIC allows the client (the browser in this case) to read from the stream up to a point, find a resource with a well-known name (or crypto hash) that is an ad/tracker, and the client is then allowed to just skip ahead in the stream, not downloading and never parsing the said ad/tracker?

If no, then QUIC has failed its purpose, for the tech-savvy browser users at least.

If yes, I'd be happy.


QUIC is just a way for a browser to make more HTTP-like connections in parallel more efficiently. It doesn't affect ad-blocking browser extensions. The browser can still implement the APIs for extensions the same way regardless of whether QUIC or a previous version of HTTP is used.


> All this to support more ads per page.

I'm also very afraid of the future of Web Assembly.

I'm afraid both of these technologies aren't going to make things better, faster and more lightweight. They are just going to allow the bloat of the web to become worse without noticeable symptoms.


Who cares about the WebAssembly on the web? I'm looking forward to WebAssembly because it'll mean that Electron will become the new JRE: a lightweight, portable, feature-rich "full-fat GUI" application platform. ;)


This made me think of the problem with highways: the more lanes you add, the more cars come to use your road, and congestion is not reduced.

This is a problem with computers in general. For example, IDEs and simple Electron apps taking huge amounts of memory.


The problem with highways is called induced or latent demand. It’s a known problem in a lot of fields.

https://en.m.wikipedia.org/wiki/Induced_demand


It's the reverse of Moore's law:

https://en.m.wikipedia.org/wiki/Wirth%27s_law


I suspect more ads per page with less ability to block them is Google's endgame indeed.

But if browsers like Firefox find a way to skip ahead in the stream (through the ads/trackers while never downloading/parsing them in the first place) then things could be back to normal (for us the ad-blocker users).

Still though, if used for good, QUIC (or HTTP/3) seems like a pretty good piece of tech.


How does QUIC lead to "less ability to block them"?


I am not sure. But it seems like the nature of the encrypted connection and the stream usage (akin to HTTP/2) would make it harder for something like your home Pi-hole to block ads at the DNS level, or at any other lower level in your network, before stuff hits the browser.

I should have worded my comment better. It wasn't a claim, it was more like a "I think Google wants this protocol to succeed for their own ulterior motives (serve ads and impede blockers), but is that what is actually going to happen?".


I mean, not directly. The ad people didn't get a team together and say "make a protocol that will get our ads to the browser quicker"; it's more likely a team tasked with "make the internet faster by solving problems with TCP", and it just happens that (A) they're funded by ads and (B) this speedup also works in favor of ad delivery. The same goes for the other parts of the internet, like dynamic content [Chrome, JS] and OS speed [Android]: they're making the entire internet faster, which ends up including how fast they can push an ad.


You can't do much at the DNS level if everything comes from "google.com". Which seems to be where Google is going with this. One big multiplexed encrypted pipe between their browser and their servers, containing many streams. All pages are hosted by Google via AMP, all ads are hosted by Google, all tracking is performed through Google Tag Manager, and the power of add-ons to Google's browser is limited to prevent ad-blocking.


Yeah I'm not deploying this, pretty much ever.

I'll gladly eat the overhead of TCP to be able to avoid the reflection and spoofing issues of UDP.


The only reason QUIC is built on top of UDP is because ossification prevents it from being built on top of IP. It's essentially at the same level as TCP- and provides similar mechanisms to avoid UDP's issues.


That's my point: I don't want something at the same level as TCP, because then you have to solve the same problems that TCP already solves.

Whitelisting patterns are incredibly common. Breaking that breaks a lot of stuff.


Google is building TCP over UDP because they're unwilling to fix TCP on Android.

Apple has shown that TCP is not ossified. They have innovative features like Path MTU probing/blackhole detection [1] and multipath TCP [2].

[1] This actually isn't innovative, but Android hasn't enabled it, even though it was widely available since before Android existed; maybe enabling it is an innovation?

[2] Initially for Siri only, but now available to applications if you make the right incantations. I haven't been able to actually use it, and I'm pretty sure there's some hidden performance issues. For example, there's no reasonable way to make all the substreams of a MPTCP connection go to the same network queue, and you probably want that for performance, but that would be a problem with QUICs multipath features as well; I haven't seen if anybody made reasonable configuration for MPTCP yet, either.



