As an Opera user you were not very likely to be behind a middlebox. Middlebox interference with pipelined traffic was the reason pipelining was never safe to enable by default.
Why would being an Opera user make me less likely to be behind a middlebox? I installed Opera wherever I was accessing the Internet. I think the only place that had heavy restrictions was a community college which restricted the browser to IE, but I was able to run Opera from a thumb drive. Never had issues.
Has anybody ever given any even halfway compelling evidence of pipelining breaking websites?
Google didn't, Mozilla didn't.
I believe there was one small banking site running a ten-year-old IIS that didn't load - but maybe pipelining was the least of the problems there. Another site sent the wrong images to mobile Safari, or mobile Safari displayed them wrong, but that was fixed.
Pipelining may not have been the technically best solution, but it certainly would have taken the impetus away from SPDY. If Mozilla had shown the courage to default to pipelining maybe the industry as a whole could have had some input on HTTP/2 instead of just rubber-stamping SPDY.
This probably could have been solved for HTTPS, though: if you negotiate http/1.1 via ALPN, then pipelining could be OK? Otherwise, http/1.1 with keep-alive only should be used?
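A minimal sketch of that check in Go, assuming you'd only pipeline when ALPN explicitly yields http/1.1; the host and the decision policy here are just placeholders, not anyone's actual implementation:

    package main

    import (
        "crypto/tls"
        "fmt"
    )

    func main() {
        // Offer both protocols; see what the server picks via ALPN.
        conn, err := tls.Dial("tcp", "example.com:443", &tls.Config{
            NextProtos: []string{"h2", "http/1.1"},
        })
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        switch conn.ConnectionState().NegotiatedProtocol {
        case "http/1.1":
            fmt.Println("explicit http/1.1 via ALPN: pipelining could be OK here")
        case "h2":
            fmt.Println("h2 negotiated: multiplex instead")
        default:
            fmt.Println("no ALPN: fall back to keep-alive only")
        }
    }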
But now you're going to say that's a bug in one specific brain-dead server, and it should have been fixed. I'm sure there are other bugs tracking other servers' issues with pipelining, but that's the only one I remember seeing (mostly because of the amazingly terrible nature of the bug).
The point of the Bugzilla link is that enabling pipelining in Firefox would trigger bad behavior in servers. It's not some idle fear of unknown problems, it's fear informed by actual problems -- if you turn on the feature, mostly things will work fine, but some stuff is going to break, and when Firefox gets a broken page while IE (or Chrome, whatever) works, the user's interpretation is that it must be Firefox's fault. There's not necessarily a way to detect an error in this case, so it's hard to degrade gracefully. The exciting world of (mostly) transparent proxies for plaintext http makes this even worse.
There were hundreds of millions of Opera and mobile Safari users with pipelining enabled, and the only problems you can point to are a few small things -- in this case, something that doesn't even seem to be normally accessed from a browser.
I wrote a somewhat performance oriented web server some years ago, and wondered whether I should implement pipelining.
My conclusion was that pipelining is unusable in HTTP 1.x, and a potentially harmful behaviour for the server.
See, the problem is that there is no way in HTTP 1.x to map server responses to client requests. That is why the server has to send responses in the same order it received the requests. This is disastrous for the server, because it means each response may have to be buffered until it can be sent in its correct order.
If you send 100 requests to the server and the first one is slower to compute than the others, then you effectively need to buffer the other 99 responses until the first one finishes.
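A toy sketch in Go (not from any real server) of that forced in-order drain: the 99 fast responses sit buffered while everything waits on the slow head of the line:

    package main

    import (
        "fmt"
        "time"
    )

    // handle stands in for whatever work a request triggers; request 0
    // is artificially slow.
    func handle(id int) string {
        if id == 0 {
            time.Sleep(time.Second)
        }
        return fmt.Sprintf("response %d", id)
    }

    func main() {
        const n = 100
        slots := make([]chan string, n)
        for i := range slots {
            slots[i] = make(chan string, 1)
            go func(i int) { slots[i] <- handle(i) }(i)
        }
        // Responses 1..99 are ready almost immediately, but HTTP/1.x
        // gives us no way to label them, so they sit buffered while we
        // drain in strict request order.
        for i := 0; i < n; i++ {
            fmt.Println(<-slots[i])
        }
    }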
I'm not even sure the client would benefit much from pipelining versus concurrent requests, because the client would have to know how to order requests so that a GET for a big image, issued before the small ones, does not block the whole pipeline.
If there were a way in HTTP 1.x to identify which response belongs to which request, then the server would be able to send responses in whatever order it could, and pipelining would have been usable.
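For contrast, a hypothetical sketch of what such a response identifier would buy (essentially what HTTP/2 stream IDs later provided): responses go out in completion order, and the slow request no longer blocks the rest:

    package main

    import (
        "fmt"
        "time"
    )

    type tagged struct {
        id   int // the request this response answers
        body string
    }

    func handle(id int) string {
        if id == 0 {
            time.Sleep(time.Second) // slow, but nothing waits on it
        }
        return fmt.Sprintf("response %d", id)
    }

    func main() {
        const n = 5
        out := make(chan tagged)
        for i := 0; i < n; i++ {
            go func(i int) { out <- tagged{i, handle(i)} }(i)
        }
        for i := 0; i < n; i++ {
            t := <-out // arrives in completion order, not request order
            fmt.Printf("request %d -> %s\n", t.id, t.body)
        }
    }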
> I'm not even sure the client would benefit much from pipelining versus concurrent requests
Microsoft studied this when they evaluated SPDY, and they found that pipelining does benefit: it scored nearly identically to SPDY (aka HTTP/2). So you don't have to guess; it does.
> If you send 100 requests to the server, and the first one is slower
That's not how pipelining worked. You still have your 6 concurrent connections, and you spread the first 18 or so resources across them. If the first is slow, the other 82 are spread among the other 5 connections, and the page overall loads much faster, except for maybe a couple of resources loading later than they otherwise would.
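A toy simulation of that claim, with assumed numbers (6 connections, pipeline depth 3, 100 resources, one stalled head-of-line request):

    package main

    import "fmt"

    func anyDraining(q []int) bool {
        for c := 1; c < len(q); c++ {
            if q[c] > 0 {
                return true
            }
        }
        return false
    }

    func main() {
        const conns, depth, total = 6, 3, 100

        // Initial fill: the first conns*depth (= 18) requests go out,
        // three deep on each of the six pipelines.
        queued := make([]int, conns)
        sent := 0
        for c := 0; c < conns && sent < total; c++ {
            for queued[c] < depth && sent < total {
                queued[c]++
                sent++
            }
        }

        // Connection 0's head-of-line request stalls and never drains;
        // everything else keeps flowing through the other five.
        done := 0
        for sent < total || anyDraining(queued) {
            for c := 1; c < conns; c++ {
                if queued[c] > 0 {
                    queued[c]-- // one response comes back
                    done++
                }
                for queued[c] < depth && sent < total {
                    queued[c]++ // refill the pipeline
                    sent++
                }
            }
        }
        // Prints: completed 97 of 100, with only 3 stuck behind the stall.
        fmt.Printf("completed: %d of %d, stuck behind the stall: %d\n",
            done, total, queued[0])
    }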
> because the client would have to know how to order requests
It's funny, because Google found that a priority inversion caused Maps to display much slower with HTTP/2 than with HTTP/1.1, because a key resource was put behind other, less important data. Their proposed solution was to embed priorities into the JavaScript and let it correct the browser's guess, but I think they ended up just tweaking something specific to Maps.
But guess what, that would have worked for pipelining as well.
The problems with pipelining, like the head-of-line blocking you described, were largely overblown and easily fixable -- but Google didn't want to fix them, possibly because Chrome being extra fast only on google.com favored both their browser and their services. Firefox was losing to Chrome, and Mozilla couldn't even take a minuscule risk for fear of losing more ground.
HTTP pipelining isn't intended to provide concurrency (and doesn't, unlike, say, IMAP pipelining, which allows out-of-order responses). It helps with queuing, for when the client-to-server round-trip time is significant with respect to the overall response time (including server response generation time). This can reduce the amount of connection idle time, at the risk of running into broken implementations.
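A back-of-envelope illustration of that queuing point, with made-up numbers and ignoring transfer time:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        rtt := 50 * time.Millisecond  // client-to-server round trip
        serve := 5 * time.Millisecond // server response generation
        n := 10                       // requests on one connection

        // Without pipelining: wait a full round trip per request.
        sequential := time.Duration(n) * (rtt + serve) // 550ms

        // With pipelining: one round trip, then responses stream
        // back-to-back while the pipe stays full.
        pipelined := rtt + time.Duration(n)*serve // 100ms

        fmt.Println("sequential:", sequential)
        fmt.Println("pipelined: ", pipelined)
    }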
Multiplexing (either with multiple tcp connections, http/2's tcp-inside-tcp construction, or quic's multiple-tcp-inside-udp construction) addresses concurrency, and can address idle channels, depending on the number of channels available.
For an HTTP server, it doesn't make a lot of sense to process requests in a pipelined fashion, unless you're implementing a proxy and the origin servers are pretty far away from the intermediary. That way, you can keep requests flowing to the origin.