That's an unfair assessment. HTTP/2 fundamentally changes how requests are handled. With HTTP/1.1 there is a de facto connection pool inside the browser, and this throttling has been a feature of front-end development for 15+ years (since Ajax became a thing), so it wasn't something on anybody's mind. HTTP/2 suddenly removes these constraints, and for Lucidchart that led to a number of unintended consequences. This is an important consideration, because the mantra has been that you can simply turn on HTTP/2 and everything will keep working as before.
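To make that "de facto connection pool" concrete, here is a rough sketch of the throttle the browser has been applying for you. The ~6-connections-per-origin cap is the common HTTP/1.1 default; the URLs and counts are made up for illustration:

```go
package main

import (
	"fmt"
	"net/http"
	"sync"
)

// Under HTTP/1.1, browsers open at most ~6 connections per origin, so even if
// front-end code fires 100 requests at once, the server sees roughly 6 in
// flight from that client. This semaphore emulates that implicit throttle.
func fetchAll(urls []string, maxInFlight int) {
	sem := make(chan struct{}, maxInFlight) // the "de facto connection pool"
	var wg sync.WaitGroup
	for _, u := range urls {
		wg.Add(1)
		go func(u string) {
			defer wg.Done()
			sem <- struct{}{}        // wait for a free "connection"
			defer func() { <-sem }() // release it
			resp, err := http.Get(u)
			if err != nil {
				fmt.Println("error:", err)
				return
			}
			resp.Body.Close()
		}(u)
	}
	wg.Wait()
}

func main() {
	urls := make([]string, 100)
	for i := range urls {
		urls[i] = fmt.Sprintf("https://example.com/asset/%d", i) // hypothetical assets
	}
	fetchAll(urls, 6)   // roughly what HTTP/1.1 imposed per client
	// fetchAll(urls, 100) // roughly what HTTP/2 multiplexing allows
}
```

Over HTTP/2 the same front-end code multiplexes everything onto one connection, so the server sees the full burst from each client at once.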
> With HTTP/1.1 there is a de facto connection pool inside the browser, and this throttling has been a feature of front-end development for 15+ years
This is only true when you look at a single client. If you look at a large number of clients accessing the service at the same time, you would expect a similar number of concurrent requests under HTTP/2 as under HTTP/1.1. Each client sends more requests at once, but it finishes sending them sooner, so at any given moment requests from fewer clients are being processed concurrently. It should average out.
If you have, say, 1,000 clients accessing your service in one minute, I doubt the requests per second would differ much between the two protocol versions. It would only be an issue if the service was built with a small number of concurrent users in mind.
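A back-of-the-envelope version of that argument, with made-up numbers (1,000 clients, 30 requests each, spread over a minute), just to show what stays the same and what changes:

```go
package main

import "fmt"

func main() {
	clients := 1000.0
	requestsPerClient := 30.0 // hypothetical: ~30 assets per page load
	windowSeconds := 60.0

	// The aggregate request rate is protocol-independent:
	totalRPS := clients * requestsPerClient / windowSeconds
	fmt.Printf("aggregate load: %.0f req/s under either protocol\n", totalRPS) // 500 req/s

	// What changes is the per-client burst:
	fmt.Println("HTTP/1.1: ~6 requests in flight per client (browser connection cap)")
	fmt.Printf("HTTP/2:   up to %d requests in flight per client\n", int(requestsPerClient))
}
```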
You may be forgetting that load balancers have been working on a per-request basis, and that no two requests cost the same (despite what load balancer vendors would have you believe).
Under HTTP/1.1, requests may have been hitting the LB and then getting scattered across a dozen machines. Each of those machines was in a position to respond on its own timescale. Some requests would come back quickly, others slowly, but all of them were still being actively handled.
Under HTTP/2 with multiplexing, if the LB isn't set up to handle it (and they often aren't), those requests can hit the LB and _all_ end up on a single machine, which then tries to process all of them at once while some of them demand significant processor resources, dragging down the response rate for every request simultaneously.
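A minimal sketch of that failure mode, assuming a load balancer that picks a backend per connection rather than per request; the backend addresses and port are hypothetical. With HTTP/2, all of one client's multiplexed streams arrive over a single connection, so connection-based routing sends the whole burst to one backend:

```go
package main

import (
	"hash/fnv"
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync/atomic"
)

var backends = []*url.URL{ // hypothetical backend pool
	mustParse("http://10.0.0.1:8080"),
	mustParse("http://10.0.0.2:8080"),
	mustParse("http://10.0.0.3:8080"),
}

func mustParse(raw string) *url.URL {
	u, err := url.Parse(raw)
	if err != nil {
		panic(err)
	}
	return u
}

// Connection-style routing: every request arriving over the same client
// connection (approximated here by RemoteAddr) lands on the same backend.
// Under HTTP/2, all of a client's multiplexed streams share one connection,
// so one backend absorbs the whole burst.
func pickByConnection(r *http.Request) *url.URL {
	h := fnv.New32a()
	h.Write([]byte(r.RemoteAddr))
	return backends[int(h.Sum32())%len(backends)]
}

// Request-style routing: each request is dealt out independently, which is
// roughly what HTTP/1.1 plus a per-request LB gave you.
var next uint64

func pickByRequest(r *http.Request) *url.URL {
	return backends[int(atomic.AddUint64(&next, 1))%len(backends)]
}

func main() {
	proxy := &httputil.ReverseProxy{
		Director: func(req *http.Request) {
			target := pickByConnection(req) // swap in pickByRequest to spread the load
			req.URL.Scheme = target.Scheme
			req.URL.Host = target.Host
		},
	}
	http.ListenAndServe(":8443", proxy)
}
```

A proxy that demultiplexes HTTP/2 and balances per stream avoids this, but that usually has to be configured explicitly; a plain TCP-level pass-through can't split the streams.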
But it didn't, unless you're saying that Lucidchart made an incorrect analysis. Is that your argument?
> Each client sends more requests at once, but it finishes sending them sooner, so at any given moment requests from fewer clients are being processed concurrently. It should average out.
Again, it didn't average out. And you assume it 'will average out' at your peril. Maybe it will, maybe it won't. Lucidchart's engineers thought that too, and it turned out to be wrong in a way that wasn't foreseen.
> It would only be an issue if the service was built with a small number of concurrent users in mind.
I doubt Lucidchart 'was built with a small number of concurrent users in mind'.
It literally says “we are aware that our application has underlying problems with large numbers of concurrent requests”. How much clearer than that do you want it?
If I'm reading this article correctly, they're claiming their application couldn't handle the load of a single user loading their web page. They didn't mention load spikes at particular times, so it certainly sounds like they just have an inadequate backend.
> all existing applications can be delivered without modification....
> The only observable differences will be improved performance and availability of new capabilities...
Lucidchart may have an inadequate backend, but it wasn't a problem until they moved to HTTP/2, so those statements weren't true for them. That is worth bearing in mind for anyone else rolling out HTTP/2.