Yes, there is such a thing as an HTTP connection, so now you've learned something valuable. Each connection carries one or more HTTP requests. In HTTP/1.1, in practice, you must complete an entire request and response before beginning another.
(Edited to add, since you did) There are two things you might mean by QUIC. Google's test protocol QUIC (these days often called 'gQUIC') was developed and deployed in-house and has all sorts of weird Google-isms; it's from the same period as their SPDY, which inspired HTTP/2. gQUIC is no longer under further development. They handed the specification over to the IETF years ago, and the IETF's QUIC Working Group is working on an IETF standard QUIC which will replace TCP for applications that want a robust, high-performance encrypted protocol.
HTTP over QUIC will be named HTTP/3 and will offer most of the same benefits as HTTP/2 (which is HTTP over TLS, thus over TCP/IP), but with improved privacy and optimised performance in some scenarios where head-of-line blocking in TCP was previously a problem - probably some time in 2020 or 2021. The HTTPbis working group ('bis' is a French suffix that serves much the same purpose in addresses as the letter 'a' does in English, e.g. instead of 45a you might live at 45bis) is simultaneously fixing things in HTTP/2 and preparing for HTTP/3.
"... where head-of-line blocking in TCP was previously a problem..."
Has anyone ever shared a demo where we can see this occurring with pipelined HTTP/1.1 requests?
I have been using HTTP/1.1 pipelining -- using TCP/TLS clients like netcat or openssl -- for many years, and I have always been very satisfied with the results.
Similar to the demo in the blog post, I am requesting a series of pages of text (/articles/1, /articles/2, etc.)
I just want the text of the article to read, no images or ads. Before I send the requests I put the URLs in the order in which I want to read them. With pipelining, upon receipt I get automatic concatenation of the pages into one document. Rarely, I might want to split this into separate documents using csplit.
HTTP/1.1 pipelining gives me the pages in the order I request them and opens only one TCP connection. It's simple and reliable.
If I requested the pages in separate connections, in parallel, then I would have to sort them out as/after I receive them. One connection might succeed, another might fail. It just becomes more work.
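The workflow described above can be sketched without netcat or openssl. Here is a minimal, self-contained Python approximation: it spins up a throwaway local server (standing in for the real site; the /articles/1 and /articles/2 paths are from the comment), sends both requests up front on one TCP connection, and reads the responses back concatenated in request order. This is a sketch of the technique, not the commenter's actual tooling.

```python
import socket
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class ArticleHandler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # keep-alive is required for pipelining

    def do_GET(self):
        # Stand-in for a real article page.
        body = f"article {self.path}\n".encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), ArticleHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Pipeline: send every request up front, on one connection, in reading order.
requests = (
    "GET /articles/1 HTTP/1.1\r\nHost: localhost\r\n\r\n"
    "GET /articles/2 HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n"
).encode()

with socket.create_connection(server.server_address) as sock:
    sock.sendall(requests)
    reply = b""
    while chunk := sock.recv(4096):
        reply += chunk

server.shutdown()
reply = reply.decode()
# The two responses come back concatenated, in the order requested --
# the "automatic concatenation into one document" the comment describes.
```

Note that many real-world servers and middleboxes handle pipelining poorly, which is part of why browsers disabled it; a text-mode workflow against a cooperative server sidesteps that.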
TCP blocking is seen the most on mobile data connections. The LTE link could be delivering whole megabytes of correct data, but if that TCP connection is missing just one packet, the receiver must buffer all the rest until it can get a SACK / ACK back to the server and receive the missing packet.
If there's congestion, every packet has a random chance of being dropped, because that's how you signal congestion reliably. If there's neither congestion nor a wireless link on the route between you and the server, then neither this nor most other performance considerations matter to you - that's nice, you're already getting the best possible outcome.
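Demonstrating the packet-level blocking above requires inducing real packet loss (e.g. with tc netem), but the same head-of-line effect can be approximated at the application layer: with pipelined HTTP/1.1, a slow first response delays every later one on the same connection, even if they are ready instantly. A minimal sketch against a local server (the /slow and /fast paths and the 0.5 s delay are made up for illustration):

```python
import socket
import threading
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"

    def do_GET(self):
        if self.path == "/slow":   # hypothetical path that stalls
            time.sleep(0.5)
        body = f"{self.path}\n".encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

pipelined = (
    "GET /slow HTTP/1.1\r\nHost: localhost\r\n\r\n"
    "GET /fast HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n"
).encode()

start = time.monotonic()
with socket.create_connection(server.server_address) as sock:
    sock.sendall(pipelined)
    data = b""
    while chunk := sock.recv(4096):
        data += chunk
elapsed = time.monotonic() - start

server.shutdown()
# /fast is ready immediately, but it cannot arrive before /slow finishes,
# so the whole exchange takes at least the 0.5 s stall.
```

QUIC's per-stream delivery removes exactly this coupling: a stall on one stream (whether a slow handler or a lost packet) doesn't hold back data already received for the others.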