Yes, it's about changing the slow-start initial congestion window from 2 to 10 segments. And the author accurately points out that this is a huge win for your basic web page.
But let's think about that from the other side for a moment. Most systems will end up with a segment size of 1452 bytes because their router has an MTU of 1500 (minus header overhead). With an initial window of 2 segments, that is roughly 3000 bytes blasted out on the wire while waiting for an ack (only 2904 of those are data bytes, but with headers they fill two 1500-byte packets); with an initial window of 10, that is roughly 15000 bytes blasted out initially.
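To make that arithmetic concrete, here's a back-of-envelope sketch in Python (the 1452-byte segment size and 48 bytes of per-segment overhead are assumptions for a PPPoE-style path with a 1500-byte MTU):

    # Bytes on the wire in the initial burst, before the first ACK arrives.
    MSS = 1452      # assumed: 1500-byte MTU minus PPPoE + TCP/IP headers
    OVERHEAD = 48   # assumed per-segment header/framing overhead

    def initial_burst_bytes(iw_segments):
        return iw_segments * (MSS + OVERHEAD)

    print(initial_burst_bytes(2))   # ~3000 bytes with IW=2
    print(initial_burst_bytes(10))  # ~15000 bytes with IW=10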
Let's say everyone on Comcast's network gets this change. They have an estimated 15-20 million subscribers; we'll be conservative and call it 15 million, and at any given time maybe half of them are doing an HTTP request (think about all the things that make HTTP requests in your house for a moment). So at any given instant you've gone from blasting out 3000 * 7.5M, or 22.5 GBytes (180 gigabits), to 112.5 GB, or roughly 900 gigabits, of data. Not surprisingly, a lot of their traffic goes to some peering network, and now their hammer is hitting with near-terabit whacks instead of sub-200-gigabit whacks. That is a noticeable change, and it's an even bigger change when you consider they are running VoIP traffic as well.
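Scaling the per-flow burst up, with the same guessed subscriber count and active fraction:

    # Aggregate burst if every active flow fired its initial window at once.
    SUBSCRIBERS = 15_000_000   # assumed: low end of the 15-20M estimate
    ACTIVE = 0.5               # assumed fraction with an HTTP request in flight

    flows = SUBSCRIBERS * ACTIVE
    for label, burst in (("IW=2", 3_000), ("IW=10", 15_000)):
        total = flows * burst
        print(f"{label}: {total / 1e9:.1f} GB = {total * 8 / 1e9:.0f} Gbit")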
Some useful work to be done here is to look at where the congestion window settles out on your network, and for which servers and which transit networks. How often does it get to 10? To 20? The worst case would be a path that settles at 16: with an IW of 10 the very next window is 20, so you overshoot on the first doubling and then get clamped.
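If you want to look at this on your own hosts, a rough sketch (assuming a Linux box with iproute2's ss available) is to sample established connections and histogram their current cwnd values:

    import re
    import subprocess
    from collections import Counter

    # Parse `ss -ti` output, which reports cwnd (in segments) per connection.
    out = subprocess.run(["ss", "-ti"], capture_output=True, text=True).stdout
    cwnds = Counter(int(m) for m in re.findall(r"cwnd:(\d+)", out))
    for cwnd, count in sorted(cwnds.items()):
        print(f"cwnd={cwnd}: {count} connections")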
There has been a lot of great work done on congestion control, and yes, it's really annoying, like metering lights[1], when it's not needed. But when it is needed it makes the system work.
Congestion is a function of cross-section bandwidth and traffic demand. The cross-section bandwidth of the Internet has gone up a lot; the number of 'ports' on the Internet has gone up even more. The ratio has not improved much (and in some cases has gotten worse) since the days of dial-up.
I fully recognize that a number of people have done the same test (or thought experiment) the author has done with slow start and seen a green field for improvement. I was reasonably active in the IETF before the congestion-control work was implemented, with a protocol that could be very latency-sensitive (NFS), and it was a hellish environment. My disagreement is that changing slow start in this way will destroy a number of interesting streaming services by pushing the standard deviation of latency outside what they can tolerate. And for what? So that a 34K web page loads 30% faster? I'd much rather compress the web page, or build a web service protocol that knows about slow start and accommodates it, than do this.
As others have pointed out, this change is going to happen regardless, and perhaps I'll have the opportunity for an 'I told you so', or perhaps I'll be relegated to the dustbin of ranting network dudes from the last century. But I stand by my TL;DR that the author did not demonstrate an appreciation for the impact changing slow start would have on the network in general, because they focused only on how it would make their own life faster.
[1] Here in California we have congestion-control 'metering' lights on some on-ramps to the freeway; it's annoying as hell when they are on and there isn't anyone on the freeway.
Since changing the congestion window doesn't increase the total number of packets in an HTTP request, your Comcast example would only cause problems if there were massive synchronization of the starts of the HTTP requests.
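A toy simulation makes the point (the flow count, arrival window, and burst duration here are all arbitrary assumptions):

    import random

    # N flows start at random times in a 1-second window and each bursts
    # for ~10 ms. Peak overlap stays near N * (BURST / WINDOW) unless the
    # starts are artificially synchronized.
    N = 100_000
    WINDOW = 1.0    # seconds over which requests begin (assumed)
    BURST = 0.010   # seconds spent emitting the initial window (assumed)

    starts = sorted(random.uniform(0, WINDOW) for _ in range(N))
    peak = j = 0
    for i, s in enumerate(starts):
        while starts[j] + BURST < s:
            j += 1
        peak = max(peak, i - j + 1)
    print(f"peak concurrent bursts: {peak} of {N}")  # ~1000, not 100000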