35% of users worldwide, weighted by population, are IPv6-capable in APNIC's random-sample measurements:

https://stats.labs.apnic.net/ipv6/XA

The US is on 50%, India on 70%, and China just shy of 40%. As China continues to grow (and it will), the likely outcome is >50% IPv6-capable worldwide. I doubt any new mobile deployment will be single-stack IPv4; the most likely design is pure IPv6 with CGN for v4. And Africa, which is still in its growth phase, will most likely end up dual-stack preferring v6.

It may not be ideal, and there may be significant issues with extension headers (EH), for instance, but at scale it's alive and kicking.

Now, if only we could get jumbograms more widely deployed. That's older than v6 is, and still struggling to break the 1500-byte MTU limit.




There will be no larger MTU; this battle is lost. 1500 will live forever. If it makes you feel better, think of Ethernet packets as just oversized ATM cells.

Solving the PMTU problem would have required reifying the MTU at the IP layer. It could have been done like this:

1. Use a 16-bit field in the non-checksummed portion of the IP header. Initially this field is set to the MTU of the link that originates the packet.

2. Each router inspects this field, and sets it to the MTU of the next hop link, if it is lower than the MTU already in the packet. This will be cheap, as the field is not checksummed.

3. If the packet is bigger than the MTU of the next hop, just truncate it, and set a special bit somewhere in the packet header to indicate it. No need to recalculate the checksum either, the packet is going to be corrupt anyway.

4. The destination host gets the discovered MTU of the forward path, and sends it back to the originating host in the header of the next packet (in a checksummed part).

That's it. Easy, continuous MTU discovery, with robust handling of failures, that doesn't require any smarts from the routers (a comparator to update the MTU can be done in a few logical gates!).
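
Here's a toy sketch of how that would play out, with made-up hop MTUs and an invented packet structure (nothing here is real IP header layout; "mtu_seen" just stands in for the hypothetical 16-bit field):

    # Toy simulation of the proposed in-band MTU discovery scheme.
    # "mtu_seen" stands in for the hypothetical 16-bit non-checksummed
    # header field; the hop MTUs below are made up.
    from dataclasses import dataclass

    @dataclass
    class Packet:
        payload_len: int   # bytes actually carried
        mtu_seen: int      # step 1: sender stamps its own link MTU here
        truncated: bool = False

    def forward(pkt: Packet, next_hop_mtu: int) -> Packet:
        # Step 2: clamp the advertised MTU to the next hop's MTU.
        if next_hop_mtu < pkt.mtu_seen:
            pkt.mtu_seen = next_hop_mtu
        # Step 3: if the packet doesn't fit, truncate it and flag it.
        if pkt.payload_len > next_hop_mtu:
            pkt.payload_len = next_hop_mtu
            pkt.truncated = True
        return pkt

    # Sender on a 9000-byte link; the path crosses 4312- and 1492-byte hops.
    pkt = Packet(payload_len=9000, mtu_seen=9000)
    for hop_mtu in (9000, 4312, 1492):
        pkt = forward(pkt, hop_mtu)

    # Step 4: the receiver now knows the forward-path MTU (1492 here)
    # and echoes it back to the sender in its next packet.
    print(pkt)  # Packet(payload_len=1492, mtu_seen=1492, truncated=True)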

Alas, nobody had the presence of mind to think about this back then.


It should be noted that pockets of jumbograms-in-the-wild exist. It's just that clamping to 1500 is so normalised that few people can exploit them.

For example, the NBN in Australia uses a 2000+ byte layer-2 frame, and nothing like 500 bytes of that is consumed by the carrier encapsulation. They COULD have gone higher than 1500, and I would be surprised if there aren't customers using 1300 or less because of the ADSL configuration they brought over when they uplifted from a real modem.

A lot of people do 9000 in their filestore network. They know it works on the local segment. Reducing the forwarding burden in header-TCAM-routing by a factor of 5 is a significant win, if you have a lot of packets.

People continue to discuss mechanistic approaches to finding your MTU in the IETF, but I think you're right: it's 1500 or less pretty much forever, unless somebody makes a move here for product-differentiation reasons. Given the embedding of content inside the ISP or at the IX, I suspect it COULD happen: if e.g. Netflix said it made for a better overall experience, the ISPs would do it.


Getting MTU=9000 to work is tricky even in a home network. I know, I spent several days setting it up.

And even then, I got slapped by WiFi. Its PHY MTU is limited to 2304 bytes, and that's a hard limit.

Even for plain old wired Ethernet, I had to experiment a bit, because the first multigig USB-C adapter I tried didn't support jumbo frames.
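
For what it's worth, on Linux you can ask the kernel what it currently thinks the path MTU to a peer is, which is a handy sanity check for a jumbo setup. A rough sketch (Linux-only socket options, with numeric fallbacks in case this Python build doesn't expose them; the address is just a placeholder):

    import socket

    # Linux values for these options, used if socket doesn't export them.
    IP_MTU_DISCOVER = getattr(socket, "IP_MTU_DISCOVER", 10)
    IP_PMTUDISC_DO = getattr(socket, "IP_PMTUDISC_DO", 2)
    IP_MTU = getattr(socket, "IP_MTU", 14)

    def path_mtu(host: str, port: int = 443) -> int:
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        try:
            # Ask the kernel to set DF and track the path MTU for this flow.
            s.setsockopt(socket.IPPROTO_IP, IP_MTU_DISCOVER, IP_PMTUDISC_DO)
            s.connect((host, port))
            # IP_MTU reads back the kernel's current idea of the path MTU.
            return s.getsockopt(socket.IPPROTO_IP, IP_MTU)
        finally:
            s.close()

    print(path_mtu("192.0.2.1"))  # e.g. 1500, or 9000 on a jumbo segment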


Plenty of networks that use PPPoE over a base network technology to connect to a "virtual" ISP implement at least mini jumbo frames. If you can set the MTU of the hardware to 1508, then the PPPoE connection runs at a full 1500 (PPPoE costs 8 bytes: a 6-byte PPPoE header plus the 2-byte PPP protocol ID).


If you don't do step 2, you can still have step 3: if the packet is too big, just truncate it.

If the destination gets a packet that isn't as big as it says in the header, it was truncated, and the MTU is the size of the packet it received.

Even easier!

You still really should occasionally probe up, in case the path changed, but that's not very well done today either.
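
A toy version of that receiver-side inference, with a stand-in for IP's total-length field (not real wire format):

    # Under the "just truncate" variant, the receiver compares the length
    # the header claims with the bytes that actually arrived.
    def inferred_path_mtu(claimed_length: int, received: bytes) -> int:
        # Fewer bytes than claimed means the packet was truncated en route,
        # so the path MTU is the size that made it through.
        return min(claimed_length, len(received))

    print(inferred_path_mtu(9000, b"x" * 1492))  # -> 1492 (truncated)
    print(inferred_path_mtu(1400, b"x" * 1400))  # -> 1400 (intact)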


Yeah. You still need a way to reflect the discovered MTU value to the sender somehow, but this can be taken care of by higher-level protocols.


The approach obviously doesn't work at all for unidirectional protocols. It also doesn't work when there are multiple equal-cost paths with different MTUs, which used to be more common and now tends to happen only when link technologies change along the path. The original sender needs not just the discovered MTU, but also the original header information that may have been used for flow hashing.


> Now, if only we could get jumbograms more widely deployed. That's older than v6 is, and still struggling to break the 1500-byte MTU limit.

Really, we barely hit 1500. Look at the MSS for popular websites: most drop below 1500, because 1500 has problems in enough places. Does HTTP/3 ever even send 1500-byte packets?
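
If you want to eyeball that yourself, reading TCP_MAXSEG after connect() shows the MSS the kernel settled on for a connection (Linux; the hostnames are just examples):

    # Print the effective MSS negotiated for a TCP connection.
    import socket

    def negotiated_mss(host: str, port: int = 443) -> int:
        with socket.create_connection((host, port), timeout=5) as s:
            return s.getsockopt(socket.IPPROTO_TCP, socket.TCP_MAXSEG)

    for host in ("example.com", "example.net"):
        print(host, negotiated_mss(host))  # often 1460, frequently less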

One major problem is that most servers (Linux in all versions, I think; FreeBSD before about 2000 and after something like 2019) always send the interface MSS in the SYN+ACK. You get meaningfully better results by sending the lesser of the interface MSS and the received MSS; there are enough broken systems out there that don't communicate the real MTU to end systems [1], don't send enough ICMP needs-frag messages, and try to cover it up by manipulating the MSS in SYNs but not in SYN+ACKs. Windows and iOS (and presumably macOS) do a pretty good job of detecting PMTUD blackholes, but it's often disabled on servers, and I can't remember where Android is these days; I know it used to ship with the option compiled in but disabled, with no way to enable it.

Of course, packet sizing is actually a hard problem. Larger packets are good for faster links but bad for slower links.

[1] Which is hard, because I don't know how you get Windows to use the MTU option from DHCP; it doesn't request it, so it won't use it. This is a problem too.


> Does HTTP/3 ever even send 1500-byte packets?

IIRC QUIC has a hard MTU cap at 1280 bytes.


Nope. QUIC requires a _minimum_ MTU of 1280: https://datatracker.ietf.org/doc/html/rfc9000#name-datagram-...

The maximum permitted size is 65527 (max_udp_payload_size).


It doesn't matter if even 98% of "the internet" is "on" IPv6. If public websites don't advertise an IPv6 address, every user is still going to use IPv4 to connect to them. All the cloud providers still prioritize IPv4, and usually don't support IPv6 at all until a few years after a new service comes out.


Entire countries still do not have IPv6. Ukraine, for example.


There is some IPv6 adoption in Ukraine. https://stats.labs.apnic.net/ipv6/UA


Funny enough, when I measured IPv6 readiness/use in the world for the very first time (2008), Ukraine was on top. IIRC it moved to third as we got more data, but it still was a very surprising result.

We found out much later that the reason was that Opera had broken RFC3484 handling and prioritized 6to4 way too high (which skewed the results), and Opera was popular in Ukraine at the time. :-)


NAT64 handles the "public v4-only website" use case very well. I run my desktop without v4 today and it works fine.

A few websites without v6 aren't a blocker to either deploying v6 or undeploying v4.
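
For anyone curious what that looks like mechanically: with the well-known prefix (64:ff9b::/96, RFC 6052), DNS64/NAT64 just embeds the v4 address in the low 32 bits of a v6 address. A toy illustration (the v4 address is a documentation example, not a real service):

    # Synthesize the NAT64 address for a v4-only destination using the
    # well-known prefix 64:ff9b::/96.
    import ipaddress

    WKP = ipaddress.IPv6Network("64:ff9b::/96")

    def nat64_synthesize(v4: str) -> ipaddress.IPv6Address:
        v4_bits = int(ipaddress.IPv4Address(v4))
        return ipaddress.IPv6Address(int(WKP.network_address) | v4_bits)

    print(nat64_synthesize("203.0.113.80"))  # 64:ff9b::cb00:7150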



