The world in which IPv6 was a good design (2017) (apenwarr.ca)
179 points by jnord on Aug 14, 2023 | 306 comments



I have several times cited it as a key article, more insightful than almost anything else I've ever read about IPv6, but I concede it is overlong and unclear and needs more illustrations. (Which, as a technical writer myself, I generally regard as a crutch.)

I think the core argument can be summarised as this:

1. IPv6 is flawed because it has 2 main layers, but it needed 3.

2. It understands physical addresses, and it has its own logical addresses.

3. But it really needed another layer in between them: a virtual translation/mapping layer.

4. Crucially, it lacks this. As such it is not better enough than IPv4 to ever totally replace IPv4.

5. But apenwarr proposes that it is, in essence, possible to fake this using QUIC.

https://en.wikipedia.org/wiki/QUIC <- note the diagram in that article.

6. Using this, it's possible to fake what IPv6 should have had but didn't, and get some of the benefits at the cost of more work.

7. However IPv6 remains doomed to never totally replace IPv4.

Boiling that down to 2 key points:

Point A: IPv6 is broken because it didn't go far enough; its mapping model is fundamentally inadequate.

Point B: It's OK, here's how we can work around that, but it doesn't fix the problem & never will.

And generalising from that argument, my take is this:

Butler Lampson said: "All problems in computer science can be solved by another level of indirection."

David Wheeler pointed out: "… except for the problem of too many layers of indirection."

The corollary of this is: it is imperative to closely track how many layers of indirection you have.

Too many is bad, but arguably, not enough is worse.

With too many, you get inefficiency, but you can nonetheless get the job done. But with too few, you might not usefully be able to do the job at all.

IPv6 did not have enough, and so failed to achieve its primary goal.


I read the article, also found it a bit confusing and wayward, and couldn't quite articulate why I disagreed with it at first, but I think I can now.

- The "mobile IP" problem isn't IP's job and should not be. IP should be as stateless as possible because this makes it cheap and easy to add capacity and redundancy to a network.

- IP addresses are assigned to interfaces, not people, computers, devices, applications, nodes, etc. If you want a fixed reference identifier associated to something across the wire, it is correct to implement that in addition to the state it requires on top of IP. If your application assigns or assumes identity based on IP address, that's a badly written application at this point.

> Y has no idea what that means, and throws it away.

X should have sent Y a session ID? The client and server should maintain session IDs and not the IP layer (imagine the security issues)?

- I think the author mostly has a problem with TCP. QUIC may become the next TCP. And that's fine.


> The "mobile IP" problem isn't IP's job and should not be. IP should be as stateless as possible because this makes it cheap and easy to add capacity and redundancy to a network.

But, to be clear, many implementations of IP (both IPv4 and IPv6) do already have "mobile IPs." There's nothing stopping you from having "mobile IPs." They're just complex, and they only work when all the places the IP can move between exist within a single AS.

• You can move between multiple wireless APs in a conference hall, and have a layer-3 address that follows you as you switch network segments and thus acquire new layer-2 addresses (which means that the packets destined to that address are being dynamically re-routed at some upstream switch, as the address assignment changes.)

• You can move between cellular antenna and frontend infra (think: WiMax or 5G APs) at different places in a city, while remaining connected to a single cellular backend infra (what you'd think of as a "cell tower") and thus holding a single persistent L3 address.


> You can move between multiple wireless APs in a conference hall, and have a layer-3 address that follows you as you switch network segments and thus acquire new layer-2 addresses (which means that the packets destined to that address are being dynamically re-routed at some upstream switch, as the address assignment changes.)

Is it possible to implement this at home, without paying a lot of money? I've tried it with UniFi AP (with software controller) and no luck :-(


For a house, you probably just want wireless range repeaters/extenders, or mesh APs like these (https://www.wired.com/story/best-mesh-wifi-routers/). Their key advantage being that the backhaul is wireless — you don't have to wire them back to the switch.

If you really want to do an "enterprise" wireless setup, and you want it to be cheap, well... you can buy the relevant equipment (802.11 enterprise wireless APs) used, often in bulk. Sometimes computer recyclers even have them!

Make sure you buy the stuff intended for office buildings, though, not the conference-hall open-plenum equipment. The conference-hall stuff is like studio lighting: powerful at a distance (five-storey hall ceiling down to you on the floor) at the expense of guzzling power and dumping tons of heat.

Also, obviously, unlike the home stuff, with the enterprise hardware, you do need to be able to run an Ethernet backhaul back to a switch somewhere, to join all these APs' L1 collision-domains into a common link-layer network segment. And that switch has to understand what it's doing, so you'd probably need something enterprise-y there, too, unless the state of open-source consumer router firmware has really caught up with enterprise.


Extenders work very badly: they use half the bandwidth for themselves and need to be placed twice as densely as APs. I have no problem wiring normal APs to a switch, as I have several RJ45 sockets in each corner of each room :-)

I already have one AP per room, but cannot implement seamless transitions: when a phone/laptop switches to another AP it runs DHCP again and drops all its connections. It is very inconvenient.

I know the answer is "buy Cisco APs for $700 apiece and a controller for several thousand dollars", but I wonder why there is no open-source solution (hostapd & co). What "secret sauce" do these enterprise solutions include? Is it proprietary technology, or some open industry standard like 802.11[some obscure letter]?

Many expensive APs are built with Linux and hostapd inside, and still don't support this feature.


You can actually move anywhere with Mobile IPv6. If you move outside your AS, it automatically tunnels your traffic back to your home network with IPsec. There's also a feature called "Correspondent Node" which allows you to send traffic directly rather than tunnelling via your home network.

Which kind of makes it sound like IPv6 already has the feature the article thinks it should have had.


> The "mobile IP" problem isn't IP's job and should not be.

Not an expert here, but my understanding of the argument is:

For constructing virtual networks, which may be distributed across the real Internet, the new protocol needed a virtual translation layer. It doesn't have one. Instead it just had a vastly larger address space, which doesn't help with this.


The single biggest mistake IPv6 made was not encompassing the entire IPv4 space within it to ease transition.

As it is, I am seeing a transition to either non-IP or LISP/HIP based methodologies due to security and attestation concerns.


But it does, that's what ::ffff:0:0/96 is for


I can't `ping ::ffff:192.168.0.1` and have it ping my router. There is a range reserved for representing IPv4 addresses, but the stack doesn't translate.


You can if you have NAT64:

    $ ping 64:ff9b::1.1.1.1
    PING 64:ff9b::1.1.1.1(one.one.one.one (64:ff9b::101:101)) 56 data bytes
    64 bytes from one.one.one.one (64:ff9b::101:101): icmp_seq=1 ttl=54 time=10.4 ms
    64 bytes from one.one.one.one (64:ff9b::101:101): icmp_seq=2 ttl=54 time=10.0 ms


The NAT64 prefix (64:ff9b::/96) is not the one GP cited (::ffff:0:0/96).

Also I don't have NAT64... The fact that ISPs don't provide NAT64 by default is kind of my point.
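For reference, the embedding behind that NAT64 transcript is mechanical (RFC 6052 style): the IPv4 address occupies the low 32 bits of a /96 prefix. A minimal sketch in Python, assuming the well-known default prefix (a given network may use a different one):

```python
import ipaddress

def nat64_map(v4: str, prefix: str = "64:ff9b::/96") -> ipaddress.IPv6Address:
    """Embed an IPv4 address in the low 32 bits of a /96 NAT64 prefix."""
    net = ipaddress.IPv6Network(prefix)
    # Index into the /96: offset == the IPv4 address as a 32-bit integer.
    return net[int(ipaddress.IPv4Address(v4))]

print(nat64_map("1.1.1.1"))  # 64:ff9b::101:101
print(nat64_map("8.8.8.8"))  # 64:ff9b::808:808
```

The arithmetic only gives you the address; an actual NAT64 gateway on the path still has to do the stateful translation.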


And then we are back to NAT...


Yes. What were you expecting? There's no way for a v4-only device to reply to a packet from a v6 source address otherwise. The source address has to be mapped to an address the v4-only device understands, and then mapped back again for the reply packets.

How else could this work?


It does translate, but it doesn't work for ping because ping bypasses most of the stack by sending raw packets. Try something like `telnet ::ffff:192.168.0.1 80`.


That does work. Interesting, the OS translates this at the socket level.
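The mapped form can also be inspected offline; Python's ipaddress module, as an illustration of what the socket layer is doing, recognizes ::ffff:a.b.c.d as an IPv4-mapped address and recovers the embedded v4 address:

```python
import ipaddress

# ::ffff:0:0/96 holds IPv4-mapped addresses: the v4 address is the low 32 bits.
mapped = ipaddress.IPv6Address("::ffff:192.168.0.1")
print(mapped.ipv4_mapped)  # 192.168.0.1
# A v6 socket given this address emits plain IPv4 packets; the mapping lives
# in the host's socket layer, never on the wire.
```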


> I can't `ping ::ffff:192.168.0.1` and have it ping my router.

How would that even work in theory?

How would a ('legacy'?) host that only understands the 32-bit data structure of IPv4 addresses talk to a >32-bit data structure IPv6 addressed host?


You need a translator, i.e. a middle host with dual IPv4/IPv6 stack that can convert an IPv4 packet to an IPv6 packet and v.v. By the way, it's not just theoretical, it exists and it has been standardised, see https://nicmx.github.io/Jool/en/intro-xlat.html#ipv4ipv6-tra...


If it truly encapsulated IPv4, then there wouldn't be two stacks. It would be one stack, and legacy devices could snip the extra bits (or have it done for them via a router).


I'm skeptical. How would the legacy device V4 understand the "extra bits"? How would this work on the same subnet (no router)?


If it can't natively (by creating a new networking stack), then a router would have to re-write the packet.

Endpoint devices should not peer directly (security). Always go through either a pass-through inspection device or a router.


Endpoint devices peering directly is how things work on most small networks. What you describe would cause more problems than it solves.


> (or have it done for them via a router)

And then we are back to NAT...


But you can "ping $address" regardless of which IP version it's using. Please elaborate on what you are trying to solve.


I didn't say I couldn't type that in... my point was clear to everybody else who responded.


That part of IPv6 is mostly deprecated. The more modern version is NAT64 which uses 64:ff9b::/96 by default.


Those aren't publicly routable though... that's the problem.


Couldn't we just make it so?


If we could get the RFCs changed, sure!


> As such it is not better enough than IPv4

That's the crux of it.

Sometimes I think we'd be better off forgetting about IPv6, and starting afresh with an IPv7 - something that provides a meaningful incentive to upgrade.


IPv7 was used for the "TP/IX: The Next Internet" proposal from 1993: https://datatracker.ietf.org/doc/html/rfc1475

The next available version is IPv10


> IPv10

Which is fine, because it is IPv2 in human readable form after all.



IPv12 is twice as good as IPv6, right?

Note: This post is a joke and not expected to be a normative RFC and not under any sort of conformance with Internet Drafts under the IETF.


Note: This post is a joke and not expected to be a normative RFC

Totally. I only suggested as a means to avoid conflating a real RFC with a joke RFC. And I only did that because half of the things I have joked about resulted in having to say "No, wait, I was kidding, what are you doing?!?" and thus I learned to minimize joking about things.


OK, so let's just crank it all the way to 11, since 11 is 1 more than 10.

I think smarter people than me can handle all the technical underpinnings, but by the time the techs and sysadmins are using it, it should have an 8-digit hex key at the start and then an IPv4 "alike" address at the end, and 0000:0000:-whatever should encapsulate the current network schema for backwards compatibility.

Then new systems would get 0000:0001:- and on. That would add 4 billion entire internets to the system, and it would still work at the fundamental level like the current one. Your IPv11 layer would only be used at the edges where packets leave your WLAN; systems could send your MAC address as the swapover key, or use it as part of the key when sharing security data. The best part is that end users and clients would not need to know anything more about the network than that 8-digit string, and most of them, especially home and mobile users, wouldn't even need to know that. It would stop at your modem and be handled by the cell carriers and your home internet providers.
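A hypothetical sketch of the notation described above (this is nobody's real protocol, just the scheme as I read it: a 32-bit hex prefix in front of a legacy IPv4 tail):

```python
import ipaddress

def ipv11(prefix_hex: str, v4: str) -> str:
    """Hypothetical 'IPv11' notation: 8 hex digits, then a legacy IPv4 address."""
    p = int(prefix_hex, 16)    # 8 hex digits -> 32-bit "internet number"
    ipaddress.IPv4Address(v4)  # validate the legacy tail
    return f"{p >> 16:04x}:{p & 0xffff:04x}:-{v4}"

print(ipv11("00000000", "192.168.1.1"))  # 0000:0000:-192.168.1.1 (legacy internet)
print(ipv11("00000001", "10.0.0.7"))     # 0000:0001:-10.0.0.7 (first "new" internet)
```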


What, something like this?

  $ ping 64:ff9b::8.8.8.8
  PING 64:ff9b::8.8.8.8(64:ff9b::808:808) 56 data bytes
  64 bytes from 64:ff9b::808:808: icmp_seq=1 ttl=113 time=8.75 ms

Seems like we already have something very much like that.


Yes, but most people don't even have access to that right now.

https://www.akamai.com/internet-station/cyber-attacks/state-...

Making a new system that will automatically go live, work transparently with the current system, and be on by default as hardware is updated and replaced is a strong pathway forward to unsnarl the internet when you're stuck in this predicament.

Pedantry won't solve the problem; it just ensures that nothing ever gets better.


That would be a strong pathway forward indeed, if it were possible. Is it? Because your suggestion so far was something v6 already did, and it wasn't good enough for you.

Computers are the worst pedants in existence, so if it's not possible, you're not going to be able to do it... which means that not only is it not a useful suggestion, but it drains time and energy away from doing things which will work.


Probably anathema to the dreams of IPv6+ IP-for-everything, but in your putative scheme, could we also make a carve out for local network? I like that 192.168.*, 10.* are defined as local.

So 0000:0000 local network, 0000:0001 legacy internet.


Yeah, that's smart. There's no need for everything to be uniquely identified on the internet anyway, right? Why would someone halfway around the world need to be able to ping my smart fridge after all?

Even for tech support purposes, people shouldn't have the ability to directly test my appliance's firewall capabilities.

Actually, now that I've done the math, why not just extend IPv4 into hexadecimal?

Once again, none of the current addresses would change, but it would turn about 4.3 billion possible addresses (2^32) into about 281 trillion addresses (16^12 = 281,474,976,710,656, i.e. four groups of up to three hex digits each).

I feel like that would give us plenty of wiggle room. Sure, it's not on par with ipv6's ability to assign a unique address to every atom on the planet 100 times, but I think it would be enough for the next 40 years or so, right?

And it's inherently backwards compatible, what's not to love about it?


> what's not to love about it

hexadecimal, unironically. the url of

http://[2002:914b:1::1]

is one of my major sticking points for IPv6. i'd rather just have it be 16 octets or even 8 decimal quartets where each thing is required.

http://0.0.0.0.0.0.0.0.0.0.0.0.192.168.1.1:14246

would've at least looked a bit better than it is. would've been super easy too. imagine the convo:

"what's google's DNS IP in IPvX?"

"oh it's just 8.8.8.8.8.8.8.8.8.8.8.8.8.8.8.8"


So, in my IPv4.1 suggestion, every address you currently know would work perfectly fine.

But then so would AE1.224.78.BC2

Sure, a little harder to remember maybe, but adding over sixty-five thousand times as many IP addresses (2^48 vs 2^32) would alleviate the strain on the internet and be backwards compatible with IPv4 (but not forwards compatible, so most interior/home networks would use either a NAT or a software IPv4-to-4.1 bridge)

It would also be much more similar to IPv6 which would ease transition to full IPv6 if the human race survives long enough to ever make the jump. IPv6 is just more hexadecimal after all.


Wait, you wanted to extend the text representation of v4 addresses? That's not a thing that exists in the protocol. The addresses are in binary in the packet format and in all data structures, so any extension of the character set in the text representation has to be implemented by increasing the bit length of the addresses.

I don't know why you think this is "inherently backwards compatible" yet think v6 isn't. It's just as backwards-compatible as v6 is.


"oh it's just 8.8.8.8.8.8.8.8.8.8.8.8.8.8.8.8"

I think what you are saying is that it should be turned into a jingle so everyone can dial it fast. [1]

[1] - https://www.youtube.com/watch?v=5m6qutSER9Q [video][11s]


Decimal addresses are an artifact from the time before CIDR, when subnets were always /8, /16, or /24. IPv4 subnet math now requires obscure binary/decimal conversion. Hexadecimal fixes that problem.
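To illustrate the point, using Python's ipaddress module as a calculator: the same /26 mask is opaque in dotted decimal but readable directly in hex:

```python
import ipaddress

net = ipaddress.ip_network("203.0.113.0/26")
print(net.netmask)                # 255.255.255.192 (where is the bit boundary?)
print(f"{int(net.netmask):08x}")  # ffffffc0 (the hex form shows it at a glance)
```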


I think the first necessary step is to force IPv4 stacks to be upgraded to accept information in the options section of the IP packet. According to Wikipedia, many/most routers ignore or drop packets that have anything specified there.

You know what? That sounds like the RFC isn't being followed, but if it's a large enough pattern, then the RFC doesn't matter. It is the standard.

I would say:

1) Gather all the engineers from the major switch and networking companies, the Linux network stack, Apple and Microsoft, and sure, the mobile vendors, and ask them the easiest way to change the code to support a vast expansion of addresses within the general format of the IPv4 header: the options section? A magic IPv4 address that triggers an IPv4r2 packet?

2) As a carrot, maybe you find some way that NATs / VPNs / bridging / whatever work BETTER with the IPv4r2 packet? If the packets carried more information about the mapping/bridging, so that external polling and numerous other use cases were easier, that would spur the vendors to change.

3) Accept that NATs, bridges, etc. exist and will continue to. They are now an established way to think about networking, and there are likely millions of people who think about networking in these terms, even if they are superfluous/unnecessary in the utopia of universal IPv6 adoption. So put them in the protocol. If universal addressing with firewalls eliminates the need for them, they will wither on the vine over the course of a couple of decades.

So, what's wrong with this? It's obviously naive.

Also, NO SLASHES NO COLONS in the notation. And ... can we increase the number of ports to 32 bits or more?


OK, I want to expand on a "magic IPv4" address that is used to flag whether a packet is legacy-IPv4 routable or requires IPv4v2 processing.

If there is that magic IPV4 and an unupgraded router encounters that packet, it ROUTES IT TO THAT MAGIC IP.

... that magic IP isn't a ghost IP. It's a service that takes the packet and routes it using IPV4v2, and NAT translates it back to the legacy router on the return path.

... that might not work ... or maybe it will?


You've reinvented 6to4 and the 192.88.99.1 well-known anycast address.
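For reference, 6to4 derives a whole /48 mechanically from the embedded IPv4 address, and Python's ipaddress module even exposes the reverse mapping; a small sketch:

```python
import ipaddress

def sixtofour_prefix(v4: str) -> ipaddress.IPv6Network:
    """Build the 6to4 prefix 2002:V4ADDR::/48; the v4 address sits in bits 80..111."""
    v = int(ipaddress.IPv4Address(v4))
    return ipaddress.IPv6Network(((0x2002 << 112) | (v << 80), 48))

print(sixtofour_prefix("192.0.2.1"))                        # 2002:c000:201::/48
print(ipaddress.IPv6Address("2002:c000:201::1").sixtofour)  # 192.0.2.1
```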


So I think the difference here is gradual migration of the same IP stacks/infrastructure, rather than introducing an entirely new "separate" stack.

My only experience with IPv6 was converting an IPV4 distributed database to IPV6. It wasn't... fun. Looking up separate flags and settings, weird errors (thank the gods for stack overflow), frustration.

Nowhere was it suggested, demonstrated, or detailed such a backwards compatibility / migration strategy.


That's what we have with v6. It's not entirely separate; it tends to be implemented together with v4 in the same stack of code and deployed on the same infrastructure. It's designed to allow you to migrate gradually.

> My only experience with IPv6 was converting an IPV4 distributed database to IPV6

I assume you mean patching the software to handle it... there are basically two socket APIs, the old one which was v4-only and the new one which works with any generic IP family. v6 had to add the second one because the first one was v4-only, but any replacement protocol for v4 would have had to do the same.
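The generic API in question is getaddrinfo(): instead of hardcoding AF_INET, a client iterates over whatever families the resolver offers. A minimal family-agnostic sketch in Python:

```python
import socket

def connect_any(host: str, port: int) -> socket.socket:
    """Try each (family, address) candidate in turn; works for both v4 and v6."""
    err = None
    for family, socktype, proto, _canon, addr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        try:
            s = socket.socket(family, socktype, proto)
            s.connect(addr)
            return s
        except OSError as e:
            err = e
    raise err if err else OSError("getaddrinfo returned no candidates")
```

The same loop pattern appears in most protocol-agnostic clients; nothing in it changes if v4 ever disappears.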


> Also, NO SLASHES NO COLONS in the notation

CIDR notation I think is fine, but hard agree on the colons, holy crap. Also, no being clever with the things: remove the "empty colons" ("::") shorthand.

> And ... can we increase the number of ports to 32 bits or more?

That requires a TCPv2 or UDPv2 or whatever, it happens at a higher layer than the IP layer.


> Point A: IPv6 is broken because it didn't go far enough

Alternatively, it failed because it went too far. When you have an established system which is used everywhere, it is immensely difficult to replace it.

Something like IPv4, but with 64-bit addresses, might have been easier to push through. E.g., addresses like 123.123.123.123.123.123.123.123.

We have jumbo frames, why not jumbo addresses?


Because it comes with all of the drawbacks of IPv6 but also ditches some of the advantages.

You still need to update every router and application. Network admins still need to learn something new. The two protocols still don't interoperate. If you're going to go through all of that trouble why only do a half measure. IPv6 is supposed to be the final version of IP.


> The two protocols still don't interoperate.

On the contrary, they would; the behaviors and quirks would be the same. And if we define, say, that when the last four components are zero the address is the same as a normal IPv4 address, then you could deploy the whole thing without anybody assigning new addresses. NATs/configs/etc. could keep working.

The big problem with IPv6 is that everything has to be double-configured to support both IPv4 and IPv6. Two addresses for everything. Different semantics. No backwards compatibility.

If you imagine that all network HW is recycled, say, every decade, you could roll the thing in without anybody having to reconfigure everything. Eventually coverage would be complete. This can't happen with IPv6, because of the double-configuration problem. Extending vs. replacing.

This is of course a pointless thought experiment, because IPv6 is the route that was chosen.


The devil is in the details. All applications that use the socket interface (which is almost everything that talks on the network) still need to be rewritten. Firewall rules still need to support longer addresses, even if you do keep the old ones--it is basically the same situation we are in now, only the line between the networks is fuzzy and there is more confusion. You still end up with two sets of configurations for everything.


> then you could deploy the whole thing without having anybody assigning new addresses. NAT's/configs/etc could keep working.

How does a device that thinks that addresses fit in a 32-bit address space send a packet to a device with a larger address?


I have been saying something very similar to this for close to 10 years.

It could possibly be known as IPv5 considering Internet Stream Protocol was never really used.

Or simply IP64.


> If you're going to go through all of that trouble why only do a half measure.

Because a "half measure" would have been easier to adopt, therefore would have been more likely. The strategic error IPv6 made was, I think, taking the point of view that as long as a breaking change is necessary, then increasing the scope of that change doesn't bring greater cost.

But it does, quite a lot of it, and that greater cost is the primary reason why IPv6 adoption has suffered.


> Something like IPv4, with 64 bit addressed might have been easier to push through. Eg, addresses like 123.123.123.123.123.123.123.123.

IPv6 supports dotted quad notation, if that is your problem with it. You can absolutely write a 128-bit IPv6 address with the last 32 bits in 123.123.123.123 notation if that makes you feel happier. ::ffff:123.123.123.123

Technically, there's nothing stopping you from building a quick library that translates 128-bit dotted quad to IPv6 addresses. Use something like 123.123.123.123.123.123.123.123.123.123.123.123.123.123.123.123 if you really want to. The 4-hex digit, colon-separated notation for IPv6 wasn't designed to make it "weirder than IPv4", but to make it easier to write/mnemonically remember than just accumulating dotted quads.
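Concretely, the dotted-quad tail is just an alternative spelling of the low 32 bits; any conformant parser treats the two forms as the same 128-bit value:

```python
import ipaddress

a = ipaddress.IPv6Address("::ffff:123.123.123.123")
b = ipaddress.IPv6Address("::ffff:7b7b:7b7b")  # 123 decimal == 0x7b
print(a == b)  # True: two notations, one address
```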

> We have jumbo frames, why not jumbo addresses?

Because there's no room. IPv4 has a fixed header size (period) and almost every field is used. IPv6 had to break compatibility in some way, no matter what, to get "jumbo addresses".


> IPv4 has a fixed header size (period)

Unless I'm mistaken, I would not say it's fixed due to the rarely used IP options field that varies depending on the IHL field. It has a variable size within a range. There might be a way to get "jumbo addresses" but it would have to be terribly hacked onto the design of the header as it is, which is one reason for IPv6's existence.

https://en.wikipedia.org/wiki/Internet_Protocol_version_4#He...


My understanding of options is that many routers and/or firewalls consider it a dangerous and deprecated field and are most likely to drop packets that use them, especially when they are variably sized.

That also gets back to the point that almost all of IPv4 is too "known" as a design and every field accounted for in some router and/or firewall logic somewhere by the time IPv6 was designed.


To be honest, the hex strings make them much harder to remember imo, especially with the terrible syntax where an empty run between colons stands for zeroes.

Even having them be 16-bit integers would've been fine imo


If you think you have a better notation idea for 128-bit numbers, build your own address converter and try it?

The obvious other notations for numbers that large to compare to are the various notations people use for UUIDs/GUIDs.

I personally find IPv6's designed notations one of the easier ones to use, especially because of that :: fill with zeroes shortcut to focus on the easier separation of prefix versus suffix. (For a network you control you likely only need to remember the prefix, and then suffix is whatever numbering scheme you want to implement so it may be algorithmic and simply ordered ::1, ::2, ::3, etc. Also, the regular pattern of a colon every four hex digits versus say the strange group order of UUIDs is nice. Trying to write the notation of a UUID without software help is much more painful than IPv6 address notation, I think.) But also, I have a bit of dyscalculia (my brain catches all the individual digits in a number but not always their correct order) and hex works much better for me at remembering or visualizing long numbers.
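To make the parent's point concrete, the :: shortcut is exactly what standard formatters emit: leading zeros are dropped and the longest run of zero groups collapses once:

```python
import ipaddress

full = ipaddress.IPv6Address("2001:0db8:0000:0000:0000:0000:0000:0001")
print(full)           # 2001:db8::1  (canonical compressed form)
print(full.exploded)  # 2001:0db8:0000:0000:0000:0000:0000:0001
```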


I pulled this out as the most salient point in the article: the IPv6 designers assumed that IPv4 would be phased out completely in a short period of time, for some definition of short, and that IPv6 would completely replace internet networking.

I think it was protocol design hubris: we are fixing SO MUCH STUFF that people will flock to this irresistible shining trophy of protocol design.

And now it's been ... almost ... 30 years.

Which means what is really necessary is a new IP protocol that will somehow, SOMEHOW (don't ask me how, I don't effing know) speak seamlessly to IPV4 and IPV6, and be so irresistible that people will want to migrate off of both.

I almost wonder if what is necessary isn't a formally predesigned protocol. What is needed is someone, somewhere, to come up with an approach that becomes a grassroots movement the industry rushes towards. I don't even know if such a thing is possible anymore; it may at this point be mathematically impossible to build an N+1 umbrella protocol over IPv4 land with any real "elegance".


> Which means what is really necessary is a new IP protocol that will somehow, SOMEHOW (don't ask me how, I don't effing know) speak seamlessly to IPV4 and IPV6

You don't know how, and nobody knows how, because it's not possible to do. v4 simply does not support addresses longer than 32 bits. If it did, we wouldn't need v6 in the first place.

Choosing not to do something that's impossible isn't hubris. Hubris is criticizing the people who knew what they were doing without realizing how little of the problem domain you yourself understand.


> You don't know how, and nobody knows how, because it's not possible to do.

I think you misunderstood the comment, but at first reading, I did too.

But on consideration, I do not think @AtlasBarfed meant that this hypothetical IPv12 or whatever (7 through to 10 are taken) would be able to talk to both in one protocol.

It's well established that no seamless extension of the IPv4 4-octet address space was possible. I know, a lot of people still don't get that, but honestly, to the majority of people working in tech today, all this stuff is black magic that just happens. Either they learn, or they don't and can be ignored. That's OK. We have to live with that and move on.

The real, the big question here is: was there some single obvious thing that IPv6 failed to do or failed to include that has made its uptake so slow? It's taken some 30 years to reach approximately half the IP market. That is not just "not good" - that's terrible.

What IMHO apenwarr's blog post was trying to get at was the complexity of connection schemes needed, and how a new protocol with an integral way of constructing virtual networks, connected over the public internet into cohesive wholes via some form of built-in redirection or mapping layer, would have been a far more compelling offering.

I have seen others saying this, but I can't find any links any more. I welcome other pointers to articles not merely pointing out problems -- there are lots of those -- but proposing solutions.


Based on "speak seamlessly to IPV4 and IPV6", I don't think I did. At the very least it would need to speak seamlessly to v4, which as you say is obviously not possible for any address length bigger than 32 bits.

> The real, the big question here is: was there some single obvious thing that IPv6 failed to do or failed to include that has made its uptake so slow? It's taken some 30 years to reach approximately half the IP market. That is not just "not good" - that's terrible.

Is it terrible though? Obviously it would be nice if it were faster, but what's the expected deployment time for something like v6?

There are about 30 billion network devices, arranged in hundreds of millions of separate networks managed by as many separate people. No one has authoritative control over all of them. There's no hard deadline for v6 deployment (we saw with Y2K how much a deadline helps). Network effects work against it, and that's unavoidable because of v4's 32-bit limits.

I don't think humanity has ever tackled a migration project of this scope and scale before. So how can you know that it's going terribly?

I don't think there's an obvious thing we missed either, at least not in the set of things which work and are actually possible to do. I've talked to a lot of people about this, and their suggestions are basically either: not possible ("just get everyone to switch over all at once"), broken ("just pad v4 with some zeros"), or something v6 already did (frequently NAT64 or 6to4 but described in weird terms). Either there's a big conspiracy to keep the obvious thing a secret, or it doesn't exist.

> I welcome other pointers to articles not merely pointing out problems -- there are lots of those -- but proposing solutions.

Sigh... yes please. But solutions are hard, especially to unsolvable problems. Making a clickbaity article that points out the problems and says "somebody should have solved them" is easy, and as you can see from the response to this article and djb's each time they're posted, that's all people are interested in anyway.


Hmmmm.

I am not a network engineer (any more, thank the hypothetical deities) so I have no skin in this game.

But in the 1990s, I moved networks from IPX/SPX, or NetBEUI, or AppleTalk, or DECnet, and almost any combination thereof, to IP. I added IP on top of existing networks. I migrated systems from 10base-2 to 10base-T to 100base-T. I stitched together WANs. Then I moved IP networks from static to DHCP, from no name resolution to DNS to WINS, and so on.

I am not a total rookie to this stuff.

So when you say

> I don't think humanity has ever tackled a migration project of this scope and scale before.

I have to disagree.

The IP rollout itself was bigger, and yet, it happened much, much faster.

We moved the networked world from a dozen protocols to IP, then we totally re-architected how IP worked, from static networks to `hosts` files to name resolution to dynamic IPs and dynamic name resolution.

Then we rejigged it all again for a world of proxies and gateways, and firewalls, and NAT.

You make out like this is some vast super-hard thing, but in fact, the world of networking is way older than many people in the modern IP-only world realise, and we've rebuilt it over and over and over again.

When a new technology comes along that offers compelling advantages, the world moves to it, not in one smooth operation but incrementally and piecemeal. But it happens.

It hasn't happened to IPv6, and my argument, and much more importantly the arguments of Avery Pennarun here and of Dan Bernstein, is that it hasn't happened because IPv6 isn't good enough.

It's good but it only fixes 1 problem, and that one, while big and important, is not the whole problem. It's not even the most important problem, because there are workarounds for simple IP address starvation, and they are _good_ workarounds with their own advantages, some of which are compelling.

The world has, overall and on average, decided that IPv4 is good enough to mean it's not worth the pain of moving. It's better to fix what it already has.

It is the same issue as Plan 9 vs Unix.

Plan 9 is better by almost every objective measure, but Unix is good enough, so the world decided to stay with Unix rather than endure the pain of moving.

For me, what is an interesting question here is "could we fix Plan 9 to make it worth moving to?" The Plan 9 people don't care, though.

Similarly, although it's not my area, it's an interesting line of questioning to ask "what is wrong with IPv6 that the world didn't move to it, and can we fix that?"

But when someone proposes it, the IPv6 proponents treat it as heresy, not as a simple, relevant question.

That in itself is interesting too, IMHO.


I don't think the original rollout of the v4 internet really counts as a migration project, because there was no "NetBEUI internet" or "AppleTalk internet" to migrate from in the first place. IPX/SPX/NetBEUI/AppleTalk/DECnet all either ran on only a single L2 network or were only used in small, local groups of networks. Nothing existed that linked every network on the planet together in a single shared address space. You can't migrate a global network from one protocol to another when the global network you're migrating from doesn't exist yet; that's a new deployment, not a migration.

As a result, network effects [in the economic sense of the term] worked in favor of deploying v4, but they work against deploying any replacement to v4.

You could argue that it required migrating local networks, and it did, but migrating from IPX/etc to IP is fundamentally simpler than migrating v4 to v6, because you only need to worry about your local network -- which you have full authoritative control over and can mandate a migration deadline for. You didn't need to worry about maintaining compatibility with the IPX Internet either, because there wasn't one.

Note that I'm drawing a distinction between a network and an internet. IPX/etc only did the former, IP does both. The level of challenge in migrating a network (even multiple of them) is very different to migrating an entire internet, due to the interconnectivity or lack thereof between the networks.

You're also talking about a time when there were three orders of magnitude fewer networked computers, and relatively little networked software except for the vendor software that came with the network stacks in the first place, which wasn't expected to be compatible with any other network protocol.

Moving from static networks and hosts files to DNS and DHCP is all higher-layer stuff, and could be done without changing L3. Similarly with firewalls and NAT: there's no need to change L3 to implement those, and their deployment is a local-network-only project.

You make a good point that the 90s had far more technological change -- it must have been interesting to live through. But it was technological change that was fundamentally easier to deploy because it only involved making local changes on each network, not changes on all networks. v6 doesn't have that luxury because v6 is replacing the IP address, which is the one thing that goes end-to-end through all of the networks.

So yeah, that's my argument for not agreeing that the original rollout of v4 was bigger or harder than v6, even though it involved more technological changes. It's not the technical difficulty but the sheer scale of the deployment -- v6 needs to deal with far, far more software, devices, networks and involved people than anything v4 had to deal with in the 90s -- and the strength of the network effects that worked in v4's favor but which are working against v6.

> Similarly, although it's not my area, it's an interesting line of questioning to ask "what is wrong with IPv6 that the world didn't move to it, and can we fix that?"

I don't think it's fair to say that the world didn't move to v6, because the world is moving to v6. It's gone from 50 million users to 2.3 billion users in the last 10 years. That's about the number of people that were using v4 at the start of those 10 years.

I'm not treating these questions as heresy, it's just that we keep seeing the same broken takes over and over and it gets very tiring. Things like "v6 would have been deployed in 5 years if they had just added some zeros to the beginning of v4 addresses", from somebody who doesn't have a clue how to make that work and who doesn't know enough to realize that it can't work. There are dozens of people like that just in the comments to this article alone. You can't expect posts like that to be taken seriously by anyone that knows what they're talking about.
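For anyone wondering why "just pad v4 with some zeros" is a non-starter: the v4 header's address fields are fixed at 32 bits, and every router and host on the path parses those fixed offsets. A quick sketch of the RFC 791 header layout makes the constraint concrete (field values here are arbitrary examples):

```python
import struct

# RFC 791 IPv4 header: a fixed 20-byte layout (without options).
# Source and destination are exactly 4 bytes each -- there is nowhere
# to "pad in" a longer address without a new header that every hop
# on the path would have to be upgraded to parse.
def ipv4_header(src, dst):
    assert len(src) == 4 and len(dst) == 4  # 32 bits each, hard limit
    return struct.pack(
        "!BBHHHBBH4s4s",
        (4 << 4) | 5,  # version=4, IHL=5 words (20 bytes)
        0,             # DSCP/ECN
        20,            # total length (no payload in this sketch)
        0,             # identification
        0,             # flags + fragment offset
        64,            # TTL
        6,             # protocol = TCP
        0,             # checksum (left zero here)
        bytes(src),
        bytes(dst),
    )

hdr = ipv4_header([192, 0, 2, 1], [198, 51, 100, 7])
assert len(hdr) == 20
```

Any scheme that carries more than 32 bits of address therefore needs a new packet format end to end, which is exactly the deployment problem v6 has.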

> It's good but it only fixes 1 problem

It's interesting that you say this, because the person I replied to above said "I think it was protocol design hubris: we are fixing SO MUCH STUFF that people will flock to this irresistible shining trophy of protocol design."

Is it fixing too much stuff or too little stuff? You guys can't even agree on that!


I disagree with the overall point you're making here, but I can see that you are seriously committed to it, so I think that no amount of examples of internetworks from the 1980s and 1990s is going to persuade you: to you, I think they'd all be toy-scale and thus irrelevant.

Networking in the 1980s and early 1990s was all small LANs, for the vast majority of users, and for PC-type kit.

But there were internetworks, yes. DEC ran a global network; DEC tech support was my first experience of true worldwide follow-the-sun support, where the techs on 3 different continents could pick up my case from a single shared database and help me move it forward.

I was sysadmin of the London node of a DECnet internet that spanned Oslo, Copenhagen, Stockholm and Helsinki. No IP links between the sites, just DECnet.

Companies were running global IPX internetworks: that is how the problems with scaling Netware 3's bindery database arose, which is why Novell developed NDS and Netware 4.

Yes, it was a thing. Updating and modernising it was hard.


...okay, I guess that was a bit of a wall of text, sorry about that. This was actually an interesting line of conversation for once, rather than picking apart yet another "just put 2^128 numbers into 2^32 holes, it'll be fine" post.


Lampson attributed that “fundamental theorem of software engineering” to Wheeler; he did not claim to have said it first.


(2017), and it should be noted that the author has updated his views since then: https://apenwarr.ca/log/20200708 ("IPv4, IPv6, and a sudden change in attitude", 2020).


And, lest anybody be misled by the title you quoted,

> No, I'm not switching sides. IPv6 is just as far away from universal adoption, or being a "good design" for our world, as it was three years ago.


"The awesome unstoppable terribleness that is Postel's Law."

Eloquence for the ages.


About a year ago I started the IPv6 migration for my home network (2 remote sites connected via IPsec, with 10 VLANs (subnets), about 70 devices, and 10 people).

- I started on one side of the IPsec tunnel, where I have an OPNsense

- there were like 5 updates of OPNsense in the last year where different IPv6 issues were fixed (and others have been introduced).

- my ISP only hands out /64-Prefixes, and these are also dynamic, which makes configuration more difficult

- a number of times I had to turn off IPv6 because different parts were suddenly not working anymore, mostly due to software issues in my stack

All of that, more than 20 years after IPv6 was introduced, makes me wonder whether it is the correct technology, if it is this difficult to implement.


I really really don't get ISPs' difficulties in deploying IPv6. The only conclusion I can reach that actually makes sense is that they don't have the in-house talent to deploy it and refuse to hire someone who does. I guess there's just not enough pain in staying IPv4-only or halfway implementing IPv6 to make them get up and do something about it?


IPv6 doesn't actually solve any problem people wanted to solve.

IPv4 (as used in practice) has 48 bits of addressing, we don't need more.

What we do need is a standard way to do address translation for routing decisions, to replace the 1001 half-baked solutions for VPN and overlay networks that are used today. (Linux has something like five or six "standard" ways to tunnel IP over IP. WTF?)


That's assuming end-to-end connectivity is, for some reason, no longer desirable. There's a bunch of stuff where all the NAT-associated problems would just go away when switching to IPv6, like SIP.


IPv4 has 24 bits for allocating network prefixes. All of these prefixes have been allocated, and they change hands for as much as $15,000 each.

IPv6 has 48 bits for allocating network prefixes.
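Putting numbers on that comparison (assuming a /24 as the smallest generally-routable v4 block and a /48 as a typical v6 site allocation):

```python
# Routable-prefix arithmetic: 2**24 possible /24s in the 32-bit v4
# space, versus 2**48 possible /48 site allocations in v6.
v4_prefixes = 2 ** 24   # 16,777,216 possible /24 networks
v6_prefixes = 2 ** 48   # ~281 trillion possible /48 sites
ratio = v6_prefixes // v4_prefixes
print(f"{ratio:,}x as many prefixes")  # 16,777,216x as many prefixes
```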


Even worse, IPv6 solves the NAT traversal problem in a way people don't want it solved.

People wanted a DynDNS kind of solution.


> IPv4 (as used in practice) has 48 bits of addressing, we don't need more.

Do you mean an entire address space of NATs?


> there were like 5 updates of OPNsense in the last year where different IPv6 issues were fixed (and others have been introduced)

That's a bummer. I've been using FreeBSD + pf at home for years now, and it's been smooth sailing.

> my ISP only hands out /64-Prefixes, and these are also dynamic, which makes configuration more difficult

Comcast/Xfinity hands out a /60, which means I get 16 x /64 subnets. And they haven't changed my subnets in at least four years. Pro-tip: when you change firewall hardware, keep the same MAC addr to keep the same IPv4, keep the same DHCP Unique Identifier (DUID) (/var/db/dhcp6c_duid on FreeBSD) to keep the same IPv6 subnets.
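The /60 → 16 × /64 arithmetic can be checked with Python's ipaddress module (the prefix below is from the 2001:db8::/32 documentation range, not a real delegation):

```python
import ipaddress

# A /60 delegation leaves 64 - 60 = 4 bits for subnetting,
# i.e. 2**4 = 16 distinct /64 LANs.
delegation = ipaddress.ip_network("2001:db8:0:10::/60")
lans = list(delegation.subnets(new_prefix=64))
print(len(lans), lans[0], "...", lans[-1])
# 16 2001:db8:0:10::/64 ... 2001:db8:0:1f::/64
```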


You're comparing two remote sites linked by IPv4 NAT to distributing IPv6 from a /64 (which was never meant to be subdivided) that is also dynamic.


Yes, I know - but as the article mentions, it is not possible to "deprecate" IPv4 yet. I need both. And during this "migration phase", both IPv4+IPv6 must work alongside each other, which frankly speaking, I haven't yet managed to accomplish.


Well, tbh dual-stack is enabled for 70% of France's customers, for example.

If you start with a solid base (a fixed IPv6 /56 delegation, for example), or at least a dynamic allocation with DHCPv6-PD, then you'll see that it's way easier than IPv4 in the long run.


Advocates often drag out the [large number]% of IPv6 adoption in some case or another.

It is only thanks to "happy eyeballs" algorithms in the browser, which prefer v4 when v6 is broken or non-performant, that end-user complaints are mitigated to the point that people can just kind of turn it on in some state of broken and forget about it.
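For reference, the Happy Eyeballs idea (RFC 8305) is roughly: resolve both families, interleave the candidate addresses with v6 first, and start the next attempt after a short delay, so a broken v6 path costs one small delay rather than a full timeout. A rough sketch of just the ordering step (not a full implementation):

```python
import itertools
import socket

def interleave_by_family(addrinfos):
    """RFC 8305-style candidate ordering sketch: alternate IPv6 and
    IPv4 results, v6 first, so one broken address family can't stall
    the whole connection attempt."""
    v6 = [ai for ai in addrinfos if ai[0] == socket.AF_INET6]
    v4 = [ai for ai in addrinfos if ai[0] == socket.AF_INET]
    ordered = []
    for pair in itertools.zip_longest(v6, v4):
        ordered.extend(ai for ai in pair if ai is not None)
    return ordered
```

A real client would feed this the output of `socket.getaddrinfo()` and race connection attempts with a ~250 ms stagger between candidates.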


> It is only thanks to "happy eyeballs" algorithms in the browser, which prefer v4 when v6 is broken or non-performant, that end-user complaints are mitigated to the point that people can just kind of turn it on in some state of broken and forget about it.

There are entire ISPs that are IPv6-only at the CPE and have to deal with brain-dead software that can't handle it and so have to spend enormous amounts on CG-NAT:

* https://community.roku.com/t5/Features-settings-updates/It-s...

* https://news.ycombinator.com/item?id=35047624


That specific article is about streaming, which absolutely provides a better experience over v4. As much as we like to pretend that the presence of v6 implies a "dual stack" environment, the reality is that at the transport layer v6 is a lot of hasty hacks to make a vocal minority happy.

You still can't talk to Hurricane Electric from Cogent over v6. Lots of v6 links are still tunnels. v6 PMTU Discovery was a massive mistake that introduces latency.


I already get a headache when thinking of configuring firewall rules with a dynamic prefix for all my hosts.


I found building rules around a dynamic prefix to be simple enough on my old EdgeRouter once I found the poorly documented magic incantation to do so.

It isn’t possible on the UniFi system that replaced it. Who needs basic core functionality anyway.


Yeah, pfsense apparently just solved this partly (except for aliases) [1], same with OPNsense [2], where there is still an open issue [3]. There are some declined PRs that are not "high priority" [4]. Note that I have just looked these issues up - I don't have insight into any of these. This is not meant as ranting.

    [1]: https://redmine.pfsense.org/issues/6626
    [2]: https://github.com/opnsense/core/issues/2544
    [3]: https://github.com/opnsense/core/issues/6158
    [4]: https://github.com/opnsense/core/pull/5574


Just grab yourself a Hurricane Electric tunnel and be done with it. It's so irritating to work with dynamic IPv6 assignments when you're not Just Some User. Best case, your ISP has really long lease times and you can fake it being "static." Usual case is you get everything working but there are often 5-minute periods where IPv6 is down because some dynamic changes happened at the wrong time.


IPv6 failed because they tried to boil the ocean. It was design by committee, where everyone got their pet feature thrown in to appease and gain consensus. By contrast, IPv4 is a mountain of small hacks, which is its biggest strength.

We could have done a lot of good by adopting proposals to extend v4 like 0/8 and class D, but instead the decision was made to collectively drown the babies in the bathwater and insist on v6 at all costs.


It's looking more like a slow victory than a failure: https://www.google.com/intl/en/ipv6/statistics.html

People like to complain a lot about the new features in v6, but they don't make it any worse as a v4 replacement.


IPv6 adoption is just the traffic shift from desktop to mobile. IPv6 kinda makes sense in mobile because it solves a problem of needing multiple addresses per person (phone, tablet, gaming device, etc) and the whole stack is maintained by two entities (the phone OS manufacturer and the carrier). It probably would have worked even better if it was far less complex and only solved the problem that was needed.

https://web.archive.org/web/20210122043401/https://blogs.aka...


> IPv6 adoption is just the traffic shift from desktop to mobile.

There are entire ISPs (in the US) that are IPv6-only for CPE because IPv4 is unavailable and they have to use expensive CG-NAT boxes to deal with IPv4:

* https://community.roku.com/t5/Features-settings-updates/It-s...

* https://news.ycombinator.com/item?id=35047624

Meanwhile India was 80% IPv6 as of a few months ago:

* https://news.ycombinator.com/item?id=32798003

The fact that the US and EU just happened to get a bunch of addresses first doesn't mean the rest of the world has the same options available.

More addresses are needed if everyone on the planet is to be able to connect.


Residential broadband often has it too. The laggards are corporate networks and cloud where IT is ultra conservative and “if it’s not broke don’t fix it.”

The only thing that will make corporate environments change is if something they need starts requiring it, and not a second before.

Cloud is slowly getting it. Slowly. GitHub still doesn’t have it though, which makes pure v6 nodes annoying for a lot of use cases.


I work for a federal space where IPv6 native is a mandate. As someone who's working on k8s, we often have to build and patch everything ourselves to support that mandate. Want to pull a helm chart from a github repo? Gotta either dual stack the node or run a reverse proxy to make that happen.

I'd love to be at a point where everyone just dual stacked everything so that one day we can flip off the IPv4 switch.


I think a major failure of Kubernetes was not being IPv6-only from the beginning. Its model of every pod having its own address works much better with IPv6, where addresses are cheap. With IPv4, it needs complicated overlay networks. The cluster boundary would also make a good place for a NAT64 proxy.

Kubernetes didn't get IPv6 support until later, and it sounds like it isn't reliable yet.


> I think major failure of Kubernetes was

Heh. Kubernetes as a whole was a failed project at Google that got open sourced.


I was under the impression that Google simply latched onto the project? Doesn't Google still use borg to this day?


Try NAT64. It'll let v6-only clients reach v4-only websites.
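The core mechanism is simple: NAT64/DNS64 (RFC 6052) embeds the 32-bit v4 address into a v6 prefix, typically the well-known 64:ff9b::/96. A minimal sketch of the address synthesis:

```python
import ipaddress

def nat64_synthesize(v4_addr, prefix="64:ff9b::"):
    """Embed an IPv4 address in a NAT64 /96 prefix (RFC 6052
    well-known prefix by default); the NAT64 gateway translates
    packets sent to this v6 address into v4 packets."""
    v4_bits = int(ipaddress.IPv4Address(v4_addr))
    base = int(ipaddress.IPv6Address(prefix))
    return str(ipaddress.IPv6Address(base | v4_bits))

print(nat64_synthesize("192.0.2.1"))  # 64:ff9b::c000:201
```

DNS64 does the same synthesis automatically when a v4-only hostname has no AAAA record, so v6-only clients never see the difference.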


Interesting. I'll take a look! Thanks!


Corporate networks are not lagging out of ultra-conservatism; they lag because the moment they start thinking about any change, their vendors start pushing all manner of products and licenses on them that, they insist, are absolutely necessary for that change and without which it won't work. Sharks smell blood in the water, and they want to make whatever they can off it.

So in the end you are looking at both capex and increased opex, and for what? You have IPv6? Congratulations; now you go to your C-level bosses and explain to them why it was worth it. Good luck.


Residential broadband has been getting v6 too. It saves money and provides added functionality for the same reasons as on mobile. And academic/research networks got there first, a long time ago. Corporate environments seem the biggest holdout.


IPv6 only solves a problem if you don't understand the problem itself! Just because people have five devices doesn't mean they need 65,000 * 5 internet addresses... there is an endpoint for every port, and each device needs only a handful of ports, if not only one! Each user can certainly get by with five ports on five shared IPv4 addresses.

As I said before, IPv6 will never fully happen because it solves a non-problem very expensively, by assuming the whole world is 64-bit workstations!


That works fine for outbound connections but doesn't work for inbound connections. You can't have two devices listening for inbound connections on the same IP address.


I'm worried about the long tail. IPv6 won't actually be useful until more or less everything supports v6; as long as there are enough clients which don't support v6 servers need v4, and as long as there are enough servers which don't support v6 clients need v4. And until we can start disabling v4, v6 gives no advantage and only causes significant added complexity.

I'm worried that the time when we can start removing v4 and therefore see the actual advantage of v6 won't be here for at least a hundred years, optimistically.


> And until we can start disabling v4, v6 gives no advantage and only causes significant added complexity.

v6 advantages:

* If you're an ISP with huge traffic, you need more complex hardware for v4 CGNAT. If you do support v6, Netflix, YouTube, and the majority of your traffic is already on v6, and you can get by without upgrading your CGNAT infra.

* I suspect v6 should have a faster initial connection (time to first byte) because there's no NAT. I assume NAT is implied with v4, because if you're an ISP, say in India where you have 1.2B mobile devices, you cannot buy 1/4th of all IPv4 addresses.

* Because of more IP addresses, you can run VMs with public v6 addresses. It's not common for ISPs to give multiple IPv4 addresses to a single customer, but with v6 that's always the case.


* Until the transition is complete, I and everyone else are gonna have to use an ISP which provides a v4 address, whether we're end users or server operators. Fair enough, though, that ISPs may have some incentive to make more people have v4+v6 (not that they seem to have realized...)

* I really don't think NAT could possibly make a noticeable difference in the time to first byte. My guess about what's "barely noticeable" would be a few hundred added milliseconds, my guess at what NAT would add would be a few milliseconds. Happy to be proven wrong though if there are any studies or experiments on the topic.

* I'm not sure what benefits there are to giving your VMs public v6 IPs when you still need to support incoming v4 connections.


For whatever reason (it's probably a mix of NAT, the extra routing needed for CGNAT, and other things) there is a measurable difference in time to first byte on v6. Apple measured it as 40%: https://www.zdnet.com/article/apple-tells-app-devs-to-use-ip...

(Also, doing things like loading a webpage requires many round trips, so even a small RTT difference multiplies to a bigger delay for total load time.)

You don't need to remove v4 to get benefits from v6. For example, you can handle inbound v4 on your load balancers to avoid needing to mess around with it on your entire VM fleet.


The software stack penalizes IPv4 by 25 ms to 300 ms. This is not to say that v4 is bad, but that it has been made bad artificially: https://ma.ttias.be/apple-favours-ipv6-gives-ipv4-a-25ms-pen...


I did a small experiment with 2 websites: federalreserve.gov and one of Google's servers located in Delhi. I'm in India, in a city around 250 km from Delhi, on ISP Reliance Jio Fiber. I see around an 8 ms benefit when using IPv6 for Google, and 20 ms when using federalreserve.gov. Hardly noticeable.
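That experiment is easy to repeat by timing the TCP handshake per address family; a rough sketch (hostname and port are whatever you want to test; you need working v6 connectivity for the AF_INET6 case):

```python
import socket
import time

def connect_time(host, port, family):
    """Time one TCP handshake for the given address family.
    Returns elapsed seconds, or None if resolution/connect fails."""
    try:
        infos = socket.getaddrinfo(host, port, family, socket.SOCK_STREAM)
    except socket.gaierror:
        return None
    sockaddr = infos[0][4]
    start = time.monotonic()
    try:
        with socket.create_connection(sockaddr[:2], timeout=5):
            return time.monotonic() - start
    except OSError:
        return None

# Compare, e.g.:
#   connect_time("www.google.com", 443, socket.AF_INET6)
#   connect_time("www.google.com", 443, socket.AF_INET)
```

Averaging many runs matters more than any single sample, since handshake times jitter by a few milliseconds either way.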


I think you’ll find the real transition to be a lot quicker than that. All it takes is one of the big companies drawing a line in the sand because they’re unable to buy enough IP addresses, so they finally take a stand. Just like when YouTube nailed the lid into the coffin of ie6.


> Just like when YouTube nailed the lid into the coffin of ie6.

That was the work of a few people on the YouTube team, not the company itself taking a stand:

https://blog.chriszacharias.com/a-conspiracy-to-kill-ie6


I doubt a big company would draw the line in the sand. But I could see an upstart (think TikTok) not having enough IP addresses and just giving a crappy, slow proxied experience over IPv4, but having it be native, zippy and good over IPv6. Suddenly you have teens begging their parents to switch ISPs.


That comes with a major assumption that switching ISPs is an option. Most people get to choose between their cable company, or a fleet of ill-trained pigeons


As an example, the options where I am right now (thankfully temporary) are:

- $55/mo. 3 Mbps DSL

- $80/mo. 300 Mbps cable (or even more expensive, faster cable)

- $120/mo. 100+ Mbps (if you're lucky) Starlink

- A few other heavily restricted, very expensive satellite options (e.g. HughesNet), to which the aforementioned fleet of ill-trained carrier pigeons might be preferable

Only one of those is practical and (mostly) reliable for anything remotely approaching something like remote work.

Back home it's such a luxury to be able to choose DSL, cable, or fiber. I can only dream that all markets will have an actual choice between Internet service providers someday.


Did you look at doing 5g home Internet? There's great coverage in lots of areas.


At least in Canada 5g comes with $50-for-15GB levels of data cap pricing


Actually most people (in the US) get a choice between the cable company, the LEC, and a 5g carrier, and maybe even StarLink.


Many of the "choices" are false choices. My home, in the downtown of a major city, shows as having two choices for wired broadband, but the phone company's wired broadband option is an old ISDN network grandfathered in to "broadband" that probably shouldn't have been, it is the very bare minimum of "broadband" in 1990s standards. (And the phone company charges the same monthly cost for it as actual high speed broadband options they provide, just to add additional cruelty.)


I have a choice between the cable company and sorta Starlink (Starlink isn't actually available in my area yet, and is a nonstarter anyway). There are no other options available to me.


I think you'll be surprised. As soon as most large and medium businesses support IPv6, we'll start to see people dropping v4, likely within the next 5-10 years. When only a tiny % of your potential customer base is dependent on v4, you start to weigh the cost of staying dual stack against just going pure v6.


I find partial IPv6 support quite useful. For example, at home I have a lot of services where it is convenient if I can access them, but not essential. I give those services IPv6 addresses. I have IPv6 at home, at work, and on mobile, so it is not a big deal if I can't access them from some part of the world. With increasing prices of IPv4 addresses, I expect that more internal services will move to IPv6.

For services at home where I do need IPv4 support, access over IPv6 is simpler and more robust. We can expect more of that in the future. Increasing prices of IPv4 addresses, certainly if you need a /24 for routing purposes, may result in worse traffic engineering for IPv4.


Certainly, it is at least a start that all the cloud providers are slowly raising the hourly costs for v4 addresses to better reflect scarcity and other externalities. If those costs rise high enough, they will put pressure on corporate bottom lines (and then in turn on IaaS tools and many other SaaS providers).


And yet Azure has comically bad IPv6 support, so you can't even move (partially) away from IPv4 even if you wanted to. Besides, the cost of a v4 address is peanuts compared to the other cloud costs.


All of the big three (AWS, Azure, GCP) have comically bad IPv6 support and comically bad security advice on IPv6 and private networking in their documentation.

Kubernetes is comically bad at IPv6. Docker is comically bad at IPv6.

GitHub is comically bad at IPv6.

Like I said, pricing IPv4 at all is at least a start, a baby step in the right direction. Even if it is a drop in the bucket compared to the rest of the cloud invoice, it is still a line item that corporate accountants are going to notice. It is now an obvious cost to cut. Maybe that will put pressure on fixing how comically bad the above offenders (and more) are at actually supporting IPv6, because corporate accountants may start asking hard questions.


Most networked things are not end user facing. It's plenty useful now.

End-user-facing apps also use it in a gradual way when available; e.g. with WebRTC it lowers service ops costs and gives better latency.


It has one huge advantage: larger address space.

(Which we could have had with a minor tweak of IPv4 instead.)


We could, but it would have broken compatibility with v4 just as thoroughly as v6 did and so would have had the exact same deployment difficulties v6 has.

In fact v6 mostly _is_ a minor tweak to v4; most parts of it are lifted directly from v4 but with a longer address.


The larger address space is only an advantage if we don't need a v4 address. The situation "I need a v4 address" is not worse than the situation "I need a v4 address and a v6 address".


Not exactly. There's some pretty snazzy interoperability tech out there (e.g. 464XLAT) that lets clients get v4 addresses when they need them, which means if there's a big chunk of devices on both ends that support v6, you don't need as many v4 addresses.

This is kind of a problem for motivating people to move to v6, because as deployment of v6 grows, there's going to be less and less pressure on the v4 address space.


It spikes up 3-5 percentage points every weekend. Presumably mobile use goes up and computer use goes down.


Corporate use goes down and Residential use goes up.

Corporate networks were the last to drop XP, the last to drop IE, and will be the last to adopt IPv6.


That's not quite true. Even after corporate users have switched, the government will still be on v4. The US military is drowning in IPv4 addresses and feels little pressure to switch.


Azure AD only got IPv6 support this year. Most corporate networks have not switched, while most federal agencies have implemented IPv6 (due to mandates); mobile carriers are heavily utilizing IPv6, and so are residential ISPs (Comcast and Time Warner have been deploying IPv6 since 2011).


Federal agencies turned on the Cloudflare flag that auto-translates incoming IPv6 packets to the IPv4 that they support for front facing websites. That checked the box and little progress has been made since then.


Wow a ~40% migration over the course of 20 years. What a victory :|


You try and get billions of people to do anything quickly


I run a large multi-campus network. At least 75% of our outgoing Internet traffic is IPv6. Looking at home ISPs accessing our services, it’s at least the majority of them coming in on IPv6. My guess is it’s a similar ratio as outgoing.

IPv6 has issues but it hasn’t failed.


> At least 75% of our outgoing Internet traffic is IPv6

One question I have about this statistic (and also about Google's IPv6 statistic) is, does traffic mean raw bytes? And in that case, is this just a reflection that like 75% of all internet traffic is just YouTube and Netflix due to video being a bandwidth hog?

(I'm a huge proponent of IPv6 - I just wish we had more useful statistics!)


The GP's statistic is probably raw bytes, and yes, a significant chunk of that will be the big video streaming sites, which mostly have v6. That's still a useful statistic, because part of the cost of v4 is NAT and NAT capacity is measured in terms of number of packets.

I guess you're thinking that the number of v4-only websites is much higher than the traffic numbers represent -- which is true but that's actually fine because v6-only clients can still reach those sites easily via NAT64, so having a lot of v4-only websites isn't blocking deployment of v6 or undeployment of v4. The only real problem it causes is that people use it as an excuse to not do v6...

As for Google's stats, they're probably percentage of either connections or users as measured by their frontend load balancers.


Sadly, some ISPs have yet to adopt it. Bell Canada is notoriously slow on it, and shows no signs of starting.


Unfortunately, in my country (Italy) our major provider (TIM) is not handing out IPv6 addresses to users. Other providers do; in fact, because IPv4 addresses are expensive nowadays, they no longer provide a public IPv4 address. One of them (Iliad) is IPv6-only, and IPv4 traffic is tunneled over IPv6 at the router level.

I think the only way we will move to IPv6 is a law (probably from the EU) that requires every provider to give its customers an IPv6 address.


IPv6 did not fail. It's used by a sizeable chunk of hosts, and network owners have made investments in hardware, software, and skills. It's not going anywhere, like it or not. Neither is IPv4. They will coexist.

I still don't understand why IPv6 is a thing. End users can use NAT just fine. Servers can use CDNs and reverse proxies, sharing a single IPv4 address among any number of hostnames. But it is a thing, so it's hard to imagine any other protocol taking over.


IPv4 addresses are getting increasingly expensive. And being behind an ISP’s NAT is terrible. I don’t want to share an IP with my street. It should be easy to run little network servers at home without worrying about reverse proxies or upnp or whatever nonsense we need today to make the network work.

There’s plenty of numbers out there. Ipv6 lets my house have a whole subnet of them. It’s good.


I was (by default) behind an ISP's NAT. I play Counter-Strike online, and my ping was 80ms... calling them up and getting it disabled dropped it to 30ms.

Being behind their NAT caused all sorts of issues that I didn't realise it was causing... stuff like UPnP didn't work right, opening ports wasn't working right... everything was all over the place.


I don't believe they're expensive when I can rent a VPS for a few dollars per month. They might be more expensive than 10 years ago, but this cost is shared among all the people behind NAT, so in the end it must be a rounding error.

Running servers at home is a good thing to have, but I doubt that ISP cares much about users running servers at home. Users watch youtube and netflix. That's what they optimize for.


The version 4 address is now a substantial cost when renting a server. It starts getting billed separately, so you can drop it if you don't need it.


The days of getting a free IPv4 address when you rent a VPS are numbered. AWS is already rolling out a plan to charge by the hour for an IPv4 address, and other VPS providers are paying attention.


> End users can use NAT just fine. Servers can use […] reverse proxies, sharing single IPv4 address

So you want all computers to be behind at least a single layer of NAT. And you also want people to not only have to purchase a domain, but also to pay their NAT operator to add their domain to the reverse proxy.


Eyeball networks are vastly different from content networks. Even among the tinkerer "homelab" and HN crowds, it is rare to host content from the same connection/address you browse from.


You're confusing cause and effect.

They're very different precisely because of hacky nonsense like NAT.


How is (home) NAT making the problem more complex than a stateful firewall? You never want to have a policy where incoming connections/UDP streams are permitted by default to reach any random device on the network, regardless of whether that device has a routable IP or not.

Now, CGNAT is a different beast and more worrisome from that point of view.


> How is (home) NAT making the problem more complex than a stateful firewall?

ICE/TURN/STUN: the address that your software sees on your laptop, desktop, home NAS is not the address that clients can connect to.

In both NAT and non-NAT you have to use UPnP/PCP to do hole punching, but with NAT you have to do a bunch of address-y stuff as well.


How do you have two different devices running a webserver on two different IPs at home with NAT?


In a decade or two, everyone is going to be behind CGNAT. There are not enough IPv4 addresses.


'NAT' and 'fine' do not belong in one sentence.

NAT requires stateful tracking of stateless protocols. It's a hack much larger than anything related to ipv6.


Freeing up more IPv4 space wouldn't have helped. IANA was assigning /8 per month at the end. The extra space would have gone in less than a year.

IPv6 would have worked better if they had made minimal changes to the support protocols. But it would have had slow adoption anyway, because there was no incentive to switch until addresses ran out.


I sat in on the ipv6 ietf meetings. That was certainly the intent (minimal changes). I still remain confused about why people think this is such a big deal.

- changed arp. Ok, new design is better but that didn’t need to happen. Shouldn’t be a problem for anyone?

- prefixes are an addition but there are really good arguments for them and not much downside. This can be argued I think

- fusing the end-system identifier into the public address was a mistake, and I thought so at the time, and I guess it's been mostly rectified.

So what is so tragic here? ISPs just didn’t care for 20 years because the crunch was delayed. Now they do. So where did Steve screw up?


Neighbor Discovery is basically ARP wearing a trenchcoat. The only people "hurt" by the change are the ones who were parsing the output of the arp command for whatever reason.

IMHO the biggest problem is that IPv6 address autoconfiguration was half-baked. There is no mechanism to inform anybody about which address you have configured for yourself, unlike IPv4's DHCP where a central server knows everyone's address and can do things like update DNS entries and configure security devices. Autoconfiguration also didn't include critical details like "Who provides DNS for the local network?" and "What's the NTP server?". There isn't even any way to authenticate that the Router Advertisement your machine receives is valid, although this problem is shared with DHCP. The committee seems to have put a lot of faith in anycast routing, which has never been a good idea outside of toy networks.


> that IPv6 address autoconfiguration was half-baked.

Indeed. I thought I was being an idiot and just not understanding how this was supposed to work, until I learned that it just doesn't do a lot of important things. So when the day comes that I have to move my network to IPv6, I plan on continuing to use DHCP because I want the omitted functionality.

Of course, I still might be being an idiot and not understanding. Getting a solid picture of how IPv6 is supposed to work is genuinely hard to do with any confidence.


Be aware that DHCP6 doesn't work the same as DHCP. It's really intended for configuring routers, not hosts.


I wasn't aware. That's a real bummer, and a good example of the numerous kinds of gotchas that make this transition much more painful than it would otherwise have to be.

The more I learn about IPv6, the more I dread it.


We should probably have auctions for IPv4 space to encourage more efficient use. Not that we have some kind of authority to require this, but we've often suggested this in connection with our proposal to prepare to allocate 240/4.

While one can say that there's no way that IPv4 demand can ever be "satisfied" (which seems right to me), one can also imagine a different quantity demanded at $0.50/address than at $0.00/address, and also different levels of effort to make sure that almost all addresses that are allocated get put into use on the Internet. (I know $0.00/address isn't exactly the right way to describe what RIR allocations cost, but economically they've been more similar to the "beauty contest" than the "auction" allocation method.)

In the final direct initial allocation phase, people learned that something (with substantial economic value that can likely be sold in the future, no less!) was very scarce, and was being given out nearly at no cost for almost the last time. It's not surprising that they would have jumped at the opportunity to get as much of it as they could qualify for, somewhat independent of what use they expected to make of it in the short term. Maybe especially when they were hearing how other people were jumping at that opportunity.


There are auctions for IPv4 space, though they did not really become a thing until after the RIRs ran out of IPv4 blocks. The price per address is about $50.


Yes, and hopefully we can use that kind of mechanism to allocate 240/4 in a way that achieves very high utilization.


Or we could just remove the purely artificial scarcity and call it a day. The cost here is backbone table size, not endpoint addresses


> Freeing up more IPv4 space wouldn't have helped. IANA was assigning /8 per month at the end.

You are confusing hoarding demand with legitimate need. The Amazons and Cloudflares of the world were playing games to get large allocations, speculators were spinning up hundreds of legal entities to get allocations, and there were tons of backroom deals to look the other way.

Pretty much every use case today for IPv6 is environments where CGN would have worked just as well.


It really doesn't matter if your part of the ocean didn't make it to the boiling committee.

The one thing that mattered was more address space. 4 billion addresses are not enough for the world. Anything else in IPv6 is nice to have, but falls off a cliff of importance into navel-gazing that, if really of even nearly comparable significance, would be better directed at ipX or whatever's next.

IP6 is already facilitating connectivity for billions. Globally. That's a pretty major thing, more so than renditions of doing it my way.


The author would have liked Xerox Network Services. The Xerox plan was that devices had a 48-bit Ethernet address, and local area networks had a 32-bit network ID. Routing was by network ID until the packet hit the final LAN. No need for IP-level addressing.

Early Stanford and PARC routers could route XNS packets, but this died out some time in the 1980s.


Ugh. This is one of my favorite "what ifs" in the computer engineering.

Things I would change:

1. Use 72-bit addresses. 56 bits for the network address, 16 bits for the end-user networks.

2. Just use the IPv4 "local subnet" prefix logic for broadcast domains. No "on-link" nonsense.

3. Replacing ARP with neighbor discovery via multicast messages to interface addresses is... ok? But it's not necessary.

4. Remove SLAAC and stateless DHCPv6. Statefulness is helpful for network management.

5. Reify the MTU into the IP layer. No more ICMP nonsense for PMTU.

6. Rework extension headers to be actually useful. No more "next header" bullshit.


> Use 72-bit addresses. 56 bits for the network address, 16 bits for the end-user networks.

16 bits is just way too small. The article clearly states that network operators just love to bridge together larger and larger networks due to the mobile IP problem. In the IPv4 world they can even have 24 bits (10.0.0.0/8) why should IPv6 have only 16 bits? It's definitely not enough.


8 bits for the home network feels cramped even now, my home network is at 56 devices. But 16 bits are fine for end-user networks.

And once you go over 16 bits, you really need to start dealing with routing.

> In the IPv4 world they can even have 24 bits (10.0.0.0/8) why should IPv6 have only 16 bits? It's definitely not enough.

This is not a fair comparison. You won't have a 10.0.0.0/8 network in IPv4 that has 16 million computers in the same broadcast domain. You'll likely have multiple /8 or /16 networks, with routing between them.

And in my hypothetical world, you'll have 56 bits for that routing. Your ISP can delegate you a /32 prefix, giving you 24 bits for your own routing hierarchy.

This is not dissimilar from the current situation. You have just 64 bits of the "network address", because the lower 64 bits are needed for SLAAC.


Large, sparse subnets are nice for their security benefits.

For a 16-bit network, you can enumerate all active public servers by exhaustively port-scanning it; it takes something like a few hundred gigabytes of traffic, which is nothing these days. For a 64-bit network, it takes quadrillions of gigabytes of traffic and just isn't feasible.
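The arithmetic behind those figures is easy to check. A back-of-the-envelope sketch, assuming one probe per address/port pair and roughly 64 bytes per probe (my assumptions, not the commenter's):

```python
# Exhaustive scan cost for a subnet with 2**host_bits addresses,
# probing all 65536 ports with ~64-byte packets (illustrative model).
PROBE_BYTES = 64
PORTS = 2 ** 16

def scan_bytes(host_bits: int) -> int:
    """Total traffic to exhaustively port-scan the subnet."""
    return (2 ** host_bits) * PORTS * PROBE_BYTES

GB = 10 ** 9
print(scan_bytes(16) / GB)  # ~275 GB for a 16-bit subnet: feasible
print(scan_bytes(64) / GB)  # ~7.7e16 GB for a 64-bit subnet: not feasible
```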

64 bits is enough space to fit a small public key, which v6 uses to secure neighbor discovery.

There are anonymity benefits too: privacy extensions wouldn't work as well on smaller subnets.

As an added bonus, having extra bits to spare is useful if it ever turns out that we need them. If we run out of space in 2000::/3 then we can start over in one of the five other unused /3s using tighter allocation policies. L3 protocols are incredibly hard to deploy and it would really suck to deploy a bigger one only to have to deploy another, even bigger one again soon after.

I don't think there's a good reason to give up all of that. Smaller addresses break compatibility with v4 just as thoroughly as bigger ones do, so it wouldn't even help deployment much.


> And once you go over 16 bits, you really need to start dealing with routing.

Disagree. You don't want to be routing unless you actually have to. A large flat network is more desirable a lot of the time (e.g. thousands of devices in a DC) than a bunch of artificially carved-up subnets.

The reason you don't see them very often is because people have had to use IPv4, which means ARP, which just doesn't scale. At some point in size, ARP chatter becomes the majority of the traffic on your network, which isn't great.

ND fixes this, and allows for ridiculously large networks (the way our good maker intended).


> Disagree. you don't want to be routing unless you actually have to. A large flat network is more desirable a lot of the time (e.g thousands of devices in a DC) than a bunch of artificially carved up subnets.

First, a /16 network is 65536 devices, which is pretty big as-is. And like in V4, you'll be able to disregard recommendations and choose a larger local net size (just change the netmask).

But it's a bad idea. You will have a shared media that can be brought down by erroneous broadcasts or devices. This is a classic story: https://www.computerworld.com/article/2581420/all-systems-do...

> ND fixes this, and allows for ridiculously large networks (the way our good maker intended).

ND doesn't solve it. It works in practice using the same old broadcast, just like ARP. Some switches might do ND snooping, but if you have thousands of devices, they'll overflow their internal tables and fall back to regular broadcasts.

ND also has unsolvable issues, like neighbor cache size problems. Since you have a freaking /64 for your local network, you can't easily store the mapping for ALL hosts, and you're susceptible to various cache exhaustion attacks (including negative entries).


> First, a /16 network is 65536 devices, which is pretty big as-is.

"big" is a relative term. For some cases, that's a really annoying restriction.

> But it's a bad idea. You will have a shared media that can be brought down by erroneous broadcasts or devices. This is a classic story: https://www.computerworld.com/article/2581420/all-systems-do...

IPv6 doesn't have the concept of "broadcast".

> ND doesn't solve it. It works in practice using the same old broadcast

No, ND uses multicast.

Running any network with >100k hosts has its challenges, but these are surmountable with IPv6, and not at all with IPv4.


> IPv6 doesn't have the concept of "broadcast".

Indeed. It has magic fairies delivering multicast packets to the right interfaces.

> No, ND uses multicast.

How do you think multicast is implemented in Ethernet?


> How do you think multicast is implemented in Ethernet?

Via MLD snooping on switches that support it. Yes, some switches will fall back to broadcast, but only if they're not multicast aware.


Snooping table sizes are typically around 16k entries. IGMP/ND packets are almost always punted to the CPU, so once the CPU is saturated, switches typically either fall back to broadcast or stop forwarding multicasts.

It's not a good outcome either way.


This varies widely by switch model, and 16k is on the low end. I looked at a few random Cisco and Arista switch datasheets just now and saw numbers ranging from 16k to 768k (usually 25% to 100% of the unicast MAC table size of the same switch).

Unicast MAC table space is scarce, too, and suffers the same failure modes when filled. You don't see people claiming this makes IPv4 over Ethernet infeasible. Do proper capacity planning, and this isn't a problem. Oversubscribe your network, and this is a problem even without multicast in the picture.


> Unicast MAC table space is scarce, too, and suffers the same failure modes when filled.

Yup.

> You don't see people claiming this makes IPv4 over Ethernet infeasible

Actually, people DO claim that. Flat Ethernet doesn't scale, and you need to use routing to break up broadcast domains.


> Actually, people DO claim that [MAC table limits make IPv4 over Ethernet infeasible]. Flat Ethernet doesn't scale, and you need to use routing to break up broadcast domains.

This has nothing to do with unicast MAC table limits, and everything to do with ARP's O(n^2) scaling property. You use just as many MAC table entries in a network with 10 VLANs of 100 hosts as you do with 1 VLAN of 1000 hosts. Ethernet scales fine; ARP doesn't.
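A toy model of that scaling claim (assumptions mine: each host broadcasts one ARP request per interval, and every host in the same broadcast domain must process it):

```python
# MAC table usage grows linearly with total hosts, while ARP broadcast
# processing grows with the square of the broadcast domain size.
def mac_entries(vlans: int, hosts_per_vlan: int) -> int:
    # One forwarding entry per host, regardless of the VLAN layout.
    return vlans * hosts_per_vlan

def arp_events(vlans: int, hosts_per_vlan: int) -> int:
    # Each host's broadcast is processed by every host in its domain.
    return vlans * hosts_per_vlan * hosts_per_vlan

# 10 VLANs of 100 hosts vs 1 flat VLAN of 1000 hosts:
print(mac_entries(10, 100), arp_events(10, 100))  # 1000 100000
print(mac_entries(1, 1000), arp_events(1, 1000))  # 1000 1000000
```

Same MAC table footprint either way; the flat network carries ten times the ARP processing load.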


Except security... every network I build, ALL traffic between hosts/apps must go through an inspection device.

Due the mass use of encryption and overlays, you can't trust any process or device any more.


16 bits is fine if thinking about discrete devices in the home.

But what about thinking about the next step, neurons in the home. 65k's on the low side for home neuron count.


Are you sure you want to expose individual neurons to the Internet?


Think of the unexpected unknowns. The feet of your chairs having light sensors, each tile of a roof having its own ambient vibration monitors, a thermostat on every bookcase shelf, all self organising around the you GooFaceAI determines you to be.


> 3. Replacing ARP with neighbor discovery via multicast messages to interface addresses is... ok? But it's not necessary.

I don’t even think it’s really ok. ARP was layered correctly: IPv4 runs on Ethernet+ARP. Or it can run on other things that aren’t Ethernet+ARP. IPv6 uses IPv6 messages to make itself work on Ethernet, thus baking knowledge of Ethernet into the IPv6 neighbor discovery logic.

If multicast neighbor discovery is useful (which it may well be), then ARPv6 could have used multicast.


I'm honestly not sure if ND along with interface addresses in V6 is a useless hack, or if it is a clever trick. I keep changing my mind all the time. I guess it's kinda both.

On one hand, V6 addresses are big enough to represent hardware addresses directly. So you don't _need_ a low-level protocol like ARP to resolve V6 addresses to hardware addresses, you can represent it as special messages in IPv6 itself.

It also allows some interesting tricks, like local network applications using interface addresses to communicate normally without setting up global connectivity.

On the other hand, interface addresses in V6 cause no end of confusion ("what the heck fe80::88a:fd34:d1b:2de7%en0 means?!?"). And V6 also has no clear distinction between the local network and the Internet, and the easy ability to use broadcasts (just send a packet to the network address).

And it's not like many applications actually use interface addresses anyway. So this is kinda a moot point.


> Reify the MTU into the IP layer. No more ICMP nonsense for PMTU.

If the MTU is fixed, you can't have VPNs, or any other protocol that encapsulates IPv6 packets and then sends them over IPv6. There needs to be some way for a middlebox to communicate that the MTU is lower than normal on a specific path because it is taking up a bit of every packet for overhead.


MTU will not be "fixed", I described my proposed solution here: https://news.ycombinator.com/item?id=37117455


> Remove SLAAC and stateless DHCPv6.

This statement then implies that we would need to have NAT66. Why? Stateful DHCP then implies the possibility of an endpoint having only a single assigned address. But what if the endpoint needs to have multiple addresses such as tethering or running VMs? With SLAAC the endpoint can just get multiple addresses. Now, just because today stateful DHCPv6 is a possibility, hypervisors need to have NAT66 built in. So the status quo is strictly worse than either only SLAAC or only stateful DHCPv6.


Stateful DHCPv6 allows prefix delegation even now, in its half-assed stupid form.

And if you don't want/have prefix delegation, you can just use multiple DUIDs to get several leases for one endpoint. Unlike in V4, this is fully supported in V6.

Your VM hypervisor will need to request an address during the VM startup, but I think it's actually better from the management standpoint. The network operator will be able to see VMs as the first class citizens in the network management console.

> But what if the endpoint needs to have multiple addresses such as tethering or running VMs?

Android developers are actually adding support for stateful DHCPv6 for exactly that reason :) They want to support tethering for V6 without doing NAT or bridging.


The VM could just have a bridged network connection, no NAT66 required.


IPv6 was built for a world with 64 bit machines. That's why the address is basically two 64 bit chunks, the first being the global network and the second being the local network. This is why you never allocate an IPv6 network smaller than a /64, so router manufacturers can optimize their hardware to only have to examine the half of the packet that the current routing step cares about.


I must admit I'm at a loss about this IPv6 hate. I love IPv6.

I love that (almost) every IPv6 subnet is a /64. Just this morning I assumed an IPv4 subnet was a /24, only to discover it was a /20, causing me to spend a couple of minutes re-working.

I love that (almost) every IPv6 subnet is a /64 because you'll never have to widen a subnet: no starting off with a /24 and then, after your office grows to 200 people, having to switch to a /20. And no matter how automated you are, there are always some important devices with static IPs that you have to reconfigure manually.

I love that every IPv6 client gets a routable IP. This means NAT isn't necessary. NAT is a clever hack, but we've grown so accustomed to it that we've become blind to its failings. I've had to trace packets coming out of a corporate NAT, then into an AWS ELB (Elastic Load Balancer), back into a different NAT. With all the IP address remappings, it's awful.

IPv4 NAT can also lead to IP address collisions. Your home subnet is 192.168.0.0/24? And so's your work? Good luck VPN'ing in.

I like that IPv6 has an abundance of IPv6 addresses—I don't need to share the ports on my sole IPv4 address to 5 different machines.
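The collision problem mentioned above is easy to demonstrate with Python's ipaddress module (the IPv6 prefixes below are illustrative documentation addresses):

```python
import ipaddress

# Two RFC 1918 networks picked independently can collide outright:
home = ipaddress.ip_network("192.168.0.0/24")
work = ipaddress.ip_network("192.168.0.0/24")
print(home.overlaps(work))  # True -> ambiguous routes once you VPN in

# Globally unique IPv6 prefixes (doc range, for illustration) cannot:
site_a = ipaddress.ip_network("2001:db8:a::/48")
site_b = ipaddress.ip_network("2001:db8:b::/48")
print(site_a.overlaps(site_b))  # False
```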


The IP/Ethernet section is surprising to me. It sort of claims that there is no reason to have ARP and give your default router an IP when you could just give it a MAC address and skip the ARP. Maybe this is true now but it reads like it was a bad idea at the time.

I wasn’t there (sounds like Apen was?) so I could be missing context, but I was under the impression that there were loads of layer 2 protocols at the time IP was designed (token ring, frame relay, etc) and so IP needed to be agnostic to the L2 protocol in order to be adopted.


Related:

The world in which IPv6 was a good design (2017) - https://news.ycombinator.com/item?id=25568766 - Dec 2020 (131 comments)

The world in which IPv6 was a good design (2017) - https://news.ycombinator.com/item?id=20167686 - June 2019 (238 comments)

The world in which IPv6 was a good design - https://news.ycombinator.com/item?id=14986324 - Aug 2017 (191 comments)


>It's hard to imagine a network interface (except ppp0) without a 48-bit MAC address?

Not super common, but there's InfiniBand. Later versions supported encapsulating Ethernet frames (Ethernet over InfiniBand), but not all hardware supports it. Otherwise it's straight IP over IB. There must be other niche networking technologies that do IP without Ethernet.


Does networking over Thunderbolt work similarly?


35% worldwide by population of users in random samples at APNIC:

https://stats.labs.apnic.net/ipv6/XA

The US is on 50%, India on 70%, and China just shy of 40%. As China continues to grow (and it will), the likely outcome is >50% IPv6-capable. I doubt any new mobile deployment will be single-stack; the most likely is pure IPv6 with CGN for v4. And Africa, which is still in growth, will most likely end up dual-stack preferring v6.

It may not be ideal, there may be significant issues with EH for instance, but at scale its alive and kicking.

Now, if only we could get jumbograms more widely deployed. That's older than v6 and still struggling to break the 1500-byte MTU limit.


There will be no larger MTU, this battle is lost. 1500 will live forever. If it makes you feel better, think about it as if Ethernet packets are just oversized ATM cells.

Solving PMTU problem would have required reifying the MTU to the IP layer. It could have been done like this:

1. Use a 16-bit field in the non-checksummed portion of the IP header. Initially this field is set to the MTU of the link that originates the packet.

2. Each router inspects this field, and sets it to the MTU of the next hop link, if it is lower than the MTU already in the packet. This will be cheap, as the field is not checksummed.

3. If the packet is bigger than the MTU of the next hop, just truncate it, and set a special bit somewhere in the packet header to indicate it. No need to recalculate the checksum either, the packet is going to be corrupt anyway.

4. The destination host gets the discovered MTU of the forward path, and sends it back to the originating host in the header of the next packet (in a checksummed part).

That's it. Easy, continuous MTU discovery, with robust handling of failures, that doesn't require any smarts from the routers (a comparator to update the MTU can be done in a few logical gates!).

Alas, nobody had the presence of mind to think about this back then.
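A toy simulation of the four steps above (this is the comment's hypothetical scheme, not any deployed protocol):

```python
from dataclasses import dataclass

@dataclass
class Packet:
    size: int
    path_mtu: int        # step 1: starts as the originating link's MTU
    truncated: bool = False

def forward(pkt: Packet, next_hop_mtu: int) -> None:
    # Step 2: record the smallest MTU seen so far (cheap: field is
    # outside the checksummed portion of the header).
    pkt.path_mtu = min(pkt.path_mtu, next_hop_mtu)
    # Step 3: truncate oversized packets and flag them, instead of
    # dropping and emitting ICMP.
    if pkt.size > next_hop_mtu:
        pkt.size = next_hop_mtu
        pkt.truncated = True

pkt = Packet(size=9000, path_mtu=9000)
for mtu in (9000, 1500, 1400):   # MTUs of the successive links
    forward(pkt, mtu)

# Step 4: the receiver echoes pkt.path_mtu back in its next packet.
print(pkt.path_mtu, pkt.truncated)  # 1400 True
```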


It should be noted that pockets of jumbogram-in-the-wild exist. It's just normalised to clamp it to the point few people can exploit it.

For example, the NBN in Australia uses a 2000+ byte layer-2 frame, and nowhere near 500 bytes is consumed by the upper carrier framing. They COULD have gone higher than 1500, and I would be surprised if there aren't customers using 1300 or less because of the ADSL configuration they brought over when they uplifted from a real modem.

A lot of people do 9000 in their filestore network. They know it works on the local segment. Reducing the forwarding burden in header-TCAM-routing by a factor of 5 is a significant win, if you have a lot of packets.

People continue to discuss mechanistic approaches to finding your MTU in the IETF but I think you're right: its 1500 or less pretty much forever unless somebody makes a move here for product differentiation reasons. Given the embedding of content inside the ISP or at the IX, I suspect it COULD happen, if e.g. Netflix said it was a better overall experience? The ISPs would do it.


Getting MTU=9000 to work is tricky even in a home network. I know, I spent several days setting it up.

And even then, I got slapped by WiFi. Its PHY MTU is limited to 2304 bytes, and that's a hard limit.

Even for the plain old wired Ethernet, I had to experiment a bit because the first multigig USB-C adapter didn't support Jumbo Frames.


Plenty of networks that use PPPoE over base network technology to connect to a "virtual" ISP implement at least mini jumbo frames. If you can set the MTU of the hardware to 1508 then the PPPoE connection runs at 1500.
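The 1508 figure falls out of the PPPoE framing overhead, as I understand it: a 6-byte PPPoE header plus a 2-byte PPP protocol ID per frame, which RFC 4638 standardises carrying over a slightly enlarged "baby jumbo" link:

```python
# PPPoE session framing overhead per packet (per my reading of the
# specs; check RFC 2516 / RFC 4638 for the authoritative layout).
PPPOE_HEADER = 6      # version/type, code, session ID, length
PPP_PROTOCOL_ID = 2   # e.g. 0x0021 for IPv4

hardware_mtu = 1508   # link configured for mini jumbo frames
ip_mtu = hardware_mtu - PPPOE_HEADER - PPP_PROTOCOL_ID
print(ip_mtu)  # 1500 -> full-size IP packets over the PPPoE tunnel
```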


If you don't do 2, you can have 3) if the packet is too big, just truncate it.

If the destination gets a packet that isn't as big as it says in the header, it was truncated, and the MTU is the size of the packet it received.

Even easier!

You still really should occasionally probe up, in case the path changed, but that's not very well done today either.


Yeah. You still need a way to reflect the discovered MTU value to the sender somehow, but this can be taken care of by higher-level protocols.


The approach obviously doesn't work at all for unidirectional protocols. This approach also doesn't work when there are multiple equal cost paths, but with different MTUs. Which used to be more common and now tends to only happen when changing link technologies. The original sender needs not just the discovered MTU, but the original header information that may have been used for flow hashing.


> Now, if only we could get jumbogram more widely deployed. Thats older than V6 is and still struggling to break the 1500 byte MTU limit.

Really, we barely hit 1500. Look at mss for popular websites, most people drop from 1500, because 1500 has problems in enough places. Does http/3 even send 1500 byte packets ever?

One major problem is most servers (Linux all versions, I think, FreeBSD before about 2000 and after something like 2019) always send the interface mss with a syn+ack. You get meaningfully better results by sending the lesser of the interface mss and the received mss; there are enough broken systems out there that don't communicate the real MTU to end systems[1], don't send enough ICMP needs frags, try to cover it up by manipulating mss in syns, but don't manipulate mss in syn+ack. Windows and iOS (and presumably mac) do a pretty good job of detecting pmtud blackholes, but it's often disabled on servers and I can't remember where Android is these days; I know it used to ship with the option compiled in but disabled and no way to enable.

Of course, packet sizing is actually a hard problem. Larger packets are good for faster links but bad for slower links.

[1] which is hard, because I don't know how you get windows to use MTU from dhcp; it doesn't request it, so it won't use it. This is a problem too.


> Does http/3 even send 1500 byte packets ever?

Iirc quic has a hard MTU cap at 1280 bytes.


Nope. QUIC requires the _minimum_ MTU of 1280: https://datatracker.ietf.org/doc/html/rfc9000#name-datagram-...

The maximum permitted size is 65527 (max_udp_payload_size).


It doesn't matter if even 98% of "the internet" is "on" IPv6. If public websites don't advertise an IPv6 address, every user is still going to use IPv4 to connect to them. All the cloud providers still prioritize IPv4, and usually don't support IPv6 at all until a few years after a new service comes out.


Entire countries still do not have IPv6. Ukraine for example.


There is some IPv6 adoption in Ukraine. https://stats.labs.apnic.net/ipv6/UA


Funny enough, when I measured IPv6 readiness/use in the world for the very first time (2008), Ukraine was on top. IIRC it moved to third as we got more data, but it still was a very surprising result.

We found out much later that the reason was that Opera had broken RFC3484 handling and prioritized 6to4 way too high (which skewed the results), and Opera was popular in Ukraine at the time. :-)


NAT64 handles the "public v4-only website" use case very well. I run my desktop without v4 today and it works fine.

A few websites without v6 aren't a blocker to either deploying v6 or undeploying v4.


I've always seen "The Internet" as a network of networks.

I interpreted the article as implying that "every device should only speak internet, and we shouldn't have non-internet hacks to allow devices to connect to the internet."

But, if we interpret the internet as a network built on top of other networks, it negates the thesis (as I interpret it) of the article. It also locks us into networking, as understood in the late 1990s, and designed into IPv6.

IMO: It seems like IPv6 suffers from second system syndrome. The authors lost sight of the purpose "network built on top of other networks" and tried to add lots of features for the "other networks" that really aren't needed.

Maybe it's time for IPv7? Really, all we need is IPv4 + a larger address space. I'd even argue that NAT is a good thing (a security feature), because allowing devices on a private network to automatically open ports on the public internet is insecure.


> I'd even argue that NAT is a good thing (security feature,)

Any router that I’ve come across already blocks incoming IPv6 traffic by default, so I’d have to disagree. NAT was designed for a very specific purpose and keeping it in a new implementation makes no sense if it’s not necessary, which it’s not if the hardware requires you to open incoming ports.

No NAT would make it a lot easier to understand what’s happening too, since the process would be “allow traffic from the internet to this device” rather than creating a translation for an internal address, possibly having to set up a MAC reservation to ensure the DHCP lease doesn’t expire etc.


I was asked by my employer in 2003 when IPv6 would replace IPv4. After substantial research, my answer was, "never"!!! I almost got fired for that answer, but I am correct so far.

There is a large class of people out there who do not understand the end-to-end argument in system design and who never saw a piece of bloatware they didn't love. For them IPv6 will happen tomorrow. They are to the internet what maga people are to democracy - caustic.

In 2004 IPv6 was hyped as solving the impending IPv4 address shortage. But NAT had already been invented, and there was no real shortage: IPv4 addresses were allocated as inefficiently as possible, there was no market to buy and sell IP addresses, nor was there any rental cost for owning IPv4 addresses. So it was solving a non-problem.

My employer Qualcomm eventually proposed using IPv6 for cell phone handsets in an overlay network called openran. This was a convenience and not a necessity because there are more than 4 billion cell phones in the world.

You just don't need these bloated addresses in IPv6. You need banks of addresses mainly for the server. The client never needs 65,000 incoming or outgoing connections. The number of client computers in the world that are running as servers and that need open datagram accessibility from anywhere in the internet is virtually zero. Want it? yes! Need it - meaning impossible to do without it - No! There are always bridging workarounds to avoid giving client machines dedicated IP addresses!

IPv6 is not respectful of small system design and hardly provides usable improvements over IPv4. IPv6 was a marketing tool invented by Cisco to sell bigger, more expensive routers, not to solve a problem with any economy ... It is especially detrimental to IoT.


Does google downrank sites that are IPv4-only yet? That would shift some behaviors


Google search results are already awful these days, please don't make them any worse.


I switched to M$ search already, thank you.


> configuring DHCP really is a huge pain

Not sure why the author is so sniffy about DHCP. To me it seems easy to understand and configure.

Having said that, I appreciate the author's remarks about DHCP being a 'fake' IP protocol, and in reality being an ethernet protocol; I hadn't seen it that way before, but it's a reasonable way to look at it.


A bit off topic but the most interesting part of this was clicking through to the link about TCP BBR. It's available for the Linux kernel but not bundled / enabled by default, but if these related posts are telling the truth, maybe it should be:

https://djangocas.dev/blog/huge-improve-network-performance-...

https://atoonk.medium.com/tcp-bbr-exploring-tcp-congestion-c...


One thing that isn't discussed very often but is IMHO one of the bigger roadblocks for IPv6 deployment is the Berkeley sockets interface: the API that loads of code is built upon to do internet communication.

The problem is that the API is too low level, or more specifically there is no high level API for it. Ideally it would have a function that looks like:

    int sockfd = connect_to(hostname, port, SOCK_STREAM, options_bitfield);
This would allow the stack to work out the details on its own and automatically use IPv6 if available. Or whatever future protocols can provide you a STREAM socket. The old interface could also be available for people who need to do low level stuff, but most of the time this would be sufficient.
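For what it's worth, something close to this hypothetical `connect_to` can already be built on top of `getaddrinfo`; a minimal sketch (the function name and signature follow the comment above, timeouts and parallel Happy Eyeballs-style attempts omitted):

```c
#include <netdb.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Hypothetical high-level helper: resolve host and service, then return a
 * connected socket. getaddrinfo hands back both IPv6 and IPv4 candidates,
 * so the caller never has to care about the address family. */
int connect_to(const char *host, const char *service, int socktype)
{
    struct addrinfo hints, *res, *ai;
    int fd = -1;

    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_UNSPEC;      /* v4 or v6 - the stack decides */
    hints.ai_socktype = socktype;

    if (getaddrinfo(host, service, &hints, &res) != 0)
        return -1;

    /* Try each candidate address in resolver order until one connects. */
    for (ai = res; ai != NULL; ai = ai->ai_next) {
        fd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
        if (fd < 0)
            continue;
        if (connect(fd, ai->ai_addr, ai->ai_addrlen) == 0)
            break;
        close(fd);
        fd = -1;
    }
    freeaddrinfo(res);
    return fd;   /* connected socket, or -1 on failure */
}
```

In practice you'd also want timeouts and parallel v6/v4 connection attempts (RFC 8305), which is exactly the kind of detail a standard high-level call could hide from applications.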


IPv6's biggest problem remains not that it's badly designed (at least not nowadays - there were problems, but they were solved ten years ago) but that millions of network engineers never bothered to look deeper into IPv6 than "I don't get it, this feels off".

You can't make a backwards compatible "IPv4 with more bits" like people dream of. L2 routers and middleboxes would still need to be replaced, software would still need to be rewritten, nothing would be different. IPv4 changed how private networks worked because its first attempt at private networks failed.

People are stuck with the IPv4 mindset through a combination of lacking education (who even taught IPv6 when our current sysadmins were in college?) or assuming IPv4 is normal and well thought out. There are free guides, books, and playgrounds out there if you want to learn IPv6, so the education problem is one you can solve yourself. Realizing the flawed nature of IPv4 is harder.

I've come from IPv4 networking, but learning IPv6 later made me realize how silly old networks really are. DHCP is a hack to solve a design failure in IPv4, and SLAAC is a much better solution. Companies have come to rely on awful hacks that originated when they decided to staple features onto the side effects of a generic address distribution protocol. ARP feels more like a placeholder that should've been included a layer lower or higher in the network graph, put in its own little place to solve the theoretical "what if we don't run IP over our switch" problem that stopped being relevant decades ago.

As annoying as it may be, we live in an age where the OSI model's seven layers of networking protocols no longer exist in practice. Token ring is dead, SCTP died in the womb, Ethernet II exists purely in theory. The world now runs on HTTPS on top of UDP or TCP on top of IPv6/4, on top of some kind of wire that carries ethernet.

Ethernet now exists to support IP and vice versa in 99% of all use cases. TCP and UDP exist to serve HTTPS, or some legacy protocol that will be rewritten into HTTPS in the next ten years. WiFi and high-speed data networks came in as a whole new networking system and have turned out to be "what if ethernet, but wireless" with some control logic to make the wireless antennae talk. The OSI model and all the expansion and flexibility it provided simply died over a decade ago. Anything on top of the data link exists purely to support Ethernet + IP + HTTPS.

The migration path to IPv6 is now blocked by excuses. People pretended to care about servers not supporting IPv6 as the reason not to use it, but three or four different ways of providing backwards compatibility to all IPv4 clients were thought up and nobody actually asked for any of them. People complained that their data center provider didn't support IPv6, but now that enabling IPv6 is just a single click in a web UI they don't enable it anyway. People cared about the IPv6 privacy risks but never let go of that concept even after rfc4941 fixed that oversight. Companies like Microsoft and Github, too incompetent to set up a network that their dollar store competitors have supported for years now, have become something to point at and go "see? we need those!" as if NAT64/DNS64/464XLAT/SIIT/whatever haven't been providing IPv4 compatibility for years now.

"I don't know enough about it" and "I don't like it" are perfectly good excuses not to enable IPv6 in your home network, but they're not design flaws or protocol problems. If you're willing to accept the packet maiming we have nicknamed "NAT" or even "CG-NAT", you should feel refreshed at the sight of the plain and simple protocols IPv6 provides you with.

The way people talk about IPv6 now reminds me of the way people talked about HTTPS back when Let's Encrypt started gaining popularity, and the way people dealt with systemd reinventing a better Linux management system. Grumpy people, clinging to what they know, delaying unavoidable change until the very last moment. You can be like the Dracut people running ipromiseiwillneverrunssl.com if you want, but it's a losing battle.


The user/customer is always right.

If the people who would use IPv6 don't like it and don't want it, if they think it's bad, then it's bad.

When forest rangers observe hikers repeatedly deviating from the official trail at certain spots, the ranger understands this to mean the trail is wrong, and he re-designs it to accommodate the hikers. The forest ranger is able to do this because he understands what the trail is for: it's for the hikers, and he empathizes with and adapts to the hiker's needs, not his own needs, not his ego, not his whims. Never does the forest ranger defend a bad trail. The forest ranger takes feedback without ego, and he is able to do so because he does not personally identify with the trail, he identifies with the needs of the hikers.

Engineers would do well to become more like the forest ranger. Many engineers seriously lack empathy for end-users, which results in the engineer creating inadequate products/services that don't meet the user's real needs. Even worse, when the product/service is criticized, when the engineer is given an opportunity to improve the product, the engineer instead blames the user, rather than himself, rather than adapt and recalibrate to the user's needs, he submits to his own ego, his own pride.

It's this lack of empathy and user-blame that yields poor results, results like IPv6.


I really like this – thank you for writing it! A related (but not quite the same, I think) idea is that people should not have to bend to technology; technology should bend to people.

It can be difficult to design for people, though. People tend to over- or underestimate the frequency and severity of rare events, are susceptible to the normalization of deviance, and due to their finite nature, fail to consider wider or longer-term implications of their decisions. Going along with the analogy: forest rangers also have a duty to protect the forest (and in fact, I think this ought to take precedent over accommodating hikers) and they also want to guide hikers away from dangerous terrain or wildlife that underprepared hikers may be tempted to visit.

I think the core idea here is "design around the way users actually behave, not the way they ought to behave".


> Many engineers seriously lack empathy for end-users, which results in the engineer creating inadequate products/services that don't meet the user's real needs.

Ain't that the truth. Modern operating systems are getting worse by the year.

That said, IPv6 has been significantly altered. SLAAC was fixed with two RFCs, one a decade after IPv6 was designed, and another in 2015 fixing oversights in duplicate address detection. DHCPv6 got updated with all kinds of options, and it was already late to the party. RDNSS got added in 2007 and updated with more options in 2017. IPv6 Privacy Extensions got added when people brought up the privacy issues with SLAAC. People set up their own weird 4-to-6 translation mechanisms, so various standards were introduced to cover any use case you may need.

IPv6 as it was originally designed is practically unusable today. The trails have since been adjusted and the hungry mountain lions are gone. The hikers don't even notice the difference from their old paths.

However, park management decided that hikers should never go down the new and improved trail because they heard a story from their friend once that someone got lost there fifteen years ago, and some of them have lost the map to the start of the trail.


Users aren’t cognizant of the design issues. Users don’t like ipv6 because it causes them operational problems. ISPs have done a really shit job of managing the transition.


I gave a couple of talks promoting IPv6 in 1999 and I'm happy that I finally have it at home as a residential customer. (I didn't until this year.)

I'm also working on a project to reclaim some IPv4 address space, which people often object to on the grounds that people should "just use IPv6". So I have to defend the legitimacy of the demand for IPv4 address space.

In connection with this issue, I recently ran some DNS lookups against lists of top 1,000,000 domains (the last Alexa one and the Cisco Umbrella one). What we see is that only dozens to hundreds of "top million sites" (depending on one's definition of "sites" and so on) are IPv6-only. That is, less than 0.1% of Internet sites currently have an AAAA record without a corresponding A record, notwithstanding things like Mythic Beasts's offering to sell this configuration to them.

The over 99.9% of sites that still have an A record have it for a very good reason, which is that somewhere around half of all of their users (of course, quite a bit more in some regions and markets, and quite a bit less in others) would be unable to reach them at all otherwise. This is probably going to be true for a long time, even if that fraction keeps creeping steadily downward, and there's not much the site operators can do about that.

On the other hand, maybe you're talking about things like the "A record but no AAAA record" case (sites that don't offer IPv6 support). This is around 45% of FQDNs that have any form of address record, according to my scans using recent Cisco Umbrella data. I don't particularly have a defense of this; in fact, I find it really unfortunate. I happen to also be involved with Let's Encrypt, and I've often seen a pattern where smaller site operators, at least, show no awareness of what IPv6 is and no desire to debug it (e.g. if their certificate request fails because their old AAAA record was broken). I think Happy Eyeballs has been kind of bad on this particular dynamic: small site operators will themselves perceive their sites as working fine with completely broken IPv6 configurations, and it can be hard to convince them otherwise!

I'd love to see some kind of tool, messaging, initiative, or whatever that would encourage the long tail of site operators to be willing to spend, like, three minutes learning that IPv6 is a thing and that it's good if they have it set up correctly rather than not having it set up correctly. I still don't know what that would look like. I've seen dozens, if not hundreds, of forum posts telling people various forms of "it looks like your AAAA record is out of date; maybe you should delete it".


> The over 99.9% of sites that still have an A record have it for a very good reason

A records work on v4 and v6, so they'll probably stick around for a while. Perhaps they'll end up being concentrated around 4-to-6 forwarding NAT-as-a-service companies, but they're the fallback mechanism. I don't think anyone is advocating for dropping A altogether unless you're really trying to pinch pennies.

> I'd love to see some kind of tool, messaging, initiative, or whatever

If Google and Microsoft decided to put even the slightest bit of preference towards IPv6 capable websites, I think SEO hacking would do the rest for us.

> like, three minutes learning that IPv6 is a thing and that it's good if they have it set up correctly rather than not having it set up correctly

Learning to set up IPv6 properly will take more than three minutes. As much as I think IPv6 is a better-designed protocol now that the necessary RFCs have come out, there's still a huge difference from legacy IP stuff. The concept of link-local addresses needs to be conveyed or people will put fe80:: addresses in their DNS records, and concepts like /48 or /64 subnets representing customers need to be explained to prevent bots and spammers from taking over. Unlearning NAT and realizing NAT ≠ firewall is also something that can take surprisingly long. Enabling IPv6 may take five minutes, but the required background knowledge can take a day or more of learning and experimenting.
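The fe80:: pitfall is mechanical enough to check in code; a sketch using the standard inet_pton (the helper name is made up - POSIX also ships an IN6_IS_ADDR_LINKLOCAL macro for the same test):

```c
#include <arpa/inet.h>
#include <netinet/in.h>

/* Returns 1 if the string parses as an IPv6 link-local address
 * (fe80::/10) - the kind that should never end up in public DNS. */
int is_link_local(const char *addr)
{
    struct in6_addr a;
    if (inet_pton(AF_INET6, addr, &a) != 1)
        return 0;
    /* First 10 bits must be 1111 1110 10 */
    return a.s6_addr[0] == 0xfe && (a.s6_addr[1] & 0xc0) == 0x80;
}
```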

Internet forum posts about deleting AAAA records are a great helpfulness thermometer for a forum. I treat them the same as the "just disable SELinux" posts; if that's a popular opinion, the forum probably doesn't know what it's talking about so all advice that gets upvoted there needs to be taken with a grain of salt. They're a problem, but also a warning beacon.


> Unlearning NAT

NAT is certainly not a firewall, but it is a very useful router function. I still don't understand how IPv6 makes NAT a thing that isn't useful to know.

I want to expose my servers to the internet through a single shared IP address, and to be able to have those servers exist on different IP addresses inside my network. How does IPv6 allow this without NAT?


By reverse proxying. Run a load balancer on a single machine and have that reverse proxy connections to their destination.

But what if you insist on not using a proxy for whatever reason?

When people say "NAT", they're usually talking about SNAT/MASQUERADE, i.e. NATing outbound connections. What you're asking for here is port forwards/DNAT, i.e. applying NAT to redirect an inbound connection.

If you want to NAT inbound connections, you can do it without NATing outbound connections. Essentially: you don't need to "NAT", you just need to port forward.

Honestly, I think you should just suck it up and use different hostnames for different services, because running all of your services on one IP is really bad for security since it makes it much easier to enumerate every service you're running -- it only takes scanning 65k ports on one IP to find them all, rather than 65k ports on 2^64 IPs. That's the difference between megabytes and yottabytes of port scan traffic.

(If you NATed outbound connections to also come from this IP then things get even worse because every outbound connection from any of your machines immediately informs the server of the IP needed to make an inbound connection to you. That's a completely unnecessary security sacrifice.)

But if you're going to run everything on one IP without proxying, you only need port forwards to do it, you don't need to run the network on some local IP range too.
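The megabytes-versus-yottabytes comparison above checks out on the back of an envelope; a sketch assuming roughly 60 bytes on the wire per SYN probe (the byte count is my assumption):

```c
/* Bytes needed to probe all 65536 TCP ports on `hosts` addresses,
 * assuming ~60 bytes on the wire per SYN probe. */
double scan_bytes(double hosts)
{
    return hosts * 65536.0 * 60.0;
}
```

One address costs a few megabytes to sweep; a full /64 (2^64 hosts) lands in the tens of yottabytes.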


Thanks for this.

> I think you should just suck it up and use different hostnames for different services, because running all of your services on one IP is really bad for security since it makes it much easier to enumerate every service you're running

I really, really don't want to do this for a ton of reasons. Port scanning isn't high on my security worries, to be honest. I've been dealing with that for decades and am well-protected, so that's not a compelling reason for me.


"I want to expose my servers to the Internet through a single shared IP address" that's a load balancer/reverse proxy, not NAT.


IPV6 doesn't prevent you from natting if you really want to. It just gets rid of the thing that forces you to NAT.


A records do not allow IPv6 connections


> The world now runs on HTTPS on top of UDP or TCP on top of IPv6/4, on top of some kind of wire that carries ethernet.

Am I misunderstanding what you mean by "the world" here? Most of the networks I work with are not HTTPS on top of UDP or TCP.

> Grumpy people, clinging to what they know, delaying unavoidable change until the very last moment

This seems unrealistically broad. For all of the things you cite, there are downsides along with upsides. For all of them, we lose something as well as gain something. That means there's a cost (even ignoring the cost of making the change itself) as well as a benefit. That means a cost/benefit calculation takes place, and the results of that calculation are not guaranteed to be in favor of the replacement tech.


> People cared about the IPv6 privacy risks but never let go of that concept even after rfc4941 fixed that oversight.

In my own experience in our household, IPv6 destroys privacy.


How so?


the fedora dracut? or something else?


Anyone else mostly fine with IPv6 like it is?

Biggest complaint here is that I wish I had a way to correlate SLAAC addresses with hostnames somewhere other than the host. But I don't so when it matters I run a DDNS client on the host, which is probably the "more correct" answer anyway because the host always keeps DNS updated with its current address.


I'm fine with IPv6.

The more correct answer to your complaint is to dump the router's ND cache. I'm pretty sure this is part of SNMP.


Interesting, thanks! I'll have to see if I can get OpenBSD to do that.


On OpenBSD, you can dump the ND cache with `ndp -an`. You'd still have to map MAC addresses to hostnames somehow.


Ah hah, thanks! I can probably just correlate them with DHCPv4 leases.


The myth that it would have been easier to switch to a protocol that is just ip4 with more bits needs to die.

The companies too cheap or lazy to adopt ipv6 would still be clinging to classic ipv4 with NAT.


Yes, it's a common idea, but it relies on a misunderstanding. Switching to "IPv4 with more bits" would have been exactly as difficult as switching to IPv6, because there's no way to make it compatible with regular IPv4 software or hardware without changing to a dual-stack arrangement (which requires replacing all the packet-processing hardware in big routers that relies on the layout of the IP header) - exactly the same problem we had with IPv6.


No.

The difficulty is social. Everyone would have been fine with "IPV4.2", IPV4 with six octets, because it's just like something they already knew, fixing its one obvious defect, not enough addresses.

The constituency for replacing the hardware/firmware and software stacks would have been there.

IPV6 was not just like IPV4 and so people naturally resisted the devaluation of their hard-won knowledge.


Sure there’s a way. Define a standard way to encapsulate an “IPv4-with-more-bits” packet (let’s call it “IP+”) inside an IPv4 packet. When a router supporting IP+ forwards an IP+ packet to a router that doesn’t support it (assume there’s some way for routers to learn whether their peers support it), it wraps it in an IPv4 packet. When a router receives an IP+ packet wrapped in an IPv4 packet, in unwraps it. In this way, as long as both sides of a connection belong to LANs that support IP+ internally, they can communicate with each other even if some or all of the Internet routers in between only support IPv4.
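The wrapping step might look like this; a rough sketch where the inner "IP+" format stays opaque (the format and function are hypothetical, and 253 is one of the protocol numbers RFC 3692 reserves for experimentation):

```c
#include <stdint.h>
#include <string.h>

/* Wrap an opaque "IP+" packet as the payload of a minimal 20-byte IPv4
 * header with an experimental protocol number. Returns the total
 * encapsulated length, or 0 on error. */
size_t encapsulate_ipplus(const uint8_t *inner, size_t inner_len,
                          uint32_t src, uint32_t dst,
                          uint8_t *out, size_t out_cap)
{
    size_t total = 20 + inner_len;
    if (out_cap < total || total > 65535)
        return 0;
    memset(out, 0, 20);
    out[0] = 0x45;                   /* version 4, IHL 5 (20 bytes) */
    out[2] = (uint8_t)(total >> 8);  /* total length, big-endian */
    out[3] = (uint8_t)(total & 0xff);
    out[8] = 64;                     /* TTL */
    out[9] = 253;                    /* experimental protocol number */
    for (int i = 0; i < 4; i++) {    /* addresses, big-endian */
        out[12 + i] = (uint8_t)(src >> (24 - 8 * i));
        out[16 + i] = (uint8_t)(dst >> (24 - 8 * i));
    }
    /* one's-complement header checksum over the ten 16-bit words */
    uint32_t sum = 0;
    for (int i = 0; i < 20; i += 2)
        sum += (uint32_t)(out[i] << 8 | out[i + 1]);
    sum = (sum & 0xffff) + (sum >> 16);
    sum = (sum & 0xffff) + (sum >> 16);
    uint16_t ck = (uint16_t)~sum;
    out[10] = (uint8_t)(ck >> 8);
    out[11] = (uint8_t)(ck & 0xff);
    memcpy(out + 20, inner, inner_len);
    return total;
}
```

The decapsulating router would simply strip the outer 20 bytes when the protocol field says 253, much like 6in4 tunnels strip their outer header for protocol 41.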

But even if that weren’t a thing, even if you did need all the routers to be replaced, “IPv4 with more bits” would still be better. After all, it’s been decades - the hardware has been replaced. If it were a seamless incremental upgrade, just a switch to flip that enabled “more bits support” and didn’t break anything, then ISPs would have enabled it even when there was little short-term benefit. By now, practically all of the Internet would support it. Instead it’s an entirely separate network, with entirely separate configuration, which historically had a high tendency to break things. No surprise that even today, many people just don’t bother with it.


Your idea sounds a lot like the 6to4 transition mechanism that was used for several years before it was phased out. It won't "just work" on many networks because they firewall off non-tcp/udp protocols but otherwise it served pretty well.


The biggest problem with 6to4 is that the anycast gateways (192.88.99.0/24 and 2002::/16) often go to a different network than the one you're paying for transit, so you can't just turn it on for production traffic and expect it to work.

The anycast gateways are only used when communicating between 6to4 and native IPv6 addresses, so if 2002::/16 had been the only IPv6 address space, then it would have been more reliable, but then we'd be stuck with IPv4-based IPv6 addresses forever.
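For reference, the 6to4 mapping itself is purely mechanical: RFC 3056 forms a /48 from the 2002::/16 prefix plus the 32-bit IPv4 address. A sketch (the helper name is made up):

```c
#include <arpa/inet.h>
#include <stdio.h>

/* Derive the RFC 3056 6to4 /48 prefix for a public IPv4 address:
 * a.b.c.d maps to 2002:aabb:ccdd::/48 (hex octet pairs). */
int sixtofour_prefix(const char *ipv4, char *out, size_t out_cap)
{
    struct in_addr a;
    if (inet_pton(AF_INET, ipv4, &a) != 1)
        return -1;
    /* s_addr is in network byte order, so b[0] is the first octet */
    const unsigned char *b = (const unsigned char *)&a.s_addr;
    int n = snprintf(out, out_cap, "2002:%02x%02x:%02x%02x::/48",
                     b[0], b[1], b[2], b[3]);
    return (n > 0 && (size_t)n < out_cap) ? 0 : -1;
}
```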


> but then we'd be stuck with IPv4-based IPv6 addresses forever.

Yep, but would that have been so bad? Certainly with IPv6's 128-bit addresses, there would still be enough space to go around…

Though admittedly, when I see "IPv4 with more bits" hypotheticals, they often involve smaller addresses than IPv6.


It wouldn't be the end of the world, but it's inelegant. Take a look at your Gmail headers for example.


Send IP+ by tcp/443, the only working protocol right now.


"Ipv4 with more bits" could have a very simple cut-over. You internally update your stack and networks. Up until the cut-over date, the addresses are truncated into IPv4. After the cut-over, they're routable. This could have been given a time table of say 5 years.


No, it didn't work.

Other than the already-mentioned hardware issues, you also had, for years, the issue of lots of applications requiring a substantial rewrite to support another protocol, due to use of BSD sockets, which leaked protocol internals up to the application layer. Porting to v6 was a very big and vocal issue even in the early 2000s, despite BSD sockets finally getting a new API (lifted from the Streams-based XTI) that made handling dual stack easier - but everyone still learnt from older manuals.

And tricks with time table were tried - vendors would lobby for all sorts of extensions "while they work to update the code", the result was that none did because none wanted to actually put the work to upgrade network stacks and handle dual-stack in applications.

After all, IPv6 is not the first attempt to replace the "temporary" solution that was IPv4 whose planned EOL was in 1990.


> you also had for years the issue of lots of applications requiring substantial rewrite to support another protocol due to use of BSD Sockets

Past tense? I still regularly see code that uses BSD sockets and doesn't support IPv6. Actually, I feel like among C and C++ codebases that make direct TCP or UDP connections, the majority are IPv4-only, even in 2023. Though, direct TCP and UDP connections themselves are less popular than they used to be, and so are C and C++...


Past tense on having to write two complete code paths to support v4 and v6 at the same time.

Today we have getaddrinfo, so if you're writing from scratch you can just use that; you can also simplify older code with it and get v6 at the same time.

But there's a lot of old code that still doesn't use GAI and was never upgraded.


v6 basically is v4 with more bits. You could do what you describe with it.

The problem is that nobody has the authority to enforce such a timetable on the Internet as a whole, so that's not actually a workable plan.


Is it such a myth, though?

IPv6 still causes troubles in deployment.

For example, Android phones (still!!!) don't support stateful DHCPv6. Moreover, DHCPv6 was designed by idiots and out of many options for DUIDs it doesn't allow the most logical one: a user-specified host name.

PMTU in IPv6 is even _more_ broken than in V4 because extension headers just plain don't work in the wide Internet.


Thank you for mentioning extension headers. IMHO, they are the worst mistake in IPv6, because they are unbounded. If you design your hardware to handle extension headers, there are all kinds of edge cases to worry about, and regression testing the different combination of headers is nearly impossible. So lots of hardware vendors just don't handle them (or handle just one), which means they might as well not exist.


> For example, Android phones (still!!!) don't support stateful DHCPv6.

That's Google's fault, not the protocol's.


"IPv4 with more bits" is usually proposed with 64-bit addresses (or even just 40-bit). IPv6's 128 bits seem excessive, but they make transition mechanisms possible: NAT64 encodes the IPv4 address in the 64-bit host portion, and MAP is even more interesting, since it stores the whole NAT address and port for stateless CGNAT.
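The NAT64 embedding is specified in RFC 6052; with the well-known prefix 64:ff9b::/96, the IPv4 address simply occupies the low 32 bits. A sketch of address synthesis (the function name is made up):

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>

/* Synthesize an RFC 6052 NAT64 address by embedding an IPv4 address in
 * the last 32 bits of the well-known prefix 64:ff9b::/96. `out` must
 * hold at least INET6_ADDRSTRLEN bytes. */
int nat64_synthesize(const char *ipv4, char *out)
{
    struct in_addr v4;
    struct in6_addr v6;
    if (inet_pton(AF_INET, ipv4, &v4) != 1)
        return -1;
    memset(&v6, 0, sizeof v6);
    v6.s6_addr[1] = 0x64;            /* 0064:ff9b::/96 well-known prefix */
    v6.s6_addr[2] = 0xff;
    v6.s6_addr[3] = 0x9b;
    memcpy(&v6.s6_addr[12], &v4.s_addr, 4);  /* IPv4 in the low 32 bits */
    return inet_ntop(AF_INET6, &v6, out, INET6_ADDRSTRLEN) ? 0 : -1;
}
```

A DNS64 resolver does exactly this when it fabricates AAAA records for v4-only hosts, which is what lets a v6-only client reach them through a NAT64 gateway.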


ipv4 with NAT is in fact still extremely common, but skipping that, which parts of ipv6 (beyond extra bits) do you regard as essential versus which parts are over-engineered?


What "parts of ipv6 (beyond extra bits)" are you concerned about? As far as I know there aren't any, unless you're talking about extensions or something?


I haven't been fresh on this stuff for years, but there are some other differences. Included multicast, true QoS, included IPsec, a few other things. It's just that I sometimes see people advocating upgrading from ipv4+NAT to ipv6 for reasons beyond address bits, so I thought that's where you (edit, "they", oops) were coming from. I was just curious.


That's....not true.


Connection roaming seems more like a TCP problem than an IP one. Wouldn’t it be wise to extend the TCP side? Or, rather, should we expect IPv6 to solve it?


One thing I’ve realized lately is that scarcity is actually a benefit of IPv4, much in the way the capped supply of bitcoins theoretically increases their value.

Ipv4 addresses are being ranked by their reputation. This is a good thing, at least right now, as it makes scammers/spammers/hackers/ddosers lives more expensive to acquire fresh addresses. This can only exist when a shortage exists.


> This is a good thing, at least right now, as it makes scammers/spammers/hackers/ddosers lives more expensive to acquire fresh addresses. This can only exist when a shortage exists.

No it is not. More and more people share addresses so more people would be affected and it would just move the problem elsewhere.


I can guarantee you no one shares my server's IPv4.


And this is privilege and entitlement.

You know that if everyone in the world wants to have a dedicated, unshared IPv4, it is mathematically impossible. Bragging about it is showing your privilege and sense of entitlement. What makes you special that you deserve a dedicated IPv4?


in ipv6 you can (should) grade whole subnets as end customers typically get whole /64s.


Not always, though. Lower-end VPS providers seem to be moving towards placing you on a shared /64. I run into issues on a particular Digital Ocean instance of mine where my IPv6 allocation is something like a /112, and it gets swept up in bans on the parent /64 because of others' bad behavior.

Then you have other providers like Vultr that do weird things like not statically route your /64 prefix to your instance which kind of defeats the entire point of having a /64.


Digital Ocean has been in the wrong with their IPv6 allocation since the day they implemented it. One of the rules of IPv6 is to never allocate less than a /64. Unfortunately their network admins don't seem to care.


There are still 2^64 /64 IPv6 subnets, and only 2^32 IPv4 addresses in total, of which huge swaths aren't even used.


"People are DUI without a driver license, so let's make the driver license cost $$$$ for every Average Joe."


The core network stack is going to be dual stack anyway during at least the transition phase; OSes like Linux should allow v4/v6 communication.


"Actually, RARP worked quite fine and did the same thing as bootp and DHCP while being much simpler, but we don't talk about that."

RARP is basically DHCP but only for your IP address. DHCP/bootp won because you sort of need a bit more than your IP. It was also easier to manage.

RARP probably still works mostly.


I have said for the last decade or so whenever IPV6 comes up on HN that IPV6 was way too big of an address space for anything but having some sort of unique identifier in there to support an online digital ID. Like some space for a hash of a biometric or something in the lower 64 bits.



About a decade ago I worked on a project for secure networks where machines would encode identity information into the lower 64 bits of an IPv6 network and security devices could use that to enforce policy based on your identity, not your machine.

The most interested customer, the government, had the problem of having way too much IPv4 space and little incentive to upgrade so it never went anywhere. Even after the government mandates to switch to IPv6, which mostly just ended up with loads of IPv4 only government websites behind Cloudflare gateways.


The author is unaware of, or has ignored, Mobile IP [1]. There are implementation of it available.

[1] https://en.wikipedia.org/wiki/Mobile_IP


Cloud providers don't even support IPv6 that well, let alone expecting the general public to switch over.


AWS will start charging for most IPv4 addresses next year [1]. And I'd expect other cloud providers to follow AWS' lead here.

This could provide the needed push for the industry to switch.

[1] https://aws.amazon.com/blogs/aws/new-aws-public-ipv4-address...


GitHub is the most annoying one. It's funny, given how many parts of the IPv6 stack are developed there.


The interesting part is that GitHub has already assigned[0] IPv6 addresses to a lot of endpoints of its stack. They just haven't created any AAAA records yet. Perhaps they'll add support for it soon?

[0]: https://api.github.com/meta


The big ones do. Small ones are small for a reason.


Isn't the "mostly good enough" aspect of IPv4 the biggest problem for IPv6?


The "I don't like it, it's different" mob are out in force today. This isn't the Daily Mail.


I've only dabbled in IPv6, but the one thing that blew my mind was that you cannot set DNS automatically without DHCPv6. Other methods of "automatic addressing" are thus useless.


RFC8106 IPv6 Router Advertisement Options for DNS Configuration https://datatracker.ietf.org/doc/html/rfc8106
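On Linux, for example, the radvd daemon can advertise a recursive DNS server this way. A minimal sketch of a radvd.conf; the interface name, prefix, and DNS address are placeholders:

```
interface eth0 {
    AdvSendAdvert on;
    prefix 2001:db8:1::/64 {
    };
    # RFC 8106 Recursive DNS Server option, sent in Router Advertisements
    RDNSS 2001:db8:1::53 {
    };
};
```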


And what software supports it? There are tons of these IPv6 RFCs which are not supported by "common" software: Linux and FreeBSD daemons, nsd, unbound, etc.

How could I implement this in my home network, with off-the-shelf SOHO router or even something like OpenWRT / custom OSS-based "firmware"?


I'm using OPNsense and it seems to have worked for my network.


Nice! I last looked at IPv6 in 2019 ish. I wonder how I missed this. Or perhaps the support for it in consumer routers is low?


I guess you haven't heard of RDNSS yet? Doable as part of RAs.


I had not, or had forgotten.


This would be relevant in v6-only networks, but in practice everyone also runs v4, and v4 DHCP gets you DNS.

(Also, there's a DNS option in router advertisements now, like the sibling comment said.)


Android ignores any IPv4 DNS address it gets if it gets an IPv6 address. Instead it falls back to Google's own DNS servers. So if you have Android devices in your house, you need to configure an IPv6 DNS server. AFAIK an IPv4 DHCP server can't hand out an IPv6 DNS address.



