Hacker News

The myth that it would have been easier to switch to a protocol that is just IPv4 with more bits needs to die.

The companies too cheap or lazy to adopt ipv6 would still be clinging to classic ipv4 with NAT.




Yes, it's a common idea, but it relies on a misunderstanding: it would have required exactly the same effort as switching to IPv6. There's no way to make an 'IPv4 with more bits' compatible with existing IPv4 software or hardware without changing to a dual-stack arrangement (which requires replacing all the packet-processing hardware in big routers that relies on the layout of the IP header) - exactly the same problem we had with IPv6.


No.

The difficulty is social. Everyone would have been fine with "IPv4.2", IPv4 with six-octet addresses, because it's just like something they already knew, fixing its one obvious defect: not enough addresses.

The constituency for replacing the hardware/firmware and software stacks would have been there.

IPv6 was not just like IPv4, and so people naturally resisted the devaluation of their hard-won knowledge.


Sure there’s a way. Define a standard way to encapsulate an “IPv4-with-more-bits” packet (let’s call it “IP+”) inside an IPv4 packet. When a router supporting IP+ forwards an IP+ packet to a router that doesn’t support it (assume there’s some way for routers to learn whether their peers support it), it wraps it in an IPv4 packet. When a router receives an IP+ packet wrapped in an IPv4 packet, it unwraps it. In this way, as long as both sides of a connection belong to LANs that support IP+ internally, they can communicate with each other even if some or all of the Internet routers in between only support IPv4.
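The wrapping step might look like the sketch below. The "IP+" format and the protocol number 99 are invented purely for illustration; real 6to4 does essentially this, carrying IPv6 inside IPv4 with protocol number 41.

```python
# Sketch: carry a hypothetical "IP+" packet as the payload of a minimal
# IPv4 packet, so legacy routers can forward it based on the outer header.
import struct

IPPLUS_PROTO = 99  # hypothetical protocol number, for illustration only

def wrap_in_ipv4(ipplus_packet: bytes, src: bytes, dst: bytes) -> bytes:
    """Prepend a minimal 20-byte IPv4 header (checksum left 0 in this sketch)."""
    total_len = 20 + len(ipplus_packet)
    header = struct.pack(
        "!BBHHHBBH4s4s",
        (4 << 4) | 5,   # version 4, IHL 5 (no options)
        0,              # DSCP/ECN
        total_len,
        0, 0,           # identification, flags/fragment offset
        64,             # TTL
        IPPLUS_PROTO,   # tells the far-end router an IP+ packet is inside
        0,              # header checksum (omitted in this sketch)
        src, dst,
    )
    return header + ipplus_packet

def unwrap(ipv4_packet: bytes) -> bytes:
    """A router that understands IP+ strips the outer IPv4 header."""
    ihl = (ipv4_packet[0] & 0x0F) * 4
    assert ipv4_packet[9] == IPPLUS_PROTO
    return ipv4_packet[ihl:]
```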

But even if that weren’t a thing, even if you did need all the routers to be replaced, “IPv4 with more bits” would still be better. After all, it’s been decades - the hardware has been replaced. If it were a seamless incremental upgrade, just a switch to flip that enabled “more bits support” and didn’t break anything, then ISPs would have enabled it even when there was little short-term benefit. By now, practically all of the Internet would support it. Instead it’s an entirely separate network, with entirely separate configuration, which historically had a high tendency to break things. No surprise that even today, many people just don’t bother with it.


Your idea sounds a lot like the 6to4 transition mechanism that was used for several years before it was phased out. It won't "just work" on many networks, because they firewall off non-TCP/UDP protocols, but otherwise it served pretty well.


The biggest problem with 6to4 is that the anycast gateways (192.88.99.0/24 and 2002::/16) often go to a different network than the one you're paying for transit, so you can't just turn it on for production traffic and expect it to work.

The anycast gateways are only used when communicating between 6to4 and native IPv6 addresses, so if 2002::/16 had been the only IPv6 address space, then it would have been more reliable, but then we'd be stuck with IPv4-based IPv6 addresses forever.
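For reference, the 6to4 embedding is mechanical (RFC 3056): the 2002::/16 prefix is followed directly by the 32-bit IPv4 address, giving each public IPv4 host a /48. A small sketch:

```python
# Derive a 6to4 prefix from an IPv4 address (RFC 3056): 2002::/16
# followed by the 32-bit IPv4 address, yielding a /48 per IPv4 host.
import ipaddress

def to_6to4_prefix(ipv4: str) -> ipaddress.IPv6Network:
    v4 = int(ipaddress.IPv4Address(ipv4))
    v6 = (0x2002 << 112) | (v4 << 80)  # place IPv4 bits right after the /16
    return ipaddress.IPv6Network((v6, 48))

print(to_6to4_prefix("192.0.2.1"))  # → 2002:c000:201::/48
```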


> but then we'd be stuck with IPv4-based IPv6 addresses forever.

Yep, but would that have been so bad? Certainly with IPv6's 128-bit addresses, there would still be enough space to go around…

Though admittedly, when I see "IPv4 with more bits" hypotheticals, they often involve smaller addresses than IPv6.


It wouldn't be the end of the world, but it's inelegant. Take a look at your Gmail headers for example.


Send IP+ over tcp/443, the only working protocol right now.


"Ipv4 with more bits" could have a very simple cut-over. You internally update your stack and networks. Up until the cut-over date, the addresses are truncated into IPv4. After the cut-over, they're routable. This could have been given a time table of say 5 years.


No, that wouldn't have worked.

Beyond the hardware issues already mentioned, for years there was the problem that lots of applications required a substantial rewrite to support another protocol, because the BSD Sockets API leaked protocol internals up to the application layer. Porting to v6 was a big and vocal issue even in the early 2000s, despite BSD Sockets finally getting a new API (lifted from the STREAMS-based XTI) that made handling dual stack easier - but everyone still learned from older manuals.

And timetable tricks were tried: vendors would lobby for all sorts of extensions "while they worked to update the code". The result was that none did, because none wanted to actually put in the work to upgrade network stacks and handle dual stack in applications.

After all, IPv6 is not the first attempt to replace the "temporary" solution that was IPv4 whose planned EOL was in 1990.


> you also had for years the issue of lots of applications requiring substantial rewrite to support another protocol due to use of BSD Sockets

Past tense? I still regularly see code that uses BSD sockets and doesn't support IPv6. Actually, I feel like among C and C++ codebases that make direct TCP or UDP connections, the majority are IPv4-only, even in 2023. Though, direct TCP and UDP connections themselves are less popular than they used to be, and so are C and C++...


Past tense on having to write two complete code paths to support v4 and v6 at the same time.

Today we have getaddrinfo, so if you're writing from scratch you can just use that; you can also simplify older code with it and get v6 support at the same time.

But there's a lot of old code that still doesn't use GAI and was never upgraded.
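A minimal sketch of the getaddrinfo approach, shown here in Python rather than C (Python's socket.getaddrinfo wraps the same POSIX call): one loop handles v4 and v6 uniformly instead of two separate code paths.

```python
# Dual-stack connect via getaddrinfo: the resolver returns candidate
# (family, socktype, proto, canonname, sockaddr) tuples for both IPv4
# and IPv6, and a single loop tries them in order.
import socket

def connect_any(host: str, port: int) -> socket.socket:
    err = None
    for family, socktype, proto, _canon, addr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        try:
            s = socket.socket(family, socktype, proto)
            s.connect(addr)
            return s  # first address that answers, v4 or v6
        except OSError as e:
            err = e
    raise err if err else OSError("getaddrinfo returned no addresses")
```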


v6 basically is v4 with more bits. You could do what you describe with it.

The problem is that nobody has the authority to enforce such a timetable on the Internet as a whole, so that's not actually a workable plan.


Is it such a myth, though?

IPv6 still causes troubles in deployment.

For example, Android phones (still!!!) don't support stateful DHCPv6. Moreover, DHCPv6 was designed by idiots and out of many options for DUIDs it doesn't allow the most logical one: a user-specified host name.

PMTU in IPv6 is even _more_ broken than in V4 because extension headers just plain don't work in the wide Internet.


Thank you for mentioning extension headers. IMHO, they are the worst mistake in IPv6, because they are unbounded. If you design your hardware to handle extension headers, there are all kinds of edge cases to worry about, and regression testing the different combinations of headers is nearly impossible. So lots of hardware vendors just don't handle them (or handle just one), which means they might as well not exist.
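To illustrate the parsing problem (a simplified sketch that ignores per-header quirks like the fixed-length Fragment header): finding the transport protocol means walking a chain of arbitrary length, which is exactly what a fixed-pipeline parser struggles with.

```python
# Simplified walk of an IPv6 extension-header chain (RFC 8200 layout:
# each extension header begins with a Next Header byte and a length
# byte counted in 8-octet units beyond the first 8 octets).
EXT_HEADERS = {0, 43, 60}  # hop-by-hop, routing, destination options

def find_transport(next_header: int, payload: bytes) -> int:
    """Return the final protocol number (e.g. 6 = TCP) after the chain."""
    offset = 0
    while next_header in EXT_HEADERS:  # unbounded: any number of headers
        if offset + 2 > len(payload):
            raise ValueError("truncated extension chain")
        next_header = payload[offset]
        offset += (payload[offset + 1] + 1) * 8
    return next_header
```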


> For example, Android phones (still!!!) don't support stateful DHCPv6.

That's Google's fault, not the protocol's.


IPv4 with more bits is usually proposed with 64-bit (or even just 40-bit) addresses. IPv6's 128 bits seem excessive, but they make transition mechanisms possible. NAT64 encodes the IPv4 address in the 64-bit host portion. MAP is even more interesting, since it stores the whole NAT address and port for stateless CGNAT.
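As a concrete example of the NAT64 embedding, here is a sketch using the well-known 64:ff9b::/96 prefix from RFC 6052, where the IPv4 address sits in the low 32 bits:

```python
# Map an IPv4 address into NAT64's well-known prefix 64:ff9b::/96
# (RFC 6052): the IPv4 address occupies the bottom 32 bits.
import ipaddress

def nat64_map(ipv4: str) -> ipaddress.IPv6Address:
    prefix = int(ipaddress.IPv6Address("64:ff9b::"))
    return ipaddress.IPv6Address(prefix | int(ipaddress.IPv4Address(ipv4)))

print(nat64_map("192.0.2.1"))  # → 64:ff9b::c000:201
```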


ipv4 with NAT is in fact still extremely common, but skipping that, which parts of ipv6 (beyond extra bits) do you regard as essential versus which parts are over-engineered?


What "parts of ipv6 (beyond extra bits)" are you concerned about? As far as I know there aren't any, unless you're talking about extensions or something?


I haven't been fresh on this stuff for years, but there are some other differences: built-in multicast, true QoS, IPsec as part of the spec, a few other things. It's just that I sometimes see people advocating upgrading from ipv4+NAT to ipv6 for reasons beyond address bits, so I thought that's where you (edit, "they", oops) were coming from. I was just curious.


That's... not true.





