
IPv6 was a protocol engineered in isolation from the social/political environment it had to be adopted in.

A successor to IPv4 wasn't a technical issue. Duh, use longer addresses. The problem was social.

It's a miracle it was used at all

What's annoying about IPv6 discussions is that the IPv6 people are incredibly condescending, when the problems of its adoption were engineered by them.


Exactly. IPv6 was developed in the ivory towers where it was still assumed that everyone wanted to be a full participant in the internet.

But the social/political environment was that everyone just wants to be a passive consumer, paying monthly fees to centralized hosts to spoon-feed them content through an algorithm. For that, everyone being stuck behind IPv4 CG-NAT and not being able to host anything without being gatekept by the cloud providers is actually a feature and not a bug.


We've only seen the world where everything has been adapted to IPv4. P2P technologies thrive even under it, but they could really shine with the ability to connect directly between devices. Imagine BitTorrent on steroids, without the split between peers with assigned public IPv4 addresses (and seedboxes) and everybody else. Torrents are generally faster than the usual channels for downloading things, but with IPv6 they would be far faster than they are now.

Cloudless cameras streaming to your phone without Chinese vendor clouds, e2e encrypted email running on your phone without snooping by marketing people and three-letter agencies, content distribution networks without vendor lock-in. The possibilities are impressive if we have a way to do it without TURN servers that cost money and create technical and legal bottlenecks.

We can't say nobody wants that world because we've never tried it in the first place. I definitely would like to see that.


Don't you think everyone should have the option to be a full participant? Being locked behind cloud providers and multiple layers of NAT with IPv4 means that can never happen, even if consumers want it to.

I was lucky enough to experience the 90's internet where static IP addresses were common. I had a /24 (legacy "class C" block) routed to my home, and still do.


> Exactly. IPv6 was developed in the ivory towers where it was still assumed that everyone wanted to be a full participant in the internet.

IPv6 was developed in the open on mailing lists that anyone could subscribe to:

    The criteria presented here were culled from several sources,
    including "IP Version 7" [1], "IESG Deliberations on Routing and
    Addressing" [2], "Towards the Future Internet Architecture" [3], the
    IPng Requirements BOF held at the Washington D.C. IETF Meeting in
    December of 1992, the IPng Working Group meeting at the Seattle IETF
    meeting in March 1994, the discussions held on the Big-Internet
    mailing list (big-internet-at-munnari.oz.au, send requests to join to
    big-internet-request-at-munnari.oz.au), discussions with the IPng Area
    Directors and Directorate, and the mailing lists devoted to the
    individual IPng efforts.
* https://datatracker.ietf.org/doc/html/rfc1726

Just like all current IETF discussions are in the open and free for all to participate in. If you don't like the direction things are going in, participate: as Gandhi did (not) say, “Be the change you want to see in the world.”

One of the co-authors on that RFC worked at BBN: you know, the folks that actually built the first routers (IMPs) that created the ARPA/Internet in the first place. I would hazard a guess they know something about network operations.

* https://www.goodreads.com/book/show/281818.Where_Wizards_Sta...

> But the social/political environment was that everyone just wants to be a passive consumer, paying monthly fees to centralized hosts to spoon-feed them content through an algorithm.

Disagree, especially with the hoops that users and developers have to jump through to deal with (CG-)NAT:

> [Residential customers] don't care about engineering, but they sure do create support tickets about broken P2P applications, such as Xbox/PS gaming applications, broken VoIP in gaming lobbies, failure of SIP client to punch through etc. All these problems don't exist on native routed (and static) IPv6.

* https://blog.ipspace.net/2025/03/response-end-to-end-connect...


Well, with such a description of the 'vices' of IPv6 vs the 'virtues' of IPv4, count me as one who considers himself in full support of the ivory-towered greybeards who decided the 'net was meant to be more than a C&C network for sheeple.

Once I got a /56 delegated by my IAP - which coincided with me digging down the last 60 metres of fibre conduit, after which our farm finally got a real network connection instead of the wires-on-poles best-effort ADSL connection we had before - I implemented IPv6 in nearly all services. Not all of them, no, because IPv6 can make life harder than it needs to be: internally some services still run IPv4 only and will probably remain that way, but everything which is meant to be reachable from outside can be reached over both IPv4 and IPv6.

I recently started adding SIP services, which might be the first instance of something I'll end up running IPv6-only, due to the problems caused by NATting the SIP control channels as well as the RTP media channels - reminiscent of how FTP could make life difficult for those on the other side of firewalls and NAT routers. With IPv6 I do not need NAT, so as long as the SIP clients support it I should be OK. Now that last bit, client support... yes, that might be a problem sometimes.

The problem of IPv6 adoption in the US was largely engineered by major ISPs not caring, while hardware manufacturers take their cues from those same ISPs.

Isn't CGNAT due to IPv6 use on mobiles? You could quip and say that's an IPv6 problem that didn't get solved in the IPv6 engineering.

IPv6 is used on mobile networks since there aren't enough IPv4 addresses. Some of these mobile networks are so big there aren't even enough private IPv4 addresses for their CG-NAT private side to fit, leaving NAT64/DNS64 as the only clean solution.
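
For the curious, the DNS64 half of that is just address synthesis: the IPv4 address gets embedded in the low 32 bits of a v6 prefix. A minimal sketch with Python's stdlib ipaddress module, using the RFC 6052 well-known prefix (the example address is from the documentation range):

    import ipaddress

    # Well-known NAT64 prefix from RFC 6052; DNS64 synthesizes AAAA records
    # by embedding the IPv4 address in the low 32 bits of this prefix.
    NAT64_PREFIX = ipaddress.IPv6Network("64:ff9b::/96")

    def synthesize(v4_addr: str) -> ipaddress.IPv6Address:
        """Map an IPv4 address into the NAT64 well-known prefix."""
        v4 = ipaddress.IPv4Address(v4_addr)
        return ipaddress.IPv6Address(int(NAT64_PREFIX.network_address) | int(v4))

    print(synthesize("192.0.2.1"))  # 64:ff9b::c000:201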

Why would CGNAT be deployed as a response to IPv6 on mobile? I don't understand the logic there. CGNAT is deployed due to a shortage of publicly routable IPv4 addresses. IPv6 was introduced due to having much larger publicly routable space.

Because the internet as a whole is IPv4. The mobiles are IPv6. The IPv4 internet does not care about any server running on any mobile device.

Thus, CGNAT was invented so that IPv6 could talk to IPv4 and get information from it.


No, CGNAT (Carrier-Grade NAT - https://en.wikipedia.org/wiki/Carrier-grade_NAT) is an IPv4-only thing. https://www.rfc-editor.org/rfc/rfc6598 specifies they should use 100.64.0.0/10 for it, to avoid conflicting with the pre-existing private-use ranges. IPv6 removes the need for CGNAT, as each home router is allocated a public IP (rather than a CGNAT IP) on its public link.
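
A quick illustration of that reserved range, sketched with Python's stdlib ipaddress module (the addresses here are just examples):

    import ipaddress

    # RFC 6598 reserves 100.64.0.0/10 ("shared address space") for CGNAT,
    # precisely so it can't collide with customers' RFC 1918 private ranges.
    CGNAT = ipaddress.ip_network("100.64.0.0/10")

    def is_cgnat(addr: str) -> bool:
        return ipaddress.ip_address(addr) in CGNAT

    print(is_cgnat("100.64.1.2"))   # True: carrier-side NAT pool
    print(is_cgnat("192.168.1.2"))  # False: ordinary RFC 1918 space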

No, NAT64 was invented so v6-only hosts could access v4-only resources. CGNAT was invented so v4 hosts can have a v4 address without having to purchase limited public address space.

Is pg partition tolerant in CAP?

This sounds perilously close to hazing

Can you expand on that?

Currently we do shadow shifts for a month or two first, but still eventually drop people into the deep end with whatever experience production gifts them in that time. That experience is almost certainly going to be a subset of the types of issues we see in a year, and the quantity isn’t predictable. Even if the shadowee drives the recovery, the shadow is still available for support & assurance. I don’t otherwise have a good solution for getting folks familiar with actually solving real-world problems with our systems, by themselves, under severe time pressure, and I was thinking controlled chaos could help bridge the gap.


It strikes me, reading a linked 2010 article, how they talked about AWS having worse networking, less reliable instances, and higher latency.

It's been 15 years. AWS still sucks compared to your own hardware on so many levels, and the total ROI is dropping.


I'm waiting for the good AI-powered software... any day now.

Ideally, LLMs should be able to translate from memory-inefficient languages to memory-efficient ones, and maybe even optimize the underlying algorithms' memory use while they're at it.

But I'm not going to hold my breath


.....

You don't see any value in knowing those numbers?


That's what I just said. There is zero value to me in knowing these numbers. I assume that all Python built-in methods are pretty much the same speed. I concentrate on IO being slow and minimize those operations. I think about CPU-intensive loops that process large data, and I try to use libraries like numpy, DuckDB, or other tools to do the processing. If I have a more complicated system, I profile its methods and optimize tight loops based on PROFILING. I don't care what the numbers in the article are, because I PROFILE, and I optimize the procedures that are the slowest, for example using Cython. Which part of what I am saying does not make sense?
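
For anyone who wants the concrete workflow, a minimal sketch with the stdlib's cProfile/pstats; hot_loop is a made-up placeholder for whatever your profiler actually fingers:

    import cProfile
    import pstats

    def hot_loop(data):  # stand-in for whatever your tight loop really is
        return sum(x * x for x in data)

    def main():
        data = list(range(500_000))
        for _ in range(20):
            hot_loop(data)

    # Profile first, then rank by cumulative time to see what's actually slow.
    cProfile.run("main()", "profile.out")
    pstats.Stats("profile.out").sort_stats("cumulative").print_stats(10)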

That makes perfect sense. Especially since those numbers can change with new python versions.

As others have pointed out, Python is better used in places where those numbers aren't relevant.

If they start becoming relevant, it's usually a sign that you're using the language in a domain where a duck-typed bytecode scripting-glue language is not well-suited.


We still need a lot of plug-and-play with home servers.

In theory, AI should be good at helping build interfaces between cloud backups and home server apps, because AI should be good at APIs.

In theory


I’d like a turnkey k3s and a 10” rack designed for consumers. Set up to host your Minecraft server, store your media, and be incrementally upgradeable.

So they buy preforms, or they mix it themselves and pour into forms?

It's precut autoclaved blocks.

Setting aside the address scarcity issue, how is IPv6 going to simplify the routing table? If anything, wouldn't it just mean an explosion in the number of addresses?

I mean, a million is objectively a large number on paper, but to me that's not a particularly large data set when we're talking about the entire freaking internet.

And how cheap of an SoC can handle that in memory? A better question might be whether you could even make a system on a chip that couldn't handle that much memory.
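
Back-of-the-envelope, with an assumed entry size (real FIB layouts differ):

    routes = 1_000_000
    entry_bytes = 16  # assumed: 4-byte prefix + length + next-hop + padding
    print(f"{routes * entry_bytes / 2**20:.0f} MiB")  # ~15 MiB in plain DRAM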


The small ISP that serves my home has six IPv4 prefixes and one IPv6 prefix.

The small hosting provider I use has, I think, 7 v4 prefixes, but it could be one v6 prefix (if they supported v6, which they sadly don't). Maybe not: a lot of their /22s are advertised as four /24s to allow a DDoS mitigation provider to attract traffic when needed; but it'd probably still be fewer prefixes with v6.

Not every ASN looks the same, but many of them would advertise a lot fewer prefixes if they could get contiguous addresses; it's just not possible/reasonable to get contiguous allocations for v4.

Since the routing table is organized around prefixes, if there is complete migration, the routing table will probably be smaller.


A single /32 IPv6 prefix is actually easier on the router (computationally and memory-wise) than a dozen /24 IPv4 prefixes.

What matters is the total number in the end. If IPv6 prefixes end up outnumbering IPv4 prefixes by a lot, then that will be a problem.

Since we don't have time machines, probably the best solution is to refuse prefix portability.


Huh? A single prefix is easier on the router than a dozen... I should hope so? Isn't this kind of like saying the grade 1 math test is easier than the grade 12 math test?

The thing is that the abundance of IPv6 addresses enables fewer prefixes to be used, by allowing addresses to be allocated in much larger chunks.

For instance, Comcast (AS 7922) owns about 2^26 IPv4 addresses, distributed across 149 different prefixes. Almost all of these prefixes are non-contiguous with each other, so they each require separate routing table entries. Comcast can't consolidate those routes without swapping IP address blocks with other networks, and it can't grow its address space without acquiring new small blocks. (Since no more large blocks are available, as this article discusses.)

In contrast, Comcast owns about 2^109 IPv6 addresses, which are covered by just 5 prefixes (two big ones of 2^108 each, and three smaller ones). It can freely subdivide its own networks within those prefixes, without ever running out of addresses, and without having to announce new routes.
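
The aggregation effect is easy to demonstrate. A toy sketch with Python's stdlib ipaddress module, using made-up documentation prefixes rather than Comcast's real ones:

    import ipaddress

    # Four contiguous /24s collapse into a single /22 routing entry...
    contiguous = [ipaddress.ip_network(f"198.51.{i}.0/24") for i in range(100, 104)]
    print(list(ipaddress.collapse_addresses(contiguous)))
    # -> [IPv4Network('198.51.100.0/22')]

    # ...but scattered blocks (the IPv4 reality) stay as separate entries.
    scattered = [ipaddress.ip_network(n) for n in
                 ("198.51.100.0/24", "203.0.113.0/24", "192.0.2.0/24")]
    print(list(ipaddress.collapse_addresses(scattered)))  # still 3 routes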


The theory might be that an organisation would end up advertising a single prefix, rather than whatever they have now (say, 40 networks with various prefixes).

It's not just any memory. When it comes to core infrastructure routers, those routes need to fit into specialized and expensive CAM (Content Addressable Memory) to do the lookups in hardware. And on every single one.

Right, but that's still not really answering his question. Sure, the constant factor is higher for router TCAM memory. Still: you can sum this post up as "in the late 1990s, tier-1 carriers filtered advertisements for all but the 'swamp' range down to /19s or smaller prefixes; now everything is the swamp". Why is that?

Because IPv4 address scarcity means small blocks get sold as they are available to people in completely different parts of the Internet. With IPv6 the address space is so large that they can easily keep the blocks in one piece.

No, obviously, I get that (we buy a lot of IPv4 space --- and I'm actually happier with the current regime than I was with the "supplicate to ARIN" regime). I'm just wondering what technologically happened to make universal /24 advertisements fine. I assume it's just that routers got better.

The transition to 7200 VXRs as core routers really hit a tipping point around 2000. They could handle millions of entries in their FIBs, which really relieved the pressure. Subsequent devices had to match that.

On the IPv6 side; by 2002, nobody was really experimenting with A6 records any more, and EUI64 was needless. Both were parts of IPv6 designed to facilitate "easy" renumbering, so that single prefixes could be replaced with larger ones. But the ISPs weren't complaining any more about table size.


It's interesting to consider that the IPv4 address space is only 32 bits wide. Back in the early 2000s asking for 4GB of RAM was unthinkable, but today (well, last year) that's not even a big ask. If your routing decision can fit in a single byte (which interface to use next), you could easily load the entire thing as a 4GB table. 8GB if you need two bytes for the next hop. Multicast might be a problem, but since multicast doesn't work on the backbone anyway I think we can ignore it.
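
Here's a runnable sketch of that idea, scaled down to a toy /16 "internet" so it doesn't actually allocate 4 GB (set BITS to 32 for the real thing):

    import ipaddress

    BITS = 16                     # toy address space; 32 would be the real 4 GiB table
    table = bytearray(2 ** BITS)  # one byte per address: the next-hop interface

    def install(prefix: str, interface: int) -> None:
        # Fill every table slot covered by the prefix with its interface.
        net = ipaddress.ip_network(prefix)
        base = int(net.network_address) >> (32 - BITS)
        count = max(1, net.num_addresses >> (32 - BITS))
        table[base:base + count] = bytes([interface]) * count

    def lookup(addr: str) -> int:
        # A single array index: no trie walk, no TCAM needed.
        return table[int(ipaddress.ip_address(addr)) >> (32 - BITS)]

    # Install least-specific first; longer prefixes simply overwrite their
    # slice of the table, which gives longest-prefix-match semantics.
    install("10.0.0.0/8", 1)
    install("10.20.0.0/16", 2)
    print(lookup("10.20.1.1"))  # -> 2
    print(lookup("10.99.1.1"))  # -> 1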

> I'm just wondering what technologically happened to make universal /24 advertisements fine. I assume it's just that routers got better.

Routers had to get better (more TCAM capacity) because there wasn't much choice. Nobody wants to run two border routers, each with the table for half the /8s, or something terrible like that. And you really can't aggregate /24 announcements when consecutive addresses are unrelated.


The issue is: in the default-free zone, every peer which gives you a full table gives you 1 million routes. Core infrastructure is not getting refreshed every 5 years, or so I have heard...
