Intel desktop CPUs are monolithic. A 10900K is a single die, whereas an AMD 3900X is three dies: two compute dies and one I/O die (which are sourced from two manufacturers on two different processes, afaik). An AMD server CPU has the same compute dies, just more of them, and a very different I/O die.

The AMD compute dies are only connected to the I/O die and other compute dies (and power). All I/O connections go exclusively through the I/O die, so the I/O die can be customized to change the I/O of the CPU without changing anything about the compute dies. It would be entirely feasible to just re-spin the I/O die to add support for different memory, Thunderbolt or other I/O ports. The I/O die is also made on a cheaper, lower-density, lower-performance process (12 nm / 14 nm) than the compute dies (7 nm).



So AMD is back to having a northbridge, except it's on the same package instead of the motherboard, for latency reasons? Or could we actually get away with a northbridge on the motherboard again?


Sort of. It lets them build lots of different sorts of systems from the same basic components: many CPU chiplets and a memory controller, one CPU chiplet and a memory controller, and so on. Intel have to spin a new chip for each SKU; AMD can build different SKUs just by packaging the pieces differently. That gives them a lot more flexibility and means they can spin out new products to fit a new market segment much more quickly.


I know that motivation but I'm more curious about the hardware architecture angle. Integrating the memory controller in the CPU was supposedly a big gain at the time. Now it's in a different chip and multi-socket motherboards already have to traverse the board to access RAM attached to another chip. Are the interconnects better now and so going back to a single northbridge is workable? Would it simplify the topology in multi-socket systems to have all the RAM together instead of having to take care with process affinity to RAM? I'd love a source for discussion around these kinds of tradeoffs.
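
(To make the affinity point concrete, here is a minimal sketch, assuming Linux with libnuma installed and at least two NUMA nodes, of how a program keeps its threads and its memory on the same node so accesses stay on the local memory controller. The choice of node 0 and the buffer size are purely illustrative.)

    /* Minimal sketch, assuming Linux + libnuma (compile with -lnuma).
       Pin the current thread to node 0's CPUs and back its buffer with
       node 0's RAM, so accesses stay on the local memory controller. */
    #include <numa.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        if (numa_available() < 0) {
            fprintf(stderr, "no NUMA support on this system\n");
            return 1;
        }
        numa_run_on_node(0);                   /* run only on node 0's CPUs   */
        size_t len = 64 * 1024 * 1024;
        char *buf = numa_alloc_onnode(len, 0); /* allocate on node 0's memory */
        if (!buf) return 1;
        memset(buf, 0, len);                   /* touch pages: local accesses */
        numa_free(buf, len);
        return 0;
    }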


On-package interconnect latency and power are way lower than going off package, and AMD also doubled the L3 cache size to compensate for the increased memory latency. The issue with putting too much I/O in a single die is that the perimeter of the die/package needs to fit all the I/O traces on the substrate/motherboard, which means more layers and more cost. Everything is just a performance/cost/power trade-off. But I would say that off-package controllers serving multiple CPU chips are probably less viable now than they were before the current core-count increase. The synchronization traffic would require insanely wide buses to those controllers if you wanted lots of sockets, and then you would need a lot more pins for them. If instead you put those controllers on package, you get a lot more bandwidth at much lower power and latency, which is what AMD has done.
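
A back-of-envelope average-memory-access-time calculation shows how a bigger L3 can absorb slower DRAM; the numbers below are purely illustrative, not AMD's measured figures:

    \text{AMAT} = t_{L3} + m_{L3} \cdot t_{DRAM}
    40 + 0.20 \cdot 180 = 76 \text{ cycles (smaller L3, faster DRAM path)}
    40 + 0.15 \cdot 200 = 70 \text{ cycles (doubled L3, slower DRAM behind the I/O die)}

i.e. if the larger L3 cuts the miss rate enough, it more than pays for an extra ~20 cycles of DRAM latency.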



