People keep saying "to compete with Apple", which is, of course, nonsense. Apple isn't even in second or third place in laptop market share, last I checked.
So why build powerful laptops? Simple: people want powerful laptops. Remoting to a desktop isn't really a slam dunk experience, so having sufficient local firepower to do real work is a selling point. I do work on both a desktop and a laptop and it's nice being able to bring a portable workstation wherever I might need it, or just around the house.
This is a really good point. It's not easy to use both a laptop and a desktop at the same time. There are challenges around locality, latency, limited throughput, and unavailability that software can't easily paper over, so you need to be aware and smart about it, and you'll have to compromise on things.
I'd work from my workstation at all times if I could. TRAMP is alright, but it's not very fast and fundamentally can't make things transparent.
"Mobile CPU" has recently come to mean more than laptops. The Steam Deck validated the market for handheld gaming computers, and other OEMs have joined the fray. Even Microsoft intends to release an XBox-branded portable. I think there's an market opportunity for better-than-800p handheld gaming, and Strix Halo is perfectly positioned for it - I wouldn't bet against the handheld XBox running in this very processor.
I've never seen a good technical comparison showing what's new in "Unified Memory" versus the traditional APU/iGPU memory subsystems laptops have had for over a decade - only comparisons to dGPU setups, which are rarer in laptops. The biggest differences between Apple Silicon or Strix Halo and their predecessors seem to be about the overall performance scale, particularly of the iGPU, rather than the way memory is shared. Articles and blog posts most commonly reference:
- The CPU/GPU memory are shared (does not have to be dedicated to be used by either).
- You don't need to copy data in memory to move it between the CPU/GPU.
- It still uses separate caches for the CPU & GPU, but the two are able to talk to each other directly on the same die instead of over an external bus.
But these have long been true of traditional APUs/iGPUs; they aren't new changes. I even saw some claims that Apple put the memory on-die and that's what makes it unified, but on checking, it seems to actually be "on package", which isn't unique either - and it wouldn't explain any differences in access patterns anyway. I've been particularly confused as to why Strix Halo would now qualify as having Unified Memory when nothing seems to be different from before, save the performance.
If anyone has a deeper understanding of what's new in the Unified Memory approach it'd be appreciated!
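To make the zero-copy claim from that list concrete, here's a rough sketch - plain Python, no GPU involved, and the "discrete"/"unified" function names are made up purely for illustration. It contrasts handing a device a copy of a buffer (after which the two sides can diverge) with handing it a view of the same allocation (where CPU writes are immediately visible):

```python
# Conceptual sketch only: a bytes copy stands in for a DMA transfer
# to dedicated VRAM; a memoryview stands in for passing a pointer
# into shared memory. Names are hypothetical, not a real GPU API.

def discrete_style(data: bytearray) -> bytes:
    # Discrete-GPU style: the "device" gets its own snapshot copy.
    device_copy = bytes(data)      # simulated copy over the bus
    data[0] = 0xFF                 # CPU writes after the transfer...
    return device_copy             # ...the device never sees it

def unified_style(data: bytearray) -> memoryview:
    # Unified-memory style: the "device" sees the same allocation.
    device_view = memoryview(data) # simulated pointer passing
    data[0] = 0xFF                 # CPU write lands in shared memory
    return device_view             # the device observes the new value

print(discrete_style(bytearray([1, 2, 3]))[0])  # 1: copy predates the write
print(unified_style(bytearray([1, 2, 3]))[0])   # 255: same memory, write visible
```

The practical upshot is the second bullet above: with shared memory you hand over an address instead of scheduling a transfer, so there's no copy cost and no stale-snapshot problem.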
You're obviously more familiar with the topic than I am, so I'll trust your insight into the age of the term and concept, but none of these describe common chipsets that a laptop consumer would expect - unless we're talking about a very specific user doing heavy GPGPU computation (on a laptop?? I can't imagine the market for this is very large). Most users of Apple's M* laptops don't know or care that they have a laptop that shares memory in a different way than Intel laptops generally allowed for.
I believe, but don't know for sure, that classic iGPUs still behaved like discrete PCI devices under the hood and accessed RAM using DMA over PCI(e), which is slower than the RAM is capable of and also adds some overhead. Modern unified memory approaches, by contrast, have a direct high-bandwidth connection between RAM and the GPU, which may be shared with the CPU, bypassing PCI entirely and accessing the memory at its full speed.
My understanding is this was true for the original Trinity-era APUs in 2012, but by 2014 Kaveri APUs had already put the CPU and GPU on the same bus, so going over PCIe was no longer necessary on systems with unified system memory: https://en.wikipedia.org/wiki/Heterogeneous_System_Architec...
Reading about HSA again (it's been many years), it seems ARM was one of the parties originally involved in defining it back then, and I wonder if Apple actually did anything different on top of this at all, or just branded/marketed the use of what was already there.
It seems there are 3 key things which distinguish the modern "unified memory architecture" from its predecessors:
1. Pointer passing instead of buffer copying
2. Separate bus for memory access alongside the PCIe bus
3. No partitioning of RAM into exclusive CPU vs. GPU areas
These features seem to have come at different times, and it's the combination of all three that defines the modern approach. Broadly speaking, whereas classic iGPUs were still functionally "peripheral devices", modern UMA/HSA iGPUs are coequal partners with their CPUs.
AMD seems to have delivered the first hardware meeting these criteria about 8-9 years ago, beating Apple to the punch by a couple years. However, AMD's memory bandwidth can be quite a bit behind Apple's. The M1 Pro handily beats anything AMD had out at the time (200 GB/s vs. 120 GB/s), and the M1 Max has double that bandwidth.
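Those bandwidth figures fall out of simple arithmetic: peak DRAM bandwidth is roughly bus width (in bytes) times transfer rate. A quick back-of-the-envelope check, using the commonly reported memory configurations for these chips (treat the bus widths and data rates as assumptions worth double-checking against the official specs):

```python
def peak_bandwidth_gbs(bus_width_bits: int, mtps: int) -> float:
    # bytes moved per transfer across the full bus, times
    # mega-transfers per second, expressed in GB/s
    return bus_width_bits / 8 * mtps / 1000

# Commonly cited configurations (assumed, not official statements):
print(peak_bandwidth_gbs(256, 6400))  # M1 Pro:  256-bit LPDDR5-6400  -> 204.8
print(peak_bandwidth_gbs(512, 6400))  # M1 Max:  512-bit LPDDR5-6400  -> 409.6
print(peak_bandwidth_gbs(256, 8000))  # Strix Halo: 256-bit LPDDR5X-8000 -> 256.0
```

This is why the M1 Max "doubles" the M1 Pro: same LPDDR5 speed, twice the bus width. It also shows Strix Halo closing most of the gap by pairing a 256-bit bus with faster LPDDR5X.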
The term doesn't just mean "has an iGPU", right? I'd guess the figure is higher than 90% if that's how you're defining it - most desktop CPUs come with iGPUs now, and certainly almost all laptop chips do. Otherwise I'm not sure how you're defining it to find that figure!
There are other aspects IMO, like the iGPU having to support memory translation and cache coherence - but AMD and Intel have had those features for years too (even if the drivers don't use them).
128GB is actually a step down. The previous generation (of sorts), Strix Point, had a maximum memory capacity of 256GB.
The mini-PC market (which runs almost entirely on laptop chips) seems pretty robust, especially in Asia/China. Mini-PCs have basically torn the bottom out of the traditional small-form-factor market.
IMO, the likely cause is AMD capitalizing on multiple OEMs' Steam Deck envy and/or laying the foundation for a Steam Deck 2 with near-desktop graphics fidelity, rather than the 800p medium/low settings users have to put up with today.
Way too power hungry for a handheld gaming machine, and way too small a market. No way AMD spent all this effort designing a brand-new SoC architecture just for the Steam Deck.