M2 MacBook Air scores higher on Windows 11 GeekBench than pricier Dell laptop (cultofmac.com)
171 points by eldaisfish on Aug 23, 2022 | 222 comments



I switched from an XPS 13 to an M2 Air recently. The differences are striking.

I have my XPS 13 browsing HN next to my M2 Air. That's all the XPS is doing: Firefox browsing HN. My M2 has multiple Docker containers, a collection of office apps open, browsers, VS Code... and so on.

My XPS battery will barely last until 1pm if it isn't plugged in, and the fan kicks on for no apparent reason... It will last a little longer if I manually set it to "best battery life", but even that can be surprisingly ineffective sometimes.

My M2 Air will last nearly two full working days (I haven't pushed it that far but so far it appears it would make it), the battery life is crazy. I go into the office and I don't even think about plugging it in, I know it will do fine.

I don't know what it is about macOS and using their M chips but the efficiency is amazing to me.

I even had a Docker container freak out on my Air recently. It pegged a CPU core to 100% for a good 45 minutes. Temp skyrocketed. But I was on another virtual desktop and I didn't notice a thing. The OS ran smooth; VS Code, Node and everything else kept running smooth. It wasn't until I looked at the status bar that I saw the CPU load and temp had spiked. I found the container responsible, killed it, and the temp dropped quickly. Even under non-ideal conditions the Air performed well.

With my XPS, the moment I put it down I think about where I can plug it in. If I pick up my XPS to quickly order some tacos and it wasn't plugged in from the start, I feel like I'm in a race to get the order in: "Come on, hold on, I need tacos!!!"

With my Air I don't even think about plugging it in, that's a completely new experience for a laptop for me.

I don't really understand the benchmarks and such that I've seen, but the everyday user experience has been like night and day.


The CPU/SoC's efficiency is likely the core driver behind this difference, but I believe that Windows exacerbates the issue.

I recently installed Fedora 36 on my Tiger Lake (i5-1130G7) ThinkPad X1 Nano, and while the machine has never had particularly good battery life, it's more "calm" under Linux on average. Even while I was setting things up with various tasks running, its fan didn't kick on and it didn't get warm like it typically would under both the Windows 10 installation it shipped with and the current upgraded Windows 11 install.

Additionally, while I haven't yet actually tested it, GNOME under Linux estimates that it'd get somewhere in the ballpark of 8-8.5 hours out of a full charge when running in "battery saver" mode, which is a good hour or two longer than Windows' estimate in its low power mode.

Of course the M-series devices I've used destroy the ThinkPad in terms of battery life, regardless of the OS it's booted into though.


> Windows' estimate in its low power mode.

Am I the only one not bothering with Windows' estimates at all? It will happily tell me I have two hours remaining and shut down after 40 minutes, without me touching the PC at all. I've had this or a similar experience with every Windows laptop I've ever used.


I certainly don't trust them. 30% battery life may as well be near death on my windows laptops.

As for hours of use in marketing material I wonder what they'll claim when they actually get close to their current estimates? Just lie about it more? How will we know if they're getting close to the truth?


My laptop does a hard shutdown if the battery goes below 95%.

It's gotten to the point that I've had to configure it to hibernate at 98%.

This is not very nice on the laptop vendor's part. This laptop is less than 2 years old.


>If you’ve been following any of my articles about the performance of the cores in M1 series chips, you’ll have come across the term Quality of Service (QoS), which can have major impact on how fast code runs on processors under macOS.

>In addition to each process being given a priority, a number that can be changed using the command tool renice, some years ago Apple introduced another setting, the QoS. This is set for each process, and can be one of four discrete values from 9 (the lowest, for background tasks) to 33 (the highest, for tasks involving user interaction).

https://eclecticlight.co/2022/01/07/how-macos-controls-perfo...
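
For anyone curious what that looks like in practice, here's a minimal Swift sketch (mine, not from the article) using GCD's QoS classes; .background and .userInteractive correspond to the numeric tiers 9 and 33 mentioned above, and on Apple Silicon they strongly influence whether work lands on the efficiency or performance cores:

    import Dispatch

    // Minimal sketch: queue-level QoS hints under macOS.
    // Work at .background QoS is a hint to schedule it on the E-cores;
    // .userInteractive work is prioritized for the P-cores.
    let housekeeping = DispatchQueue(label: "housekeeping", qos: .background)
    let uiWork = DispatchQueue(label: "ui-critical", qos: .userInteractive)

    housekeeping.async {
        // long-running, latency-tolerant task (indexing, backups, ...)
    }

    uiWork.async {
        // latency-sensitive work tied to user interaction
    }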


> It pegged a CPU core to 100% for a good 45 minutes. Temp skyrocketed. But I was on another virtual desktop and I didn't notice a thing.

I reckon this is more of a heterogeneous architecture thing than a MacOS/Apple one. When you run a processor-intensive task, Grand Central Dispatch will assign it a relative priority and then, on new M1 chips, delegate the process to a core cluster. While one heavy program runs, another can operate alongside it on the efficiency cores or even another P-core cluster.

I've noticed this behavior as well on Alder Lake. Several times I've booted up Bitwig with something like Elden Ring running in the background and had no idea it was open while I worked on music.


Yeah, it could absolutely be something that happens elsewhere outside Apple land. I tell that story more as an indirect comment on some of the internet drama over "throttling" / general user experience rather than a story about what macOS does. I really don't know what was responsible for everything still running smooth when that incident happened. For all I know, Docker on Apple Silicon might be doing something there too to keep things from getting entirely out of hand.


Docker does a lot of weird stuff on macOS; it's totally outside my wheelhouse. I am glad that Intel/Apple are pivoting to this new architecture though; it kinda reminds me of the jump to quad-core systems back in the day. With AMD's switch to heterogeneous designs coming a few years down their roadmap, I'm really curious how they're going to compete. Early Zen 4 benchmarks are wicked scary so far, so I'm pretty excited to see what they come up with.

All in all though, this next generation of CPUs is going to be awesome. Everyone is finally doing their own thing, and actually getting somewhere:

- Intel is working on process enhancement and bumping nodes to make fast, heterogeneous x86 CPUs

- Apple is focusing on density above all else, optimizing for a more efficient, streamlined heterogeneous ARM package

- AMD is doing things "the old way", with a tick-tock schedule for process bumps and IPC optimizations to their homogeneous x86 CPU

- Bonus: there's also a bunch of Chinese manufacturers fighting for dominance over the homogeneous ARM CPU market (mostly for use in servers).


> Bonus: there's also a bunch of Chinese manufacturers fighting for dominance over the homogeneous ARM CPU market (mostly for use in servers).

Please tell me more, or tell me where I can learn more.


Snow Leopard (which introduced GCD) looks even better in hindsight.


I have a similar experience coming to the M1 MBA from an XPS 13. I had some buggy code-change monitor/recompiler running that pegged a CPU core at 100%. The palm rest would get hot and the battery would go to like 10% after a working day. Those were the only ways I knew something was up.


What time do you start browsing with the XPS? 9 am, making it 4 hours or so of life? That's pretty pathetic, and it almost sounds like you have some background processes going on that are sinking your battery life and kicking on the fans. I put a new battery in my 2012 MBP and I get about 6 hours browsing a very light website like HN (it sinks like a stone on JS-heavy websites, of course). No fan ramp-up either, although I pin them to a minimum of 2000rpm on this computer. The CPU stays at 50-60°C according to my fan controller.


7 or so.

So longer than 4, but for just browsing IMO it should last much longer.


I've a MacBook Pro, 2019. I find Office 365 uses 100% of one CPU so regularly (at least one of Outlook, Excel or PowerPoint will be acting up) that I get 1.5 hours of battery. How does Office behave so well for you? The CPU hardly makes a difference there; if the program is CPU hungry, battery will be used.


Do you have Defender installed? That’s the culprit for me on my work Mac. Keep ‘top -o cpu’ running and look for wdavdaemon blowing up, or just set a cron job to kill it every hour.


Thanks, will take a look out for that.


That's one thing I haven't installed yet. Just to get up and running quickly while I was getting set up, I installed the office apps as PWAs rather than going through the application install process and ... kinda left them that way.

But doing that they haven't caused any issues. Granted Teams, while on a video conference, will eat battery, but it handles a few meetings a day just fine overall.

I should install those apps one day ...


This is not a surprise for anyone that has been using Windows under Parallels on an M1 Mac. What's important is for the real experience to match the benchmark, and it seems to fly for core office/productivity tasks.

I set up a VM for work stuff that includes a bunch of admin policies and work-installed junk a few weeks ago to work while traveling. It didn't even break a sweat, and Teams felt as responsive as my recent i7 desktop. At the same time I was able to run all my usual dev stuff on the Mac. It felt like it would handle classic Visual Studio as well, but I didn't want to bother with the install.


Classic Visual Studio is handled well under Parallels Desktop, though the M1 CPU cannot do magic and it's still relatively slow to start.


Ehh... the answer is it kind of works well, but it really depends on what you're doing since you're running an x64 application (Visual Studio) on an ARM version of Windows running on a Mac M1 in parallels.

.NET development and in particular stepping through debugging is particularly slow, though I guess we should be happy it works at all.


Didn't Microsoft just finally announce an ARM native version of Visual Studio?


And an ARM64 version of .NET Framework (4.8.1). .NET Framework is considered legacy and 4.8 was supposed to be the final version. Not sure whether it means they foresee faster Windows-on-ARM adoption, or a slower .NET Framework to .NET Core transition. Or both.


Do you mean the MacOS Visual Studio or the ARM windows version? I’ve been using the MacOS apple silicon build for several months now.


Grandparent was referring to Visual Studio on Windows ARM.


> though the M1 CPU cannot do magic

TBF, I think the M1 CPUs crossed the magic barrier already


Windows running in VM on M2 is faster than native Windows on x86.

Such a shame for Intel, AMD and Qualcomm.


How is this a shame for AMD or Qualcomm, who aren't part of this comparison at all?

> Windows running in VM on M2 is faster than native Windows on x86.

Which x86? 12th gen Intel? 11th gen Intel? Ryzen 6000? These are all wildly different processors. This comment is absurd. It's also "Windows ARM version running in a VM on M2", not x86 windows which is the only thing you'd care about running in VM on MacOS if you were actually going to do this for some reason.


It's not a coherent comment - the part about "shame" is meant to provoke an emotional response and bypass the logical reasoning systems of the human mind.


I would say it speaks volumes of the crew at Parallels and how polished the software is.


I mean it's both. Wouldn't be possible without absolutely insane hardware.

But also wouldn't be possible without good work from Parallels.


Meanwhile VMware feels there is no need to support Apple Silicon hosts, claiming there is no demand lol


VMware Fusion has a tech preview for Apple Silicon at https://customerconnect.vmware.com/downloads/get-download?do...

Background at: https://blogs.vmware.com/teamfusion/2022/07/just-released-vm...

(Disclaimer: I work at VMware - but not on Fusion)


It's been a tech preview for a long time. I used to have a Fusion license before Apple Silicon but now I have Parallels because it works.


I appreciate it. Now I'm baffled why our support ticket got such an answer; it was recently-ish.


Parallels' only product is Parallels; it's the entire company.

VMWare Fusion is an afterthought of a multi-billion dollar company.


> Parallels' only product is Parallels; it's the entire company.

Not true these days. "In December 2018, Parallels became part of the Corel Corporation and joins an impressive collection of industry-leading brands, including CorelDRAW®, WinZip® and MindManager®. Parallels has offices in North America, Europe, Australia and Asia."


Wow talk about a blast from the past! Didn't know Corel still existed.


> Such a shame for Intel, AMD and Qualcomm.

If the article is true (and that's a big if, given Max Tech's history of tech reporting), instead of blaming the above companies I'd just congratulate Apple and thank them for raising the bar. AMD's recent processors are still great on the desktop, and an absolute blast on the server, with an extremely attractive value-for-money ratio.


> AMD's recent processors are still great on the desktop, and an absolute blast on the server

Yes, they are truly great, but a point to consider is that they are just competing against Intel. Don't get me wrong, I love what AMD has done recently with Ryzen, but we need ARM-based servers with perf per watt competing with the M1/M2.


> we need ARM-based servers with perf per watt competing with the M1/M2

We have them? Graviton instances are priced pretty competitively compared to similar x86 EC2 instances. A lot of people don't use Graviton though (myself included) for a number of reasons. A lot of software hasn't been ported over to aarch64 yet, and even when it is it can lack the optimization that x86 enjoys (especially with programs leveraging AVX et. al). Furthermore, the CPUs that Amazon use seem to be bottlenecked by low IO bandwidth, which makes for some janky benchmarks when comparing simple database operations.

I think we'll eventually see a return to RISC architectures, but ARM's value proposition hasn't made a lot of sense on the server. It's still encumbered with proprietary licenses that make it extremely hard for CPU manufacturers to compete in the server market. It still hasn't figured out all of its hardware-acceleration quirks, and you don't get any Rosetta 2 fairy dust on Linux (not that you'd want to use it in prod anyways). All in all, x86 is still the set-and-forget king, and probably more stable than ARM alternatives. I'm hopeful that RISC-V will finally put x86 in its grave, but that's going to be a couple years out...


The ARM Graviton instances in AWS are at best 5% cheaper per equivalent process when writing to a DB.

In my own workloads, which are not bursty, but first sustained CPU, and then sustained I/O at about 5MB/s.

So, it is cheaper for Amazon, as they use half the power, but not cheaper for end users.


I was excited when Hetzner announced their RX line, only to discover the entry server is a few times more expensive than any other entry-level server in their offering. But I'm tempted to try it out anyway.


AMD is also stuck on TSMC's 7nm node while Apple hogs all their 5nm capacity for the M1/2 and A15.

Intel has no real excuse since they use their own fabs and spun their wheels on 14nm for half a decade.


You cannot compare TSMC's "7nm" with Intel's "14nm". These numbers are just marketing...


Can't we go with MT? (millions of transistors per square millimeter)


Power and speed isn't just about transistor density. The construction of the transistors also has an impact. See FinFET and GAAFET.


The Ryzen 7000 series will be on 5nm, right? So not too stuck, while that of course isn't shipping yet.


Their excuse is that they were ambitious with 10nm and it didn't work for them


I wouldn't go too far..

I was shopping for a laptop a month ago and considering the M2 MacBook. The CPU performance is definitely impressive, but I ended up getting a Razer Blade 14 because it absolutely decimates the MacBook in GPU performance and gaming related stuff, unless you go top of the line MacBook Pro which would have been about $1200 more and couldn't run most games. If you don't need a great GPU I would definitely go with a MacBook, but if you need one you're just not going to beat an RTX 3080 and an AMD Ryzen.


Razer Blades look good on paper but be prepared for bloated batteries. Between the 5 Blades my friend and I own, all 5 had bloated batteries within a year, some catastrophically. I'm personally on battery number 4, in just 3 years of use.


I had that happen (twice) with my macbook unfortunately, and with an iPad. I think it's just an unfortunate thing that happens with lithium.


My solution is a bit different: I have a Ryzen desktop with an NVIDIA GPU for when I need to game, as I don't game on the go, and honestly, gaming on the Mac doesn't exist.

But I absolutely work on the go, and need the battery life I get from the Mac so I’m not constantly looking for a place to charge.


I use a similar setup, I need CUDA for work. But machine learning on a fast CUDA-capable GPU on a laptop is brutal battery and heat-wise. So, I just SSH into a Ryzen tower with a fast GPU for machine learning. (The MacBook with its AMX matrix multiplication units and Metal Performance Shaders is fast enough for short test runs.)

You can also put a much faster CPU and GPU in a workstation than a laptop, if you have enough headroom for 105W TDP CPU and 350W TDP GPU.


And for how long can you game on it unplugged? I feel like gaming on a laptop is a niche, and is never gonna be too practical.


For me it's actually about game development, so I'm running things like Unity and baking light maps and things like that, which I absolutely do on the go in coffee shops or whatnot. The battery life is actually pretty decent. Nowhere close to a macbook but I don't have to be plugged in all the time either.

I'm not sure I'd call it niche. No I'm not playing counterstrike in a coffee shop, but it can be fun to bring it over to a friends house. I haven't bought a desktop since 2014, I like having the freedom to work from anywhere and not have to move files between computers all the time. When I have it at home I have it hooked into 3 monitors and a full size keyboard, and it runs pretty quiet, so it's not really any different from having a desktop in that regard.

YMMV though. Most of what I do with a computer even besides games needs a very powerful GPU. If you're working with docker and webdev I'd just get a macbook. My only point is that apple doesn't have a pure monopoly on performance, as impressive as their new silicon is there's some things it's still not so good at.


If you need a laptop w/ a strong GPU for desktop applications the Apple product line isn't your best choice.

For mobile computing + gaming I replaced my discrete-GPU MacBook Pro with an M2 Air + Steam Deck and I'm enjoying everything a lot more.


I'm in a similar boat, the other thing the gaming laptop lets you do is demo your gamedev work at meetups and similar.


Just depends what you care about. All the games I care about, from Baldur’s Gate to Crusader Kings to emulated games, are best enjoyed in my bed or couch on my laptop. My desktop feels more niche for my preferences since only a few games I care about require sitting at a desk in “FPS position”. I had more fun building the thing than using it.


A ton of folks use laptops for gaming and move them from place to place, plugging in at each.


I am about to run GeekBench on my main laptop.

https://valid.x86.fr/u8b004

I believe it will OBLITERATE the M2.


No need: https://browser.geekbench.com/v5/cpu/search?utf8=&q=AMD+Ryze...

Also doesn't really obliterate the M2:

https://browser.geekbench.com/macs/macbook-air-2022

Seems like multi-core scores are all over the map (probably depending on the cooling of the laptop), but not really impressive when comparing to a passively cooled CPU. Single-core scores are meh compared to the M2.


Yeah, not what I expected.

Here is my result:

https://browser.geekbench.com/v5/cpu/16820778

Given that I didn't stop anything and was playing Dota 2 while running it ...

Still, I believe at some point, once newer iterations of X86-64 drop support for legacy instructions (they still have to support code written for the 8086 through the 486) - we will have smaller, more efficient X86-64 chips.

Actually, it is darn impressive that the latest Intel/AMD chips keep up with the brand-new M1/M2 chips, which have zero legacy instructions, while the Intel/AMD chips carry 50 years of legacy :)


Noticed that I ran the previous GeekBench on the balanced power plan.

Here are the results on Performance Power Plan - https://browser.geekbench.com/v5/cpu/16821908

Summarized results - https://i.imgur.com/X5vKtFn.png

Overall, with the new results, the AMD Ryzen 9 5900HX is 18% slower in single-core and 4% faster in multi-core.

Not that bad ...


Not that bad? Those are absolutely terrible.

4% is probably close to noise level, so essentially you are a couple of percentage points faster on multicore with TWICE the total power envelope. Think about that.

And almost 1/5th slower in single core.

Also, you went from "It will OBLITERATE M2" to moving the goal post "not that bad" pretty quickly. I love it.


True.

But still, pointless to me - I don't care about power efficiency, battery life or any synthetic benchmarks.

I have two concerns with a laptop:

1. Can it run the latest Visual Studio at blazing fast speed?

2. Can it run Dota 2 on Full HD @ >144 fps?

So far ... the M2 doesn't fully satisfy either point 1 or point 2 :)


That's not such a good result when you're talking about a chip with a 54w TDP (even more peak) vs a ~20w peak TDP for the M2.

4% faster for 2.5x more power isn't a win. The fact that 95% of users care more about single-threaded performance makes that 18% deficit even worse.


The TDP of the 5900HX is 45W.

Don't forget we are comparing last-gen AMD vs current gen M2.

M2 on Mac scores 1884/8717, but on Parallels -> 1681/7260.

So, the penalty is 11% on single, 17% on multi from running Windows via Parallels.

The Apple M1 average single-core is 1706, average multi-core is 7421 on macOS.

If we apply the above penalty, 1706/7421 becomes -> 1522/6180.

The AMD 5900HX average single-core is 1413, average multi-core is 7656.

So, AMD - 1413/7656 | Apple M1 - 1522/6180 -> AMD is 8% slower in single, 19% faster in multi.
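
If you want to check the arithmetic, here's a quick Swift sketch reproducing the estimate above (the inputs are just the averages quoted in this thread, not authoritative figures):

    // Observed M2 scores (single / multi): native macOS vs Windows ARM under Parallels
    let m2Native    = (single: 1884.0, multi: 8717.0)
    let m2Parallels = (single: 1681.0, multi: 7260.0)

    // Virtualization penalty measured on the M2 (~0.89 single, ~0.83 multi)
    let penalty = (single: m2Parallels.single / m2Native.single,
                   multi:  m2Parallels.multi  / m2Native.multi)

    // Apply the same penalty to the M1's native averages
    let m1Native = (single: 1706.0, multi: 7421.0)
    let m1UnderParallels = (single: m1Native.single * penalty.single,  // ~1522
                            multi:  m1Native.multi  * penalty.multi)   // ~6180

    // Compare against the 5900HX averages (1413 / 7656)
    print(m1UnderParallels)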


> The TDP of the 5900HX is 45W.

The TDP according to AMD's website is "45+W" which they clarify to be a cTDP up to 54w TDP.

https://www.amd.com/en/products/apu/amd-ryzen-9-5900hx


> once newer iterations of X86-64 drop support for legacy instructions (...) - we will have smaller, more efficient X86-64 chips.

Don't get too excited about it. The old compatibility is going to be tiny in the die space compared to even basic Intel extensions. And it's not just the old code - new code may well contain a "mov al..." so you can't just drop it. All of those instructions will stay with us for decades.


Non-native Windows (double-checked the video to make sure it wasn’t Windows for Arm) at that. Quite impressive.

(Apparently I’m wrong, leaving my original gaffe in place.)


It is Windows for ARM.


Are you sure it is not Windows for ARM?


It was an additional eye-opener after I tried, unsuccessfully, to like the Raspberry Pi. "Powerful 64-bit CPU"? After the M1 I kept expecting it to be fast and to run non-ARM software :D

Once you try M1/M2 Macs it's very hard to go back to pretty much any hardware.


Raspberry Pi has a distinctively different target audience. If you expect an RPi to run non-ARM code, then I'd say it's not made for you.

I agree that M1/M2 Macs are good, but I implore everyone to not get sucked into and stuck into the Apple ecosystem. Nothing good can come out of tech dominance.


> I agree that M1/M2 Macs are good, but I implore everyone to not get sucked into and stuck into the Apple ecosystem.

So much this. Every time I hear something good about the M1/M2 it's interesting, but it's like "aah, such a shame I can never use it without losing my sanity due to all of the strings it has attached".


I bought an M1 Mac Mini on sale earlier this year when Microcenter had them under $600, now it just runs Asahi Linux. Eventually I'm going to move all of my self-hosted server stuff over to it, minus Jellyfin and whatever else needs hardware transcoding capability.


I'm happy for you, and that would also be pragmatic for me being a Linux user, but my annoying idealistic conscience refuses to fund Apple's activities.


They're saying it was pretty much pointless to buy the Mac mini as they don't use it with Mac OS and the Apple ecosystem


Hmm, I was reading it as the opposite (that they found it useful specifically because they were able to avoid macOS and the Apple ecosystem).

Context: There are some Linux users who want to use the hardware (so they can enjoy the M1/M2 or whatever), so there are efforts to get it running as a Linux machine (Asahi Linux), which would certainly keep me far more sane - but I still abhor giving money to Apple for the hardware (even 2nd hand, which supports resale value and thus retail value)... I also don't trust that they won't simply pull the rug on such efforts in the future by doing some kind of over-the-wire firmware update to brick those machines. Why do business with people who don't like you? It's just going to be a constant fight.


That is what I meant. The other person misread it. My point is that I got a far more efficient home server machine to run Linux on. The Mini as I have it sips power like my Raspberry Pis while having the computational grunt to be far more useful


That was not what I was saying at all. It's got a fantastic power to performance ratio without being locked into Apple's walled garden.


If you just run Linux / *BSD on the hardware, what are the strings you are referring to?

Every time an article comes up reminding me of how great the M1/M2 laptops are, I think about maybe getting a used one in a few years to run Debian on, but the soldered on SSD worries me (especially since early Apple software issues burned through erase cycles).


Running Linux or BSD would be the only possible way for me. But beyond this, Apple itself is the string for me: as I've mentioned in sibling responses, I do not wish to fund the company in any way. Second-hand unfortunately also funds them indirectly by supporting retail prices through resale value.


> Nothing good can come out of tech dominance.

The dance between hardware, software, and devices, and its consistency, is a measurable good. To me, this is the "magic sauce" of Apple that makes them successful.

But, to agree with your point, my general rule is: Apple hardware, with third party services (besides iCloud). And, a PC or console for gaming. :)


I am hopeful that Qualcomm's takeover of Nuvia will help ARM achieve its true potential for non-Apple users. Although they keep on disappointing: they claimed they'd have M1 performance in 2022, and now they have postponed to late 2023. Matching M1 performance in late 2023 won't be that great.


I feel like Qualcomm is bloated as a company and also overly corporate/restrictive. I understand they have IP to protect, but other companies have shown it can be done at scale with a much more open mindset. If Qualcomm ends up winning using some kind of proprietary tech, it wouldn't be a big win for everyone. Another Apple, in a way.


> Qualcomm ends up winning using some kind of proprietary tech, it wouldn't be a big win for everyone

Those processors will be able to run Windows and Linux. More efficiency for the non-Apple ecosystem. I consider that a win for everyone.


Mx can run Linux, and can run Windows in a VM. It could run Windows bare metal if Microsoft wanted it to.

Actually, I'm not sure why Apple would be the "supporting tech dominance" choice here as opposed to Intel.


Because they're a more closed ecosystem that encompasses much more of the stack. Intel is a small part of the whole stack and they have a proper competitor. Smaller slice of the stack + more competition = less tech dominance.

The fact that M1/M2 can run something is almost irrelevant. It can because Apple allows you to run it and can stop that at any moment. It wouldn't be the first time they would decide something like that out of the blue.


> The fact that M1/M2 can run something is almost irrelevant. It can because Apple allows you to run it and can stop that at any moment.

The lead Asahi Linux developer has addressed this point.

>Okay, it's been over a year, and it's time to end the nonsense speculation.

>I have heard from several Apple employees that:

>Apple explicitly engineered 3rd party OS support in, and it is a hard policy requirement that it continue to work.

https://twitter.com/marcan42/status/1554395176025849856


M1/M2 Macs can boot unsigned code and don't have flashable firmware, so I'm not sure it's actually possible to update them in a way that stops them booting Linux.


For tech dominance not to occur other players need to get their heads out of their collective asses and offer something competitive.

Yes, I shouldn't expect a Raspberry Pi to run non-ARM software... but why exactly shouldn't I? Where are the trillion-upon-trillion-dollar players like Google, Samsung and others with their Rosetta-like translation layers? And Qualcomm with competitive CPUs?


This is frankly a ludicrous ask of a 35 dollar hobbyist single board education computer with an old CPU. I hate the dreaded car analogies too, but this is akin to buying a cheap Toyota saloon and expecting it to accelerate like a Ferrari.

For what it's worth, there are some Rosetta-like options on the RPi - you can run x86 containers in QEMU, for example. It's just, again, you've bought a 35 dollar computer - it's not going to be fast enough to be performant for most tasks when translating (in real time!) software written for a completely different CPU architecture.

> https://gist.github.com/Sitin/bfa5e770b80ab4b8740c88e648666c...


> This is frankly a ludicrous ask of a 35 dollar hobbyist single board education computer with an old CPU.

Linux is being developed mostly by trillion-dollar corporations. Intel. Google. Samsung. etc.

ARM chips are designed and produced by Samsung, Qualcomm etc.

You'd think they would:

- come up with a competitive chip, and

- Rosetta-like software

so that even a "$35 dollar hobbyist computer" with its, quote, "powerful 64-bit CPU" would be able to do this.

Alas.

And no, I'm not buying the whole "you shouldn't expect". Because, as it turns out, I can't expect this from any computer, be it hobbyist, educational, family, gamer, professional, or whatever adjectives you can put in front of it. Except Apple.


That $35 SBC is running a CPU that costs far less to produce than an M1. Putting 20 billion transistors on a single die reliably at 5nm is vastly more costly than putting a billion or so at 28nm (the BCM2711).

If you want Broadcom to release a CPU that's on par with the M1, half of your Pi's circuit board will be just the CPU die's BGA pins.

Intel's (and others') price gouging and market segmentation does not in any way imply that it's possible to manufacture an i7/i9/R7/R9/M1 at $20 a piece.


Everyone continues to willingly miss the point. Let me spell it out again.

--- start quote ---

Because, as it turns out, I can't expect this from any computer, be it hobbyist, educational, family, gamer, professional, or whatever adjectives you can put in front of it. Except Apple.

--- end quote ---


You know, the Pi is pretty amazing for what it is. The HDMI 1080p output alone was groundbreaking with the first Pi.

The other side of the puzzle is software. I'm pretty sure there is a lot of headroom in the RPi hardware, if only someone would rewrite and re-optimize the OS and software for it.

That's part of the magic of the Mac - they got to hire as many people to optimize the OS as they did to build the chip / computer itself.



You can also run x86 32bit binaries on the Pi with box86: https://box86.org/

Surprisingly performant and very impressive.


The Raspberry Pi has found great success as a console and coin-op arcade game emulator, and of course, an RPi is a Turing-complete device, meaning it can run all the things that any other Turing-complete device can run.

It's great at emulating arcades! That's totally non-ARM software.


The Apple ecosystem is the best consumer tech that exists right now. Things just work, and they all work together. Beautiful!


The Raspberry Pi is a $35 computer built on a 10 year old node.


They went with 28nm because it was the last planar (aka cheap) node around. GlobalFoundries recently launched their 22nm planar (22FDX), so it seems like there's now a cheap path forward for making a new chip with an updated CPU design (and hopefully an updated GPU too).



I can answer the question posed in that thread: Because the performance would be abysmal. The Raspberry Pi is a neat device for $35, but even running native ARM code it sometimes struggles to provide a good desktop user experience by 2022 standards.

The major use-case I can think of for building a Rosetta-like x86 translation layer for the Pi is to run Windows software. All the relevant Linux software can be, or has been, ported to ARM. Do you think the average user of Windows software is going to be happy with the performance of an x86 emulator running on a $35 ARM computer with barely enough memory to run a few Chrome tabs?


The Raspberry Pi is so cheap because they intentionally use really old phone chips they can get incredibly cheap.


How do they repurpose them?


Are they seriously reporting Max Tech as news now?

Those guys have absolutely no clue what they're talking about and just make bullshit click bait.


Please provide at least a single piece of evidence when making accusations like that. I don't know them, but if you're saying I should not trust them, please explain why (not just "they are wrong").


For everyone saying this must be the Windows Insider ARM based Windows version - the linked YouTube video shows him using (and paying for) the full Windows 11 x86. He even gives a referral code. It's at 48 seconds into the video.


Took me a while to get the joke...


He's using an x86 license key for ARM Windows.


Isn't this expected given that Dell is using Intel instead of AMD?

AMD's current offerings are pretty competitive with Apple. Unfortunately, they are hard to find. Yes, the really good stuff is almost unobtainium.


Intel Alder Lake is decent versus the latest Ryzen, but the next Ryzen is coming soon.


Depends which next Ryzen you mean. The 6000 series has been out for almost 6 months; the CPU is the same, the GPU is much better. The 7000 series is expected to be much better - Zen 4 + RDNA 3. Yummy.


Not that hard to find. Dell Inspiron and Alienware have AMD options. Lenovo's main X13 line has AMD variants. And of course all the Asus, MSI, Acers, etc.. of the world have AMD offerings. So does HP.


I've been looking for the new 6800u based Z13 but it's nowhere to be found. Guess my wife would have to keep her old machine for a while longer


... while unplugged ...

Nothing new here, this is just the better 5nm process advantage (which the M2 undoubtedly has).


“There are three kinds of lies: Lies, Damned Lies, and Statistics”.

I really wish thermal throttling was the first major point of discussion for ALL laptop CPUs. One of my work laptops never reached full CPU speeds in the real world because it would get thermally throttled!


> I really wish thermal throttling was the first major point of discussion for ALL laptop CPUs.

Exactly. Intel claims performance leadership based on their P series chips, which are always going to be throttled in a slim and light laptop.

For instance, Lenovo's thin and light ThinkPad X1 Yoga has two fans but still throttles:

>Unfortunately, the laptop got uncomfortably hot in its Best performance mode during testing, even with light workloads.

https://arstechnica.com/gadgets/2022/07/review-lenovos-think...

Or the Dell XPS 13 Plus:

>the XPS 13 Plus’ fan was really struggling here because, boy oh boy, did this thing get hot.

>After a few hours of regular use (which, in my case, is a dozen or so Chrome tabs with Slack running over top), this laptop was boiling. I was getting uncomfortable keeping my hands on the palm rests and typing on the keyboard. Putting it on my lap was off the table.

https://www.theverge.com/23284276/dell-xps-13-plus-intel-202...


I had the same experience, and it's incredibly frustrating -- both when you don't know about thermal throttling and when you do.

I'd do something that caused high load, and it would be briefly fast, then suddenly everything would get sluggish. I naively thought this was normal ("under load") until at some point I was doing the same workload on a similar-spec desktop CPU and it didn't happen. Trying to discover why is when I first learned about thermal throttling.


At least with Windows you can turn off Turbo Boost and other such features that basically temporarily overclock the CPU beyond what the thermals are evidently designed for. I had a late-model Intel Mac that was horrible for this. Games would become unplayable because you'd have 30 seconds of reasonable FPS followed by 5 seconds of terrible, awful drops, then right back to reasonable FPS. My 2012-era Mac was better in this sense because the thermal design allowed the CPU to go all out at room temp - no throttling at all and consistent FPS from the game (albeit low because of integrated graphics, but consistent nonetheless, which is at least playable).


My old work laptop was an i7 XPS 13. On bootup, it would spam the console with messages about the processor cores thermal throttling (partly due to CPU load from disk encryption) -- this was within a few seconds of power-on in a too-cold air-conditioned office.


And in the case of the M1/M2 Airs it's quite the contrary - you need to load them really hard to cause throttling.

With the Apple Silicon MBPs I'm not sure what level of load it would take to cause throttling with full-speed fans.


It's going to be partially this, but with AMD launching Ryzen 7000 mobile on 5nm we can see how close we get in performance per watt. I don't believe it will be enough to catch up.

Also remember this is running in a VM (guaranteed overhead) and on Windows for ARM (the red-headed stepchild of Microsoft). Even if they closed the gap, this is still impressive.


I read and watch a decent number of PC hardware sites/channels and they don't seem to use Geekbench. The only time I hear people mention Geekbench is in relation to Apple. Is there a reason why PC hardware sites don't seem to use it?


It is always better to benchmark with actual programs than it is with benchmark utilities. Geekbench's claims as a cross-platform test are also not really validated.

For example, let's check out the M1 Mini review that Anandtech did, specifically these two pages:

https://www.anandtech.com/show/16252/mac-mini-apple-m1-teste... https://www.anandtech.com/show/16252/mac-mini-apple-m1-teste...

According to Geekbench, the M1 handily beats the 5950X in single-threaded work. However, according to Cinebench 1T and spec2017, the 5950X is faster at single-threaded. Who is "correct"?

The answer is a much simpler "look at the benchmark results that matter for your workloads." A good review will cover benchmarks of a variety of workloads as a result, so you can figure out which matters to you. Which is never Geekbench, since nobody's workload is ever Geekbench. So a good review will tend to not include Geekbench rather than include it, unless they just don't have better ways to compare whatever they're testing.


GB4/5 are... not particularly high-quality codebases. I can't offer anything more than that, but, well, let's just say that GB4/5 is my least favorite benchmark as a compiler engineer for AMD. It shouldn't be given much weight as a benchmark.

These opinions are my own, not of AMD.


GeekBench is intended as a cross-platform benchmark.

> Designed from the ground-up for cross-platform comparisons, Geekbench 5 allows you to compare system performance across devices, operating systems, and processor architectures. Geekbench 5 supports Android, iOS, macOS, Windows, and Linux.

But it seems like there are a lot of other options.

https://www.phoronix.com/review/macos12-windows-linux

PC hardware sites are probably less concerned with "cross-platform" benchmarks. As long as the system hardware and operating system are the same, varying only the component you're testing, you can test with any benchmark that runs on the system.


This is probably not an accurate comparison. Here's the benchmarks for the two processors:

https://nanoreview.net/en/cpu-compare/intel-core-i7-1280p-vs...

That has the M2 winning single core but losing multicore. I didn't watch the video because 8 minutes is ridiculous so I don't know what they're doing. But it's probably not apples to apples.


First of all, they're comparing against the Intel CPU in an actual customer device. Second, they do the test with both devices unplugged.


> 8 minutes is ridiculous

Adjustable playback speed is your friend


I think they compared it to an i5-1240P, or did the Dell have different specs?


The article links to the default (cheaper) configuration. The video simply says it's the $1,849 configuration of the Dell XPS 13 Plus (I didn't listen to all the audio, but that section was very light on details.) Playing with the configurator, I could get $1,859 by bumping up to the i7-1280P with Windows 11 Pro and a few other tweaks.


You are right, seems to be compared to the i7.


I'd like to see a phoronix comparison. Seems much richer.


Ugh, there's so little information in either the linked article or the original video or even the Parallels website, but apparently Parallels does *not*, in fact, run Intel Windows on Apple Silicon, according to this page: https://machow2.com/windows-on-m1-mac/

(and even that page had confusing wording that made me think the opposite for a while).


How does Windows 11 under Parallels run Elden Ring or Red Dead Redemption 2, for example? To me that would be a better comparison.


But can you use it outside?

Looking at unboxing videos of Apple laptops gives me goosebumps, seeing all the reflections in the screen:

https://youtu.be/eSlAJMsM6CM?t=321

The Dell XPS to which it is compared here is available in a matte version and is well suited to working outside.


It's trivial and cheap to apply a matte screen protector if that's your preference.

But you can't convert a matte screen to glossy.

Also, matte doesn't necessarily translate to more legible outdoors either. Matte removes reflections by reducing contrast which has its own drawbacks.


Not on the new MacBooks. You will break your display over time. The distance between screen and keyboard when closed is 0.1mm.


If we keep talking about quality, you lose much more clarity/contrast with some 3rd-party stuck-on matte layer than with what the manufacturer actually builds into the screen itself. But I guess anything is better than a cheap-looking glossy screen for any serious work.


I would love to get a recommendation of a good glossy screen matte-er.


Honestly, and I'm not sure of the technical specifications, but the screen is so bright that I really never notice the reflections unless I'm in the absolute worst conditions- where I probably shouldn't be running my laptop outside in direct sunlight.

It's an interesting issue, because I have the same feelings about the recent launch of the Steam Deck. The highest-priced model has an etched matte screen and the lower models have a reflective one. I'd say that it's far more noticeable on the Steam Deck than it is on my MacBook screen, even though they look very similar.

My guess is the adaptive brightness and sensors are very well tuned.


I've got a MB Pro 14", can't really use it outside at normal max brightness (500 nits); there's a program called Lunar which lets you blast the full display up to the normally-HDR-content-only brightness levels (so probably somewhere around 1000 nits); easily usable outside if not in direct sun; even in direct sun still somewhat usable. (Drains battery pretty fast though.)

Still not as nice as e.g. an e-ink display would be.

I don't think the M2 Air has this option, so I would say: not very usable outside.


Yes, the glossy screen means more light reaching your eyeballs.


I use a 2012 Mac outside. In direct sun it's fine if you use a white background and black text in your editor.


Can you use it in an office?

My m1 is unusable at 80% of the comfortable angles, and not at all at the optimal one.


This is such a weird claim that doesn't stack up at all with my experience.

Unusable at 80% of the comfortable angles? Come on now.


My Lenovo has a matte screen and it is a lot less visible than my wife's glossy MacBook Air. Unfortunately, glossy or matte is no longer an indicator of how readable a screen is.


The glossy Air works better outside than the matte Lenovo? I find that hard to imagine.

Which Lenovo is it?


Lol can you describe some of these angles? It works fine on my desk and lap. I could see the keyboard being too small, but that's not unique to Macs.


Windows in Parallels runs well, except that with every update IIS Express breaks, so I have to uninstall and reinstall the version downloaded from Microsoft's website.


I’ve noticed a big decline in performance recently on my 2020 MacBook Air (pre-M1) — a lot more fan activity and some apps completely freezing (iMessages and Slack in particular, but also StarCraft a few times).

I’ve been wondering if Apple’s latest software updates are optimized for the new machines at the expense of the old ones?

Not sure how Apple could optimize for both architectures simultaneously, and if they faced a tradeoff, I’m pretty sure what they’d do…


More like third-party software (Slack, Figma, VSCode, anything web-based) is getting even more bloated as their engineers' machines get upgraded to M1 Pros.


Other publications show the i7 1280p in the XPS 13 Plus getting more like 10,000 in Geekbench 5 multicore, well ahead of the M2's 7,360 score. https://www.windowscentral.com/hardware/laptops/dell-xps-13-...


Did they really run Geekbench, or did they just use the reference number? With the heat on display, I'm not sure it can sustain those numbers...


They have all the results of their benchmarks graphed in the article. It looks like Max Tech didn't know how to use the various performance modes in the XPS 13 Plus.


I am seriously curious how/if MSFT/Dell/Google/etc. will answer.


I don't think MSFT/Dell/Google could have done anything. The perf difference is because of AMD/Intel.


Well TSMC's 5nm fab is also obviously a huge factor, but the performance difference is largely due to the ARM vs x86 ISA as well as all the optimizations that Apple has put into running MacOS well on their chips.

Microsoft will certainly continue to put effort into developing the ARM version of Windows, and try to follow Apple's path in getting a version of Rosetta to run the legacy Windows applications.

I'm certain that Dell already has ARM-based laptops and servers in development. Until the Windows version of Rosetta works as well as Apple's, there will likely continue to be a demand for x86 machines, but pretty much every PC vendor can see the transition coming. I'm hoping Framework comes out with something soon.

Not sure that this does much for Google. Their datacenters already run custom CPUs, and web applications should be agnostic to the CPU powering the web browser. Android phones are already running ARM.


A more precise question is how long it will take for others to catch up: 2 years? I remember when the iPhone 1 came out (2007), it took ~4 years for Samsung to come up with the Galaxy S II, which was a great competitor. I am aware this is comparing apples with oranges, but I'm thinking in terms of business timelines.


> 2 years?

Let's assume that Apple is 2 years ahead now, and that their rate of innovation is 20% higher than any competitor's. The answer is therefore that the competitors will never catch up. This is why I am long AAPL.


I think we can apply compound interest in your "never catch up" model under a certain range of time. Also, if we do fundamental analysis of the AAPL stock we can say that they are under the Taiwan/China risk?


I could not parse your second sentence.


But can it run Excel with Windows hotkeys...


I thought we were all skipping Windows 11?


Unfortunately 11 for ARM has major improvements that aren't in 10.


Unfortunately, given how terrible MacOS Outlook is, running a Windows VM is almost a necessity.


You don't need to use Outlook on macOS. You can just sign into Exchange in System Preferences > Internet Accounts.


Unfortunately that doesn't work with some custom auth systems.


That must be something extremely and unnecessarily custom. At my org we enforce MFA with Microsoft Authenticator and that works just fine with macOS' built-in Exchange support.


Do they have a web version? I honestly have no idea, I’m not trolling.


The issue is that the Windows version is infinitely customizable via the GUI: custom views, rules, conditional formatting, VBA, etc.

Almost all of this is lost on MacOS. It's odd. Word and Excel are almost 1:1, but Outlook is a dumpster fire.


If you're on Microsoft 365, there's outlook.office.com, though that probably doesn't compare to the Windows desktop version either.


Of Outlook yes. I prefer it to the native version, even on Windows.


Steve Ballmer is that you?


1. I guess it's a standard x86 version of Windows, not some Windows for ARM port. But imagine how fast a native Windows port for the M2 would be?!

2. Is this only Windows/graphics specific, or can Parallels on the M2 also run an x86 Ubuntu OS (not Asahi Linux) and run code faster than the latest x86 laptops?


Parallels can't run x86 code on Apple Silicon. It runs ARM Windows, which includes Microsoft's own x86 emulator: https://docs.microsoft.com/en-us/windows/arm/apps-on-arm-x86...


Parallels does not support x86_64 Windows on M1/2 Macs:

https://kb.parallels.com/125343

You can install the Windows ARM insider builds and run x86_64 applications through the emulation in Windows.


It's almost certainly Windows for ARM, which is available from Microsoft as a Windows Insider Preview. Parallels will also be able to run non-asahi ubuntu, and it likely will be pretty fast, but again it will be the ARM version.


It's Windows for ARM, Linux for ARM, etc.

Parallels on Apple Silicon cannot run x86 OSes.

If you want to run x86, you can use qemu which works fine.


> If you want to run x86, you can use qemu which works fine.

It works for sure. It has a very noticeable performance impact though.


Thanks - for a moment I was disappointed that I'd just ordered a fully-loaded ThinkPad X1 Carbon Gen 10, since I need to run x86 Ubuntu.

> If you want to run x86, you can use qemu which works fine.

Are there any benchmarks for x86 Ubuntu under QEMU on M1/M2? Will it be usable?


Hi rivertech, I have no benchmarks, but I tried Alpine x86 and it's a tad slow. I haven't tried a GUI and figured it's not worth the penalty in speed and battery life. Why do you need x86 Ubuntu instead of ARM Ubuntu? Is it because of running x86 Windows programs under Wine? Box86 is perhaps a faster solution.


> Why do you need x86 Ubuntu instead of ARM Ubuntu?

- I dislike macOS (when I had Intel MBA in the past it was collecting dust until I installed Ubuntu on it).

- I don't want to have 2 sets of instructions how to do something (i.e. once on macOS on AppleSilicon and once on Ubuntu x86)

- I want my local dev env to be as similar as possible to the deployment env.

Not sure how Ubuntu for ARM with an x86 JIT for user code is the answer; judging by your reply it might be slow also.


I understand; I'd try box86/64. Depending on what you develop, instructions for Ubuntu x86 shouldn't differ that much from Ubuntu arm64, which can run at near-native speed within macOS arm64. I have a similar issue: I want to run the occasional Windows 32-bit program on macOS arm64, which necessitates Linux x86/x64 with Wine. I'm not sure if I can run Wine for x64 on macOS arm64 directly; apparently CodeWeavers CrossOver can. Ideally I'd do that in a virtualized box I can back up and move around.


This is coming: "Running Intel Binaries in Linux VMs with Rosetta" https://developer.apple.com/documentation/virtualization/run... And Ubuntu for ARM is already available.


I can highly recommend Arch AMD64 under Parallels on M1. GPU acceleration, clipboard integration, stuff generally works


Those are great machines too! And what dreamy keyboards.


It is the ARM version of Windows. Parallels on Apple Silicon can't run x86 OSes, only ARM ones. Not sure if Geekbench has a Windows ARM port though.


Actually, I think you have to use the ARM version of Windows with Parallels. But it has been a while since I tried it; I may remember incorrectly.


I feel like at this point Windows is eventually going to switch to ARM; it's just a matter of overcoming a lot of inertia. ARM and RISC processors in general are now proven to be better than x86. If Microsoft invests in making Windows for ARM run well and provides a translation tool like Rosetta, it's game over for x86. That's no easy feat, but at a fundamental level I don't really see a way for x86 to become actually competitive again against RISC processors.


No, it's a matter of someone making an ARM CPU that's worth using in a laptop/desktop. So far only Apple has, and they aren't sharing. The broader ARM market is still pretty weak & broadly uncompetitive with current x86 CPUs.

On the server side of things there's Neoverse, Graviton, etc., which might have "trickle-down" effects, but since it's primarily focused on just having more cores rather than faster cores, that seems unlikely. And that also hasn't "proven to be better than x86" either; Epyc is a beast and AMD doesn't seem to be slowing down or hitting any limits in scaling up.


> ARM and RISC processors in general are now proven to be better than x86.

Certain ARM implementations (M1, M2) have proven better than certain implementations of x86 (basically all of them). The ARM implementations in question are not available for general purchase. It's only worth the effort to switch if a competitive processor becomes available. It doesn't look like the MS team-up with Qualcomm that resulted in the SQ1/SQ2 processors is that.

In the server space, that's different given that graviton/cavium have proven competitive.


Cavium was proven competitive, then killed. Something didn't add up there, but at least we have the Ampere processors left.


Microsoft has shipped devices that run ARM for many years (10?), and include translation software to run x86 applications transparently. They are hindered by two things though:

1. They have been locked into using chips from Qualcomm, which are just slower than Apple's ARM designs.

2. Qualcomm has not implemented the hardware necessary to speed up the x86 translation. The basic problem with running x86 applications on ARM is handling differences in the memory model (x86 uses a strong memory model, ARM uses a weak one). This is slow to handle in software, but IIUC Apple customized their hardware to speed up that operation, which puts them in a much better place for emulating x86.


For those of us whose CS background doesn't go this deep, what about ARM makes it more performant than x86? Is it something at a theoretical level, or does it have to do with CPU manufacturing constraints/tradeoffs, or a simpler/more efficient instruction set?


The surface-level argument is that the instruction set is simpler and more efficient.

Certainly the floor for the complexity and size of an ARM processor is lower than for x86 - the ISA is smaller, easier to decode, and suffers from fewer silly legacy baggage items.

However, the reality is that all modern out-of-order microarchitectures are fiendishly complex and that at the "top of the game," implementation details at every level of implementation matter more than the fundamental ISA at play. With x86 there's a certain die size and complexity "tax" in terms of instruction decoding and support that majorly affects small designs. For example, an x86 microcontroller would be a bad idea compared to an ARM one, full stop. But, once you've paid that top-line x86 tax and you're building a huge out-of-order microarchitecture, the differences are minimal and how well you build the rest of the CPU's machinery matters much more than the ISA you started with.


For the most part, yes.

One difference is the constant instruction length, which allows highly parallel decoding and a wider machine in general. This is part of what makes Apple's CPUs faster and cannot be replicated by x86-64, ARM32, or RISC-V with compressed instructions.


> what about ARM makes it more performant than x86?

It isn't, inherently. ARM's fixed-length instructions helped Apple build a wider front end than contemporary x86 CPUs, but that's about where the ISA differences end.

Which is why all the other ARM CPUs are slower than x86 CPUs: ARM itself doesn't really provide an advantage. Apple's stonking huge cash and R&D budget, combined with Intel struggling right as TSMC is firing on all cylinders, is what gives the M1/M2 their advantage.


From my understanding, there are three things to keep in mind:

1. Whenever you do an x86 vs ARM comparison, there are a number of variables to consider: the process node of the CPU, the power envelope, the number of cores, cooling, and so on. That makes it very difficult to do one-to-one comparisons.

2. The main issue on the x86 side is that every x86 CPU needs to support instructions going back to the 1980s for backwards compatibility, which "wastes" a lot of silicon on functions that are rarely used but still have to be there.

3. Apple can focus its products on a few specific devices and just one operating system, so it can design CPUs knowing 100% what they need to handle. You couldn't say that a Qualcomm Snapdragon, just because it's ARM, is better than x86.

All that being said, I'm very much someone who prefers x86 hardware and Windows, but the M1/M2 and Rosetta are very impressive pieces of hardware/software that will hopefully push Microsoft/Linux/Intel/AMD to innovate.


> I'm very much someone who prefers x86 hardware

Curious, is this due to something about x86 design (eg: technical benefits over ARM), or are you just referring to the "PC" hardware ecosystem in general (as opposed to Apple/macOS)?


One of the factors is that all ARM64 instructions are the same length, which makes decoding simpler.


This is the correct answer.

RISC has had its moments. Way back, it was better than CISC because the simpler instructions allowed a higher clock rate. Then CISC CPUs turned into RISC CPUs with a CISC-to-RISC translation on the front end. With that in place, there was a whole stretch of time where the only real advantage that RISC had over CISC was that RISC didn't have to have a lump of silicon that translates CISC into RISC instructions.

However, now there's enough space on the die for a CPU to have lots of parallel execution units, and the part that has become really difficult to scale is the CISC-to-RISC translation unit, because each CISC instruction has an unpredictable length, which makes working out where each instruction starts tricky and silicon-consuming.

And so RISC has a significant advantage once more: the ability to vastly simplify the part of the CPU that feeds instructions into the execution pool, compared to a CISC CPU, because the instructions are fixed-length. That lets the front end decode more instructions per clock cycle than a typical CISC CPU can, and this is what gives the improved performance.


I'm a dummy. Do I have this right?

> CISC-to-RISC translation on the front end

I would naively assume that this would be an advantage, since you could easily change the hardware used for any CISC instruction, finding better ways to make it faster. The "work unit" is more abstract, so you could throw the whole problem at dedicated silicon. Or you could remove dedicated silicon and just have the CISC side spit out a list of RISC instructions.

It seems that, for RISC, you could never throw a more abstract "work unit" at dedicated silicon without buffering instructions to see if the intent matches the accelerators. Chip-specific compilers would almost be required to handle the abstraction.


It was an advantage. Now it's a liability, because it is hard to scale the CISC-to-RISC translation up to decode many instructions per clock cycle. The translation unit has become the bottleneck.


Why? Wouldn't it just be adding a list of instructions to the queue for the RISC side? Why would the RISC side have to be slower? I would assume it would be mostly independent. Or is queue pollution the problem, rather than the execution of it?


The problem is that CISC instructions are variable-length, so it's easy to work out how to decode the first instruction, but the second instruction depends on the length of the first, and if you try to decode four instructions at once in a single clock cycle it all gets a bit too much, which reduces your maximum clock speed.

In comparison, a RISC instruction decoder knows that every instruction is the same length, so each one can be decoded without depending on the ones before it. This simplifies the decoder so much that decoding four instructions per clock cycle becomes possible without spending too much silicon on it, while still keeping a high clock speed.
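As a toy illustration of that parallel-vs-serial difference (just a sketch, not any real decoder; insn_len() below is a hypothetical stand-in for a real x86 length decoder):

    #include <stddef.h>
    #include <stdint.h>

    /* Fixed 4-byte instructions: instruction i starts at offset i*4, so
     * N decoders can each grab their own word in the same cycle. */
    void decode_fixed(const uint8_t *buf, uint32_t out[], size_t n)
    {
        for (size_t i = 0; i < n; i++) {        /* independent iterations */
            const uint8_t *p = buf + i * 4;
            out[i] = (uint32_t)p[0] | (uint32_t)p[1] << 8
                   | (uint32_t)p[2] << 16 | (uint32_t)p[3] << 24;
        }
    }

    /* Hypothetical stand-in for a real x86 length decoder: here the first
     * byte just encodes the instruction length (1..15 bytes). */
    static size_t insn_len(const uint8_t *p) { return (size_t)(p[0] % 15) + 1; }

    /* Variable-length (x86-style): you can't know where instruction i
     * starts until you've measured instructions 0..i-1, so the loop is
     * inherently sequential. */
    void find_starts(const uint8_t *buf, const uint8_t *starts[], size_t n)
    {
        size_t off = 0;
        for (size_t i = 0; i < n; i++) {        /* each step depends on the last */
            starts[i] = buf + off;
            off += insn_len(buf + off);
        }
    }

In hardware the variable-length case is usually attacked by speculatively decoding at many byte offsets and throwing away the wrong guesses, which costs area and power; the fixed-length case gets the wide decode almost for free.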


CISC-to-RISC overhead is probably not the only factor, though. Snapdragons, for example, don't have CISC-to-RISC translation, yet they seem to underperform both Intel and the M1.


Snapdragons aren't aiming for 4 instructions decoded per clock cycle.


I was trying to say that if CISC-to-RISC overhead is the main contributing factor, then other laptops without that overhead would be competitive. Take the ThinkPad X13s with a 4-core Snapdragon, for example. It is a pure-RISC machine, and it seems to underperform Intel and is almost twice as slow as the M1.


RISC is the secret sauce that allows M2 to decode four instructions per clock cycle, due to the constant instruction length, and that is what gives the M2 a speed advantage over CISC instruction sets. The snapdragon CPUs aren't trying to decode four instructions per clock cycle, so they aren't taking advantage of that feature of RISC instructions.


Got it. It is not enough to have a fixed-width instruction set; you also have to actually take advantage of it by decoding more instructions per cycle and executing them quickly afterwards. I wonder at what point we will start seeing competitive ARM CPUs.


>For those of us whose CS background doesn't go this deep, what about ARM makes it more performant than x86?

It is more performant because the silicon is made by a company that didn't botch their last process node.


It's nothing to do with ARM. The same design team, targeting the same manufacturing process, would have made a very fast x86 CPU.


No doubt they would have made a fast x86 CPU. But how would that CPU compare to the M1? The M1's superiority must have something to do with ARM's fixed instruction length, according to this article: https://debugger.medium.com/why-is-apples-m1-chip-so-fast-32...


> and provides a translation tool like Rosetta

They already have that.


The original Surface from 2012 was the first mainstream ARM Windows device.


A friend of mine studying law bought one of those. She quickly gave up on it and switched to a laptop; even her workload was too much for the device.


Present day ARM Surface devices (well...device) are very fast.



