I switched from an XPS 13 to an M2 Air recently. The differences are striking.
I have my XPS 13 browsing HN next to my M2 Air. That's all the XPS is doing, Firefox browsing HN. My M2 has multiple Docker containers, a collection of office apps open, browsers, VS Code... and so on.
My XPS battery will barely last until 1pm if it isn't plugged in, and the fan kicks on for no apparent reason... It will last a little longer if I manually set it to "best battery life", but even that can be surprisingly ineffective sometimes.
My M2 Air will last nearly two full working days (I haven't pushed it that far, but so far it appears it would make it); the battery life is crazy. I go into the office and I don't even think about plugging it in, I know it will do fine.
I don't know what it is about macOS and using their M chips but the efficiency is amazing to me.
I even had a Docker container freak out on my Air recently. It pegged a CPU core to 100% for a good 45 minutes. Temp skyrocketed. But I was on another virtual desktop and I didn't notice a thing. The OS ran smoothly; VS Code, Node and everything else was running smoothly. It wasn't until I looked at the status bar that I saw the CPU load and temp had skyrocketed. I found the container responsible, killed it, and the temp dropped quickly. Even under non-ideal conditions the Air performed well.
With my XPS, the moment I put it down I think about where I can plug it in. If I pick up my XPS to quickly order some tacos and it wasn't plugged in from the start, I feel like I'm in a race to get the order in: "Come on, hold on, I need tacos!!!"
With my Air I don't even think about plugging it in, that's a completely new experience for a laptop for me.
I don't really understand the benchmarks and such that I've seen, but the everyday user experience has been like night and day.
The CPU/SoC's efficiency is likely the core driver behind this difference, but I believe that Windows exacerbates the issue.
I recently installed Fedora 36 on my Tiger Lake (i5-1130G7) ThinkPad X1 Nano, and while the machine has never had particularly good battery life, it's more "calm" under Linux on average. Even while setting things up, with various things running, its fan didn't kick on and it didn't get warm like it typically would under both the Windows 10 installation it shipped with and the current, upgraded Windows 11 install.
Additionally, while I haven't yet actually tested it, GNOME under Linux estimates that it'd get somewhere in the ballpark of 8-8.5 hours out of a full charge when running in "battery saver" mode, which is a good hour or two longer than Windows' estimate in its low power mode.
Of course the M-series devices I've used destroy the ThinkPad in terms of battery life, regardless of the OS it's booted into though.
Am I the only one not bothering with Windows' estimates at all? It will happily tell me I have two hours remaining and shut down after 40 minutes, without me touching the PC at all. I've had this or a similar experience with every Windows laptop I've ever used.
I certainly don't trust them. 30% battery life may as well be near death on my windows laptops.
As for the hours of use claimed in marketing material, I wonder what they'll say when they actually get close to their current estimates. Just lie about it more? How will we know if they're getting close to the truth?
>If you’ve been following any of my articles about the performance of the cores in M1 series chips, you’ll have come across the term Quality of Service (QoS), which can have major impact on how fast code runs on processors under macOS.
>In addition to each process being given a priority, a number that can be changed using the command tool renice, some years ago Apple introduced another setting, the QoS. This is set for each process, and can be one of four discrete values from 9 (the lowest, for background tasks) to 33 (the highest, for tasks involving user interaction).
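For anyone curious what those two knobs look like from code, here is a minimal sketch in C (macOS-specific; setpriority and pthread_set_qos_class_self_np are real APIs, but the little program itself is just an illustration, not something from the quoted article):

    // Sketch: lower the classic Unix priority and opt the current thread
    // into the background QoS class on macOS. Error handling kept minimal.
    #include <pthread.h>
    #include <pthread/qos.h>
    #include <sys/resource.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        // Roughly what `renice` does from the shell: raise the nice value
        // of this process, i.e. lower its scheduling priority.
        if (setpriority(PRIO_PROCESS, getpid(), 10) != 0)
            perror("setpriority");

        // The newer knob: mark this thread as background QoS, so the
        // scheduler may steer it to efficiency cores and throttle its I/O.
        if (pthread_set_qos_class_self_np(QOS_CLASS_BACKGROUND, 0) != 0)
            perror("pthread_set_qos_class_self_np");

        qos_class_t qos;
        int rel;
        pthread_get_qos_class_np(pthread_self(), &qos, &rel);
        printf("QoS class now: 0x%x (relative priority %d)\n", (unsigned)qos, rel);
        return 0;
    }

A tool like powermetrics will show which cluster ends up doing the work afterwards.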
> It pegged a CPU core to 100% for a good 45 minutes. Temp skyrocketed. But I was on another virtual desktop and I didn't notice a thing.
I reckon this is more of a heterogeneous-architecture thing than a macOS/Apple one. When you run a processor-intensive task, Grand Central Dispatch will assign it a relative priority and then, on the new M1 chips, delegate the work to a core cluster. While one heavy program runs, another can operate alongside it on the efficiency cores or even another P-core cluster.
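As a rough illustration of that (a sketch, not Apple's internals): the QoS you attach to a GCD queue is the hint the scheduler uses when picking a cluster. Plain C against libdispatch, with placeholder work functions:

    // Two tasks at different QoS classes submitted to libdispatch. On Apple
    // silicon, BACKGROUND work tends to land on the efficiency cores while
    // USER_INTERACTIVE work stays on the performance cores.
    #include <dispatch/dispatch.h>
    #include <stdio.h>

    static void heavy_work(void *ctx) {
        // Placeholder for a long CPU-bound job (e.g. a misbehaving container's work).
        volatile unsigned long long x = 0;
        for (unsigned long long i = 0; i < 500000000ULL; i++) x += i;
        printf("background task done\n");
        dispatch_semaphore_signal((dispatch_semaphore_t)ctx);
    }

    static void interactive_work(void *ctx) {
        // Placeholder for latency-sensitive, user-facing work.
        printf("interactive task done\n");
        dispatch_semaphore_signal((dispatch_semaphore_t)ctx);
    }

    int main(void) {
        dispatch_semaphore_t done = dispatch_semaphore_create(0);

        dispatch_async_f(dispatch_get_global_queue(QOS_CLASS_BACKGROUND, 0),
                         done, heavy_work);
        dispatch_async_f(dispatch_get_global_queue(QOS_CLASS_USER_INTERACTIVE, 0),
                         done, interactive_work);

        dispatch_semaphore_wait(done, DISPATCH_TIME_FOREVER);
        dispatch_semaphore_wait(done, DISPATCH_TIME_FOREVER);
        return 0;
    }

The interactive task typically prints first even while the heavy one is still grinding, which is the same effect as the desktop staying smooth while a runaway container pegs a core.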
I've noticed this behavior as well on Alder Lake. There are several times I've booted up Bitwig with something like Elden Ring running in the background, and had no idea it was open while I worked on music.
Yeah, it could absolutely be something that happens elsewhere outside Apple land. I tell that story more as a sort of indirect mention of some of the internet drama over "throttling" / general user experience rather than a story about what macOS does. I really don't know what was responsible for everything still running smooth when that incident happened. For all I know, Docker on Apple Silicon might be doing something there too to keep things from getting entirely out of hand.
Docker does a lot of weird stuff in MacOS, it's totally beyond my wheelhouse. I am glad that Intel/Apple are pivoting to this new architecture though, it kinda reminds me of the jump to quad core systems back in the day. With AMD's switch to heterogeneous designs coming a few years down their roadmap, I'm really curious how they're going to compete. Early Zen 4 benchmarks are wicked scary so far, so I'm pretty excited to see what they come up with.
All in all though, this next generation of CPUs is going to be awesome. Everyone is finally doing their own thing, and actually getting somewhere:
- Intel is working on process enhancement and bumping nodes to make fast, heterogeneous x86 CPUs
- Apple is focusing on density above all else, optimizing for a more efficient, streamlined heterogeneous ARM package
- AMD is doing things "the old way", with a tick-tock schedule for process bumps and IPC optimizations to their homogeneous x86 CPU
- Bonus: there's also a bunch of Chinese manufacturers fighting for dominance over the homogeneous ARM CPU market (mostly for use in servers).
I have a similar experience coming to an M1 MBA from an XPS 13. I had some buggy code-change monitor/recompiler running that pegged a CPU core at 100%. The palm rest would get hot and the battery would go to like 10% after a working day. Those were the only ways I knew something was up.
What time do you start browsing with the XPS? 9am, making it 4 hours or so of life? That's pretty pathetic, and it almost sounds like you have some background processes going on that are sinking your battery life and kicking on the fans. I put a new battery in my 2012 MBP and I get about 6 hours-ish browsing a very light website like HN (it sinks like a stone on JS-heavy websites of course). No fan ramp-up either, although I pin them to a minimum of 2000rpm on this computer. The CPU stays at 50-60°C according to my fan controller.
I've a MacBook Pro, 2019. I find Office 365 uses 100% of one CPU so regularly (at least one of Outlook, Excel or PowerPoint will be acting up) that I get 1.5 hours of battery.
How does Office behave so well for you? The CPU hardly makes a difference there; if the program is CPU hungry, battery will be used.
Do you have Defender installed? That’s the culprit for me on my work Mac. Keep ‘top -o cpu’ running and look for wdavdaemon blowing up, or just set a cron job to kill it every hour.
That's one thing I haven't installed yet. Just to get up and running quickly while I was getting set up, I installed the Office apps as PWAs rather than go through the application install process and ... kinda left them that way.
But done that way, they haven't caused any issues. Granted, Teams will eat battery while on a video conference, but it handles a few meetings a day just fine overall.
This is not a surprise for anyone that has been using Windows under Parallels on an M1 Mac. What's important is for the real experience to match the benchmark, and it seems to fly for core office/productivity tasks.
I set up a VM for work stuff that includes a bunch of admin policies and work-installed junk a few weeks ago to work while traveling. It didn't even break a sweat, and Teams felt as responsive as on my recent i7 desktop. At the same time I was able to run all my usual dev stuff on the Mac. It felt like it would handle classic Visual Studio as well, but I didn't want to bother with the install.
Ehh... the answer is it kind of works well, but it really depends on what you're doing, since you're running an x64 application (Visual Studio) on an ARM version of Windows running on an M1 Mac in Parallels.
.NET development and in particular stepping through debugging is particularly slow, though I guess we should be happy it works at all.
And there's an ARM64 version of .NET Framework (4.8.1). .NET Framework is considered legacy and 4.8 was supposed to be the final version. Not sure whether that means they foresee faster Windows-on-ARM adoption, or a slower .NET Framework to .NET Core transition. Or both.
How is this a shame for AMD or Qualcomm, who aren't part of this comparison at all?
> Windows running in VM on M2 is faster than native Windows on x86.
Which x86? 12th gen Intel? 11th gen Intel? Ryzen 6000? These are all wildly different processors. This comment is absurd. It's also "Windows ARM version running in a VM on M2", not x86 Windows, which is the only thing you'd care about running in a VM on macOS if you were actually going to do this for some reason.
> Parallels only product is Parallels, it's the entire company.
Not true these days. "In December 2018, Parallels became part of the Corel Corporation and joins an impressive collection of industry-leading brands, including CorelDRAW®, WinZip® and MindManager®. Parallels has offices in North America, Europe, Australia and Asia."
If the article is true (and that's a big if, given Max Tech's history of tech reporting), instead of blaming the above companies I'd just congratulate Apple and thank them for putting the bar higher. AMD recent processors are still great on the desktop, and an absolute blast on the server, with extremely attractive value for money ratio.
> AMD recent processors are still great on the desktop, and an absolute blast on the server
Yes, they are truly great, but the point to consider is that they are just competing against Intel. Don't get me wrong, I love what AMD has done recently with Ryzen, but we need ARM-based servers with perf per watt competing with the M1/M2.
> we need ARM-based servers with perf per watt competing with the M1/M2
We have them? Graviton instances are priced pretty competitively compared to similar x86 EC2 instances. A lot of people don't use Graviton though (myself included) for a number of reasons. A lot of software hasn't been ported over to aarch64 yet, and even when it is it can lack the optimization that x86 enjoys (especially with programs leveraging AVX et. al). Furthermore, the CPUs that Amazon use seem to be bottlenecked by low IO bandwidth, which makes for some janky benchmarks when comparing simple database operations.
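To make the AVX point concrete, here's a hedged sketch in C (the function name sum_f32 is hypothetical; the intrinsics are the standard AVX2 and NEON ones): the x86 fast path doesn't carry over to aarch64, so a separate, separately tuned path has to exist, and often simply doesn't yet:

    // Illustrative only: the kind of architecture-specific fast path that
    // x86 builds often have and aarch64 ports frequently lack.
    #include <stddef.h>

    #if defined(__AVX2__)
    #include <immintrin.h>
    // AVX2 path: sum 8 floats per iteration with 256-bit registers.
    float sum_f32(const float *v, size_t n) {
        __m256 acc = _mm256_setzero_ps();
        size_t i = 0;
        for (; i + 8 <= n; i += 8)
            acc = _mm256_add_ps(acc, _mm256_loadu_ps(v + i));
        float tmp[8];
        _mm256_storeu_ps(tmp, acc);
        float s = tmp[0] + tmp[1] + tmp[2] + tmp[3] + tmp[4] + tmp[5] + tmp[6] + tmp[7];
        for (; i < n; i++) s += v[i];
        return s;
    }
    #elif defined(__aarch64__)
    #include <arm_neon.h>
    // NEON path: 4 floats per iteration; has to be written and tuned separately.
    float sum_f32(const float *v, size_t n) {
        float32x4_t acc = vdupq_n_f32(0.0f);
        size_t i = 0;
        for (; i + 4 <= n; i += 4)
            acc = vaddq_f32(acc, vld1q_f32(v + i));
        float s = vaddvq_f32(acc);
        for (; i < n; i++) s += v[i];
        return s;
    }
    #else
    // Portable fallback: correct everywhere, fastest nowhere.
    float sum_f32(const float *v, size_t n) {
        float s = 0.0f;
        for (size_t i = 0; i < n; i++) s += v[i];
        return s;
    }
    #endif

Compilers auto-vectorize the simple cases, but plenty of performance-sensitive code still carries hand-written x86 paths like this, which is a big part of the "lacking optimization on aarch64" gap.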
I think we'll eventually see a return to RISC architectures, but ARM's value proposition hasn't made a lot of sense on the server. It's still encumbered with proprietary licenses that make it extremely hard for CPU manufacturers to compete in the server market. It still hasn't figured out all of its hardware-acceleration quirks, and you don't get any Rosetta 2 fairy dust on Linux (not that you'd want to use it in prod anyways). All in all, x86 is still the set-and-forget king, and probably more stable than ARM alternatives. I'm hopeful that RISC-V will finally put x86 in its grave, but that's going to be a couple of years out...
I was excited when Hetzner announced their RX line, only to discover the entry server is a few times more expensive than any other entry-level server in their offering. But I'm tempted to try it out anyway.
I was shopping for a laptop a month ago and considering the M2 MacBook. The CPU performance is definitely impressive, but I ended up getting a Razer Blade 14 because it absolutely decimates the MacBook in GPU performance and gaming related stuff, unless you go top of the line MacBook Pro which would have been about $1200 more and couldn't run most games. If you don't need a great GPU I would definitely go with a MacBook, but if you need one you're just not going to beat an RTX 3080 and an AMD Ryzen.
Razer Blades look good on paper but be prepared for bloated batteries. Between the 5 Blades my friend and I own, all 5 had bloated batteries within a year, some catastrophically. I'm personally on battery number 4, in just 3 years of use.
My solution is a bit different, I have a Ryzen desktop with an NVIDIA GPU when I need to game, as I don’t game on the go, and honestly, gaming on the Mac doesn’t exist.
But I absolutely work on the go, and need the battery life I get from the Mac so I’m not constantly looking for a place to charge.
I use a similar setup, I need CUDA for work. But machine learning on a fast CUDA-capable GPU on a laptop is brutal battery and heat-wise. So, I just SSH into a Ryzen tower with a fast GPU for machine learning. (The MacBook with its AMX matrix multiplication units and Metal Performance Shaders is fast enough for short test runs.)
You can also put a much faster CPU and GPU in a workstation than a laptop, if you have enough headroom for 105W TDP CPU and 350W TDP GPU.
For me it's actually about game development, so I'm running things like Unity and baking light maps and things like that, which I absolutely do on the go in coffee shops or whatnot. The battery life is actually pretty decent. Nowhere close to a macbook but I don't have to be plugged in all the time either.
I'm not sure I'd call it niche. No, I'm not playing Counter-Strike in a coffee shop, but it can be fun to bring it over to a friend's house. I haven't bought a desktop since 2014; I like having the freedom to work from anywhere and not have to move files between computers all the time. When I have it at home I have it hooked into 3 monitors and a full-size keyboard, and it runs pretty quiet, so it's not really any different from having a desktop in that regard.
YMMV though. Most of what I do with a computer even besides games needs a very powerful GPU. If you're working with docker and webdev I'd just get a macbook. My only point is that apple doesn't have a pure monopoly on performance, as impressive as their new silicon is there's some things it's still not so good at.
Just depends what you care about. All the games I care about, from Baldur’s Gate to Crusader Kings to emulated games, are best enjoyed in my bed or couch on my laptop.
My desktop feels more niche for my preferences since only a few games I care about require sitting at a desk in “FPS position”. I had more fun building the thing than using it.
Seems like multi-core scores are all over the map (probably depending on the cooling of the laptop), but not really impressive when comparing to a passively cooled CPU. Single-core scores are meh compared to the M2.
Given that I didn't stop anything and was playing Dota 2 while running it ...
Still, I believe at some point once newer iterations of X86-64 drop support for legacy instructions (they still have to support code written for the 8086 through the 486) - we will have smaller, more efficient X86-64 chips.
Actually, it is darn impressive that the latest Intel/AMD chips keep up with the brand new M1/M2 chips, which have zero legacy instructions, while the Intel/AMD chips carry 50 years of legacy :)
4% is probably close to noise level, so essentially you are a couple of percentage points faster on multicore with TWICE the total power envelope. Think about that.
And almost 1/5th slower in single core.
Also, you went from "It will OBLITERATE M2" to moving the goal post "not that bad" pretty quickly. I love it.
> once newer iterations of X86-64 drop support for legacy instructions (...) - we will have smaller, more efficient X86-64 chips.
Don't get too excited about it. The old compatibility is going to be tiny in the die space compared to even basic Intel extensions. And it's not just the old code - new code may well contain a "mov al..." so you can't just drop it. All of those instructions will stay with us for decades.
It was an additional eye-opener after I tried, unsuccessfully, to like the Raspberry Pi. "Powerful 64-bit CPU"? After the M1 I kept expecting it to be fast and to run non-ARM software :D
Once you try M1/M2 Macs it's very hard to go back to pretty much any hardware.
Raspberry Pi has a distinctively different target audience. If you expect an RPi to run non-ARM code, then I'd say it's not made for you.
I agree that M1/M2 Macs are good, but I implore everyone to not get sucked into and stuck into the Apple ecosystem. Nothing good can come out of tech dominance.
> I agree that M1/M2 Macs are good, but I implore everyone to not get sucked into and stuck into the Apple ecosystem.
So much this. Every time I hear something good about the M1/M2 it's interesting, but it's like "aah, such a shame I can never use it without losing my sanity due to all of the strings it has attached".
I bought an M1 Mac Mini on sale earlier this year when Microcenter had them under $600, now it just runs Asahi Linux. Eventually I'm going to move all of my self-hosted server stuff over to it, minus Jellyfin and whatever else needs hardware transcoding capability.
I'm happy for you, and that would also be pragmatic for me being a Linux user, but my annoying idealistic conscience refuses to fund Apple's activities.
Hmm, I was reading it as the opposite (that they found it useful specifically because they were able to avoid macOS and the Apple ecosystem).
Context: There are some Linux users who want to use the hardware (so they can enjoy the M1/M2 or whatever), so there are efforts to get it running as a Linux machine (Asahi Linux), which would certainly keep me far more sane - but I still abhor giving money to Apple for the hardware (even 2nd hand, which supports resale value and thus retail value)... I also don't trust that they won't simply pull the rug on such efforts in future by doing some kind of over-the-wire firmware update to brick those machines. Why do business with people who don't like you? It's just going to be a constant fight.
That is what I meant. The other person misread it. My point is that I got a far more efficient home server machine to run Linux on. The Mini as I have it sips power like my Raspberry Pis while having the computational grunt to be far more useful
If you just run Linux / *BSD on the hardware, what are the strings you are referring to?
Every time an article comes up reminding me of how great the M1/M2 laptops are, I think about maybe getting a used one in a few years to run Debian on, but the soldered on SSD worries me (especially since early Apple software issues burned through erase cycles).
Running Linux or BSD would be the only possible way for me. But beyond this Apple are the strings for me, as I've mentioned in sibling responses I do not wish to fund the company in any way. 2nd hand unfortunately also funds them indirectly by supporting retail prices through resale value.
The dance between hardware, software, and devices, and its consistency, is a measurable good. To me, this is the "magic sauce" of Apple that makes them successful.
But, to agree with your point, my general rule is: Apple hardware, with third party services (besides iCloud). And, a PC or console for gaming. :)
I am hopeful that Qualcomm's takeover of Nuvia will help ARM achieve its true potential for non-Apple users, although they keep on disappointing. They claimed they'd have M1 performance in 2022; now they have postponed to late 2023. Matching M1 performance in late 2023 won't be that great.
I feel like Qualcomm is bloated as a company and also overly corporate/restrictive. I understand they have IP to protect, but other companies have shown it can be done at scale with a much more open mindset. If Qualcomm ends up winning using some kind of proprietary tech, it wouldn't be a big win for everyone. Another Apple, in a way.
Because they're a more closed ecosystem that encompasses much more of the stack. Intel is a small part of the whole stack and they have a proper competitor. Smaller slice of the stack + more competition = less tech dominance.
The fact that M1/M2 can run something is almost irrelevant. It can because Apple allows you to run it and can stop that at any moment. It wouldn't be the first time they would decide something like that out of the blue.
For tech dominance not to occur other players need to get their heads out of their collective asses and offer something competitive.
Yes, I shouldn't expect Raspberry to run non-ARM software... but why exactly shouldn't I? Where are the trillion-upon-trillion-dollar players like Google, Samsung and others with their Rosetta-like translation layers? And Qualcomm with competitive CPUs?
This is frankly a ludicrous ask of a 35 dollar hobbyist single board education computer with an old CPU. I hate the dreaded car analogies too, but this is akin to buying a cheap Toyota saloon and expecting it to accelerate like a Ferrari.
For what it's worth, there are some Rosetta-like options on the RPi - you can run x86 containers in QEMU, for example. It's just, again, you've bought a 35 dollar computer - it's not going to be fast enough to be performant for most tasks when translating (in real time!) software written for a completely different CPU architecture.
> This is frankly a ludicrous ask of a 35 dollar hobbyist single board education computer with an old CPU.
Linux is being developed mostly by trillion-dollar corporations. Intel. Google. Samsung. etc.
ARM chips are designed and produced by Samsung, Qualcomm etc.
You'd think they would:
- come up with a competitive chip, and
- Rosetta-like software
that even "$35 dollar hobbyist computer" would be able to do this with, quote "powerful 64-bit CPU".
Alas.
And no, I'm not buying the whole "you shouldn't expect". Because, as it turns out, I can't expect this from any computer, be it hobbyist, educational, family, gamer, professional, or whatever adjectives you can put in front of it. Except Apple.
That $35 SBC is running a CPU that costs a comparatively lower amount to produce than an M1. Putting 20 billion transistors on a single die reliably at 5nm is extremely more costly than a billion or so at 28nm (the BCM2711).
If you want Broadcom to release a CPU that's on par with the M1, half of your Pi's circuit board will be just the CPU die's BGA pins.
Intel's (and others') price gouging and market segmentation does not in any way imply that it's possible to manufacture an i7/i9/R7/R9/M1 at $20 a piece.
Everyone continues to willingly miss the point. Let me spell it out again.
--- start quote ---
Because, as it turns out, I can't expect this from any computer, be it hobbyist, educational, family, gamer, professional, or whatever adjectives you can put in front of it. Except Apple.
--- end quote ---
You know, the Pi is pretty amazing for what it is. The HDMI 1080p output alone was groundbreaking with the first Pi.
The other side of the puzzle is software. I'm pretty sure there is a lot of headroom in the RPi hardware, if only someone re-wrote and re-optimized the OS and software for it.
That's part of the magic of the Mac - they got to hire as many people to optimize the OS as they did to build the chip / computer itself.
The Raspberry Pi has found great success as a console and coin-op arcade game emulator, and of course, an RPi is a Turing-complete device, meaning it can run all the things that any other Turing-complete device can run.
It's great at emulating arcade games! That's totally non-ARM software.
They went with 28nm because it was the last planar (aka cheap) node around. GlobalFoundries recently launched their 22nm planar (22FDX), so it seems like there's now a cheap path forward for making a new chip with an updated CPU design (and hopefully an updated GPU too).
I can answer the question posed in that thread: Because the performance would be abysmal. The Raspberry Pi is a neat device for $35, but even running native ARM code it sometimes struggles to provide a good desktop user experience by 2022 standards.
The major use case I can think of for building a Rosetta-like x86 translation layer for the Pi is to run Windows software. All the relevant Linux software can be, or has been, ported to ARM. Do you think the average user of Windows software is going to be happy with the performance of an x86 emulator running on a $35 ARM computer with barely enough memory to run a few Chrome tabs?
Please provide at least a single piece of evidence when blaming. I don't know them, but if you're saying I should not trust them - please explain why (not just “they are wrong”).
For everyone saying this must be the Windows Insider ARM based Windows version - the linked YouTube video shows him using (and paying for) the full Windows 11 x86. He even gives a referral code. It's at 48 seconds into the video.
Depends which next Ryzen you mean. 6000 series has been out for almost 6 months. CPU is the same, gpu is much better. The 7000 are expected to be much better - zen 4 + rdna 3. Yummy
Not that hard to find. Dell Inspiron and Alienware have AMD options. Lenovo's main X13 line has AMD variants. And of course all the Asus, MSI, Acers, etc.. of the world have AMD offerings. So does HP.
“There are three kinds of lies: Lies, Damned Lies, and Statistics”.
I really wish thermal throttling was the first major point of discussion for ALL laptop CPUs.
One of my work laptops never reached full cpu speeds in the real world because it would get thermally throttled!
>the XPS 13 Plus’ fan was really struggling here because, boy oh boy, did this thing get hot.
>After a few hours of regular use (which, in my case, is a dozen or so Chrome tabs with Slack running over top), this laptop was boiling. I was getting uncomfortable keeping my hands on the palm rests and typing on the keyboard. Putting it on my lap was off the table.
I had the same experience, and it's incredibly frustrating -- both when you don't know about thermal throttling and when you do.
I'd do something that caused high load, and it would be briefly fast, then suddenly everything would get sluggish. I naively thought this was normal ("under load") until at some point I was doing the same workload on a similar-spec desktop CPU and it didn't happen. Trying to discover why was when I first learned about thermal throttling.
At least with Windows you can turn off Turbo Boost and other such features that basically temporarily overclock the CPU beyond what the thermals are evidently designed for. I had a late-model Intel Mac that was horrible for this. Games would become unplayable because you'd have 30 seconds of reasonable FPS followed by 5 seconds of terrible, awful drops, then right back to reasonable FPS. My 2012-era Mac was better in this sense because the thermal design allowed the CPU to go all out at room temp: no throttling at all and consistent FPS from the game (albeit low because of integrated graphics, but consistent nonetheless, which is at least playable).
My old work laptop was an i7 XPS-13. On bootup, it would spam the console with messages about the processor cores thermal throttling (partly due to cpu load from disk encryption)-- this was within a few seconds of power on in a too cold air conditioned office.
It's going to be partially this, but with AMD launching Ryzen 7000 mobile on 5nm we can see how close we get in performance per watt. I don't believe it will be enough to catch up.
Also remember this is running in a VM (guaranteed overhead) and Windows for ARM (the red-headed stepchild of Microsoft). Even if they only closed the gap, this would still be impressive.
I read and watch a decent number of PC hardware sites/channels and they don't seem to use Geekbench. The only time I hear people mention Geekbench is in relation to Apple.
Is there a reason why PC hardware sites don't seem to use Geekbench?
It is always better to benchmark with actual programs than it is with benchmark utilities. Geekbench's claims as a cross-platform test are also not really validated.
For example, let's check out the M1 Mini review that Anandtech did, specifically these two pages:
According to Geekbench, the M1 handily beats the 5950X in single-threaded work. However, according to Cinebench 1T and spec2017, the 5950X is faster at single-threaded. Who is "correct"?
The answer is a much simpler "look at the benchmark results that matter for your workloads." A good review will cover benchmarks of a variety of workloads as a result, so you can figure out which matters to you. Which is never Geekbench, since nobody's workload is ever Geekbench. So a good review will tend not to include Geekbench rather than include it, unless they just don't have better ways to compare whatever they're testing.
GB4/5 is... not a particularly high-quality codebase. I can't offer anything more than that, but, well, let's just say that GB4/5 is my least favorite benchmark as a compiler engineer for AMD. It shouldn't be given much weight as a benchmark.
GeekBench is intended as a cross-platform benchmark.
> Designed from the ground-up for cross-platform comparisons, Geekbench 5 allows you to compare system performance across devices, operating systems, and processor architectures. Geekbench 5 supports Android, iOS, macOS, Windows, and Linux.
But it seems like there are a lot of other options.
PC hardware sites are probably less concerned with "cross-platform" benchmarks. As long as the system hardware and operating system are the same, varying only the component you're testing, you can test with any benchmark that runs on the system.
That has the M2 winning single core but losing multicore. I didn't watch the video because 8 minutes is ridiculous so I don't know what they're doing. But it's probably not apples to apples.
The article links to the default (cheaper) configuration. The video simply says it's the $1,849 configuration of the Dell XPS 13 Plus (I didn't listen to all the audio, but that section was very light on details.) Playing with the configurator, I could get $1,859 by bumping up to the i7-1280P with Windows 11 Pro and a few other tweaks.
Ugh, there’s so little information in either the linked article or the original video or even the Parallels website, but apparently, Parallels does *not*, in fact, run Intel Windows on Apple Silicon, according to this page: https://machow2.com/windows-on-m1-mac/
(and even that page had confusing wording that made me think the opposite for a while).
If we keep talking about quality, you lose much more clarity/contrast from some 3rd-party stick-on matte layer than from whatever the manufacturer actually puts into the screen itself.
But I guess anything is better than a cheap-looking glossy screen for any serious work.
Honestly, I'm not sure of the technical specifications, but the screen is so bright that I really never notice the reflections unless I'm in the absolute worst conditions, where I probably shouldn't be running my laptop outside in direct sunlight anyway.
It's an interesting issue, because I have the same feelings about the recent launch of the Steam Deck. The highest-priced model has an etched matte screen and the lower models have a reflective one. I'd say it's far more noticeable on the Steam Deck than it is on my MacBook screen, even though they look very similar.
My guess is the adaptive brightness and sensors are very well tuned.
I've got a MB Pro 14"; I can't really use it outside at normal max brightness (500 nits). There's a program called Lunar which lets you blast the full display up to the normally-HDR-content-only brightness levels (so probably somewhere around 1000 nits); easily usable outside if not in direct sun, and even in direct sun still somewhat usable. (Drains battery pretty fast though.)
Still not as nice as e.g. an e-ink display would be.
I don't think the M2 Air has this option, so I would say: not very usable outside.
My Lenovo has a matte screen, and it's a lot less readable than my wife's glossy MacBook Air. Unfortunately, glossy or matte is no longer an indicator of how readable a screen is.
Windows in Parallels runs well, except that with every update IIS Express breaks, so I have to uninstall and reinstall the version downloaded from Microsoft's website.
I’ve noticed a big decline in performance recently on my 2020 MacBook Air (pre-M1) — a lot more fan activity and some apps completely freezing (iMessages and Slack in particular, but also StarCraft a few times).
I’ve been wondering if Apple’s latest software updates are optimized for the new machines at the expense of the old ones?
Not sure how Apple could optimize for both architectures simultaneously, and if they faced a tradeoff, I’m pretty sure what they’d do…
More like third-party software (Slack, Figma, VSCode, anything web-based) is getting even more bloated as their engineers' machines get upgraded to M1 Pros.
They have all the results of their benchmarks graphed in the article. It looks like Max Tech didn't know how to use the various performance modes in the XPS 13 Plus.
Well TSMC's 5nm fab is also obviously a huge factor, but the performance difference is largely due to the ARM vs x86 ISA as well as all the optimizations that Apple has put into running MacOS well on their chips.
Microsoft will certainly continue to put effort into developing the ARM version of Windows, and try to follow Apple's path in getting a version of Rosetta to run the legacy Windows applications.
I'm certain that Dell already has ARM-based laptops and servers in development. Until the Windows version of Rosetta works as well as Apple's, there will likely continue to be a demand for x86 machines, but pretty much every PC vendor can see the transition coming. I'm hoping Framework comes out with something soon.
Not sure that this does much for Google. Their datacenters already run custom CPUs, and web applications should be agnostic to the CPU powering the web browser. Android phones are already running ARM.
A more precise question is how long it will take for others to catch up: 2 years? I remember when the iPhone came out (2007), it took ~4 years for Samsung to come up with the Galaxy S II, which was a great competitor. I am aware this is comparing apples with oranges, but I'm thinking in terms of business time.
Let's assume that Apple is 2 years ahead now, and that their rate of innovation is 20% higher than any competitor's. The answer is therefore that the competitors will never catch up. This is why I am long AAPL.
I think we can only apply compound interest in your "never catch up" model over a certain range of time. Also, if we do a fundamental analysis of the AAPL stock, can we say that they are exposed to Taiwan/China risk?
That must be something extremely and unnecessarily custom. At my org we enforce MFA with Microsoft Authenticator and that works just fine with macOS' built-in Exchange support.
1. I guess it's a standard x86 version of Windows, not some Windows-for-ARM port. But imagine how fast a native Windows port for the M2 would be?!
2. Is this only Windows/graphics specific, or can Parallels on the M2 also run an x86 Ubuntu OS (not Asahi Linux) and run code faster than the latest x86 laptops?
It's almost certainly Windows for ARM, which is available from Microsoft as a Windows Insider Preview. Parallels will also be able to run non-asahi ubuntu, and it likely will be pretty fast, but again it will be the ARM version.
Hi rivertech, I have no benchmarks, but I tried Alpine x86 and it's a tad slow. I haven't tried a GUI and figured it's not worth the penalty in speed and battery life. Why do you need x86 Ubuntu instead of ARM Ubuntu? Is it because of running x86 Windows programs under Wine? Box86 is perhaps a faster solution.
I understand, I'd try box86/64.
Depending on what you develop, instructions for Ubuntu x86 shouldn't differ that much from Ubuntu arm64, which can run at near-native speed within macOS arm64. I have a similar issue: I want to run the occasional 32-bit Windows program on macOS arm64, which necessitates Linux x86/x64 with Wine. I'm not sure if I can run Wine for x64 on macOS arm64 directly; apparently CodeWeavers CrossOver can. Ideally I'd do that in a virtualized box I can back up and move around.
I feel like at this point, Windows is eventually going to switch to ARM; it's just a matter of overcoming a lot of inertia. ARM and RISC processors in general are now proven to be better than x86. If Microsoft invests in making Windows for ARM run well and provides a translation tool like Rosetta, it's game over for x86. That's no easy feat, but at a fundamental level I don't really see a way for x86 to become actually competitive again against RISC processors.
No, it's a matter of someone making an ARM CPU that's worth using in a laptop/desktop. So far only Apple has, and they aren't sharing. The broader ARM market is still pretty weak & broadly uncompetitive with current x86 CPUs.
In the server side of things there's Neoverse, graviton, etc.. which might have "trickle-down" effects but since it's primarily focused on just having more cores rather than faster cores that seems unlikely. And that also hasn't "proven to be better than x86" either, Epyc is a beast and AMD doesn't seem to be slowing down or hitting any limits in scaling up.
> ARM and RISC processors in general are now proven to be better than x86.
Certain ARM implementations (M1, M2) have proven better than certain implementations of x86 (basically all of them). The ARM implementations in question are not available for general purchase. It's only worth the effort to switch if a competitive processor becomes available, and it doesn't look like the MS team-up with Qualcomm that resulted in the SQ1/SQ2 processors is that.
In the server space, that's different given that graviton/cavium have proven competitive.
Microsoft has shipped devices that run ARM for many years (10?), and include translation software to run x86 applications transparently. They are hindered by two things though:
1. They have been locked into using chips from Qualcomm, which are just slower than Apple's ARM designs.
2. Qualcomm has not implemented the hardware necessary to speed up the x86 translation. The basic problem with running x86 applications on ARM is handling differences in the memory model (x86 uses a strong memory model, ARM uses a weak one). This is slow to handle in software, but IIUC Apple customized their hardware to speed up that operation, which puts them in a much better place for emulating x86.
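A rough illustration of that memory-model gap in C11 (a sketch of the classic message-passing pattern, not how any real translator is implemented):

    // Producer publishes data, consumer waits for a flag. Under x86's TSO
    // rules the producer's two stores can't be observed out of order even
    // without barriers; on ARM they can, so an emulator preserving x86
    // semantics must emit release/acquire barriers on such accesses
    // (or run the core in a hardware TSO mode, as Apple's chips can).
    #include <stdatomic.h>
    #include <pthread.h>
    #include <stdio.h>

    static int payload;            // plain data written before the flag
    static atomic_int ready;       // flag the consumer polls

    static void *producer(void *arg) {
        payload = 42;
        atomic_store_explicit(&ready, 1, memory_order_release);
        return NULL;
    }

    static void *consumer(void *arg) {
        while (atomic_load_explicit(&ready, memory_order_acquire) == 0)
            ;                                      // spin until published
        printf("payload = %d\n", payload);         // guaranteed 42 here
        return NULL;
    }

    int main(void) {
        pthread_t p, c;
        pthread_create(&c, NULL, consumer, NULL);
        pthread_create(&p, NULL, producer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }

Emitting those barriers for every translated memory access is exactly the overhead that software-only translation pays and that a hardware TSO mode avoids.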
For those of us whose CS background doesn't go this deep, what about ARM makes it more performant than x86? Is it something at a theoretical level, or does it have to do with CPU manufacturing constraints/tradeoffs, or a simpler/more efficient instruction set?
The surface-level argument is that the instruction set is simpler and more efficient.
Certainly the floor for the complexity and size of an ARM processor is lower than for x86 - the ISA is smaller, easier to decode, and suffers from fewer silly legacy baggage items.
However, the reality is that all modern out-of-order microarchitectures are fiendishly complex and that at the "top of the game," implementation details at every level of implementation matter more than the fundamental ISA at play. With x86 there's a certain die size and complexity "tax" in terms of instruction decoding and support that majorly affects small designs. For example, an x86 microcontroller would be a bad idea compared to an ARM one, full stop. But, once you've paid that top-line x86 tax and you're building a huge out-of-order microarchitecture, the differences are minimal and how well you build the rest of the CPU's machinery matters much more than the ISA you started with.
One difference is having constant instruction length, which allows highly parallel decoding and a wider machine in general. This is part of what makes Apple's CPUs faster and cannot be replicated by x86-64, ARM32 or RISC-V with compressed instructions.
> what about ARM makes it more performant than x86?
It isn't. ARM's fixed-length instructions did help Apple achieve a wider front-end than contemporary x86 CPUs, but that's about the end of the ISA differences.
Which is why all the other ARM CPUs are slower than x86 CPUs. ARM doesn't really provide an advantage. Apple's stonking huge cash and R&D budget along with Intel struggling right as TSMC is firing on all cylinders is what gives M1/M2 an advantage.
From my understanding, there are three things to keep in mind:
1. Whenever you do an x86 vs ARM comparison, there's a number of variables that need to be considered, the node size of the CPU, the power envelope, number of cores, cooling, etc... that make it very difficult to do 1 to 1 comparisons.
2. The main issue x86-wise is that every x86 CPU needs to support instructions going back to the 1980s for backwards compatibility, which "wastes" a lot of silicon on functions that are rarely used but need to be in there.
3. Apple has the ability to focus their products on a few specific devices and just one operating system. This helps them design CPUs where they know 100% what they need to handle. You couldn't say that a Qualcomm Snapdragon, just because it's ARM, is better than x86.
All that being said, I'm very much someone who prefers x86 hardware and Windows, but the M1/M2 and Rosetta are very impressive pieces of hardware/software that hopefully kick Microsoft/Linux/Intel/AMD to innovate.
Curious, is this due to something about x86 design (eg: technical benefits over ARM), or are you just referring to the "PC" hardware ecosystem in general (as opposed to Apple/macOS)?
RISC has had its moments. Way back, it was better than CISC because the simpler instructions allowed a higher clock rate. Then CISC CPUs turned into RISC CPUs with a CISC-to-RISC translation on the front end. With that in place, there was a whole stretch of time where the only real advantage that RISC had over CISC was that RISC didn't have to have a lump of silicon that translates CISC into RISC instructions.
However, now there's enough space on the die for a CPU to have lots of parallel execution units, and the part of it that became really difficult to scale was the CISC-to-RISC translation unit, because each CISC instruction had an unpredictable length, making working out which instructions you can translate tricky and silicon-consuming.
And so RISC has a significant advantage once more, and that is the ability to vastly-simplify the part of the CPU that feeds instructions into the execution pool, compared to a CISC CPU, because the instructions are fixed-length. This allows this part to translate more instructions per clock cycle than a typical CISC CPU can, and this is what gives the improved performance.
I would naively assume that this would be an advantage, since you could easily change the hardware used for any CISC instruction, finding better ways to make it faster. The "work unit" is more abstract, so you could throw the whole problem at dedicated silicon. Or, you could remove dedicated silicon, and just have the CISC front end spit out a list of RISC instructions.
It seems that, for RISC, you could never throw a more abstract "work unit" at dedicated silicon, without buffering instructions, to see if the intent matches the accelerators. Chip specific compilers would almost be required, to handle the abstraction.
It was an advantage. Now it's a liability, because it is hard to scale the CISC-to-RISC translation up to decode many instructions per clock cycle. The translation unit has become the bottleneck.
Why? Wouldn’t it be adding a list of instructions to the queue, for the RISC side? Why would that RISC side have to be slower? I would assume it would be mostly independent. Or is that queue pollution the problem, rather than the execution of it?
The problem is that CISC instructions are variable-length, so it's easy to work out how to decode the first instruction, but the second instruction depends on the length of the first instruction, and if you try to decode four instructions at once in a single clock cycle then it all gets a bit too much, which reduces your maximum clock speed.
In comparison, a RISC instruction decoder knows that each instruction is the same length, so each instruction can be decoded without depending on the ones before it. This simplifies the decoder so much that it makes it possible to decode four instructions per clock cycle without investing in too much silicon to do it, and while keeping a high clock speed.
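A toy sketch of that dependency in C (the "instruction encodings" here are made up, not real x86 or ARM): with a fixed width, every instruction's start address is known up front, while with variable widths each start depends on decoding everything before it:

    // Toy illustration: finding where each instruction begins.
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    // Fixed width: instruction i always starts at 4*i, so several decoders
    // could all start on the same clock cycle, independently.
    static void fixed_width_starts(size_t n, size_t starts[]) {
        for (size_t i = 0; i < n; i++)
            starts[i] = 4 * i;
    }

    // Variable width: the length of instruction i is only known after
    // looking at its first byte, so the start of i+1 is a serial chain.
    static void variable_width_starts(const uint8_t *code, size_t n, size_t starts[]) {
        size_t off = 0;
        for (size_t i = 0; i < n; i++) {
            starts[i] = off;
            size_t len = (code[off] & 0x80) ? 3 : 1;   // made-up length rule
            off += len;
        }
    }

    int main(void) {
        uint8_t code[] = {0x01, 0x85, 0x00, 0x00, 0x02, 0x90, 0x00, 0x00};
        size_t f[4], v[4];
        fixed_width_starts(4, f);
        variable_width_starts(code, 4, v);
        for (int i = 0; i < 4; i++)
            printf("insn %d: fixed start %zu, variable start %zu\n", i, f[i], v[i]);
        return 0;
    }

Real wide x86 decoders speculate on instruction lengths and throw silicon at the problem, and that speculation is exactly the cost being described above.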
CISC-to-RISC overhead is probably not the only factor. Snapdragons, for example, don't have CISC-to-RISC translation, yet they seem to underperform both Intel and the M1.
I was trying to say that if CISC-to-RISC overhead is the main contributing factor, then other laptops without the overhead would be competitive. The ThinkPad X13s with a 4-core Snapdragon, for example: it is a pure-RISC machine and it seems to underperform Intel and is almost twice as slow as the M1.
RISC is the secret sauce that allows M2 to decode four instructions per clock cycle, due to the constant instruction length, and that is what gives the M2 a speed advantage over CISC instruction sets. The snapdragon CPUs aren't trying to decode four instructions per clock cycle, so they aren't taking advantage of that feature of RISC instructions.
Got it. It is not enough to have a fixed width instruction set, you also have to actually decode and execute them faster after decoding. Wonder at what point we will start seeing competitive ARM CPUs.
No doubt they would have made a faster x86 CPU. But how would this CPU compare to M1? M1 superiority must have something to do with ARM's fixed instruction length according to this article: https://debugger.medium.com/why-is-apples-m1-chip-so-fast-32...