Apple Silicon M1 chip in MacBook Air outperforms high-end 16-inch MacBook Pro (macrumors.com)
1080 points by antipaul on Nov 12, 2020 | 1137 comments



All: there are multiple pages of comments; if you're curious to read them, click More at the bottom of the page, or like this:

https://news.ycombinator.com/item?id=25065026&p=2

https://news.ycombinator.com/item?id=25065026&p=3

https://news.ycombinator.com/item?id=25065026&p=4


> The M1 chip, which belongs to a MacBook Air with 8GB RAM, features a single-core score of 1687 and a multi-core score of 7433. According to the benchmark, the M1 has a 3.2GHz base frequency.

> The Mac mini with M1 chip that was benchmarked earned a single-core score of 1682 and a multi-core score of 7067.

> Update: There's also a benchmark for the 13-inch MacBook Pro with M1 chip and 16GB RAM that has a single-core score of 1714 and a multi-core score of 6802. Like the MacBook Air, it has a 3.2GHz base frequency.

So single core we have: Air 1687, Mini 1682, Pro 1714

And multi core we have: Air 7433, Mini 7067, Pro 6802

I’m not sure what to make of these scores, but it seems wrong that the Mini and Pro significantly underperform the Air in multi core. I find it hard to imagine this benchmark is going to be representative of actual usage given the way the products are positioned, which makes it hard to know how seriously to take the comparisons to other products too.

> When compared to existing devices, the M1 chip in the MacBook Air outperforms all iOS devices. For comparison's sake, the iPhone 12 Pro earned a single-core score of 1584 and a multi-core score of 3898, while the highest ranked iOS device on Geekbench's charts, the A14 iPad Air, earned a single-core score of 1585 and a multi-core score of 4647.

This seems a bit odd too - the A14 iPad Air outperforms all iPad Pro devices?


AFAIK it's pretty common for new Macs to spend a while indexing their hard drives. For that reason, if you want to run benchmarks, you should generally wait until that's done (e.g. an hour, or probably less with these speedybois). It might be that the people running the Pro benchmarks didn't wait for that in their rush to publish the first benchmark. This would be consistent with what we're seeing: the Pro has faster single-core performance but slightly lower multi-core, because some of its "background" cores were busy building the index while the Air was already done with that task.
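
For anyone wanting to check this before benchmarking, a minimal sketch (assuming macOS's standard mdutil tool and Python; mdutil -s only reports whether indexing is enabled, so mds/mds_stores CPU usage in Activity Monitor is the more direct tell that the initial index build has finished):

    # Sketch: ask Spotlight about indexing before running benchmarks on a new Mac.
    # Assumes macOS's standard `mdutil` tool is available; `-s` only reports whether
    # indexing is enabled for the volume, so watching mds/mds_stores CPU usage is
    # the more direct signal that the initial index build is done.
    import subprocess

    def spotlight_status(volume: str = "/") -> str:
        result = subprocess.run(["mdutil", "-s", volume],
                                capture_output=True, text=True, check=True)
        return result.stdout.strip()

    if __name__ == "__main__":
        print(spotlight_status())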


Quite likely that's what happened; a second Geekbench score has shown up for the Pro and it matches the Air: https://browser.geekbench.com/v5/cpu/search?utf8=&q=MacBookP...

My guess: in Geekbench the Air and the Pro score the same because Geekbench is short-lived and not thermally constrained. In Cinebench you'll see the Pro pulling ahead.


Who uploads these results? Are they unverified?


Anyone on the Internet can upload results. They are not verified.


The geekbench software uploads them automatically.


It may be that the variations are due to differences in the thermal environment when the tests were conducted. I would expect the Pro and Mini to beat the Air as they should have better thermals, but that may only show up over longer-term tests, and environmental factors could win out in shorter tests. Just a theory.


If I recall correctly, Geekbench runs in short bursts and is designed to find peak performance without taking thermal limitations into account.


SPEC, on the other hand, takes hours to run.

Of course the iPhone chip isn't as beefy as the M1, but the results still speak for themselves.

https://www.anandtech.com/show/16226/apple-silicon-m1-a14-de...


Apple has since explained that the M1s are slightly different between the Air, Pro and Mini, to account for the different thermal chassis. (In the case of the Pro they enable an 8th GPU core.) It sounds like they are three different chips rather than the same chip in different configurations -- I think he said that in marketing speak. https://www.youtube.com/watch?v=2lK0ySxQyrs

Apple makes it clear that in the real world, these machines are only going to offer their incredible performance on Metal, on iPad/iPhone apps, and on any Mac apps that happen to have been ported over to M1 by their developers (using Xcode). These machines will only offer performance similar to existing Intel Macs when running existing Intel Mac apps, because the incredible performance will be spent on Apple's Rosetta 2 software making those unmodified apps compatible.

But what went unsaid, except during the part where they say they 'learned from their experience in the past processor transitions', is that by introducing the chip at the low end of the lineup first, they create a market for the (few remaining relevant) Mac developers to invest in porting their code over to ARM. Likewise, because these new machines run iPad apps at full speed on up to 6K displays, there is incentive for the iPad/iOS-only devs to expand their functionality beyond what their wares can do on a tablet/phone. (Any Mac dev that drags their feet porting may find that there are 50 iPad apps that now run fullscreen performing 75% of their functionality, costing them sales in the big-volume accounts where licenses are bought by the thousands.) Meanwhile, the type of users who can get by with two USB ports, 16GB of RAM and a single external monitor probably don't run many third-party Mac apps and are going to have an awesome experience with the iPad apps and Apple's native apps.


Hence Cinebench is often used these days when evaluating real-world performance with sustained workloads.


Different tests. Different purposes.

GB deliberately avoids running up the heat because it is focused on testing the chip, not the machine's cooling ability.

Cinebench, as you say, tests "real-world" conditions, meaning the entire machine, not just the chip.


The chip's ability to run at sustained load is a part of its design also. Precisely because modern chips have to throttle in order to meet power and thermal envelopes, we should be looking at sustained performance as a more accurate measure.

In a majority of cases, burst performance only affects things like responsiveness, and those things should be measured instead for a better reflection of the benefits.


If you perform an integration test, would you not also perform unit tests? A unit test may show areas for easy improvement if other aspects of the total package are changed.

For example, if someone thought the M1 was thermally constrained, they might decide to rip the Mini out of its case and attach a different cooling method.


Not saying that burst performance shouldn't be measured, but it shouldn't be the de facto go-to performance measure like it is now with Geekbench.


If you run only unit tests, you don't get useful data.

> they might decide to rip mini out of the case and attach a different cooling method.

99% of customers will never do this.


That's not true at all about Geekbench.

"Geekbench 5 is a cross-platform benchmark that measures your system's performance with the press of a button. How will your mobile device or desktop computer perform when push comes to crunch? How will it compare to the newest devices on the market? Find out today with Geekbench 5"


I don't see anything in this quote that discounts the parent.


The new R23 release even does multiple runs by default. Excitedly waiting for M1 results to start popping up now that it's released and has support.


The A14 Air just came out and has a brand new CPU. The Pros have much fancier displays, lower pen latency, etc. Subjectively, in most typical use, the Pros already feel like they have more available cycles than iOS apps have got around to soaking up.


Thanks, that makes sense - I didn't realise there was no Pro line on the newest chips yet. It'll be interesting to see how the next iPad Pros compare to these M1 Macbooks.


The iPad Pros never even used the A13 series - they're still back on A12 (though with some variants & binning), so understandable that it could be a fairly big jump


This is about the new line of Macs and the M1 chip in them, not iPads


My comment also mentioned the part of the article that mentioned the A14 iPad Air being the best performing iOS device - I wasn't sure why that was the case.


Geekbench is a series of extremely short and bursty benchmarks. Because of this, it doesn't really test the steady-state speed a system is capable of; it's more a test of peak performance over short periods.

In this view, it's entirely possible that the Air simply did not have time to throttle before the benchmark ran out.


Yep, try doing that test back to back 25 times and see who comes out on top.
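
Roughly what that looks like in practice, as a minimal Python sketch: run the same CPU-bound workload back to back and log each round's time; a machine that throttles will show later rounds slowing down. The workload and round count here are arbitrary stand-ins, not any standard benchmark:

    # Minimal sketch of a "back to back" sustained test: run the same CPU-bound
    # workload repeatedly and log each round's time. A machine that throttles will
    # show later rounds getting slower; a burst-only benchmark never sees this.
    import time

    def workload(n: int = 2_000_000) -> int:
        # Arbitrary CPU-bound stand-in, not a real benchmark kernel.
        total = 0
        for i in range(n):
            total += i * i
        return total

    def sustained_run(rounds: int = 25) -> None:
        for r in range(1, rounds + 1):
            start = time.perf_counter()
            workload()
            elapsed = time.perf_counter() - start
            print(f"round {r:2d}: {elapsed:.3f}s")

    if __name__ == "__main__":
        sustained_run()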


I ran this Geekbench test on the several different computers we have on hand, and I can confirm that its measurements are totally useless for real-world applications or performance.


Torvalds ripped it apart nearly a decade ago.

It's a useless benchmark. What I want to see is things like time to compile a bunch of different software: things that take long enough for the processor/cooling to reach thermal equilibrium, etc.

I.e. stuff that more closely matches the real world


>There’s been a lot of criticism about more common benchmark suites such as GeekBench, but frankly I've found these concerns or arguments to be quite unfounded. The only factual differences between workloads in SPEC and workloads in GB5 is that the latter has less outlier tests which are memory-heavy, meaning it’s more of a CPU benchmark whereas SPEC has more tendency towards CPU+DRAM.

https://www.anandtech.com/show/16226/apple-silicon-m1-a14-de...


For what it's worth, he ripped apart a very different benchmark; that was GB3 at the time, I believe. Geekbench 5 was a rewrite to make it cross-platform-equal. In real-world use it actually is far more relevant than thermally limited benchmarks. It just measures max peak burst performance... which is important because 90% of all users use their computer only for bursty tasks rather than long-term processing. See exporting a 10-second clip on an iPhone or loading a heavy SPA webpage on a Mac. These are 5-second burst tasks where real-world use would not be thermally limited but would see real change consistent with Geekbench.

It's really only intended to be one of many benchmarks that together tell the whole story; of course Linus would attack it, because it doesn't make much sense for his use and isn't the full story for him. If Geekbench weren't included, the majority of computing uses would not be covered, and CPUs with poor turbo or burst performance would be weighted unfairly high for most uses.

Geekbench is kinda like 0-60MPH times and other tests (like SPEC2006) are like top speed I guess? The whole story is between them.


I believe this is the discussion OP is talking about: https://yarchive.net/comp/linux/benchmarks.html


The results seem a little weird but if remotely true then these machines are going to sell like cup cakes.

Why would anyone (who is not forced) buy an Intel PC laptop when these are available and priced as competitive as they are?


> Why would anyone (who is not forced) buy an Intel PC laptop when these are available and priced as competitive as they are?

- locked bootloader - no bootcamp - can't install or boot linux or windows

- virtualization limited to arm64 machines - no windows x86 or linux x86 virtual machines

- only 2 thunderbolt ports

- limited to 16GB RAM

- no external gpu support/drivers - can't use nvidia or amd cards

- no AAA gaming

- can't run x86 containers without finding/building for arm64 or taking huge performance hit with qemu-static

- uncertain future of macos as it continues to be locked down


Although I personally care about every point you listed, I don’t think most buyers care about any of them.


Maybe the gaming part.


To the extent that games are made available for macOS/ARM in the first place (admittedly a sticking point), it looks like these machines will be able to play most of them reasonably well. Certainly much better than most of Apple's previous machines with integrated graphics or low-end discrete GPUs.


iOS games/apps run on Apple Silicon Macs, so that by itself opens up a huge gaming market.


Casual gaming market.


In other words, the biggest gaming market.


Maybe, but distinct from real "gaming".


A game is a game. There is no "real gaming". Apple Arcade with an Xbox controller paired to a Mac is actually a fun gaming device for some types of gamers.

Genshin Impact is a great game that is on iOS in addition to "real consoles". Oceanhorn 2 is an amazing game that was originally on Apple Arcade and brought to Nintendo's "real console".

There is also quite a number of ports that I think you aren't aware of.


There is a difference between Tetris on Facebook and Dota or CSGO on PC. The latter is "real gaming", the former not. The border might be a gradient.

It's like calling yourself a programmer because you can set a timer on your VCR. (dated but still accurate)


And since when is CSGO a GPU-demanding game?

You think "real games" are for "hard triers only", but it's just your point of view.


Mine and many others. Rightfully so, see the example.

GPU was not the issue here.


umm there are more ports than you think.


I work on multiple OS X machines, the servers run Linux, and I have a single Windows machine just for the few games that can't be run elsewhere.


If porting iOS games is easy, then gaming for most people shouldn't be an issue.


I feel like the demand for desktop versions of simple phone games is pretty low.

Mac users who hope to play anything from their steam library or dual boot Windows are going to be very disappointed.


> Although I personally care about every point you listed, I don’t think most buyers care about any of them.

Buyers don't especially care about performance either, to be honest, unless they have a specific use that actually demands it.


Consumers absolutely care that their Apple MacBook Pro has 20 hours of battery life, when the comparable XPS has only 10 hours or whatever.


The ROI for battery life falls off at a certain point, right? For phones, it's probably about a day -- how often is it a problem to plug in your phone at some point in a 24-hour period? -- and for laptops it's often about a full workday, 8-10 hours. I'm not saying that a 20-hour laptop battery isn't an incredible accomplishment, but I do think that I care a lot less about 20 vs 10 hours than I do about 10 vs 5.


And then sit at a desk all day within reach of a power socket


This. The number of times I've truly needed more than a couple of hours of battery life is small. I think most people think they want more battery life when they really don't need it. Just add more cooling to stop those processors from throttling all the time.


> "no AAA gaming"

This is, arguably, a disadvantage of any Mac.

But Apple Silicon may actually improve the situation over time, as having the same GPUs and APIs on Macs and iOS devices means there is now a much bigger market for game developers to target with the same codebase.


It's more about losing the ability to boot into windows to game there, as well as losing egpu support.


I agree. There is a danger, the same as with apps actually, that developers will target only the iPad (touch interface) and won't care about optimizing for the Mac experience.

But on the whole I am optimistic.


Apple has had support for game pads for a while now. This will just make it more viable.

The only issue might be multi-touch based games on M1


And don’t forget that all of the iPad/iPhone games will work on these laptops. That’s not quite the same thing as having major PC titles, but it’s not nothing either.


That and streaming direct to the browser


> But Apple Silicon may actually improve the situation over time, as having the same GPUs and APIs on Macs and iOS devices means there is now a much bigger market for game developers to target with the same codebase.

Not really. The business models for desktop gaming are completely different from those for mobile devices, and there is no meaningful common market.

I think people will actually be surprised at how few games from iOS will even run on an ARM Mac because developers will block them.

It used to be possible to do some gaming on a Mac - the vast, vast majority of Steam users have graphics hardware of a level that was perfectly achievable on a Mac, especially with an eGPU. The end of x86 is the end of that market, forever.


> "the vast, vast majority of Steam users have graphics hardware of a level that was perfectly achievable on a Mac"

Exactly. So it was never really the hardware that held back gaming on Mac, but the fact that from a game-development perspective it's an esoteric platform that has limited or no support for the main industry standard APIs (DirectX, Vulkan, etc).

It was never worth the effort for most game developers to bother porting games to the Mac because writing a custom port for Metal was way too expensive to justify for such a niche market.

But now with Apple Silicon, that all changes. If you're going to port your game to iOS (and that's surely tempting - it's a huge platform/market with powerful GPUs and a "spendy" customer base) then you basically get Mac for free with close to zero additional effort.


> Exactly. So it was never really the hardware that held back gaming on Mac, but the fact that from a game-development perspective it's an esoteric platform that has limited or no support for the main industry standard APIs (DirectX, Vulkan, etc).

I think it's more that gaming wasn't held back on the Mac. It's just that Boot Camp was much more common than people think.

> If you're going to port your game to iOS (and why not? It's a huge platform with powerful GPUs and a huge, "spendy" market)

Because mobile gaming and desktop gaming have very little in common. Note that Nintendo didn't port their titles when they released iOS games, they made new games. Users want different experiences, and flat ports of successful console gaming titles to iOS tend to fail. There are, all told, very few ports of successful PC/console games to iOS, and those that exist tend to be brand reuse rather than literal ports.

> then you basically get Mac for free with close to zero additional effort.

Not even remotely. The way you secure your service has to be totally different, the UI paradigm is completely different, you have to cope with totally different aspect ratios etc etc. It's significant effort, and it will be very hard to justify for most game studios. It's certainly more work in most cases than porting a Windows game to MacOS was when using a mainstream engine, and that was not a huge market.


Why would developers "block" their iOS games from running on ARM Macs?


1) Macs are harder to consider secure; they're effectively all jailbroken. Cheating and bypassing in-app purchases will be rampant, reducing the opportunity for cross-play, and the Mac market isn't big enough by itself. These aren't insurmountable issues, but they require investment, and the additional Mac market probably isn't worth the outlay and risk.

2) You have to rebuild the UI, which costs money which the Mac version may well not recoup.

3) You have a different version for desktops that costs more upfront with less reliance on in-app mechanics that you don't want to undermine.


> "Macs are harder to consider secure, they're effectively all jailbroken."

OK, but that's no different to Windows and Android.

> "You have to rebuild the UI"

No. Even with apps this is no longer the case (see: "Mac Catalyst"), and it's certainly not true for games. Maybe you'd need to add some keyboard/mouse bindings, but that's about it. Even iPads support keyboards and mice nowadays!


So if an iOS title uses a multi-touch gesture, how do you replicate that on macOS?


Every MacBook has a multitouch trackpad. It's rare that I ever use a mouse.


Still doesn't account for desktop devices. And no, it's not a given that every desktop mac user has a Magic Trackpad.


You can do those multitouch gestures on TouchBar /s


"uncertain future of macos as it continues to be locked down"

Citation Needed.

Apple detractors LOVE to bring this idea up, but there's nothing to it in any real sense. Do Macs ship with a checkbox filled in that limits software vendors? Yes. This is a good thing. Is it trivial to change this setting? Also yes.

Anyone who buys a Mac can run any software on it they like. There is no lockdown.


There is a lockdown, as you cannot even boot Linux anymore...


I guess that may count for you, but I mean lockdowns within the OS itself.

I don't care that I can't run Linux on my Mac. If I wanted to run Linux, I'd have different hardware.


You can change that on T2-based Intel Macs, at least, just like on Windows: https://support.apple.com/en-us/HT208330

Of course, Apple as an OEM does not support running non-Mac OSes, so virtualization should still be preferred for most use cases.


That's why I said "anymore". It was the case, but it is no longer possible with the new Apple Silicon laptops.


There’s no indication from Apple that they are intentionally not supporting this feature - just that it doesn’t exist right now. That said in practice I never use BootCamp because the driver support is always sub-par. It’s a much nicer experience to virtualize, especially now that most virtualization platforms offer native support for Apple’s virtualization libraries, such that installing third party kernel extensions are less necessary now than ever before. (I think the only ones I tend to install now are Tuxera NTFS support which tends to be really high quality. Apple should just buy Tuxera and ship them natively.)


I guess Microsoft will have to hurry up and build an ARM version of Windows so they can keep the rounding error number of BootCamp users satisfied.


An ARM version of Windows has existed for almost a decade. In fact it's running on quite a few laptops right now.


Since 1997, to be exact. WinCE has run on ARM since time immemorial.


There is an ARM version of Windows 10 that runs on the Surface Pro X [1].

[1] https://www.microsoft.com/en-us/p/surface-pro-x/


Not only that, the first released version of NT on ARM was in 2012.

They had crappy code signing policies (only store apps on Windows RT tablet) which guaranteed poor adoption but that was a policy decision, not a technical one.


Catalina already broke a TON of legacy software and you cannot downgrade newer Macs to Mojave (at least not without some serious hacking efforts, and I know at least one person who tried and failed).


That's not true at all. You can use recovery mode to trivially revert back to the OS that was installed when the computer was purchased. If that's pre-Mojave then you can just upgrade back to Mojave afterwards.


What if the Mac had a newer OS than Mojave originally installed on it? That is how I interpreted the parent poster's comment. Given this interpretation, I don't think I'd have the expectation of being able to install an earlier OS.


With that interpretation, you'd be correct but I don't think you've ever been able to downgrade to something earlier than what it came with since the older OS wouldn't include the appropriate drivers or kexts to properly run the hardware.


So the choice is between running an older OS version that will go EOL sooner, or abandoning the ability to dual-boot? How is that OK?


ARM Macs can't run unsigned software.


Yes, but code signing can be ad-hoc, can be done automatically at build time, and doesn't require notarization. So it's basically just a way to ensure the binary has not been tampered with. I don't really see the problem here, as the code signing itself does not prevent any kind of code from running on macOS Big Sur.
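
For illustration, a minimal sketch of what "ad-hoc, at build time" can look like, assuming macOS's codesign tool is on the PATH; the "-" identity means ad-hoc, i.e. no certificate and no notarization:

    # Sketch: ad-hoc signing a freshly built binary as a post-build step on macOS.
    # Assumes the standard `codesign` tool; "-" as the signing identity means
    # ad-hoc (no certificate, no notarization), just an integrity seal on the binary.
    import subprocess
    import sys

    def adhoc_sign(binary_path: str) -> None:
        # --force replaces any existing signature on the binary.
        subprocess.run(["codesign", "--force", "--sign", "-", binary_path],
                       check=True)

    if __name__ == "__main__":
        adhoc_sign(sys.argv[1])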


With iOS, you have to be an Apple developer paying $99/yr to do ad-hoc signing; I'm guessing it's the same now for macOS.


That's not true. Anyone with an Apple ID is able to sign binaries they build and install them on their iOS devices.

Although instead of lasting 1 year the signatures only last 7 days, there is no fee for a user to sign and install their own binaries.


My question was about MacOS and if similar behaviour exists there too with the M1 Macs.

To clarify iOS, so the app erases itself after 7 days? Or is it something like you can install an app for only 7 days after downloading/using Xcode?


To answer my own question: an ad-hoc signed iOS app will deactivate after 7 days unless you pay $99/yr. This behaviour is not present on Big Sur, and likely not on M1 Macs either; they can still run notarized and non-notarized apps: https://arstechnica.com/gadgets/2020/11/macos-11-0-big-sur-t...


If you can sign ad-hoc then there's no point, right? Just modify and re-sign.


Can be ad-hoc? For how long?


Cite?


- Starts at $999 for the base laptop version. You can get much cheaper, still good, Windows laptops.


You lost me at "good Windows laptops".

macOS has plenty of warts, but my experience with high quality equipment (Thinkpad, XPS, Alienware) has left me ultimately disappointed with Windows in many day to day situations compared to Mac.

Windows is still clunky, despite many improvements. And aside from a ThinkPad X1 Carbon, I haven't used any laptop with the performance and build quality (for the size/money) of a MacBook Air.


If you need a computer for serious (long hours) use, I would always go for desktop, as you can get a vastly superior machine to any laptop, with massive amounts of disk space, memory, tons of cores, screens, etc. If you want a Mac, I'm not familiar with desktop Macs but I'm guessing the Mac Pro machines blow laptops out of the water the same way high end desktop PCs do.

For travelling, I don't think anything beats a Macbook due to how light, thin, and resilient they are. But my 2016 MBP is a pretty shit machine for its price. It's also loud (like every other laptop I've had). I avoid using it. Sure, if you take size/design/mechanical quality into account, it is probably unmatched. But for 95% of my computer usage, those are irrelevant, as I just sit at my desk. I had a company provided HP laptop (not sure if stock or upgraded by our IT staff) at my previous job which was far more performant than my Macbook, so I don't really agree that Windows laptops are necessarily bad, but it was even louder than the Macbook, and of course clunky and ugly.

For me personally, the new Macbooks are disqualified as viable work machines if it's really true that you can't use more than 1 external screen. That's just not a viable computer for me (for work). I will always have a Macbook though just because of how much I love them for travel. But a Macbook is more of a toy than a serious computer, especially if the 1 screen limit is true.


I'm in the market for a new work machine myself, and have been eying a final-generation loaded Intel MBP16. I'm sure the AS models will catch up on graphics capability by the end of their transition time, though I'm certainly wondering what the first AS MBP16 will do for graphics. I certainly wouldn't buy less capability than the 5600M myself.


" I'm not familiar with desktop Macs but I'm guessing the Mac Pro machines blow laptops out of the water the same way high end desktop PCs do."

Unfortunately they will also blow your wallet.


Wow, I just checked and yeah those prices are pretty insane, especially if you want a better-than-base model. I guess then in the desktop arena, Macs are at a disadvantage, because you can build a similarly powerful PC for a much more reasonable amount.


Certainly the Pro desktops must be intended for pro people who can quantify the number of billable hours they will save in Final Cut or Logic and come up with a "return on investment" figure.

The iMacs are a mystery to me, but I guess I'm not the target market anyway. (I have a 2018 MBP.)


> you can build a similarly powerful PC for a much more reasonable amount

It's not even a contest or similarly powerful: spend $3000 on an AMD + Nvidia PC and it's significantly more powerful than the $5000 Mac Pro in both CPU and GPU compute.


In my experience, the Thinkpads are indeed the only real competition, hardware-wise.

When my current Mac dies, that's where I'm headed, but running Linux; Microsoft is less of a danger, so I don't outright boycott anymore, but I still find Windows super annoying to use.


The Windows market is all over the place, all the way from leftover-bin junk from 5 years ago sold as new to high-end cutting edge. When you get below 900 dollars the market is decidedly on the crap side with respect to Windows. There are some exceptions, but usually you have to get to 1200-1800 before you start getting quality items. Not saying you cannot find good stuff near the bottom, but you get what you pay for. Usually they skimp on the screen, memory, and disk. I am currently using an MSI Stealth gaming laptop; other than the keyboard layout being slightly odd, I am liking it a lot. I replaced my previous HP of 8 years. That will find a new home doing something else once I do a full teardown and repasting; luckily it is one of the last HP laptops where taking it apart is not a total nightmare. Finding a decent laptop takes a lot of work. Going with Apple has a lot of advantages, as the hassle of 'picking' is cut down to a few models, and you have a good shot of it being decent. I personally would not buy an Apple, but that is because of other 'petty' reasons and not quality.


If Lenovo could for once figure out why the speakers on these ThinkPads are SO BAD, I wouldn't be reading this thread because I wouldn't care about Macs. I know there are headphones, but many times when I'm alone I just want to watch a video and actually hear the people talk; can't do that with the ThinkPad.


I bought a thinkpad hoping to avoid quality issues that I’ve experienced with other machines. The hardware is great (except for the wimpy cooling), but I have had various annoying issues with drivers, bios updates, and the behavior of their system update tool.


Maybe that's specific to Windows or to preinstalled Windows bloatware? The high-end ThinkPads (T/X/P/W/Carbon) are generally well supported by Linux distros, partially thanks to many kernel and distro developers using them.

As for BIOS (well, EFI these days), that should be handled very seamlessly via fwupd on all major Linux distros: https://fwupd.org/lvfs/devices/

(Frankly, it seems much more robust than how it is handled on Windows: not at all, or via half-broken OEM bloatware.)


You can also buy some ThinkPads with Linux preinstalled now if you don't want to worry about hardware compatibility issues.


It's been a couple of generations since I used the ThinkPad, but wiping and carefully reinstalling only the useful drivers/apps was how I did it. Perhaps it's locked down now such that you can't do that (and if so, I wouldn't buy it!)


In my experience, Microsoft and PC laptop manufacturers still haven't figured out how to make a trackpad that works as well as a 2008 era Macbook.


I've had both platforms at the office for years and still haven't discovered what is so magical about the trackpad.


The biggest difference between MacBook trackpads and the best Windows ones is the super low hysteresis of pointer motion vs finger motion. I recently bought and returned a Microsoft Surface Book with a "precision touchpad". The main reason for returning it was that pointer control feels sluggish compared to the MacBook, and its pointer speed was too slow even at its fastest. The best Dell touchpads are no better and Lenovo trackpads are even worse.

I understand that this may be because PC touchpad hardware reports jitter, sometimes higher than it really is, and this causes the Precision Touchpad software to increase the hysteresis. Macbook touchpads have low jitter and the driver is tuned to benefit from it.

If anyone at Microsoft with input into the Precision Touchpad reads this: why don't you fix it or work with your licensees to fix it?


Sounds like exactly the kind of thing you can optimize much easier when you control both the hardware and the software.


Like Surface laptops?


Edge to edge uniformity, physical feel, virtual feel of the scroll and motion, gestures. Other than that, sure, same.


Yep, though sadly, in my view, the current line up of Macbooks trackpads aren't as good as the 2008 era Macbooks either....


I disagree, my razer blade 15 has been amazing.


IIRC, binaries on ARM macOS have to be signed.

I.e. "We raised the walls on our garden further"

Balls to that, if I buy hardware I want to be able to run what I want on it or it's not a general purpose computer, it's something else.


The Macintosh, by design, was never a general purpose computer. It was a computer that Steve Jobs allowed you to use. The Apple II was the general purpose computer that Woz championed.


The claim of no AAA gaming is totally uncertain -- Apple seems to think that you'll be able to run games in Rosetta with better performance than what you can get on the existing 16" Macs. I guess we'll have to wait and see, but if these new Macs really are so great, I'd expect devs to start porting their games.


>- locked bootloader - no bootcamp - can't install or boot linux or windows

This has been a claim made about the Macs since the T2 chip came out. It was strictly false then (you just had to boot into Recovery Mode and turn off the requirement that OSes had to be signed by Apple to boot) and we still don't know for sure now. Apple has stated in their WWDC that they're still using SecureBoot, so it's likely that we can again just turn off Apple signature requirements in Recovery Mode and boot into ARM distros.

Whether or not that experience will be good is another thing entirely, and I wouldn't be surprised if Apple made it a bitch and a half for driver devs to make the experience usable at all.

>- virtualization limited to arm64 machines - no windows x86 or linux x86 virtual machines

True, but this isn't a strictly unsolvable limitation of AS; it's more like one of those teething pains you have to deal with, as this is the first-generation chip in an ISA shift. By this logic, you could say that make doesn't even work yet. Give it some time. In a few months I expect all of these quirks to be ironed out. Although, I suppose if you're concerned about containers, it sounds like you want to be in the server market, not the laptop market.

>- only 2 thunderbolt ports, limited to 16GB RAM, no external gpu support/drivers, can't use nvidia or amd cards, can't run x86 containers without finding/building for arm64 or taking huge performance hit with qemu-static

See above about "give it some time".

>- no AAA gaming

I mean, if you're concerned about gaming, you shouldn't buy any Mac at all. Nor should you be in the laptop market, really. Although, this being said, the GPU in the new M1 is strong enough to be noted. In the Verge's benchmarks, Shadow of the Tomb Raider was running on the M1 MacBook Air at 38FPS at 1920x1200. Yes, it was at very low settings, but regardless – this is a playable framerate of a modern triple-A game, in a completely fanless ultrabook ... running through a JIT instruction set translation layer.

>- uncertain future of macos as it continues to be locked down

I disagree. I know we were talking about the M1 specifically, but Apple has shown that the future of ARM on desktop doesn't have to be as dismal as Windows made it out to be. Teething pains aside, the reported battery life and thermal performance on the new AS machines have been absurdly fantastic. I think, going down the road, we'll stop seeing x86 CPUs on all energy-minded machines like laptops entirely.


> - no AAA gaming

I thought Google, Microsoft, Nvidia, etc. were all pushing streaming gaming services that will run on any hardware with a decent internet connection. I would imagine the hardware video decoder in the M1 chip would allow 4K streaming video pretty well.


But these “features” are not highlighted on the product page (aside from memory). The core count and battery performance are listed. I think many people will buy these. And arm64 containers will come in time with adoption.


As a related benefit for non-Mac users: ARM64 support for packages at build time is going to greatly improve over the next few years!


Most games played by most players are on iOS & iPadOS, and macOS Big Sur will run them on your MacBook.


Hmm. I might buy one anyway and use a remote docker host for x86...


> Why would anyone (who is not forced) buy an Intel PC laptop when these are available and priced as competitive as they are?

There are enough people who do not want to deal with macOS and Darwin regardless of the hardware specs.

Also, the path of least friction is usually to use whatever the rest of your team uses. There are even relevant differences between Docker for macOS and Docker for Linux that make cross-platform work difficult (in particular I'm thinking about host.docker.internal, but there are certainly more). Working with C/C++ is another pain point for cross-platform work, which already starts with Apple's own Clang and different lib naming conventions.

Moving away from x86 certainly doesn't make this situation better.


> Working with C/C++ is another pain point for cross platform, which already starts with Apples own Clang and different lib naming conventions.

A walk in the park for anyone who has had to deal with coding in C or C++ across UNIX flavours.


Or anyone who has had to deal with Microsoft's own version of everything.


Point to one C or C++ compiler vendor that has a pure ISO compiler with zero extensions.

Toy projects don't count.


I do web development and I'm not sure how my locally compiled libs will behave on x86-based servers. We often upload our local build artifacts to the DEV envs... I'm not sure this will work on a different arch.

That said, my wife returned the macbook air she bought 3 weeks ago in favor of this new one, so I'll be able to test on that machine before I dive in.


I'm primarily a Mac user but laptops are cheap. If I were working on a team doing Linux development for x86 I'd certainly have a Linux laptop for that even if I preferred a MacBook for other purposes.


Until all software is ported to ARM, it will run in emulation, which is going to be slower in most cases. People invested in plug-in ecosystems, like DAWs or video editing, will likely have an endless long tail of plug-ins that aren't getting ported, or that require a re-purchase to get an ARM version. And due to Rosetta's architecture, you can't mix ARM and x86 plug-ins (in-process, like VSTs - Apple wants you to use AUv3 which is out of process but nobody does that), so you will be running your entire workflow under emulation until you can make the switch hard and all at once. And some of your software will never make it.

Mark my words, this is going to be a massive shit show for people using those ecosystems, for 5 years if not 10. It already happened with the PPC transition.


Rosetta2 is mind blowing.

“Fun fact: retaining and releasing an NSObject takes ~30 nanoseconds on current gen Intel, and ~6.5 nanoseconds on an M1”

https://mobile.twitter.com/hhariri/status/132678854650246349...

“…and ~14 nanoseconds on an M1 emulating an Intel”
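
For context on where per-operation nanosecond figures like these usually come from, a minimal sketch of the standard microbenchmark recipe (a tight loop divided by the iteration count), in Python with a trivial stand-in operation; this is not Apple's or the tweet author's harness:

    # Sketch of how nanosecond-scale per-operation figures are usually obtained:
    # run the operation many times in a tight loop and divide by the iteration
    # count. The operation here is a trivial stand-in, not ObjC retain/release.
    import time

    def ns_per_op(op, iterations: int = 5_000_000) -> float:
        start = time.perf_counter_ns()
        for _ in range(iterations):
            op()
        return (time.perf_counter_ns() - start) / iterations

    class Box:
        pass

    obj = Box()
    # Loop overhead is included, which is why careful harnesses subtract a baseline.
    print(f"{ns_per_op(lambda: id(obj)):.1f} ns per call (including loop overhead)")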


How does it handle DSP inner loops? SIMD? x87 code? Floating point corner cases like denormals? What about the inevitable cases where impedance mismatch between the architectures causes severe performance loss? Is Rosetta2 binary translated code guaranteed to be realtime-safe if the original code was? What about the JIT? There's no way that is realtime-safe. What happens if some realtime code triggers the JIT?

We still can't emulate some 20-year-old machines at full speed on modern hardware due to platform impedance mismatches. Rosetta2 may be good, but until someone runs a DAW on there with a pile of plug-ins and shows a significant performance gain over contemporary Intels (and zero unexpected dropouts), I'm not buying the story of Rosetta2 amazingness.


Rosetta2 is not emulation.

Edit: And Apple has already discussed how Rosetta2 handles complexities like self modifying code. It probably won’t help with performance but the M1 has a lot of power to keep even that code running fast.

But more importantly video/audio apps aren’t going to be using Rosetta2 for very long. 99% of code written for x86 MacOS is going to be a simple recompile to native, if not more. Not going native when your competitors did and got 2-5x faster is corporate suicide.


Rosetta2 is emulation just as much as qemu and Dolphin are emulation, both of which also use binary translation like every other modern emulator. Apple marketing just doesn't want you to call Rosetta2 an emulator because "emulators are slow". Anything running software on a different architecture is an emulator.

If you read my parent comment you'll see how DAWs are going to be using Rosetta2 for years to come, maybe even a decade, for many people. Even if there are ARM versions, you won't be able to use them until all your dozens if not hundreds of plug-ins, some of which won't be actively developed any more or will require a re-purchase for an ARM version, have also migrated.

People invested in such ecosystems aren't just going to up and give up half their collection of software, or spend thousands re-purchasing upgrades to get ARM versions.


Apple is a much bigger market now than it was during the PPC transition, though.


Won't emulation catch up in those timeframes? Still, 3+ years, which is a long time...


Three years isn't that long for CPU performance gains anymore, but even if it was, it isn't the emulation that gets faster, it's the hardware. Contemporary ARM machines emulating x64 would still be slower than contemporary x64 machines natively executing x64.

You're also going to be in a bind if Apple decides they don't care about the long tail and stops supporting emulation before all of your plugins have been converted (if they ever are).


There is no emulation per se; there is a one-time AOT translation of Intel to ARM. Then that native code just runs, so no emulator is running on the CPU while the app is.

There is an exception for apps with a JIT, and those will perform poorly (think Chrome and every Electron app).


"Emulation" is a catch-all term that includes binary translation, static and dynamic, which every modern emulator uses (Apple just doesn't want you to use that name because people think emulation is slow). Rosetta2 is not a pure static translator, because such a thing can't exist (see: self-modifying code).

Just because binary translation is used doesn't mean it's magically as fast as native code. Converting code that runs on architecture A to run on architecture B always has corner cases where things end up a lot slower by necessity.
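
To make the distinction concrete, a toy Python sketch of dynamic binary translation with a translation cache, hugely simplified compared to what Rosetta 2 or qemu actually do; the fake guest "instructions" and opcodes are invented purely for illustration:

    # Toy sketch of dynamic binary translation: guest code blocks are translated
    # to host-native callables on first execution and cached for reuse. Real
    # translators (Rosetta 2, qemu) work on machine code and must also handle
    # flags, memory ordering, self-modifying code, JITs, etc.

    translation_cache = {}  # guest block address -> translated host function

    def translate_block(guest_block):
        ops = list(guest_block)  # snapshot of the guest "instructions"
        def host_fn(state):
            for opcode, arg in ops:
                if opcode == "add":
                    state["acc"] += arg
                elif opcode == "mul":
                    state["acc"] *= arg
            return state
        return host_fn

    def run_block(guest_addr, guest_memory, state):
        # Translate lazily on first execution, then reuse the cached translation.
        # Self-modifying code would invalidate cache entries, which is one reason
        # a purely one-shot AOT translation cannot cover every program.
        if guest_addr not in translation_cache:
            translation_cache[guest_addr] = translate_block(guest_memory[guest_addr])
        return translation_cache[guest_addr](state)

    memory = {0x1000: [("add", 2), ("mul", 3)]}
    print(run_block(0x1000, memory, {"acc": 1}))  # {'acc': 9}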


> So no emulator is running on the cpu while the app is.

Nonetheless, the translated code is going to be slower than ordinary native code because a lot of the information compilers use for optimization isn't available in the resulting binary, so the translator has to be extremely conservative in its assumptions.


Yet the translated code will still run faster on the M1 than the original code runs on x86.


Citation needed. Not a microbenchmark, or a single example of some software. Actual sustained mixed workload usage of real life applications. Especially realtime-sensitive stuff like DAWs (where you have the added risk that calling into the JIT in the middle of a realtime thread can completely screw you over; keeping realtime-safe code realtime-safe under dynamic or even static binary translation is a whole extra can of worms).


Benchmarks are now out, and x86 Chrome is faster on the M1 MacBook Air than the x86 MacBook Air.


Sustained benchmarks await production hardware. But it will be surprising if Rosetta 2-translated apps run slower. Not only will system calls be native, but common operations like retain/release are 2x faster under Rosetta 2.

https://mobile.twitter.com/hhariri/status/132678854650246349...


That's a microbenchmark. There are a myriad reasons why one specific thing might be faster under a new CPU even under emulation. That doesn't mean other things won't be much slower.


Oh dear lawd, Electron apps can get slower?


Oh yes. JS -> bytecode -> JIT for x64 -> interpreted and converted to ARM (using both CPU and memory).

And most use electron-builder which does not have Mac Arm support. Expect super slow mode for a while!


We still can't emulate some 20+ year old machines at native speed on modern hardware under certain conditions. Emulation always has corner cases where some awkward impedance mismatch between both architectures causes severe performance loss.


> Why would anyone (who is not forced) buy an Intel PC laptop when these are available and priced as competitive as they are?

As a power user I will not be touching anything Apple ARM until all my hundreds of software apps are certified to work exactly the same as on x86_64. I will not rely on Rosetta to take care of this. I need actual testing.

Besides this, 8GB of RAM is how much a single instance of Chrome uses. I run 3 Chrome instances, 2 Firefox and 2 Safari, and this is just for web.

This could be a good time to jump the Apple ship. It's pretty clear their focus is not their power users' focus.

As such I was looking into a Lenovo ThinkStation P340 Tiny. You can configure it with 64GB RAM and a Core i9 with 10 cores and 20 threads for less $$$ than what an underpowered 6-core Mac mini is selling for.


> this could be a good time to jump the apple ship. it's pretty clear their focus is not their power users' focus.

Apple is at day 1 of their two year migration to Apple Silicon. Your judgement seems not just a little premature.


I jumped ship back to Linux (still sucks). This is the first new computer I’ve bought in almost a decade.

I think many professionals who need new hardware will use this as the catalyst to make them move back to PC hardware. The M1 looks amazing, but I need more than just Apple software to do my work. It’ll be a while before all the things I use get migrated.


It depends how you consider it.

A “two year migration” sounds just about right for a transition to something non-Apple.

We can then revisit Apple in 3 years' time.


I don't replace laptops that frequently, currently.


> this could be a good time to jump the apple ship. it's pretty clear their focus is not their power users' focus.

Their focus is not on power users? They just completed the first, small step of the migration to ARM. They only updated the very low-end models, the ones that were never targeted at power users anyway, and we're seeing that their cheapest, lowest-end models are whooping the i9 MBPro's ass.

Sure, the features and RAM may not be there yet, but again, these are the low-end models. If we're seeing this level of performance out of an MB Air or Mini, I can't wait to see what the Mac Pro is going to be capable of.


They also updated the MacBook Pro so that is exactly the performance you are going to get for this generation.

The big screen model might give you more cores and RAM but IPC is going to be exactly the same.


They updated the lesser 13" Pro, but not the high-end 13" Pro (since 2016, it's been separated into two lines, with the high end one distinguished by higher TDP, four thunderbolt ports, and more fans) or the 16". IPC will be the same, sure, but I'd expect the higher end 13" and the 16" will have more cores or higher clock speed or both, to soak up the extra TDP headroom.


The 13" MBP was never a pro "pro" model. I bet the big screen models next year will have more RAM and maybe an M2 chip.

> but IPC is going to be exactly the same.

I am not sure what you mean by this?


It's the same chip, so single-core performance is going to be the same unless they raise the clock.


Why don’t you think the M2 will increase clock speed?

And the problem with the M1 isn't performance; single-core is already off the charts. The M2 is going to provide 32GB and 64GB systems with up to four Thunderbolt/USB4 ports and support for dual 6K monitors.


I doubt that the M1 or M2 is going to have superior single core performance to the upcoming Zen4/5nm laptop chips.

Let alone multicore performance. Apple's cores are also far behind in IO; 64GB of RAM and 4x Thunderbolt is less than what current-gen laptop chips can do.


I agree that Zen4 should be comparable, but it also will cost 4X to make, and more to implement since it doesn’t include RAM.

The M1 is a system on a chip, with all the benefits and drawbacks of that including RAM and port limits.

The next releases will likely be A) a tweaked M1 for higher end PowerBooks with more RAM and ports and B) a desktop version with plenty of ports, significantly higher clock speeds, and off chip RAM.

I think there will always be faster CPUs out there, but not remotely near the M series in power per watt, and cost per power.


Zen is also an SoC, but with off-chip memory; this brings other advantages.

Most importantly, Zen 4 is a chiplet design, so for the same amount of cores it will be cheaper to make than the M1 chip.

As for performance per watt, Renoir in low power configurations matches the A12. I would really doubt that a laptop Zen 4 on 5nm LPP wouldn't pass the M1/M2 in both performance and performance per watt, because Renoir is on 7nm with an older uArch and gets close.


> it's pretty clear their focus is not their power users' focus

Depends on the definition of "power user". Music producers, video editors, and iOS developers will be served quite well.

> lenovo thinkstation p340 tiny. you can configure it with 64gb ram and core i9 with 10 cores and 20 threads for less $$$ than what an underpowered 6 core mac mini is selling for.

When making that calculation, one should also take power consumption into account. $ per cycle is very low now with the new CPU.


Taking power consumption into account makes sense when the machine is running on battery power, but all modern processors are power efficient enough for the cost of electricity to be negligible for a tiny desktop computer.


I agree for the most part; the exception would be if I were running the small machine as a server. I know this is outside of most use cases, but if I were buying a machine to have on all the time (Plex, email, whatever), I'd want to at least feel like it's not driving up my electric bill.


This is where idle power matters. I recently replaced a pretty low-power Atom in my NAS with an i3-9100F. The peak power usage is probably a good 2x higher, but the idle power is just a couple of watts, so I expect my average power draw to be much less, since the power draw under Plex etc. is about the same and the machine sits idle most of the time.


My tiny desktop is on all the time (but idle most of the time) and that was the frame in which I wrote my comment.


Hypothetical scenario: You save 50W (maybe too high, maybe not), use the machine for 10h every day, and a kWh costs you €0.40 (eg in Germany). You save €0.20 per day, €73 per year, and €365 in 5 years. Definitely a factor in areas with high electricity prices.
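
The same arithmetic as a quick sanity check, with all inputs being the assumptions above rather than measurements:

    # Reproducing the back-of-the-envelope numbers above (all inputs are the
    # commenter's assumptions, not measurements).
    watts_saved = 50        # assumed difference in average draw
    hours_per_day = 10
    eur_per_kwh = 0.40      # roughly a German household rate
    per_day = watts_saved / 1000 * hours_per_day * eur_per_kwh
    print(f"per day: EUR {per_day:.2f}")                  # EUR 0.20
    print(f"per year: EUR {per_day * 365:.0f}")           # EUR 73
    print(f"over 5 years: EUR {per_day * 365 * 5:.0f}")   # EUR 365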


I think for most power users, they probably generate significantly more than €73 in value from the computer every day (or maybe every hour), so they are probably not thinking too much about that savings.

(Of course, power savings are important in their own right for mobile / battery-operated use cases.)


"Video editors" are going to buy a machine with 8GB RAM? (Which, I assume, will be soldered to the motherboard, like all recent Apple products.) Good luck to them, I guess.


They also doubled (!!!) the SSD speeds, at least according to their slides. Presumably swapping will be much more seamless, so I'm not sure low RAM would be a huge issue for most day to day tasks.


It will still be a problem. The difference in access time between RAM and SSDs is still orders of magnitude in RAM's favor (tens of microseconds for SSD vs tens of nanoseconds for RAM). So even if they double the speeds, random access of small data chunks will still choke your performance.


Yes, Apple SSDs are back on the leading edge; they were previously about half the speed of the fastest Gen4 SSDs.

Low RAM is still an issue even with such fast SSDs, speaking as someone who ran RAID0 Gen3 NVMe SSDs (so roughly equivalent to what's in there).


> this could be a good time to jump the apple ship. it's pretty clear their focus is not their power users' focus.

Let's back up a second: Tim Cook said this transition would take place over two years. This is just the first batch of computers running Apple Silicon.

I certainly hope and think that Apple can come out with a beefy 16-inch MacBook Pro with 32 gigs of RAM within the next two years. Also, in that time I imagine everything in Homebrew will be ported over natively.


As expected, the Apple M1 is a little faster than Intel Tiger Lake in single-threaded applications, but it is a little slower than AMD Renoir in multi-threaded applications.

So for things like software development, where you compile your projects frequently, the new Apple computers are a little slower than similar computers with AMD CPUs.

So even when taking only CPU performance into consideration, there are reasons to choose other computers than those with Apple Silicon, depending on what you want to do.

Of course, nobody will decide to buy or not buy products with "Apple Silicon" based on their performance.

Those who want to use Apple software will buy Apple products, and those who do not want to use Apple software will not buy Apple products, just like until now, regardless of which have better performance.


> Of course, nobody will decide to buy or not buy products with "Apple Silicon" based on their performance.

That's exactly the reason why you would choose Apple Silicon right now, when you can choose between Intel and an Apple SoC. There are of course other reasons, such as battery life and price.


Not really. Right now Apple Silicon would be translating most code and therefore be slower and possibly have worse battery life. By the time that isn't true anymore, the option of buying Intel from Apple will be gone and your choices will be ARM from Apple or a PC with Intel/AMD.

The x64 options from Apple are also uncompetitive with existing PCs already because they're using Intel processors when AMD's are faster.


Most code? I would imagine that most code run on Apple laptops today would start with Safari. And then Slack, some IDEs, etc etc. These will all get ported extremely fast if they haven't already been.

There will be a long tail of edge case software that runs in emulation, but that won't affect the majority of users.


That's not how the long tail works. Any given esoteric piece of software won't be used by very many people, but very many people will use some esoteric piece of software.

You also have the problem with proprietary software that even if a port exists, it's not the version you have an existing license for, and you may not be able to afford a new laptop and all new software at the same time.


That's partially correct, and partially wrong. Long tail means that few people will buy/use a particular software package, but that if you have lots of such packages, you can make money. In the case of Apple Silicon, if there's an "esoteric" package, by definition it's only used by a small number of people.


The Macbook Air with 16GB RAM and 512GB isn't really priced competitively here in Germany. It's almost 1600€.


It's not trying to compete with German PCs (which have some nice options, but are still Windows PCs, not Macs).

I'm not an Apple fanboy, and I'm still very displeased with many of their decisions (touchbar being #1 on MBPs). But if you consider the packaging (small, light, sturdy, now-decent keyboard), and consider their performance, and then consider macOS, I think they are more than competitive.

Even if you match every spec, including size/weight and durability, it comes down to Windows vs macOS. Ironically, macOS is free while Windows is not, but macOS is worth more (to me and many others).


I got a new computer from work last year. I spent quite a while carefully studying my options, and what I saw came down to this:

If you're only looking for computers that are comparable according to the usual hardware specs (cpu, ram, etc.), a Mac costs 25-50% more than the cheapest comparable PC.

If you also throw ergonomic factors like weight and battery life into the comparison, there's no price difference.

(This was USA prices.)


Macs are significantly cheaper in the US than in Europe.


> macOS is free while Windows is not

what laptop are you buying where you need to purchase a Windows license?


The manufacturer of your laptop has paid Microsoft for the OS and is passing that cost on to you in the total price (excluding MS hardware such as the Surface).

Or if you buy a bare system or build your own, you need to buy Windows yourself.

Apple gives their OS away, but in theory you can only run it on their hardware.


When you buy a Mac you are subsidizing the development of that free OS. The price of the OS is baked into the MSRP in both cases. Comparing the prices is apples-to-apples. Also, Microsoft has made it clear that Windows 10 will be around for a long time, so OS upgrades don't work the same way they once did on Windows.


>Apple gives their OS away, but in theory you can only run it on their hardware.

If you don't understand why this isn't free then I have a bridge to sell you.


If the bridge auto-upgrades itself for free for the next 8 years after the initial capital expenditures, I'm sure you'll easily find buyers.


I have a MacBook Air 2012, and have been waiting to upgrade for... 2 years already. The laptop will probably end up being 10 years old by the time I upgrade...

- crappy webcam,

- no built-in SD card reader (a 1TB SD card is ~200$, and my music does not need to be stored on an expensive SSD)

- magsafe.. if this was the only downgrade, I'd upgrade, but TBH I love magsafe on my mac and I would miss it if I upgrade.


Eliminating MagSafe for power ports was one of the Apple choices I hated. So many times something has happened and ripped my power cable off my 2014 MBP, and MagSafe saved the laptop from damage or a fall. And worse, Apple has now applied new and different meaning to the same name :(.


Just buy a USB magsafe type cable for $20 and be done with it.


can you recommend one? all the ones i've found have horrible reviews


>macOS is free

Oh wow, that's cool, I didn't know that. Do you have a link to where I can download the free edition of macOS? Google doesn't seem to be helping me.


Depends on your country, but here's a US example: https://apps.apple.com/us/app/macos-catalina/id1466841314?ls...


€1665 in Czechia, and we have much lower purchasing power.


To be honest, you'd expect widely traded goods to trade at the same prices, regardless of local purchasing power.


The price seems to be the same except for tax. Germany has a 16% rate (July-Dec 2020) and Czechia 21% for sales tax/VAT.


I think it often depends on big resellers (like Best Buy, for example). They can provide discounts which Apple will not do directly to the customer, and which other resellers can't do because they don't turn enough units.

That's true of many other common goods worldwide. Unless you can buy a locally made item in a lower purchasing power country, you will usually pay a currency exchange equivalent price for the item. Actually you often pay more because the local shop selling the product cannot get bulk pricing and pass along the discount to you.

Finally, when you add the local taxes - 23% in Portugal, for example - the price can be much higher compared to Alaska, US (< 2%). That last bit is really not Apple's fault.


Before Brexit you could sometimes buy Apple stuff for less on amazon.co.uk than in the rest of Europe because the price was fixed in GBP.


You can get a fanless PC with a 1,700 single core GeekBench score for less than 1,600 euros in Europe?


Seems to be priced pretty competitively?


Asking myself that very same question. I've been booting linux, quite happily I might add, off mid/high-end Dells and HPs for a while. The last time I looked Airs were still dual-core, and much more expensive for 16 GB.

I'm not an Apple fan, but the change in value is stunning. I don't need a new laptop currently...


Many apps optimized for the x64 platform won't run as well as the benchmarks.


I think this is an important one to keep in mind. I'm sure most native Mac apps will be compiled to ARM, but a lot of existing apps won't.

Plus there's the brouhaha about Electron apps.

I for one really wouldn't mind if Apple would build native apps to replace Electron apps, e.g. a chat app that works as a good client for (what I have open right now in Rambox) Discord, FB Messenger, WhatsApp and multiple Slack channels. Or their own Spotify client. Or a big update to Xcode so it can use language servers the way VS Code does, making it viable for (what I do right now) TypeScript, PHP and Go development.

They have more than enough money to invest in dozens of development teams or startups to push out new native apps.

One day I'll switch away from Chrome in favor of Safari as well. Maybe.

(I am taking recommendations for native alternatives to apps)


I can't understand why you think Apple should be building apps for their competitors. It's very strange.

Use Apple Music, Messages, Safari, Swift if you want first-class support.

Or one of the better options now might be to use the iOS apps for Slack, Spotify etc.


Most of the electron apps out there already have native iOS versions which will run natively on AS macs too, that should go a long way to smooth the transition (and will be interesting to see how much extra RAM you gain from not needing slack/spotify/notion etc to run on Electron).

I guess there will still be issues for people who need to run VMs or media apps like Adobe CC etc, and also it will take a while for some dev environments to be fully supported (https://github.com/Homebrew/brew/issues/7857 for example shows it will take some time to get to feature parity).

Overall though a lot of the hard work has already been done, and I'm sure in 2 years time or whenever the transition is 'complete', mac owners will be getting much more value for money with few drawbacks (the main one being higher walls around the garden)


> Most of the electron apps out there already have native iOS versions which will run natively on AS macs too, that should go a long way to smooth the transition (and will be interesting to see how much extra RAM you gain from not needing slack/spotify/notion etc to run on Electron).

They don't have desktop UIs, and will be a big step down for most users. You can't seriously argue the UI doesn't matter on a Mac.


Electron seems to have support in recent beta releases.


You still have to rebuild your software and explicitly support a bunch of different architectures.
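For what it's worth, checking what you're actually running on is cheap. A minimal Swift sketch, using Apple's documented "sysctl.proc_translated" check for Rosetta (the function name is my own, just for illustration):

    import Darwin

    // Swift's conditional compilation tells you what the binary was built for; the
    // documented "sysctl.proc_translated" key tells an x86_64 build whether it is
    // currently being translated by Rosetta 2 (the key doesn't exist pre-Rosetta).
    func isRunningUnderRosetta() -> Bool {
        var translated: Int32 = 0
        var size = MemoryLayout<Int32>.size
        guard sysctlbyname("sysctl.proc_translated", &translated, &size, nil, 0) == 0 else {
            return false
        }
        return translated == 1
    }

    #if arch(arm64)
    print("compiled for arm64 (Apple Silicon)")
    #elseif arch(x86_64)
    print("compiled for x86_64; translated by Rosetta 2:", isRunningUnderRosetta())
    #endif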


Apple Silicon has integrated floating point hardware specifically designed to run JavaScript super fast, so Electron will be fine.


> Plus there's the brouhaha about Electron apps.

Won't this be handled by just porting V8 to the M1?


WhatsApp support in native apps isn't missing because teams or companies lack the will to build it; it's just not possible, since there are no APIs. Everything you see is a workaround or mashup of the WhatsApp web feature.


Not just the lack of will - hostile action and threats from WhatsApp against even community projects trying to build a client for a platform not supported by the official ones. This might no longer be the case, but a couple of years ago they still did that, so no wonder so few native clients exist.


They did mention in a presentation some applications ran even quicker in Rosetta 2 than native. Though Wine isn't an emulator, I've seen the same in Wine numerous times. How many, which, etc, who knows? Interesting to figure out regardless.


That happened regularly in the transition from 680x0 to PowerPC.


This is often because it's translating syscalls rather than emulating them, so for applications that are only asking the OS to do the real work, in those cases it's running native code. And then it's running it on a current day CPU instead of one from two years ago.

Unfortunately, although applications like that exist, they're not the common case.


One of the most common operations done in MacOS is retain/release of objects. Rosetta2 translated code is TWICE as fast on the M1 as the original code on x86.

https://mobile.twitter.com/Catfish_Man/status/13262387851813...
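To get a feel for what that kind of microbenchmark measures, here's a rough Swift sketch of a retain/release loop (Unmanaged is used just to keep the optimizer from eliding the reference counting; the linked numbers were measured differently and more carefully):

    import Dispatch

    final class Node { var value = 0 }

    // Hold an unmanaged reference so the retain/release calls are explicit and
    // can't be optimized away.
    let node = Node()
    let ref = Unmanaged.passUnretained(node)

    let iterations = 10_000_000
    let start = DispatchTime.now()
    for _ in 0..<iterations {
        _ = ref.retain()   // +1 on the reference count
        ref.release()      // -1 on the reference count
    }
    let elapsedNs = DispatchTime.now().uptimeNanoseconds - start.uptimeNanoseconds
    print("~\(Double(elapsedNs) / Double(iterations)) ns per retain/release pair")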


Microbenchmarks are meaningless. Where are the benchmarks of real-world applications?


The M1 running native code can retain/release objects five times faster than x86 processors running native code.

X86 code translated by Rosetta2 on the M1 retains/releases objects TWICE as fast as native x86 processors.

https://mobile.twitter.com/Catfish_Man/status/13262387851813...


I was going to add that you can't do Android development on these, as you need Android Studio, but that seems to be on the way — support for Apple Silicon is in progress


I assume they have the OpenJDK JVM ported at this point so all of JetBrains' products should be working or close to working.


They have to keep the claim 'millions of devices run Java' true. But anyway, a lot of programming languages are going to have to support Arm now. Interpreters for languages like PHP and JS must be cross-compiled, and then most things can work. Rust just brought their arm support to the tier 1 support level, see https://github.com/rust-lang/rfcs/pull/2959


And I’ve been getting Rust to work on Apple Silicon. It’s only tier 2 for now, but that’s mostly because there are no CI providers so we can’t automatically run tests for it. I’ve been running them by hand.

https://github.com/rust-lang/rust/issues/73908


Most popular languages have supported Arm for many years already, on Linux (and more recently, some of them on Android and iOS).

> Rust just brought their arm support to the tier 1 support level

(for Linux)


Yep, most things were ported in the first wave of linux arm enthusiasm around the netwinder/ipaq craze 20 years ago.


The "netwinder/ipaq craze 20 years ago" would be 32-bit ARM (AArch32), while AFAIK this new chip is 64-bit ARM (AArch64); everything has to be ported again to this new ISA (though yeah, most things were already ported for Linux on AArch64).


Yep, in the intervening time ARM on Linux became popular enough that doing the required compiler backend work for 64-bit ARM in GCC, LLVM etc by commercial interests was a given, there was eg a big push for ARM on servers from various vendors. MS even ported Windows Server. Eg Hotspot/OpenJDK was ported in 2015.


You also have GraalVM (Oracle), OpenJ9 (IBM) and Corretto (Amazon).

So plenty of enterprise-class JVMs available.


Azul and Microsoft are doing the port.


IIRC, Azul is a hardware vendor for the JVM ... can you please share a public source on this collaboration between Azul & Microsoft?


I don’t think Azul have sold hardware in a few years. Their current offerings, Zing and Zulu, are cross platform JVMs and don’t appear to be sold with any hardware.


Azul was originally a hardware vendor with Java-optimized silicon. They haven't sold hardware AFAIK for probably over a decade.


Yes, they used to have hardware with a ton of cores/cpu's and large amounts of RAM if I remember correctly.


Yeah, it was around the time that thread-level parallelism was getting a lot of love. As I recall, they had some massive number of cores directly connected to each other and garbage collection at least partly in hardware. They got burned for pretty much the same reason a lot of the other custom CPU hardware of the time got burned; if you could just wait for Intel to double performance in a couple years it wasn't really worth going with some one-off design for a temporary advantage.


This is the actual JEP https://openjdk.java.net/jeps/391.

This is an early access build from today https://github.com/microsoft/openjdk-aarch64/releases/tag/16...


Sure, here sparing you the effort to learn how to use Google or whatever search engine you like using.

https://www.infoq.com/news/2020/09/microsoft-windows-mac-arm...


Thanks for the link but there was no need for you to be patronizing about it.

FWIW, I originally thought your mention of Azul was a typo, so I parsed your comment as "Azure and Microsoft" before I realized the tautology, which was why I posted the question. I didn't realize that Azul had pivoted to be a software-based vendor of the JVM.


I answered like that because of what looked like a snarky comment.


Wouldn't the x86 version run under emulation?


Basically anyone not lucky to live in countries with comparable economy to US, aka 80% of world IT.


You can't run Boot Camp on these..

For me atm that's a dealbreaker.. but I still want one


Even if you could, it wouldn't really help, as it would only be able to boot Windows for ARM, which has even less software support.


There's work being done getting Wine on ARM to emulate Windows/x86-64.


Far more pluralistic software environments would be a huge factor.


I think price comparisons depend on what you're looking for. The cheapest model with 16GB of RAM is 1800$. That's pretty steep, especially considering other laptops will let you upgrade the RAM yourself. And along with that you get the touch bar and the garbage keyboard. I'm just one person, but that's why I would never buy one of these.


> cheapest model with 16GB of RAM is 1800$

$1200 for the Macbook Air with 16GB RAM in USA. No touchbar, no garbage keyboard.


Ah, you're right there. I didn't realize you could modify that.


The old butterfly keyboard that was prone to failure is history.


Maybe they have to run enterprise software which does not get updated every year so they can't use MacOS. Or perhaps they want a fully featured copy of Office? Maybe they want to run an Active Directory network? Maybe they like having ports on their laptop


Professional use in creative industry - film, TV, videography, audio production, and a million other things. 16GB ram and a souped up integrated GPU won't cut it for many 'Pro' applications Mac has traditionally excelled at.


Why would anyone who is not forced to buy a Mac get one of these?


No Linux support is a deal breaker. (current versions with Intel included)


They are not priced competitive. Cheapest macbook air starts from $999. Cheapest Dell Inspiron starts from $319.


The cheapest Dell Inspiron doesn't even hold a candle to the MacBook Air. They're not competing in the same class...


That's true, for sure. But I was answering "Why would anyone (who is not forced) buy an Intel PC laptop when these are available and priced as competitive as they are?". The answer is simple: everyone who does not want to spend $999 will buy an Intel/AMD laptop. And $999 is quite a lot for someone who does not need a powerful workhorse. That Inspiron is extremely underpowered, yet it'll launch a web browser and office apps with some swapping here and there. Apple is not going to kill the x86 laptop market with it, just like it won't kill the smartphone market with their $400 SE phone when you can buy a $100 Android phone.

Another thing is that you can buy "gaming" laptop for $999. Something like i7-10750H with GTX 1650. And it's powerful enough to run almost any game on high to medium setting. Apple GPU is awesome compared to Intel GPU, but compared to dedicated Nvidia GPU - not so much. So if you need GPU for gaming, that's another area where Apple does not compete with their new laptops. At least for now.

Ultrabook with focus on portability and long battery life - Apple is awesome here.


> Ultrabook with focus on portability and long battery life - Apple is awesome here.

Exactly that, I think that's the ultimate reason to have a laptop and if not, it might make sense to re-think the setup. Why should I buy a 1500$ Intel/AMD mobile workhorse when the battery is empty after 2 hours? It usually makes more sense to have a server at home or VPS for that. Also a lot of native Apps like Steam have first-class support for that nowadays. For the rest Parsec might work.


It certainly does, when the best one can hope to bring home as a software engineer is around 1000 euros after taxes.


I fully get that the cheapest option might fit your budget and the MBA doesn't. And that's absolutely fine. Been there, done that. Heck, I wish my 1st laptop had cost $300 but in the end it was more around what the MBA cost today.

But it's not really an Apples to Apples comparison.


I am lucky for having gotten the opportunity to live and work in a country where buying an Apple device is not an issue, and there are plenty of them around the office, but I don't forget my roots nor the monetary possibilities of similar countries that I had the fortune to visit.


Well, buying an Apple laptop is not a necessity or some fundamental human right. It's a nicety for those who can afford it and appreciate the differences (and understand the tradeoffs).

In raw performance per buck you could always get a customer PC setup for cheaper, especially in desktop form.

In some countries, even a $300 laptop comes down to half a year's salaries...


Indeed, which gets back to the original OP point not standing.

> Why would anyone (who is not forced) buy an Intel PC laptop when these are available and priced as competitive as they are?

Apple devices are definitely not priced competitive outside first world countries.


It is an Apples to Dells comparison :)


That's for each buyer to consider, based on their budget and prospects.


Answer straight from the horse's mouth: https://www.youtube.com/watch?v=eAo8gnUCWzE


The recent keyboard fiasco would say otherwise.


You're free to buy the 50%-plastic Dell Inspiron, which is probably underpowered for Windows 10 and comes with several free nagware apps

You might be better served by wiping it and installing Linux though


It's probably short tasks of <20s; if you run the CPU/GPU at full load for extended periods, the thermals kick in and the fanless M1 MacBook Air will reduce clock speed.

iPad Pro - the current 2020-gen iPad Pro has the A12Z (essentially the same chip as the 2018 A12X with an extra GPU core) - a significantly older chip than the A14. I think there will be an A14 iPad Pro refresh with a mini-LED display in early 2021.


> thermals ... reduce clock speed.

I see that statement a lot, and yes, at some point that is going to happen.

But the analysis seems to fail to take into account what utterly amazingly low power devices these chips are. So while it will happen, it might take a long time.


I think more results are needed to smooth out the curves. If you check all the results by model so far,

https://browser.geekbench.com/v5/cpu/search?q=Macmini9%2C1

https://browser.geekbench.com/v5/cpu/search?q=MacBookPro17%2...

https://browser.geekbench.com/v5/cpu/search?utf8=%E2%9C%93&q...

it looks like they're all in the same ballpark (i.e. the Air is not leading others, just comparable).


One has to remember that Apple is still selling an Intel mac mini at the top of the range: it likely means something about the performance to expect from M1 vs Intel.


The available RAM and eGPU support could explain that rather than raw performance.

I also imagine not all customers are ready to jump on ARM day 1. Some will want to wait until the software ecosystem has had time to make the transition.


They probably have a lot of customers still demanding Intel CPUs. Mac minis are often used in server farms as build servers and there are many companies that would require Intel CPU there for some time.


All of that is true, however it is notable that the models they replaced (are not selling anymore) are all the lower end models. The two port MacBook Pro, the Air, only the lower end mini.

Seems pretty obvious to me that there will be another, higher-end variant of the M1, though maybe the only difference will be the amount of RAM, the number of GPU cores, the number of supported USB4 ports or something like that, not raw CPU performance.

Either way, it seems obvious to me that the M1 is their low end Mac chip.

That will be interesting to watch.


Yes, it looks like the M1 was designed mostly for the MacBook Air. The specs are a perfect fit and it makes a lot of sense, as the Air is their most popular laptop. Having the perfect Air - and the new one is truly impressive - will make for a lot of Mac sales. They also put it in the bottom-end MB Pro and Mini. But indeed, with the next variant of Apple Silicon, the higher-end configurations and other devices will probably be covered.


The M1 Mini is far better than the Intel Mini on cost/performance, performance/watt, and performance/heat measures.

Server farms are going to switch rapidly, one leading Mini server farm just announced a 600 unit starter order, and the CEO noted that Big Sur also made significant changes to licensing to make its use in server farms easier.


Of course, I just found out the M1 only supports 1-gigabit Ethernet, but I don't think that changes the decision much.


I remember when Intel simultaneously released the first x86, 286, and 386 CPUs all on the same day. What exciting times it was!

Apple released a killer low end SOC in the M1. It contains the highest performance single core processor in the world along with high end multi core performance. But it's limited to 16GB and two USB4/Thunderbolt ports, so it's targeted at the low end.

When the M2 is released mid next year, it will be even faster, support four USB4/Thunderbolt ports and will also come in 32GB and 64GB versions.

Greatness takes a small wait sometimes.


Wait.. they've already announced specs for a M2 chip?


No, but it's pretty clear what the next release will be. They will move Apple Silicon to the rest of their MacBooks, and into their iMacs. 16GB of RAM and two ports ain't going to cut it for them.

Where I could be wrong is that Apple might release two chips. First an upgraded M1, let's call it M1x, that supports a bit more on-chip RAM (24 or 32GB) and four ports. It would be only for high-end MacBook Pros and again optimized for battery life.

And they would release an M1d for desktops that has more cores but moves RAM off-chip. That would improve multicore performance, but I don't know how much it would hurt single core with slower memory fetches. Probably they could compensate with higher clock speeds, power budgets, and more active cooling.


The A14 Air outperforms all iPad Pros in single core; multicore is still faster on the A12X. Keep in mind the fastest iPad Pro is using a two-generation-old CPU.


I have watched a number of reviews comparing the new Air to the Pro. The CPU performance increase isn't noticeable in most cases, and the Pro still offers better pencil latency and display refresh rate (plus camera/lidar, but probably most don't care).

I wouldn't buy a Pro now because I would wait for the next version, but I wouldn't trade a current Pro for a new Air just for the CPU bump...


the latest ipad pro is still on the A12Z bionic. the air just got refreshed with the A14 bionic


https://twitter.com/tldtoday/status/1326610187529023488

Has an interesting comparison of an iPhone 12 mini doing similar work to an i9 iMac

Now I haven't dug into the details to verify both produced the same results. I believe most of the difference is from software encoding versus hardware encoding. the follow up tweets suggest similar output.

It does show how workloads can cause people to jump to conclusions from just one test, without having all the details to support the conclusion they want to arrive at.


> “the A14 iPad Air outperforms all iPad Pro devices?”

iPad Pro is still on an older generation of SoC (A12Z), while the iPad Air just got the new A14.


> This seems a bit odd too - the A14 iPad Air outperforms all iPad Pro devices?

Well yeah, every year for the last bunch of years the A-series chips have had sizeable IPC improvements, such that the A12-based iPad Pros are slower than the new Air. Apple's chip division is just industry-leading here.


This is pretty crazy to see, even if the full story isn't clear yet. A base level MacBook Air is taking the crown of the best MacBook Pro. Wow. SVP Johny Srouji and all of the Apple hardware + silicon team have been smashing it for the past many years.

For what it's worth, I have a fully specced out 16 inch MacBook Pro with the AMD Radeon Pro 5600m and even with that I'm regularly hitting 100% usage of the card, and not to mention the fan noise.

Looking forward to a version from Apple that is made for actual professionals, but I imagine these introductory M1 based devices are going to be great for the vast majority of people.


It's also funny that Johny Srouji and probably others in his team come from the team at Intel in Israel that "saved" Intel in the early 2000s by designing the Intel Core architecture which is still used by Intel today.

cf. Anandtech article from 2003:

https://www.anandtech.com/show/1083/2


Did not know that, and indeed the pentium m / core / core 2 series microarches have done incredibly well.

I've become something of a CPU collector in recent years, and I have a nice line of P6 CPUs from the Pentium Pro -> Pentium 2 -> Pentium 3 -> Pentium M -> Core 2 that conveniently sidesteps those awful NetBurst P4 CPUs.

It feels like this (P6+) microarch has finally run out of road and needs a rethink. What 'saved' Intel was a change in philosophy: rather than chasing MHz, they chased power savings. And with Apple's new chips that history is repeating itself (and appears to be headed for a similar outcome).

It's an exciting time for hardware again because Intel and AMD are going to have to react to this, and I think there are still legs in x86; it's survived everything that's been thrown at it so far...


It's fascinating to see history repeated - the architecture and design with the most power efficiency generally also turns out to be the one that can be pushed furthest for performance when you want to go that route.


That's an interesting observation. It holds true in other areas as well. For example, we have lots of high horsepower cars as the result of R&D effort into high efficiency engines.


"I think there's still legs to x86, it's survived everything thats been thrown at it so far..."

Also, someone, someday, had to disrupt it. Maybe this is it, maybe not.


> I've become something of CPU collector in recent years, and I have a nice line of p6 cpus from thePentium Pro

I bought an i486DX for $20 a month ago, as a memento of my first CPU.


I got a Pentium 100 last month with a motherboard and a psu. I plan to run W95. It's going to be fun!


Some people are priceless. One of the designers of Ryzen, Jim Keller [1], for example.

[1] https://www.anandtech.com/show/12689/cpu-design-guru-jim-kel...


Jim also led the team behind the Apple A4 and A5 SoCs - Apple's first venture into designing their own silicon.

Edit: hadn't clicked through to your article. my comment is redundant


It is sad that engineers like him are not multi billionaires, but it all gets pocketed by investors, who don't even pay the same tax as people who actually do the work...



It's also worth pointing out that Apple is also benefitting from TSMC's latest and best fab processes. Intel is not only behind architecturally but in manufacturing too:

https://www.macworld.com/article/3572624/tsmc-details-its-fu...


It's not just outperforming the MacBook Pro. It's also blowing away the current 2020 top-end iMac, which has a 10-core Intel i9.

And it's doing this while using more than an order of magnitude less power (10W vs. a TDP of 125W for that Intel part).

That's stunning.


> And it's doing this while using more than an order of magnitude less power (10W vs. a TDP of 125W for that Intel part).

That's the wrong conclusion to make. For instance, the Lenovo ThinkBook 14s (with a Ryzen 4800u) with a 15W TDP posts the same Geekbench multicore scores [1] as the M1 Macbook. But the ThinkBook isn't in any way faster than the top-end iMac for real world compute intensive tasks.

The M1 certainly looks efficient, but there's little you can conclude from a single benchmark running for a very short period of time.

[1]: https://browser.geekbench.com/v5/cpu/4642736


Maybe Geekbench is kinda useless as a benchmark suite then? I only see it used by Mac fans.


>There’s been a lot of criticism about more common benchmark suites such as GeekBench, but frankly I've found these concerns or arguments to be quite unfounded. The only factual differences between workloads in SPEC and workloads in GB5 is that the latter has less outlier tests which are memory-heavy, meaning it’s more of a CPU benchmark whereas SPEC has more tendency towards CPU+DRAM.

https://www.anandtech.com/show/16226/apple-silicon-m1-a14-de...


And yet here we have the M1 MacBook Air apparently beating the M1 MacBook pro, by a large margin.


It's thought that the MBP score is due to it possibly being run during indexing on setup. The score difference is too big, it's a single sample, and the Pro still beats it at single core.


Based on a sample size of 1 or 2, most likely. It could be due to something as stupid as Spotlight running in the background on the MBP but not the Air.


The silicon lottery is still a thing.

I imagine they will eventually have enough chips to start binning for different performance levels like AMD and Intel do.


They are already binning. The air is supposed to be less powerful, with cheaper variants even having one core less.

The benchmark is just ... that accurate.


They are binning for functional GPU cores and allow chips with only 7 functional GPU cores instead of 8 to go into the lower priced Air.

They are not binning for how high the cores will clock, which is just how business is done with Intel and AMD.


It's not measuring sustained performance. The fanless MacBook Air is going to throttle much sooner than a desktop iMac with proper cooling and unlimited power.


A Ryzen 4800u actually uses up to 25W TDP, depending upon implementation.

And it’s 45% slower in single core.

Most importantly, the M1 is estimated to cost Apple $65, the 4800u is a $300+ part.


That price is a meaningless comparison; you can't buy the Apple processor at retail. What's the cost to produce the AMD part? Something similar, I'd guess.


AMD, on its current hot streak, has gotten its gross profit margins to 43%, which would put the cost to manufacture a $300 part around $171.

Two and a half times higher cost to build a slower, more power hungry CPU is not actually very similar.


Yeah using the gross margin of the whole company and applying it to a single product is going to provide a reliable figure...


It’s how companies price their products. Is it perfect? No, but that’s why I used the word “around”.

You can argue that this particular Ryzen has a higher gross margin, say 50%, and lower ASP than $300, but that only gets your cost down to what, $140? And with RAM costing extra.


how much of the $300 goes to the retailer? how much goes to the distributor?


How much goes to R&D?


Not to mention I believe the RAM is included on the M1 SOC.


Unlikely OEMs are paying $300+ for a 4800U. Certainly more than $65, though.


That makes sense, AMD is selling to OEMs for a profit (over cost) while Apple is its own OEM, if any charging is done it's purely internal and for accounting purposes.

This comparison looks at different segments of the fab<>manufacturer<>OEM relationship. Add the user in there and you might say that you can buy an AMD CPU for $100 but an Apple CPU will cost you $1000. Not very meaningful as a comparison.


Certainly more than $200, so what’s your point?


While it sounds promising, I'm going to wait for some additional benchmarks and real world usage scenarios; factors like cooling, multi-process work, and of course suboptimal applications (browsers, Electron apps, stuff compiled for Intel) will be a big factor as well.

That said, it's promising and I'm really curious to see where this development will lead to in a few years' time.


> Electron apps

iOS/iPhones/iPads already smoke every other device in running javascript. Apple Silicon may end up being the best thing to ever happen for Electron apps.


Me too - let's see what the sustained performance is like. That said, with this much headroom, I'm cautiously optimistic that even with some throttling going on, it'll still be plenty fast for anything I'm likely to throw at it.


The data we don’t have is for sustained use over time. An Intel iMac Pro can sustain max performance far longer than an Intel laptop, as it has a far higher thermal exhaust capacity.

Does the M1 performance have to be ramped down during sustained use due to exceeding thermal envelope of the fanless MBA? Of the fan’d MBP?

We’ll know soon enough!


Without doubt, the Apple M1 has the highest single-threaded performance of any non-overclocked CPU, being a little faster than AMD Zen 3 and Intel Tiger Lake.

Nevertheless, because Apple has chosen to not increase their manufacturing costs by including more big cores, the multi-threaded performance is not at all impressive, being lower than that of many much cheaper laptops using AMD Ryzen 7 4800U CPUs.

So for any professional applications, like software development, these new Apple computers will certainly not blow away their competition performance-wise, and that is before taking into account their severe limitations in memory capacity and peripheral ports.


But given that M1 is clearly the basic CPU for low cost, thin-and-light devices, we can strongly infer that Apple’s next M chip will be significantly more capable. Chips with eight or more performance cores would be a certainty for the upper tier of laptops and iMacs.


Given that the M1 is a full node ahead of Zen 3 and two nodes ahead of whatever Intel has to offer, one would think that when on the same node, Intel and AMD will be just as capable.

But the truth is that comparing to future offerings is bullshit, and we have to stick to what's available today. Impressive power/performance and all that, I have to say. We will see what sustained load looks like and how it runs non-optimized software. But to put it in perspective, one CCX of Zen 3 on 7nm performs better (though it draws up to 65W), with approximately the same die size (although without the GPU and other things the M1 has).


TSMC’s 5nm node is exceeding everyone’s expectations.


On the non-Apple side, it will be interesting what AMD does with the 5nm node.


Yes I’m guessing we can expect a die shrink of Zen 3 at least next year meaning 10-15% additional performance with no architectural changes. Crazy.


5nm is not design compatible with 7nm, but 6nm is, so there might be a die shrink to that.


Interesting! What makes the design incompatible? I don't know enough about it to say...


It’s also outperforming the 5950X in single core. Incredible!


Yeah, and doing this without a fan. It's almost like Apple is rubbing Intel's face in it for sport. It's not even fair.


But did the test run long enough to need the fan, and what was the ambient temperature?

The fanless Intel Core-M CPUs could post excellent benchmark scores (for its time). But if you give it a lengthy compile task, it'll slow down dramatically.


Aren’t they both running the same test?

Or are you saying that the test needs to run for way longer to be fair?


> Or are you saying that the test needs to run for way longer to be fair?

Yes, the main computing constraint of mobile devices is heat management (This doesn't really reflect the CPU but the complete device. Putting the CPU in a more ideal setup like a traditional desktop or water cooling will improve the CPU's performance in longer tasks)


The 5950X in a single core load only uses around 18w. Drop 200mhz off of the top end boost frequency and it drops to 11w.

Short-duration single-core workloads without a fan are trivial even for CPUs that aren't designed around that.


Laptop Macs of late are known for heavy throttling due to inadequate cooling, which can't have gone unnoticed in simulations


I’d go so far as to say “laptops of late”, thermal throttling isn’t unique to MacBooks.

My XPS15 regularly throttles and sounds like it’s about to take off.


It was the Surface that started the trend, then the MacBook followed and it became a feature.

By the way, “throttling” refers to the CPU _slowing down_ despite cooling working at full capacity, so loud fans by themselves aren't throttling.

e: Another way to explain thermal throttling would be “thermal fading”, like brake fading on a car. Whether brake fading is considered a design fault or a feature that allows bursts of stronger braking is a matter of semantics.


I think the Surface was late to the party, the original Macbook Air had huge problems when it launched in 2008.

https://www.theage.com.au/technology/apple-fans-burned-by-ho...


If we're doing analogies, let's do them right :) Therefore I'd argue it is more like supercharger overheating on a car. As a supercharger gets hotter from prolonged load it gets hot and warms the air entering the engine which reduces cold-air intake thus reducing horse power compared to a cooler supercharger. There is a way to solve this: by fitting a chargecooler - which is basically a cooling system for the supercharger.


Sorry, yes, I realise that’s what throttling is but wasn’t clear. I meant to say in the original comment the fans normally come on followed by throttling.


Gaming laptops are huge for a reason: they need to get rid of a lot of heat.

Thin and light is great for short bursts of activity, but, when you need sustainable heavy usage, you'll need a bigger computer, even if it's just to have a bigger heatsink.


My XPS15's GPU runs at 100C (on benchmark loads), even after I replaced the heat pads and cooling paste. It took me weeks to get used to the idea. I guess that's just the new normal. I don't find mine to be loud though.


I have the 16” MacBook Pro. It doesn’t throttle and will happily run at max boost for tens of minutes.


Mine doesn't throttle when the ambient temperature is below 21°C, but throttles almost immediately if the ambient temperature is above 24°C.


The MBP16 has a problem with overheating VRMs during prolonged high load. A hack with thermal pads helps:

https://www.reddit.com/r/macbookpro/comments/gs6bal/2019_mbp...


Which one? I have the i9 flavor and it sounds like it's taking off if I run a video conf with zoom or Microsoft teams.


Throttling refers to throttling down. Choking up. Going up is boosting.


mine throttles to 30% within a minute of ffmpeg


*Intel Laptop Macs of late are known for heavy throttling

We have no idea what the M1 would look like. Though it's likely. I imagine they wouldn't have taken the fan out of the Air if that was a big concern.


But not anymore. That era is over.


So it’s natural that devices from that era and devices after that era behave differently


Surely a version that can beat a 8 core Xeon is made for 'actual professionals'?


It still has a lot of limitations that matter to many pros - max 16 GB of RAM, max 2 displays (only 1 external for laptops), only enough PCIe lanes to support 2 thunderbolt ports. eGPUs aren't supported either, but hopefully that is a software thing that will be fixed.

It will be very interesting to see what the performance will be of the more "pro" chip that overcomes those limitations that they'd put in the 16" and iMacs


It’s a first-generation chip in a thin-and-light laptop. I suspect all those problems will be fixed as they scale it up. Not to mention their unique position to leverage dedicated acceleration silicon in their software now.


eGPU support will depend on if they have a way to work around the need for a PCIe I/O BAR. Many GPUs require that to initialize and as far as I know no ARM cpus support it since it's a legacy-ish x86 thing. It'll be the same problem that prevents gpu use on raspberry pi 4s still. I bet you can make a controller that'll provide a mapping for that to allow it but that'll mean needing a new enclosure (probably not a huge deal) and new silicon and drivers.


AMD GPUs do not require I/O BARs, and I highly doubt Nvidia ones do either. The VBIOS will probably assume it can use it, but most modern cards can be initialized without actually running the VBIOS (because people use them on headless servers, for virtualization, and things like that). The I/O BAR is only required for legacy VGA compatibility; you can ignore it.


AMD GPUs work just fine with POWER8 hardware so this is certainly solved.


I don't quite get how Apple could claim to be Thunderbolt-compatible with this sort of limitation in hardware, though.


It's not a limitation in hardware. It's a legacy feature of the x86 that isn't supported on ARM.

This could be solved by the GPU engineers by removing the legacy compatibility. "Nobody" boots a modern PC from BIOS anymore


> max 2 displays (only 1 external for laptops)

I wonder if that includes AirPlay screens, or just wired.


This may work. On a 2014 MacBook Air, which officially supports only one external display, I've been able to use two external screens at the same time by sharing one through AirPlay. However, you're always limited in resolution and latency that way.


I would imagine just wired - it's probably a lack of DisplayPort/HDMI encoders rather than rendering resources.


why would it be only 1 external display? Can you not plug into a thunderbolt dock and use multiple external displays? I do exactly that with my 2012 MacBook Pro.


It's a limitation of the M1 chip apparently

https://appleinsider.com/articles/20/11/11/how-apple-silicon...


That’s merely speculation as to why.


You can use sidecar with an ipad for a second external display.


With some iPads. Not all. I was thinking of using an older iPad for this purpose, but alas. Won't work. Too old.


The embedded GPU doesn't support it


Dunno, seems like most professionals would want to use docker, virtual machines, or enough video/data to want more than 16GB ram. Or maybe even plug in more than a charging cable and one more device. Or run more than one external monitor.

Doesn't seem very "pro" to me. The MBP16" intel has 4 x USB-c ports, can drive two monitors, and can have >= 32GB ram.


Most professionals aren't software developers and don't need to run virtual machines.

But regardless of professional requirements, "Pro" in Apple's product line just means "the more expensive slightly better version." Nobody's arguing that AirPods Pro are the wireless earbuds of choice for people who make money by listening to wireless earbuds.


Hm, a good VM is one you don't even notice. Say you run Qubes. Not necessarily for development (I would argue Nix is the best OS for DevOps). If all goes well, such an OS becomes very adequate for an average user. For example, a hardware VM could allow you to run a browser more securely.


> most professionals would want to use docker, virtual machines, or enough video/data to want more than 16GB ram

I'm a software engineer / pro photographer / videographer.

I sling code from time to time, edit thousands of RAW files from my cameras and edit together 1080p footage day in and day out.

I did that for years on a 2013 MBA with 8GB of RAM. Now I have a 2015 MBP with 16GB of RAM. It's perfectly adequate.


Yep. My new work laptop only has 16GB of RAM and it’s never been an issue. I’m usually running half a dozen containers, VS Code, Slack, Brave/Chrome, and a few other things. Maybe our work loads are just computationally lighter than some?

I ordered a 16GB Pro the other day to be my personal dev machine. I’m sure it’ll be more than fine. I’m upgrading from a 2013 8GB Pro which was only just starting to slow me down.


The code and compilation for me is the light part, and for the most part an hour's worth of essentially text editing for about a moment of compilation anyways.

My resource hogs are Slack, mainly the browser, and Zoom calls, which are apparently the most computationally intensive thing in the world, especially if you screen share while you have an external monitor plugged in.

Memory-wise, the reason I had to go from 8GB to 16GB on my personal laptop was literally just TravisCI.

Honestly, adding external monitors cripples MacBooks pretty quickly: even two unscaled 2K monitors will slow a 2015 15" down significantly (don't try to leave YouTube on either), and it gets worse from there once you start upgrading to 4K monitors. A 2017 15" is good for a 4K and a 2K, and gets a bit slow if you try to go dual 4K.

I planned on looking into eGPU solutions until IT offered me a new Macbook, and I convinced them I needed a 16" Pro.

tldr: External monitors or badly optimized applications (Zoom, YouTube, or browser based CI) will make most MacBooks feel sluggish pretty quick.


Are those displays in scaled mode? Scaled displays tend to perform badly on integrated graphics, and suck up memory, because it has to render a 2x or 3x size internally and then scale down for every frame. Running something that updates the screen constantly, like zoom, probably exacerbates that issue.
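As a rough worked example of the overhead (the 2x backing store is how macOS handles non-native scaled HiDPI modes; the specific resolutions are just an illustration):

    // A "looks like 2560x1440" scaled mode on a 3840x2160 (4K) panel:
    // macOS renders a 2x backing store and then downsamples every frame.
    let logical = (w: 2560, h: 1440)
    let backing = (w: logical.w * 2, h: logical.h * 2)   // 5120 x 2880
    let panel   = (w: 3840, h: 2160)

    let rendered  = backing.w * backing.h   // ~14.7 million pixels drawn
    let displayed = panel.w * panel.h       //  ~8.3 million pixels shown
    print("renders \(rendered) px, scales down to \(displayed) px, every frame")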


> most professionals would want to use docker, virtual machines, or enough video/data to want more than 16GB ram.

Again, what definition of "pro" are you using, and how is that relevant to professionals with other occupations?


Apparently I am not a professional, since I don't use docker and leave VMs for servers.


How about using a second monitor?


I am always on the go, it is a bit hard to carry a second monitor with me.


The MBP16 actually can do 4 displays if they're only 4k displays. It's why I bought one over a MBP13 which can only do 2.

The two limit seems to be for 6016x3384 which I assume is 5k.


Not sure if you mean an Intel MBP 13" or the new MPB 13", but the new arm based MBA and MBP can only do a single monitor.


That's the resolution of the Pro Display XDR at 6k. It can run 2.


I wonder if M1 dominates an i9-9980HK at multithreaded workloads that make full use of available SIMD? Does an M1 dominate at peak theoretical flops?


M1 is not magic and can't break the laws of physics. SMT makes better use of silicon and will probably push speeds closer. OTOH, M1 has a fast memory that the i9 can't match.

I still bet on the i9, but it'd be interesting to run a test.


>M1 is not magic and can't break the laws of physics.

Anandtech's deep dive provides several examples of advances in Apple's core design that didn't involve magic or breaking the laws of physics. For example...

Instruction Decode:

>What really defines Apple’s Firestorm CPU core from other designs in the industry is just the sheer width of the microarchitecture. Featuring an 8-wide decode block, Apple’s Firestorm is by far the current widest commercialized design in the industry. Other contemporary designs such as AMD’s Zen(1 through 3) and Intel’s µarch’s, x86 CPUs today still only feature a 4-wide decoder designs

Instruction Re-order Buffer Size:

>A +-630 deep ROB is an immensely huge out-of-order window for Apple’s new core, as it vastly outclasses any other design in the industry. Intel’s Sunny Cove and Willow Cove cores are the second-most “deep” OOO designs out there with a 352 ROB structure, while AMD’s newest Zen3 core makes due with 256 entries, and recent Arm designs such as the Cortex-X1 feature a 224 structure.

Number of Execution Units:

>On the Integer side, we find at least 7 execution ports for actual arithmetic operations. These include 4 simple ALUs capable of ADD instructions, 2 complex units which feature also MUL (multiply) capabilities, and what appears to be a dedicated integer division unit.

On the floating point and vector execution side of things, the new Firestorm cores are actually more impressive as they feature a 33% increase in capabilities, enabled by Apple’s addition of a fourth execution pipeline.

https://www.anandtech.com/show/16226/apple-silicon-m1-a14-de...


> Featuring an 8-wide decode block, Apple’s Firestorm is by far the current widest commercialized design in the industry. Other contemporary designs such as AMD’s Zen(1 through 3) and Intel’s µarch’s, x86 CPUs today still only feature a 4-wide decoder designs

This is one place where the 64-bit ARM ISA design shines: since all instructions are exactly 4 bytes wide and always aligned to 4 bytes, it's easy to make a very wide decoder, since there's no need to compute the instruction length and align the instruction stream before decoding.
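A toy sketch of why that matters for decode width (real decoders are hardware, of course; the `length` closure below is a made-up stand-in for an x86 length decoder):

    // Fixed-width ISA: instruction i starts at byte 4*i, so a wide front end can
    // pick N instruction boundaries with no dependency between them.
    func fixedWidthStarts(byteCount: Int, width: Int = 4) -> [Int] {
        return Array(stride(from: 0, to: byteCount, by: width))
    }

    // Variable-length ISA (x86 instructions are 1-15 bytes): you only know where
    // instruction i+1 starts after determining the length of instruction i, so
    // finding boundaries is inherently serial.
    func variableWidthStarts(bytes: [UInt8], length: (ArraySlice<UInt8>) -> Int) -> [Int] {
        var starts: [Int] = []
        var offset = 0
        while offset < bytes.count {
            starts.append(offset)
            offset += length(bytes[offset...])  // must finish before the next step
        }
        return starts
    }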


> advances in Apple's core design that didn't involve magic or breaking the laws of physics.

That's exactly what I said. It's faster, but not an order of magnitude faster and different workloads will perform differently depending on a multitude of factors (even if benchmarks don't). Do not expect it to outperform a not-too-old top-of-the-line mobile CPU by a large margin.


The current gen iPhone chip using the same cores literally outperforms anything Intel makes on a per core basis.

Zen 3 slightly outperforms the iPhone chip, but the iPhone chip runs its clocks slower to stay inside a 5-watt power draw.

https://www.anandtech.com/show/16226/apple-silicon-m1-a14-de...

So, yes. Expect it to outperform Tiger Lake and Zen 3, at least on a per core basis.


Remember the Intel part has 8 fast cores while the M1 has 4 (and 4 puny ones which don't really count). The Intel part also uses SMT to squeeze out some extra parallelism that the reordering plumbing can't.


Yes, Intel makes parts with more cores, but their entry level chips only have two cores.

Apple chips with more cores will come in time as well.

It's the per core performance, especially at a given power draw, that matters going forward.


What are the laws of physics that would be broken in this case?


x86 needs more complicated logic than ARM to deal with its instruction stream, so ARM frees up more of the silicon for things like better reordering and more execution units. OTOH, SMT somewhat mitigates the delays caused in reordering by working on more than one instruction stream at once. I'd say the 16-thread chip will end up being overall faster than the 8-core one, if cache misses don't create a huge penalty for the slower memory bus of the x86. The i9-9980HK is also two generations behind, which doesn't help it much.

When I said there is no magic, I was warning that we shouldn't expect huge speedups or a crushing advantage, at least not for long. The edge M1 has is due to a simpler ISA (which is less demanding to run efficiently, freeing more resources for optimization and execution) and a faster memory interface (which makes an L3 miss less of a punishment). This fast memory interface also limits it to, for now, 16GB of memory. If the dataset has 17GB, it'll suffer. Another difference is that all of the i9 cores are designed to be fast, whereas only 4 cores of the M1 are. This added flexibility can be put to good use by moving CPU-bound processes to the big cores and IO-bound and low-priority ones to the little ones.

In the end, they are very different chips (in design and TDP). It'd be interesting to compare them with actual measurements, as well as newer Intel ones.
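For what it's worth, the big/little split is already exposed to software on macOS via QoS hints; a small Swift sketch (which cluster actually runs the work is still up to the scheduler, not the app):

    import Dispatch

    // QoS is the hint macOS uses when deciding where to schedule work:
    // .userInitiated tends to land on the performance ("big") cores, while
    // .utility / .background work is eligible for the efficiency ("little") cores.
    let compute      = DispatchQueue(label: "compute", qos: .userInitiated)
    let housekeeping = DispatchQueue(label: "indexing", qos: .utility)

    compute.async {
        // latency-sensitive, CPU-bound work goes here
    }
    housekeeping.async {
        // low-priority, throughput-oriented or I/O-ish work goes here
    }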


Agreed that the vast majority of Hacker News comments about the M1 Macbook Air are very glass half empty.

This seems like a really cool piece of technology, and I'm kind of bummed that everyone is so cynical and pessimistic about everything these days (albeit understandably so).


AMD's Zen 3 (Ryzen 5xxx series) are beating the Apple M1 in single core score: https://browser.geekbench.com/v5/cpu/singlecore

As another datapoint Ian (of Anandtech) estimated that the M1 would need to be clocked at 3.25Ghz to match Zen 3, and these systems are showing a 3.2Ghz clock: https://twitter.com/IanCutress/status/1326516048309460992


No, they aren't. All of the top results have crazy overclocking and liquid cooling. You need to look at the numbers here: https://browser.geekbench.com/processor-benchmarks. Top-end Zen 3 is slightly lower than the M1.


Not exactly.

You can check the clock speeds: https://browser.geekbench.com/v5/cpu/4620493.gb5

Up to 5050MHz is stock behavior for the 5950X and it's using standard DDR4 3200 memory.


Yet it still makes it very clear: a properly implemented ARM core can easily bury an x86 core of equivalent size because of the inherent advantage of not having to pay interest on 40 years of technical debt in the ISA.


AMD64 (x86-64) runs x86-32 at near-native speed, but it isn't x86-32. As someone who was an early adopter of Linux/AMD64 I know first-hand backwards compatibility is very important. Apple knows, hence Rosetta. Every time they switch architecture, they invest into backwards compatibility. As a counter-example, Itanium wasn't good with backwards compatibility.


What was the Linux landscape like for AMD64 early adopters?


Debian was quick with adopting it (they've always been very cross-platform focused), in contrast to say Windows (which took a lot longer). On Linux, a lot worked, but not everything. Slowly but surely more got ported to AMD64. What didn't work? Especially pre-compiled proprietary software was not available (IIRC Nvidia drivers? At the very least games). You had to have x86-32 userland installed. Which adds up to higher diskspace requirement. Nowadays, diskspace requirement is negligible, and x86-32 userland is less relevant (on AMD64/x86-64). I would assume the 4 GB limit eventually made games swap to AMD64 as well.

Back then, Intel was still betting on Itanium. It was a time when AMD was ahead of Intel. Wintel lasted longer, and it's only since the smartphone revolution that they've been caught up with. In hindsight, even a Windows computer on Intel gave a user more freedom than the locked-down stuff on, say, iOS. OTOH, sometimes user freedom is a bad thing, arguably if the user isn't technically inclined or if you can sell a locked-down platform like PlayStation or Xbox for relatively cheap (kind of like the printer business).

I'm sure other people can add to this as well. :-)


Thanks


Because 5nm is better than 7nm. That's about it. AMD Zen will be on par with Apple Silicon when they use a 5nm process.


Actually I'd bet that you're both wrong. What the M1 does well isn't that "ARM is better" or that they're using a smaller process (even if both factors probably play into helping the M1 chips edge out a few %).

Rather I suspect that the main benefit the M1 has in many real-world benchmarks is that it has on-chip memory; cache-miss latency is a huge cost in the real world (which is why games have drifted towards data-oriented design internals), so sidestepping that issue to a large extent by integrating memory on-die gives it a great boost.

I'm betting once they've reverse engineered the M1 perf, we will see multi-GB caches on AMD/Intel chips within 4 years.


There's nothing to "reverse engineer" there: M1 has 4x the L1 cache and a wider bus. That's it.

This cannot be implemented in AMD's current 7nm process due to size restrictions.

The SoC-side of the story is also contrary to the very core design of a general purpose CPU. RAM, GPU, and extension cards for specialised tasks are already covered by 3rd party products on the PCIe and USB4 buses and AMD has no interest in cannibalising their GPU and console business...

With their upcoming discrete GPUs and accelerator cards, Intel might be in the same boat w.r.t. SoC design.


The M1 has an L1 cache of less than 0.5 MB per core and an L2 of about 12 MB shared by the performance cores (plus 4 MB for the efficiency cores). https://en.wikipedia.org/wiki/Apple_M1


> What M1 does well isn't that "ARM-is-better"

Of course, that's not all there is to it, but it's impossible to deny that having to support a 40-year-old ISA places a huge cost on transistor count and efficiency.


ISA has very little to do with it, ARM is almost as old as x86.


AArch64/ARM64 was developed from the ground up, not bolted on to the old 32-bit ISA


ARM went through multiple iterations of its ISA. They don’t need to run 40 year old code.


x86 CPUs haven't really been "running" the x86 ISA since the Pentium Pro (1995); they translate x86 instructions on the fly into microcode, which is what actually gets executed. ARM CPUs are also not executing the ARM ISA directly and do translation as well.

Simpler ARM ISA has advantages in very small / energy efficient CPUs since the silicon translation logic can be smaller but this advantage grows increasingly irrelevant when you are scaling to bigger, faster cores.

IMHO these days ISA implications on performance and efficiency are being overstated.


Yes, those are widely known facts. There are aspects of the ISA that do constrain performance and cannot be easily worked around, e.g. the memory model, which is more relaxed on ARM.


> IMHO these days ISA implications on performance and efficiency are being overstated.

Noooo, beyond simply copying instructions 1-to-1, the process is way too involved, and it imposes 40-year-old assumptions about the memory model and many other things, which greatly limits the ways you can interact with the CPU, adds to transistor count, and makes writing efficient compilers really hard.


Interesting point. So on the one hand we have all these layers in the CPU to abstract away things in the ISA that are not ideal for block level implementation... but on the other hand compilers are still targeting that high level ISA... and ironically they also have their own more general abstraction, the intermediate representation.

I'm probably not the first or last to suggest this, but... it seems awfully tempting to say: why can't we throw away the concept of maintaining binary compatibility and target some lower-level "internal" ISA directly (if Intel/AMD could provide such an interface in parallel to the high-level ISA)... with the accepted cost of knowing that the ISA will change in not-necessarily-forward-compatible ways between CPU revisions.

From the user's perspective we'd either end up with more complex binary distribution, or needing to compile for your own CPU FOSS style when you want to escape the performance limitations of x86.


I think IBM does something like this on the AS/400 / IBM i line: software is distributed as an intermediate bytecode and then compiled to CPU-specific machine code.


Even if the Ryzen wins out, that would still be comparing a desktop CPU to a mobile one, using 105W vs 10W. It is incredible that we are making these comparisons. Apple outdid themselves.


There's going to be an AMD mobile version of the 5000 generation soon, and looking back at the 4000 generation, its single-core (boost) performance was virtually the same as the desktop variant's.

Desktop CPUs differ from mobile CPUs mainly in how much they can boost when more/all cores are loaded.


In my experience mobile CPUs run at about 75%-90% of the single-core performance of their desktop counterparts. Zen 3 APUs will be close.

Isn't the M1 fabbed on TSMC 5nm? Zen 3 is on 7nm. If a Zen 3 APU will run close to Apple Silicon I will be mightily impressed.


Best single-core scores for Zen 2 on PassMark:

* 3800X (105W desktop) scores 2855

* 4900H (45W mobile) scores 2707 or 95% of 3800X

* 4750U (15W mobile) scores 2596 or 91% of 3800X


That's pretty good. For geekbench we see:

* 3800XT = 1357 (100%)

* 4800H = 1094 (~80%)

* 4800U = 1033 (~76%)

I would expect a 5800U to score at best around 1500, but realistically closer to 1300-1450. That's still behind the M1, but pretty darn close for being behind a node (and will still probably be faster for applications that would require x86 translation).
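
For what it's worth, here's the back-of-the-envelope version of that extrapolation (my own arithmetic and assumptions: I take the Zen 2 mobile/desktop ratios above and apply them to the ~1628 single-core score reported for the desktop Zen 3 5950X elsewhere in this thread, assuming the ratio carries over to Zen 3):

    # Hypothetical projection: apply Zen 2 mobile/desktop single-core ratios to Zen 3.
    zen2_desktop = 1357        # 3800XT (from the numbers above)
    zen2_mobile = {"45W-class (4800H)": 1094, "15W-class (4800U)": 1033}
    zen3_desktop = 1628        # 5950X single-core, as cited elsewhere in the thread

    for name, score in zen2_mobile.items():
        ratio = score / zen2_desktop
        print(f"{name}: ratio {ratio:.2f} -> projected Zen 3 mobile ~{zen3_desktop * ratio:.0f}")
    # Prints roughly 1310 and 1240 -- in the same ballpark as (a bit below) the guess above.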


They are not even in the same series. The 3700U (the fastest 15W Ryzen 3000 processor) has 57.5% of the single-core performance of the 3700X (not even the fastest Ryzen 3000 desktop part).

Source:

[1]: https://browser.geekbench.com/processors/amd-ryzen-7-3700x

[2]: https://browser.geekbench.com/processors/amd-ryzen-7-3700u


AMD's naming scheme has misled you.

Renoir is 7nm Zen 2 aka the 4000 series. https://en.wikichip.org/wiki/amd/cores/renoir

Matisse is also 7nm Zen 2 aka the desktop 3000 series. https://en.wikichip.org/wiki/Matisse

Picasso is 12nm Zen+ aka the mobile 3000 series. https://en.wikichip.org/wiki/amd/cores/picasso


So you mean Apple has this huge advantage of 5nm compared to 7nm but failed to outperform AMD? What a failure.

(that was sarcasm. My take is this performance is impressive but you should not be surprised if it does not completely outperform CPUs that should be less efficient)


> So you mean Apple has this huge advantage of 5nm compared to 7nm but failed to outperform AMD?

I understand you are being sarcastic, but no, that's not what I'm saying.

It is Apple Silicon that is faster (at least on paper). I'm saying that even though AMD will have worse perf/watt, I think it will get impressively close despite its less efficient fabrication process.


I highly doubt it will match it in performance under the same power envelope, and in the end for a mobile device that's what's important.


The Geekbench score explicitly ignores thermal power budgets.


The M1 is just the tip of the iceberg. It’s an MVP desktop arm chip.

M2, M3... that is when I think we will see stellar performance against things like Ryzen.


Yes. I was dismissive [1] of the new MacBooks, especially with regard to pricing. (Mostly because, relative to BOM cost, the margins are price gouging even by Apple's standards.)

Now that things have settled a bit, maybe it isn't as bad as I thought. Had the MacBook Air been priced any lower, it would have seriously hurt sales of the 16" MBP. The MacBook Pro will transition to ARM with a rumoured Mini-LED screen refresh as 14" and 16" models. (Ming-Chi Kuo has been extremely accurate with regard to the display technology used on iPad and Mac.) So the MBP won't be lower in price but will offer more features (Mini-LED is quite costly). And possibly an M2 with HBM? I am not sure how Apple is going to cope with the bandwidth requirement; it would need quad-channel LPDDR5 at ~200GB/s, or HBM2, if we assume the M2 doubles the GPU cores again.

Maybe only then could Apple afford to offer a MacBook 12" at $799, with an educational price of $699. Although I am not sure that's enough; Chromebooks in many classrooms go for $299. Apple doesn't have to compete dollar for dollar on pricing, but a 2x difference is going to be a hard battle to fight. At least it would let Apple win key areas of the education market where TCO and cost are not as stringent.

Maybe Apple will do just one more final update for some Intel Macs like the Mac Pro (at least I hope they do, for those who really need an x86 Mac).

Oh, and M3 in 2022, still within the two-year transition period: I think we are going to see a 3nm monster chip for the Mac Pro, while Intel is still on their 10nm. And I think 2022 is when we will see an Apple console, because I don't think the Mac Pro's monster SoC volume is enough to justify its own investment. Some other product will need to use it, and a game console seems like a perfect fit. (At least that is how I could make some sense of the Apple console rumours.)

[1]https://news.ycombinator.com/item?id=25049927


> Maybe only then could Apple afford to offer a MacBook 12" at $799, with an educational price of $699. Although I am not sure that's enough; Chromebooks in many classrooms go for $299. Apple doesn't have to compete dollar for dollar on pricing, but a 2x difference is going to be a hard battle to fight. At least it would let Apple win key areas of the education market where TCO and cost are not as stringent.

Apple is already doing quite well in the low-end education market with the base model iPad. These are competitive with Chromebooks on price. They also do a better job of replacing paper with Notability or GoodNotes and open up project opportunities with the video camera. Most kids seem to be fine with the on-screen keyboard, but that part is not ideal without an external keyboard/keyboard case.


That's my guess. I'm a Mac house, but I have two gaming machines arriving Friday for some rendering projects. I really wanted to wait to see what Apple released, but I figure two things: 1) I'm not traveling much in the next year, so I'll have a desktop year; 2) it's not a good idea to get v1 of new Apple things. I hope they'll have new 2nd-gen things on the market next year and I'll come back.

Apple often has 2-3 future generations in development. This was just the first complete design they turned into a product.

That RAM design, tho...


It'll be sooner than that. Just wait for "M1X" or "X1" or whatever Apple calls the increased-bandwidth variant that goes into their 16-inch model and desktops.


Sure, call it what you want. This is the beta product. It’ll be buggy. It won’t be full throttle.

I’m excited for whatever is next.


> This is the beta product. It’ll be buggy.

Doubtful. You know they've been using ARM-based Macs with the requisite version of macOS for at least a year inside of Apple.

They've done a processor transition two other times; unlike the last two times, this time Apple controls the entire stack, which wasn't the case going from 68K to PowerPC or from PowerPC to Intel.

Apple has been designing their own processors for a decade now. There's nothing in the smartphone/tablet market that even comes close to the performance of the A series in the iPhone and iPad; there's no reason to believe this will be any different.


Rosetta 2 hasn't had much mileage yet, though.


I’d be surprised if hundreds of apps haven’t already been tested.


Even if it's used internally, that doesn't mean it's not beta/buggy. Intel releases regular microcode patches and still has issues despite all their existing experience. This is Apple's v1, which was in the pipeline for over a year. The designers are likely months into working on v2 already - that cycle is very long.

"Don't upgrade MacOS to x.0 version" is already a common idea. Why would it be any different for their hardware?


"Don't upgrade MacOS to x.0 version" is already a common idea. Why would it be any different for their hardware?

Because hardware and software are very different. The M1 is the next stage of Apple's A series of SoCs, and they've shipped over 1.5 billion of those. I'd like to think all of the R&D and real-world experience Apple has gained since the A4 in 2010 has led to where we are today with the M1.

If anything, this simplifies things quite a bit compared to using an Intel processor, a Radeon GPU (on recent Macs with discrete graphics), Intel's EFI, etc. This transition has been in the works for several years and Apple knows they only get one shot at making a first impression; I'm pretty sure they wouldn't be shipping if they weren't ready. I'm not concerned in the least about buggy hardware. They just reported the best Mac quarter in the history of the company; it's not as if there's pressure to ship the new hotness because the current models aren't selling [1].

The release version of Big Sur for Intel Macs is 11.0.1 and I've been running it for 2 days now. It's been the smoothest macOS upgrade I've done in a long time—and I've done all of them, going back to Mac OS X Public Beta 20 years ago.

[1]: https://www.theverge.com/2020/10/29/21540815/apple-q4-2020-e...


They've been making ARM CPUs for 10 years. They're not new to the game, this is just the first time they're in non-mobile devices.


And I expect it's a big enough change. (I may be wrong. We'll see)


> This is the beta product. It’ll be buggy. It won’t be full throttle.

Apple has been running a version of OS X on these CPUs for 10 years now. The only thing which is "beta" here is Rosetta.


This seems right to me, in performance.

In the market, I think M1 systems will not alienate Apple-app-only users (Logic, Final Cut, Xcode-for-iPhone development) and may attract some purely single-page-application users.

Mostly, Zoom call efficiency will drive its broader adoption this year among the general population. If the Air is fast, quiet, and long lasting for Zoom calls, it will crush.

I won't buy one. I have a 32GB 6-core MBP that will satisfy my iOS dev needs until M2 (and until a clearer picture of the transition has developed). But I might start recommending Airs to the folks sitting around our virtual yule log this year.


I dunno. I find it hard to believe that their next chips will be more powerful. You must have an in with Apple to know this.


I'm as skeptical as the next person, but Apple's track record of delivering performance improvements year after year in their chips has been solid for quite a while now. [1]

It'd be more surprising at this point if it _wasn't_ more powerful.

[1] https://browser.geekbench.com/ios-benchmarks


So this is it-- the fastest chip they'll ever ship? No more progress can be expected?

I hope your comment is sarcasm :P


> I find it hard to believe that their next chips will be more powerful.

> Whilst in the past 5 years Intel has managed to increase their best single-thread performance by about 28%, Apple has managed to improve their designs by 198%, or 2.98x (let’s call it 3x) the performance of the Apple A9 of late 2015.

https://www.anandtech.com/show/16226/apple-silicon-m1-a14-de...


They wouldn't have announced the full transition if they weren't confident they could deliver. They would have kept the ARM for the low end models and Intel for the high end.


The snark is strong with this one.


OK... but let's say it's 95% there, even. How much power does an M1 draw compared to a 5950X? It's not even funny. And the M1 is running at a lower clock.


We don't know what the M1 draws at load because Apple won't say.

It's almost certainly better per watt, which I'd expect because the 5950X (and the 6-core 65W TDP 5600X, which also tops the MBA multi-core Geekbench result) are still desktop processors.


It’s very impressive. It seems like the open computing platforms where you have control of your hardware/ os are in real trouble.

I use Mac at work, but Linux at home, if the hardware isn’t competitive....


- Mac has ~10% of the global market for end-user machines. It doesn't own the market now, never has, and never will, nor does Apple desire to sell machines cheap enough to do so.

- Given that you can't add RAM after the fact and 256GB is anemic, the cheapest laptop that is a reasonable choice is $1400.

- The cheapest desktop option is $6000 with an 8-core CPU, or $8000 with a 16-core.

- The average end user spends $700 on a computer

- We literally have marketing numbers and a worthless synthetic benchmark.

I think it's entirely fair to say that the new Macs are liable to be fantastic machines, but there is no reason to believe that the advent of Apple-CPU Macs marks the end of open hardware. Were you expecting them to sell their CPUs to the makers of the cheap computers most people actually buy?


> Mac has ~10% of the global market for end user machines. It doesn't now, never has, and never will own the market nor does it desire to sell cheap enough machines to do so.

This includes a massive number of corporate desktops which often Apple doesn't really compete with.

> The cheapest desktop option is $6000 with an 8 core cpu or 8000 with a 16 core.

?? The Mac mini is $699 with an M1, which is likely a far faster computer than most Windows desktops at that price. Likely significantly faster.

I don't think Apple is going to eat Windows alive; too many businesses have massive piles of Windows apps. I do see the potential for Apple to increase market share significantly though.


The average user was spending peanuts on phones before the iPhone. Apple also had 0% of the phone market.


The average user was spending a very modest amount to be able to call and send text messages. Little portable multi function computers already cost hundreds of dollars.

The iPhone helped clarify what a good interface looked like, and as prices came down and performance went up, Apple positioned itself well while a product category that already existed became mainstream.

Laptops aren't a new category, and the majority will continue to buy something other than Apple, in large part because of the price.


> The average user was spending peanuts on phone before iPhone.

The iPhone was mid-range at launch, $499 versus $730 for a contemporary smartphone like the N95


This was not what it felt like when it debuted.

BlackBerry was the competing "smart" phone [1] and the newest releases were well under half the price of the iPhone with the same 2-year contract discount.

I had the BlackBerry Curve myself at that time and the iPhone seemed way overpriced.

[1] https://techcrunch.com/2007/07/25/iphone-v-blackberry-side-b...


Guess it depends on the region. Here in Sweden I saw a few N95s and plenty of Sony Ericsson and Nokia feature phones. Not a single BlackBerry in sight, before or after.


The way I remember it the iphone-with-2-year-contract price was very similar to the buy-outright price for other phones. Are you definitely comparing the same contracts?


But most users were on feature phones at the time. The iPhone 1 was expensive.


Completely nailed it. I need something with more grunt than the base models here, and Apple doesn't have a hold on that market because of the expense. And they don't hold the lower-end market.

This is still a niche.


Thanks for putting it into perspective. 3D graphic performance is another variable I didn’t think of.

I wouldn’t expect them to sell their cpus to others.

It’s weird though that they’re so vertically integrated and able to push performance as high as they have. I really enjoy my Linux system so I’m going to keep on doing that.


> The cheapest desktop option is $6000 with an 8 core cpu or 8000 with a 16 core.

No, it is $699 with an 8-core M1 chip, the new Mac mini. Also the iMac is arguably a desktop option, even if not really upgradeable.


You mean a non portable laptop?


What is your point even? The Mac mini isn’t really a desktop because it shares its chip with some of their mobile devices? When and where has that ever been the criterion for desktop PCs?


> The cheapest desktop option is $6000 with an 8 core cpu or 8000 with a 16 core.

And also with RAM and SSD idiotically soldered in so 2 years later you need to spend another $6000, while a couple weeks ago I spent a grand total of $400 to upgrade my 2TB SSD to 4TB.


The RAM and SSD are not soldered in on the Mac Pro, which is the machine I assume they're talking about given the price.


Okay, point taken, but I believe the RAM and SSD are not user-replaceable on the MacBook Pro, MacBook Air, and iMac, whereas both are user-serviceable on almost every other brand of laptop and all-in-one PC on the market.


fyi, RAM is not soldered (https://support.apple.com/en-us/HT210103) and there are unoccupied PCI slots to install SSDs in (https://support.apple.com/en-us/HT210408)


> It’s very impressive. It seems like the open computing platforms where you have control of your hardware/ os are in real trouble.

Not really. The M1 may objectively and factually be a very good CPU, but it comes bundled with the cost of being locked into a machine with a locked bootloader and not being able to boot any other OS than MacOS.

And many people will find such a cost unacceptable.


I have a hard time believing that the number of people who care so deeply about loading other OSes as to switch their computing platform of choice is significant. Perhaps it's more significant for those who are already doing that with either a Mac or something else and choose not to switch, and likewise for virtualization, but I sure as hell wouldn't switch away from the Mac for the ostensible benefit of multibooting Windows or Linux, and I'm at least in the subset of people who might.


There are gargantuan unseen costs to giving up computing freedom that will not be readily apparent at the moment you abandon it. The benefit will be shown to be much more than "ostensible". I do hope, for both of our sakes, that most people are not so fickle as to abandon it at the first opportunity just because the cost isn't immediate.


> I do hope for both of our sakes that most people are not so fickle to abandon it at first opportunity just because it is not an immediate cost.

Generally, people are absolutely terrible at taking long term effects into account. I don't think many people are going to think twice about giving up their computing freedom.

But I think Apple's positioning as premium brand is going to ensure that open hardware keeps existing. And maybe we can even look forward to RISC-V to shake the CPU market up again.


I absolutely agree, but the problem is that there does need to be a compelling immediate term benefit or alternative. While I'd agree with the sibling reply that people often don't consider long terms effects, it's worth considering that immediate effects are more definite.

Any mac user could have seen this transition coming many years ago, and given up their platform of choice then on that prospect, but what good would that have done them? They wouldn't have got to enjoy anything.

Lastly, I do simply see it as a bit of a false dichotomy (or whichever fallacy is more accurate) to suggest that by using a Mac that can't run other operating systems, you're giving up computing freedom. If I found it necessary to have a Windows or Linux machine, I'd simply go get something that probably has better hardware support anyway. Yes, conceivably Apple is setting some precedent that other manufacturers could follow, but in the previous example Apple is also just pushing you to buy their products instead.


I consider not losing the freedom to run anything I want on the hardware an immediate benefit. I don't need to have a particular use case. I view as detrimental the very action of giving money to someone who wants to decide how I use the hardware I bought.

> Any mac user could have seen this transition coming many years ago, and given up their platform of choice then on that prospect, but what good would that have done them? They wouldn't have got to enjoy anything.

This could easily devolve into a "to Mac or not" type of discussion which I don't want to delve into, but I've personally never used a Mac (I have tried it) and I don't feel like I'm missing out because of it. Certainly the freedom to run any software and not be beholden to a large corporate interest is more important to me.

> Yes conceivably Apple is setting some precedent that other manufacturers could follow, but in the previous example Apple is also just pushing you to buy their products instead.

Yes, precedent, but also increased market share if they were to become more popular. One day, an alternative might not exist if we do not vote financially early enough. Therefore, my immediate urge is to say: no, I do not want to participate in this scheme. Make your hardware open or I will not buy it.


> There are gargantuan unseen costs for giving up computing freedom

There is a social experiment about that, running since at least 2007. It's the smartphone and the tablet. I think I don't have to detail it and all of us can assess the benefits and the problems. We could have different views though.

By the way, I wonder if the makers of smartphone hardware and/or software could do all of their work, including the creation of new generations of devices, using the closed systems they sell (rent?). I bet they couldn't, not all of their work, but it's an honest question.


Myself, yes. Most people here, maybe. My friends asking me for advice when buying a new computer care about the price, not speed. They assume that anything at 300 or 400 Euro will be good enough for their needs and they're right. Only one of them ever asked me about a Mac, but went back to looking at some low-end Windows laptop after I gave them the link to the page with the price of the Macs. It's not that all of them cannot afford a Mac; they just can't see what they gain for the extra cost.


The bootloader is not locked.

However, good luck writing the full set of drivers for your OS of choice.


With the prices these machines are going for (1600 for a basic MacBook Air with 512GB and 16GB RAM), I don't see how these Macs will be able to push an increase in Apple's market share.


I don’t think that’s the expectation they have from this release. This release is about cutting their costs. The next release will be about offering new features to capture more market.


If that came to pass, wouldn't Linux just be ported to the new platform? After all, Linux supports PowerPC and Motorola 68k.


They're locked down. You can't install anything on it without Apple's permission.

Like secure boot, just without an off switch


I hear "hold my beer" moments now. I give it a year max for luser usable jailbreak.


Your comment is not accurate.


I cannot speak with authority on the topic and based my statement on the statements of several YouTube tech news channels. It's entirely possible that they're misinformed, and it wouldn't be the first time.

I however cannot find anything from Apple that says differently, or a source showing how unsigned systems can be booted on this chip.

The only thing I could find was Apple's statement that your system is even more secure now because unsigned code won't be run.

Do you have any resources I can read so we can clear up this misunderstanding?

Or are you referencing my auto-correct error which replaced "can't" with "can"? If that is the case... I'm sorry for that; my intent is (I think) quite clear, considering I said that they're locked down and this lock is without an off switch.


From recovery OS you can personalize any blob. The local SEP will sign it for you and iBoot will happily load and jump to it.

That blob may be a Darwin kernel or it may be something else.


> How much power does an M1 draw compared to a 5950X?

The 5950X cores are actually reasonably power efficient. Anandtech has nice charts here: https://www.anandtech.com/show/16214/amd-zen-3-ryzen-deep-di...

TL;DR is that 5950X cores draw about 6W each with all cores loaded at around 3.8GHz per core. They scale up to 20W in the edge case where a single core is loaded at a full 5GHz.

> And the M1 is running at a lower clock.

Comparing a power-optimized laptop chip to an all-out, top of the line desktop chip isn't a great way to compare efficiency because power scaling is very nonlinear. The AMD could be made more efficient on a performance-per-watt basis by turning down the clock speed and reducing the operating voltage, but it's a desktop chip so there's no reason to do that.

Look at the power consumption versus frequency scaling in the Anandtech chart for the 5950X: Going from 3.8GHz to 5.0GHz takes the power from about 6W to 20W. That's 230% more power for 30% more clockspeed. Apple is going to run into similar nonlinear power scaling when they move up to workstation class chips.

If you really wanted to compare power efficiency, you'd have to downclock and undervolt the AMD part until it performs similarly to the Apple part. But there's no reason to do that, because no one buying a top of the line 5950X cares about performance per watt, they just want the fastest possible performance.

Comparing to an upcoming Zen3 laptop chip would be a more relevant comparison. The Apple part is still going to win on power efficiency, though.
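
As a rough sketch of that nonlinearity (my numbers and model, not Anandtech's: dynamic power scales roughly with C*V^2*f, and V has to rise with f near the top of the curve, so power grows much faster than frequency):

    # Using the per-core figures quoted above for the 5950X.
    f_low, p_low = 3.8, 6.0     # GHz, W per core (all-core load)
    f_high, p_high = 5.0, 20.0  # GHz, W (single-core peak boost)

    freq_gain = f_high / f_low          # ~1.32x clock
    power_gain = p_high / p_low         # ~3.3x power
    print(f"{freq_gain:.2f}x clock costs {power_gain:.2f}x power")
    print(f"perf/W at the top bin is ~{freq_gain / power_gain:.2f}x of perf/W at 3.8 GHz")
    # The last ~30% of clock roughly triples power, so perf/W drops to ~40%,
    # which is why a downclocked/undervolted desktop core looks far more efficient.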


Plus, the 5950X costs $799 for the chip alone (cooler not included).


I think it was AnandTech, but an analysis suggested that Apple's BOM cost for the M1 (which of course excludes R&D costs, software dev costs, and profit; it's just the raw manufacturing cost) is about $64.


Sure, and AMD's is probably similar.

The difference being that Apple only sells theirs inside of $1000+ computers, and AMD has to make up the entire margin on their CPU alone.

Behold, the power of vertical integration.


The power of vertical integration means that Apple could sell their hardware at a loss, to get you inside the Walled Garden TM and then keep 30% of all you spend inside it.

I'm not saying they do that, considering how much their products cost, I'm saying they could. That's what vertical integration brings to their table, above all else.


Apple does the opposite by giving away their software and design “for free” and making up for it with hardware margins.


They give away their software but take a 30% cut of the software made by other companies, and do their best not to let those companies get paid by any means other than Apple's stores. I think this was the point of the GP.


> "They give away their software but get a 30% on the software made by other companies"

Not on Mac they don't. macOS isn't tied to the App Store in the same way that iOS devices are, and it probably accounts for a tiny percentage of third-party Mac software sales by value.


It also has 4x the big cores and way, way more L3 cache & I/O.

Almost like they are completely different CPUs with completely different design goals...


M1 can't run all existing x86 code natively. Ryzen can.


5950X is 105W desktop CPU. Apple M1 is for laptops and Mac Mini.


You can buy it in a Mac Mini, and an iMac eventually no doubt. The "laptop"/"desktop"-grade chip distinction is pretty arbitrary here.


> The "laptop"/"desktop"-grade chip distinction is pretty arbitrary here.

This is the first of their CPUs. The iMac will almost certainly be running a higher end CPU which at the very least supports more RAM. It's likely the 16" MacBook Pro and the higher end 13" MacBook Pro will share a CPU with the iMac the same way the Mac mini and the MacBook Air share a CPU.


I think the holdup for the iMac is not the CPU, but rather Apple's discrete GPU which is not ready yet.

For replacing the Xeon-W in the Mac Pro and iMac Pro, they will also need a higher performing CPU, sure.


I hope they offer an option for integrated-only for all their product lines going forward.

The 16" MacBook Pro is only available with a discrete GPU, which I don't need but which causes me tons of issues with heat and fan noise. The dGPU has to be enabled to run external monitors, and due to an implementation detail, the memory clocks always run at full tilt when the resolution of the driven monitors doesn't match, resulting in a constant 20W power draw even while idle.


The distinguishing theme between laptop and desktop is customization, integration, and maximization. Apple Silicon is not going to destroy the case/fan/liquid-cooler/fancy-LED/GPU/motherboard/etc. industry. It's still a crystal-clear distinction.

EDIT: sorry, I misread/skipped the "chip" part.


Single core performance usually isn't much different between laptop and desktop CPUs.


These top scores are achieved with heavy overclocks combined with ridiculous cooling rigs. You can't really do that to a laptop. (I mean, you can in theory, but at that point it's not a laptop any more. It's ripped in half and strapped to a water block.)


Fake news. These are not Macs; they are heavily overclocked PCs (Hackintoshes).


Not sure why this is downvoted? The top score shows a 6.472GHz clock speed - clearly not stock.

https://browser.geekbench.com/v5/cpu/4644694.gb5


Because some of them aren't. For example, result #4. Stock 4850 MHz for a 5800X and it scores over 1800: https://browser.geekbench.com/v5/cpu/4665766.gb5

Also notice this result is using clang9 while the MacBook results are using clang12. I assume clang12 has more and better optimizations.


Agreed. People are acting like this is some kind of record-breaking performance when it isn't. It is impressive that the M1 chips can do it without a fan, which shows that they do have a lot of headroom and is a sign of where things may go in the future.


I don't think that scoring high in the single core area is Apple's primary goal.

Doing it while not burning lots of Watts and being energy efficient is what Apple aims for.

And I doubt that AMD and especially Intel will offer an alternative here soon. Desktop yes, but not on mobile.


*The 5900X is a desktop chip that costs $500+ with a 105W TDP, and we're comparing it with the CPU in a $1000 laptop with no fan.


Zen 3 is sold out it seems for now


Their line from the video about having the highest single-core performance of any chip appears to be true. This is of course a synthetic benchmark, but the single-core result is very promising. Note that the single-core and multi-core scores exceed the top-of-the-line 16" MacBook Pro (9th-generation 8-core i9, 2.4 GHz). I actually made the call to sell my 16" for the new Air yesterday. It's looking like a good call. Glad I'm selling my 16" while it still has some value.

You can see all Air results so far here: https://browser.geekbench.com/v5/cpu/search?q=MacBookAir10%2...


Direct comparison between best MBP Intel 16" vs M1 Air: https://browser.geekbench.com/v5/cpu/compare/4651916?baselin...

1.5x single-core perf.

M1 MacBook Pro vs Intel MBP (top specs) show same performance: https://browser.geekbench.com/v5/cpu/compare/4652718?baselin...

Likely because GB5 doesn't run long enough to trigger thermal throttling on the M1 MBA.

M1 is beating all CPUs on the market in single-core scores: https://browser.geekbench.com/processor-benchmarks (M1 at 1719, vs AMD Ryzen 9 5950X at 1628).

Anandtech on the memory-affinity of GeekBench vs SPEC:

> There’s been a lot of criticism about more common benchmark suites such as GeekBench, but frankly I've found these concerns or arguments to be quite unfounded. The only factual differences between workloads in SPEC and workloads in GB5 is that the latter has less outlier tests which are memory-heavy, meaning it’s more of a CPU benchmark whereas SPEC has more tendency towards CPU+DRAM.


The new MBA is a total beast. The comparison is almost unbelievable. Can't wait to see what they do with the iMac, Mac Pro and MBP 16". Just phenomenal!


Replacing my 16" MBP (8 core i9 2.3ghz / 32GB RAM) with the new Air as well. All of that power (yes I know, it's not sustained) without a fan is incredible.


Why did you pay a fortune for 32GB RAM, a larger screen, and a dGPU if you don't need it? You could have bought a 13" MBP and saved enough cash to get the new Air and now have 2 laptops.


I’ve always used 15” MBPs without external displays as my only computer for years. I like being mobile and don’t particularly like having a desktop + laptop. Since COVID I’ve been stuck at home so I decided to finally get an external monitor. The problem is that you can’t use the 16” with an external display and the lid open without the fans spinning at full speed. Some people don’t mind it but it drives me crazy. Now it’s hooked up to my display in clamshell mode. Do I need all of that power? No, but I did want the biggest screen at the time when I purchased it a year ago. I plan on selling it when the new Air comes in.


> The problem is that you can’t use the 16” with an external display and the lid open without the fans spinning at full speed

Say what? I have a LG 5K and two 27” Apple Thunderbolt displays (four screens total including laptop display) hooked up to my 16” MBP and fans definitely are no where near full speed, unless I’m compiling or in a Google Hangout that is...


Yeah, I'm not sure what's going on but I'm not the only one it seems.

Here's a 173 page thread on MacRumors about it: https://forums.macrumors.com/threads/16-is-hot-noisy-with-an...


Have you tried only using the ports on the right side of the laptop?

As crazy as this sounds, using the left hand side ports for charging causes the fans to kick in more often[0].

[0]https://apple.stackexchange.com/questions/363337/how-to-find...


I have two identical top-specced MBP16", bought them as soon as they were released. They BOTH do that with my LG 5K Ultrafine display, whether they're closed or open, without doing anything particularly heavy. Sent them both to Apple, they passed all checks with flying colours. And that's the end of more than a decade of giving Apple tons of money. I built a Ryzen desktop this year and, despite missing macOS terribly, I couldn't be happier with the speed and ergonomics.

edit: I've tried both sides of the laptop, I have iStat Menus and keep an eye on temps, etc.

edit2: they "only" spin to 3.5k-4k at idle, but go up as soon as I do anything with Chrome or am on a video call, which is most of my job


Make sure the resolution is set to “native” for the external monitor.


They're all set to native, I don't do scaling. I've since got a free top case replacement from Apple (keyboard was slightly busted because I spilled soda on it lol) and, after a PRAM reset, it's not going crazy at idle, only when being in WebRTC calls (which is 90% of my job).


Why? Really interested.

Back in the day I had 2 Sun 20" GDM20E20s (1997), which were major $$, and after that I always had 2 monitors, moving at some point to a single ultrawide LG (which are pretty neat). One day I looked at my setup and how I used it and realized I did not look at all of the screen. I swapped it for a small single Apple LG 4K and it turns out I am very happy. The dense nature of the 4K was a game changer. I plan on getting an 8K when it comes out.


> why? Really interested.

Generally speaking, I’ve never found that to be genuine, but assume best intent and all as the site rules say, so here goes...

For me, I often am doing multiple things at once and juggling between unrelated tasks which actually need my attention sporadically. The LG 5K with its beautiful display gets my primary attention and is what I want to be focused on. Apps there are what I should ideally be working on. The two Apple TB displays then flank either side, and they get the "distractions", but stuff important enough to be allowed to distract me when needed. What that is varies from day to day (sometimes Slack makes the list, sometimes it doesn't, as one example), but it's intentionally in my peripheral vision so I only "look" for motion/changes in certain areas, not actually try to read. If I need to read, I context-shift by rotating my chair slightly to the left or right (better for you than rotating your head).

End of the day, do whatever works for you. Yes, there are folks who can legitimately take advantage of lots of screens, like me. Some folks who have tried multiple don't, and are happier when they switch back, but I'm not one of them, and it's something I routinely experiment with to ensure I'm still using the best "for me" setup. I've gone as high as nine screens attached (with an eGPU) to my laptop (the eGPU seems to keep the laptop fans elevated, but not at full power, btw, back to the original thread purpose), but I found I was too easily distracted and hence am back to four. Ideally I'd like to do two 8K monitors at 32" or less, but haven't justified buying them yet.


Thanks, that makes sense. For me I have found having something on the other monitor catch my eye a distraction, so I have gone back to a single high DPI one.


Yeah, I'm powering a 4K external display for normal things (Hangouts, Photoshop) with no fan; only doing A-Frame / three.js stuff in the browser brings on the constant fan and provides a nice finger warmer above the Touch Bar.


> The problem is that you can’t use the 16” with an external display and the lid open without the fans spinning at full speed.

I had the same problem: when connected to a USB-C monitor I wanted to use the keyboard, but not the built-in display. Even with the display backlight off the fan would still run. After a lot of searching I found that you can disable the built-in display by:

- Booting into recovery mode
- Opening Terminal
- Entering `sudo nvram boot-args="niog=1"`
- Restarting
- Closing the clamshell
- Plugging in the external monitor
- Opening the lid

I hope that helps.


You need to plug the power port into the back right usb-c connector.


I have the same problem and I've tried plugging it into all 4 ports, no difference at all.

The monitor gets plugged in and the fans just start taking off to the moon - no workload or anything. It's extremely annoying.


I regularly drive a 5k2k display on a 16" MBP, along with glowy keyboard and trackpad, and I only get the fan when I'm doing something like a full-screen streamed video: text editing and web browsing seldom trigger it.

I like the bigger screen, so I'll hold out on this platform for now. Pretty impressed with where the M series is going, though; might hold out two iterations instead of the four I had in mind before the M1 dropped.


    sudo htop 
See what’s going on.

osquery was going bonkers on me.


It's not a process thing. The left-side TB3 controller freaks out under sustained load and can drive the system to a crawl. We have to unplug one of our LG 4/5Ks and run the other+power on the right when we want to be able to do back to back video conferencing.

It's been a running joke in corporate for years that Apple's "premium fan noise" is a brilliant branding move because you can identify the Mac users as soon as they unmute.


What video conferencing app? Why is the machine loaded up when running it?


1. How do you know he didn’t need it?

2. Even if he didn’t need it, why assume he didn’t want it?

3. Why are you assuming that money is an issue for him?


He's assuming he didn't need 32GB if he's replacing it with a 16GB Air.


It is a general theme: people say how they are switching from their specced-out 2019 16-inch MBPs to these new M1 MacBooks, then get all upset when people point out that they are probably not the professionals the top-end MBPs were aimed at.

A lot of people run around with way more powerful laptops than they actually need for whatever they are doing, because it's through a business or it's deductible, but news flash, buying a MacBook Pro doesn't make you a pro.

A question: if ALL pros were fine with 16GB of RAM, why does Apple offer 4x as much? Answer: because a lot of people actually need it.

I am happy for the people who will get these new devices and be happy with them; I might get one too. But truth be told, most of us getting these devices could make do with the latest iPad Pro + Magic Keyboard just as well. (OK, I do need to code occasionally, but even for that there are pretty OK apps for iPad I could use.)

The expectations have to come the fuck down from where they are today, because the expectations put on these devices are just crazy. It's so overhyped that I think many will be disappointed when compatibility issues surface and when people realise that the 3x, 5x, 7x performance figures are mainly down to fixed-function hardware and accelerators, and the general performance increase is just slightly above the generational leap we are used to, with a bigger increase in efficiency.


If he needs it he wouldn’t move to a laptop that doesn’t offer it. You can’t get 32GB of memory or a dGPU on the new Macs.


Also: Why assume he would want to have two separate laptops


Somebody else bought it.


Note that the M1 does not support eGPUs, in case you need it.


Where is that spec’d out?



Yeah I had almost the same configuration (2tb/32gb/2.4 ghz). I’m literally getting the new Air for half of what I’m selling it for.


Where are you selling it? I want to know where to buy a used 16” MBP.


eBay-you’re welcome to buy it if you want: https://www.ebay.com/itm/Apple-MacBook-Pro-16-2-4-Ghz-i9-32-...


I've sold on craigslist before... worked out fine.


You guys must have some generic or Apple specific workflows since you are confident that the software is already supported.


As long as you don't depend on x86_64 Docker containers or Boot Camp, you should be fine in general.


I want to do the same, but I like being able to do some gaming on my Windows partition (I opted for the 8GB dGPU as well).


That Air will sell so well.

Many don't even want to pay for the MacBook Pro's Touch Bar and many will probably see an Air's fanless design as an advantage over Pro, even if its CPU is throttled a little more often in sustained high CPU workloads. Complete silence is just that good. And it's going to be so much cheaper.

I think the star of the show yesterday was definitely the MacBook Air.


I bought the new Air, but I’m keeping my 16” as it may be the last laptop of that quality that can run programs without telling some stranger over the network that you’ve run a specific program.

Current (and presumably future) macOS does this and you can't turn it off, except with Little Snitch. New APIs in macOS 11 mean that Little Snitch will no longer be able to block OS processes, so it will require external network filtering hardware.

I’ll likely end up with Linux on the 16”, and use the new one for things that are not secret/private.


This is where I’m at too. Compromising Little Snitch and VPNs is just a bridge too far. It's cool that they got this level of performance in a lightweight form factor, less so when it enables the worst of surveillance practices.


Ugh, I didn’t realize that the system apps would bypass VPNs, too. That’s terrible. :(

https://appleterm.com/2020/10/20/macos-big-sur-firewalls-and...

Looks like it will be impossible to use Apple Silicon (without external network hardware) without revealing your track log to the CIA. How cool is that?!


I'd like to see a good piece on these new Mac systems from the perspective of exactly what they mean in terms of software lockdown.


Mine should be here in a week or two, and doubtless I’ll be complaining on my blog with receipts. I can’t promise “good” but I’ll present the facts.


Dammit, I have to decide the same thing. I'm really happy with my 16" MBP for once, and I'm not sure if I want to get a smaller screen and give up Windows support (for now). I feel like MS could be convinced to make a version for Apple Silicon if it keeps its performance advantage.


Good news is Parallels announced a closer collaboration with Apple to bring x86 virtualization to the M1, too. They demo'd Parallels running a Linux VM at WWDC, but the upcoming release will also support seamless Windows virtualization again.


That demo shows an ARM VM with Linux for ARM running inside it. There have been no announcements or demos of Intel emulation, besides Rosetta 2.


Parallels announced a full version of Parallels Desktop (which is the Win-on-macOS product) at the same time as the event on Nov 10: https://www.parallels.com/blogs/parallels-desktop-apple-sili...


That PR doesn't actually say anything about running Windows. You can't just port the app. A VM on an ARM system is still ARM inside, and given that the PR specifically mentions "support of x64 applications in Windows on ARM", this is clearly for ARM VMs. You'd need actual Intel emulation in order to run the normal version of Windows.


> You'd need actual Intel emulation in order to run the normal version of Windows.

Microsoft is working on enabling x64 emulation on ARM, it should roll out in preview this month[1]. I can see Windows 10 ARM-edition working inside Parallels with its own x64 emulation inside. The issue right now is that MS does not sell Win 10 ARM, it is available for OEMs only.

x86 emulation on Windows 10 ARM was already done a few years ago, when MS shipped their Surface ARM notebook.

[1] https://blogs.windows.com/windowsexperience/2020/09/30/now-m...


x64 apps running in an emulator, running on ARM64 Windows, running virtualized on ARM64 macOS. What is the world coming to??


That actually seems more logical than the alternatives.

One of the big hopes for Rosetta2 is the possibility of intercepting library calls and passing them to the native library where possible. So a well-behaved app using OS libraries for everything it can, and really only driving the business logic itself, would be running mostly-native with the business logic emulated/translated.

(This is hopes/dreams/speculation with no insider knowledge.)

If Windows could do the same, then letting windows-arm do the translation of windows-x86/64 binaries would allow it to leverage windows-arm libraries - so an app could be running in mostly-virt with some-emu. If we let parallels/qemu/etc do the emu, it can only ever be 100% emu.


At least it doesn't involve Electron.


The VHDX for Windows 10 on Arm inside of virtual machines can be downloaded at: https://www.microsoft.com/en-us/software-download/windowsins... as part of the Insider program.


You can emulate anything on anything (pretty much), but the real question is can you emulate it at a speed that’s sufficient. That takes host-system-specific optimizations.

Look at N64 emulation, for example.


No mention there of it supporting x86 on M1.


Indeed. Might be misleading marketing. Docker meanwhile mentioned they will launch with ARM containers only, but are expecting QEMU to be able to run x86 (probably badly).


I'm seriously considering doing the same thing. My only hesitation is the screen size. But right now the laptop I bought in January is being outclassed by one that costs 1/3 the price...


I think you should wait; I have a feeling all of this is too good to be true. Wait for real-world reviews.


They need to put out a 16" MBA.


This is very interesting and in line with Apple's claims. I am looking forward to some real world numbers for different tasks in the next few weeks and months as native apps become available.

Jonathan Morrison posted a video [0] comparing a 10-core Intel i9 2020 5K iMac with 64GB RAM against an iPhone 12 Mini for 10-bit H.265 HDR video exporting, and the iPhone destroyed the iMac, exporting the same video, to allegedly the same quality, in ~14 seconds vs 2 minutes on the iMac! And the phone was at ~20% battery without an external power source. That is some voodoo, and I want to see a lot of real-world data, but it is pretty damn exciting.

Now whether these extreme speed ups are limited to very specific tasks (such as H.265 acceleration) or are truly general purpose remains to be seen.

If they can be general purpose with some platform specific optimisations that is still freakin' amazing and could easily be a game changer for many types of work providing there is investment into optimising the tools to best utilise Apple Silicon.

Imagine an Apple Silicon-specific version of Apple's LLVM/Clang that has a 5x or 10x C++ compilation speed-up over Intel, if there is a way to optimise for gains similar to what they have been able to get for H.265.

Some very interesting things come to mind and that is before we even get to the supposed battery life benefits as well. Having a laptop that runs faster than my 200+W desktop while getting 15+ hours on battery sounds insane, and perhaps it is, but this is the most excited I have been for general purpose computer performance gains in about a decade.

[0] https://www.youtube.com/watch?v=xUkDku_Qt5c

Edit:

A lot of people seem to just be picking up on my H.265 example which is fine but that was just an example for one type of work.

As this article shows, the overall single-core and multi-core speeds are the real story, not just H.265 video encoding. If these numbers hold true in the real world, and aren't just a screenshot of some benchmark numbers, that is something special IMHO.


Your h265 example is due to the iPhone having a dedicated HW encoder while the iMac was rendering using the CPU. A hardware video encoder is almost always going to be faster and more power efficient than a CPU-based one by definition. However, a CPU encoder offers more flexibility and the possibility of being continually improved to offer better compression ratios.

Generally, HW encoders offer worse quality for a given file size and are used for real-time streaming, while CPU-based ones are used in offline compression in order to achieve the best possible compression ratios.


Yes but that is kind of the point. Going forward all Apple Silicon machines will have this kind of hardware baked into the SoC at no extra cost whereas no Intel system (be it PC or Mac) will.

That is a big deal as it means Adobe, Sony, Blackmagic, etc. will be able to optimise to levels impossible elsewhere. If that 8x speed-up scales linearly to large video projects, you would have to have a Mount Everest-sized reason to stick to PC.


That's not really true. Every modern Intel, AMD, and Nvidia GPU does actually include hardware to accelerate encoding video, it's just that software solutions end up being preferred in a lot of cases because they produce better output and can be continually improved.

A large body of encoding software is however, perfectly capable of taking advantage of them.

The comparison is kind of silly.


> The comparison is kind of silly.

I disagree. Sure, we have things like NVENC for accelerated H.265 encoding, but that is an additional hardware expense, and only the machines that have that hardware benefit. This will literally be all Macs from the $699 mini and up.

I don't know enough about Intel QuickSync to compare but it clearly isn't used on the iMac in that video for some reason (perhaps the software does not support it? I don't know)

That is pretty exciting for video professionals IMHO.

I'm not saying it is world-changing, but being able to pick up a MacBook Air for $999 and get performance similar to (or maybe better than) a PC with dedicated hardware that costs two or three times as much is very cool.

edit:

I appear to be missing why this is a ridiculous thing to say?

Could somebody please explain to me why the comparison is "silly"?


At the risk of going in circles, let me try to explain one last time:

All modern GPUs already have HW-accelerated encoding, including integrated Intel GPUs, and Nvidia and AMD dedicated ones.

Despite that, HW encoding is not used that much by video professionals, because CPU encoders produce better compression given enough time. You only have to do the compression once, while the video will be watched who knows how many times, so there is no real point in making your encoding faster if the file size goes up.

Your HW encoder is absolutely useless for anything else. It does not make your FX rendering faster, and cannot be used for any other codecs.

Even if, say, your HW matches a CPU-based encoder at first, it is fixed and cannot be updated unless you buy new HW, which takes millions to design. Meanwhile, any dev can contribute to x265 and figure out new psychovisual enhancements that improve quality while minimising file size.

Specialized HW (i.e. ASICs) has been in existence for decades, yet despite that, there are very good reasons as to why we still use general-purpose CPUs (and GPUs) for most computing applications.
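
If you want to see both effects on your own machine (encode speed and output size/quality), a rough sketch is to encode the same clip once with the hardware encoder and once with x265 and compare. The ffmpeg encoder names and flags below are the ones I remember (hevc_videotoolbox for the Mac hardware block, libx265 for software); the input file is hypothetical and your ffmpeg build needs both encoders enabled:

    # Sketch: time a hardware vs. software HEVC encode of the same clip with ffmpeg.
    import subprocess, time

    src = "input.mov"  # hypothetical source clip

    jobs = {
        "hardware (VideoToolbox)": ["ffmpeg", "-y", "-i", src, "-c:v", "hevc_videotoolbox",
                                    "-b:v", "10M", "hw.mp4"],
        "software (x265, slow)":   ["ffmpeg", "-y", "-i", src, "-c:v", "libx265",
                                    "-preset", "slow", "-crf", "22", "sw.mp4"],
    }

    for name, cmd in jobs.items():
        t0 = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        print(f"{name}: {time.perf_counter() - t0:.1f}s")
    # The hardware encode typically finishes far sooner; the interesting part is then
    # comparing file size and visual quality at a matched bitrate, which is the point above.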


Editor / videographer here. Interested in those claims.

Hardware encoded H.264 and H.265 don't have any visual quality differences when encoded at the same bitrates in the same containers, as far as I'm aware. Could you list a source for this?

Have never heard of any client or production house requesting an encoding method specifically. Although granted I work at the low end of commercial shooting.


This is widely known and inherent to how hardware encoders work. They are limited by complexity and cannot be improved after they are manufactured, so most of the focus for them generally goes into real time applications. You can check out the blog for the x265 software encoder if you want some examples of the sort of regular improvements that get made:

http://x265.org/blog/

The reason these sorts of improvements are possible is because most of the power of a video encoder doesn't come from the container format but rather how effectively the encoder selects data to keep and discard in keeping with that container format. There is also a LOT of tuning that can be done depending on the kind of content that is being encoded, etc.

For high end work basically nobody uses hardware encoders on the final product.


Thanks for clarifying. That makes a lot more sense, so perhaps it's not as exciting as Jon makes out in his video.


People (especially video professionals) tend to not use QuickSync because the quality is pretty low and you have very limited ability to tune the results. It's optimized for real time encoding and not high quality or small file sizes. I think on H.264 it's about equivalent to the x264 veryfast preset, no clue how the H.265 quality stacks up, only the more recent Intels even support that. NVENC has better quality, but still a software two-pass encode will give much better results.

It's the same on these phone chips, sure the encode is much quicker, but it's not a fair comparison because you have much more control over the process using software. We'll have to wait and see how the quality and file size on the M1 encoder stacks up.


The way hardware encoding works is to be focused on a small set of specific presets with little flexibility for the end user. It's fine for hobbyists, streaming etc. but nobody's going to use it for a professional render.


> all Apple Silicon machines will have this kind of hardware baked into the SoC at no extra cost whereas no Intel system (be it PC or Mac) will.

First of all, adding this hardware encoder to the Apple Silicon chips definitely has a cost, and you pay it when you buy their products.

Second, there are Intel CPUs available with hardware encoders (google Intel QuickSync). The only difference is that you can choose to not pay for it if you don't need it.


As said below, you already have that HW in many Intel CPUs and all AMD/Nvidia GPUs.

Dedicated HW for specific computing applications is nothing new; back in the 90s you had dedicated MJPEG ASICs for video editing. Of course, they became paperweights the moment people decided to switch to other codecs (although the same thing could be said for 90s CPUs given the pace of advancement back then).

Thing is, your encoding block takes up precious space on your die, and is absolutely useless for any other application like video effects, color grading, or even rendering to a non-supported codec.


Specialised silicon is always going to be more efficient than general-purpose silicon, but you lose flexibility. Don't expect this kind of performance gain across the board.


OK... but let's say I'm a professional. That's a big sell. Having dedicated HW to do something faster than CPU is a bonus, not cheating.


You already have HW encoder blocks on certain CPUs and most GPUs. See: Intel Quick Sync, Nvidia NVENC and AMD Video Core Next. Support for them will of course depend on your platform and the applications you are using. IIRC, video editing software will generally use HW decoding for smooth real-time playback, but CPU encoding for the final output.


Absolutely. But how many of those hardware blocks exist for what you want to do? If you care about H265, great. Let's say you get a dozen specific hardware accelerators, that's still only 12 tasks. That might end up covering the vast majority of web browsing tasks (to pick one example), but particularly as an engineer there is always going to be something else. And not all intensive tasks are amenable to hardware acceleration, e.g. compilation. That's why we care about general purpose CPU performance - at some point you always need it.

Incidentally, this is philosophically the idea behind processors with built in FPGA capability. The hardware acceleration would just be a binary blob that could be loaded in and used when necessary. It could be continually updated, and provided with whatever software needed it.


Not only that, what happens over time? Not long ago the hardware accelerated codec would have been H.264. Soon it could be AV1. We're already at the point that professionals could want to be using AV1.


Well yeah, for that one use case. But you cannot go from "X is 20% faster than Y at H265" to "X is 20% faster than Y at doing computer stuff".


You see the same order-of-magnitude difference between encoding 8-bit H265 on the 2020 iMac vs the 2019 iMac. The T2 in the 2020 iMac has hardware encoding support for 8-bit H265.

Now, the thing is: Intel's AVX-512 instructions are supposed to accelerate this sort of work, but in practice they are getting lapped by the T2 chip. That signals that Apple's ability to tune hardware designs to the needs of content creators is greater than Intel's.


> This is the most excited I have been for general purpose computer performance gains in about a decade.

I think I would be excited if it were not built by Apple. That functionally means it’s only going to be in Apple products at a 300% markup.


No one's stopping any other company from doing it. But as usual, Apple has been the only company (for better or worse) that has been innovating on the desktop for the past decade. Every other manufacturer is content with producing the same milquetoast laptops.


This really flips the argument that Mac hardware is overpriced and underpowered on its head. Now Apple computers are a premium product from a performance perspective, as well.


Shouldn't we wait for non synthetic benchmarks to be performed by third parties running real applications?

We could even compare some cross platform apps across both OS and cpu and see how the total package performs.


Judging by upward trajectory of their processors’ performance (https://images.anandtech.com/doci/16226/perf-trajectory.png) and their dominance in mobile, it seems like only a matter of time.


I've personally found Geekbench results representative of real-world workloads, and they track more traditional benchmarks like SPECint reasonably well.


AnandTech has done SPECint on the A14 and the result is consistent with what we've seen from the M1 on Geekbench. It's not the same CPU, but they share the same Firestorm/Icestorm cores.

https://www.anandtech.com/show/16226/apple-silicon-m1-a14-de...


Fantastic. Since many pieces of kit have been released to the public for which both Geekbench and more traditional measurements of actual performance exist, can you point out some instances where real benchmarks are well correlated with differences in Geekbench score?


But now they are locking down their software hard. So there is really no free lunch.


Someone here posted when the M1 powered Macs launched that they're essentially ASICs now and that made me do a double take. Since you cannot change the operating systems anymore, aren't these essentially ASICs?


Not even close. An ASIC is a program on a chip. These are still general purpose, programmable computing machines.


An FPGA is a program on a chip. An ASIC is literally just a chip. Calling the M1 an ASIC is insulting to the Apple hardware team.


Yes, the M1 is not just a chip, it's a magical piece of the rib of Steve Jobs himself. I agree, though, that it's not an ASIC, even though it has ASIC-like blocks inside of it, just like any other modern processor.

It is a much more complete SoC than other processors, which makes its performance even more impressive if these indications hold up. I am still very skeptical; nothing comes for free and the real world is a b


ASIC stands for application-specific, and since this IC is application specific - used only in certain hardware along with certain software - it's kind of correct to call it an ASIC.


I am unsure, since the application here is general purpose computing.


In what respect is Apple “locking down their software hard” with respect to these new Macs?


It's likely the end of the days of installing an alternative OS. Boot Camp support has been dropped and Linux support will likely not exist.

And then there is the OS, which is getting more and more locked down so that you cannot run unsigned software without increasingly difficult workarounds.

Alternative OS support on MacBooks has gotten worse and worse over the last few years, but it is still sad to see the final nail in the coffin.


As a long-time Mac user, the last few OS updates forced me to Google some permission error or other just to run simple tools like emacs. They know they can't control the web and web apps, but they are making moves to lock down apps that run on their OS.


> In what respect is Apple “locking down their software hard” with respect to these new Macs?

Locked bootloader only booting stuff signed by Apple.

So these CPUs can only be used to run MacOS, no Linux or other alternate open platforms.


Where are you people getting this from? This is not accurate. You can in fact downgrade security from recovery OS.

ARM doesn't have a generic platform like PC but I'm sure someone will figure out how the device tree works if they haven't already.


ARM does have a generic platform like the PC, it's the BSA (https://developer.arm.com/documentation/den0094/a).


LOL, I should have known someone would try to "well akshually" me.

The BSA/SBSA is relatively new as far as I'm aware. The server version was released in 2014, the same year as the iPhone 6, which was already using Apple SoCs.

I don't know when the client version was released but fairly recently AFAIK. I don't know of any systems shipping based on it.

Most ARM systems are using device trees and their own custom slate of devices.

So I should amend my comment I suppose: no one is using any kind of "Standard ARM PC" definition in any quantity, and I'm not sure we should bring over UEFI or ACPI when device trees have been working well so far.

Nevertheless as I noted I'm sure enterprising hackers will figure out how to do it. If you downgrade security the SEP will sign whatever "kernel blob" you like and the system will load and jump to it at boot. Technically that isn't even required - a kext could pause all CPUs, set the correct registers, and jump to a completely different OS if you were really determined.


Apple is moving to basically ban non-App Store installs of Mac software. It's been in progress for the last few years, but they are in the final stages of turning Macs into iPhones.

Gotta take their 30% cut of everyone's revenue.


In the release event they prominently featured Cabel Sasser from Panic and an app which is not in the App Store.

https://twitter.com/cabel/status/1326271980081876992


Probably not, since there are many crucial pro-level applications related to image editing, modelling and animation, video and audio production that realistically wouldn't be on the App Store (well, apart from Apple's own products).


This is salient, and almost upsetting frankly, as I (and others?) have been looking for a 'way off' the platform after years of grievance. This might just be good enough to keep their core platform value in place. It's a shrewd move on their part; it's been a while since we've seen this level of core innovation in their non-iOS offerings.


It’s a huge problem for open computing to be sure.

This isn’t risk free, issues having to do with supply and process and frankly geopolitics can cause problems. But it looks like they’re off to a good start.


I'd expect their performance features, such as the wide execution and L2 etc, to be copied by amd/intel within a couple of years.


Same feeling here. It seems that we'll soon have to choose between freedom on one side and performance on the other (in addition to good UX, which is today's tradeoff).

I was hoping for a good iOS open-source replacement; I guess now I'll soon have to wait for a good laptop competitor as well.

I feel like I'm stuck in a jail made out of gold.


Not quite. You can't synthetically compare one chip to another and draw organic conclusions. Apple computers with M1 don't support the software that I use. That's why I bought a fully spec'd Intel MBP13 today. All this talk about battery life and benchmarks gets flipped on its head when I can't use the product in the real world.


Isn't there Rosetta that they talked about?


It's an emulator, not a panacea.

"Rosetta is meant to ease the transition to Apple silicon, giving you time to create a universal binary for your app. It is not a substitute for creating a native version of your app.”

https://developer.apple.com/documentation/apple_silicon/abou...


It's a stop gap yes, but I am replying to someone that said their apps will not run on these new machines.

And check my reply to a sibling comment where people are finding cases where Rosetta is faster than native.


Actually, you're replying to me. Check the usernames.

You can't run Windows on these things, and Rosetta 2 doesn't fully support kexts, VMs, or certain instruction sets. It's a translator and it's going to be imperfect in practice. That's why it's not intended to supplant development with native instructions.

Your other comment is a tweet regarding one function that is speculatively faster, but tells me nothing about real-world performance -- nor whether the tools I use for my business are going to be supported by Apple Silicon in the next few months.


And with Rosetta it is certainly slower than running it natively on x86.



Does it? The article is comparing Apple products, instead of comparing an Apple product to an equivalent performer from a competitor.

It's not really Apples to Apples (even if it is in name), so to speak.


It's comparing Apple Silicon to Intel silicon (in an Apple product), which is apples to oranges to me.


It's not though. The 16-inch has known performance bottlenecks on both thermals and power draw.


Source please? Honestly curious, I'm on one now.


Tell me more, I'm dying to go to the Apple Store and get this replaced.


Start Adobe Premiere, open any significantly complex scene and render it. The system will throttle down to 1.2GHz, sometimes even 1GHz.

The machine I'm testing on is the 2019 Core i9 2.3GHz. The Apple "Genius" Bar tells me this is expected, no replacement under warranty. Maybe the 2020 is better, but after getting this as an answer I won't be buying another Apple MacBook.

The prior 2018 Core i9 MacBook Pro had known issues with insufficient power supply to the CPU core (this might be fixed, but I can't even sustain it due to thermal throttling, so I can't really tell).


That's disputed. But when they get around to updating that 16-inch next year, holy smokes.


So performant premium that I’m thinking of calling Apple about my MBP preorder and switching it to the Air.

Last thing I need is a MacBook Air equivalent with an unnecessarily loud and annoying fan.


Hang on, you're swapping an AS MBP for an AS MBA because you think the fan noise will be an issue with the MBP?

If anything, it's a bonus to have the fan so you can have prolonged boost performance, while the MBA chip will throttle under continuous load.


Any MBP with Fantel chips (brand new, no dust) is so quick to just gun it with the fan noise - Slack, Chrome, Electron apps, a second monitor.

I think I would rather take a small performance hit and some heat than have Apple quick to pull the trigger with a fan blasting noise as I’m trying to focus (if they yield comparable stats).

Will wait for more info. If this chip is really that much more efficient, hopefully we are back to the good old days where the MBP fan is tolerable. Otherwise I’m all in on the Air.


I think the fan noise you experience is not Apple's fault, but Intel's fault.


Well, my very powerful laptop barely ever makes any noise. Most of the time it is silent until I am compiling something. It is of course Apple's fault, as their cooling is suboptimal.


Is Apple's cooling suboptimal or is Intel's TDP suboptimal?

"Make it thicker and heavier" is apparently not the answer that Apple was looking for from Intel.

Thin and light Wintel PCs are known for having a lot of thermal issues too.


I would. It’s the same chip in every device. Plus, having owned one of each, the chassis of the Air cannot be beat.


An air with 2 plugs on each side would beat it. Bonus points if MagSafe returned.


I don’t miss MagSafe. A good external monitor will charge the Mac and serve as a usb/thunderbolt hub.


Waiting for some more information. Fan noise on any MBP is hell and out of control. Hoping this brings things back to normal.

So happy Apple is dumping Fantel chips.


Yeah, they only put the fan in the MBP to annoy people, of course. It has zero use. Go order the MacBook Air and have fun. Maybe you can fry your breakfast eggs on it too.


From my perspective, the underpowered part has always been the graphics card, though.


> Now Apple computers [...]

Apple no longer sells computers. You can rent some shiny gizmo from them to run software of their choosing provided by people they deign to allow on their platform and in a manner they approve of¹. It's not really yours anymore.

¹ "But you can still do X". Well, and last year you could still do W, and the year before that V.


So we trust Geekbench over actual user testing now?


I think it's more that we want information today and we don't have actual user testing today. And we're all guilty of this - why else did we click when the machines are still pre-order.


Wait for it. This all sounds too good to be true. I would love it if it were, and I will be the first to get a Mac, but it doesn't sound true.


They have 5nm chips because they essentially outbid everyone else at this point and bought up TSMC's 5nm production capacity for themselves.

They could do that because they've been selling overpriced products.


> This really flips the argument that Mac hardware is overpriced and underpowered on its head.

This doesn't excuse years of behavior not respecting their customers on (price + performance) divided by bugs.


Customers seem to be pretty happy with Apple in general so I can’t imagine they feel disrespected by the price and performance.


I’ve done this exercise a couple times: pick a random Apple laptop configuration and find an equivalent from another vendor: they’re frequently so close price-wise it doesn’t matter (depends on what you want, but I’m willing to pay $200 or so for macOS and not having to deal with Linux power management.)


There was a period of time where the laptops were not updated for almost 5 years and the price didn't change. No one remembers the past.


I wonder what Linus thinks of Geekbench's accuracy now:

https://www.realworldtech.com/forum/?threadid=136526&curpost...

https://www.realworldtech.com/forum/?threadid=185109&curpost...

Personally I'm not convinced, until I see something like SPECint/SPECfp results.


AnandTech has SPECint numbers for the A14, which should serve as a floor for M1 performance. The M1 will (likely) clock higher, has more cores, and can sustain its thermals longer.

https://www.anandtech.com/show/16226/apple-silicon-m1-a14-de...

I’m also interested in seeing what M1 can do once people get their hands on real hardware running on Mac OS where so many more details can’t be hidden like in iOS, but all signs point to it being an absolute monster.


> We've seen this before: cellphones tend to have simpler libraries that are statically linked, and at least iOS uses a page size that would not be relevant or realistic on a general purpose desktop setup.

This is an interesting thing to say considering that 1. most apps on iOS use large dynamic libraries and 2. Apple silicon Macs run on 16K pages, just like iOS.


The M1 runs with 4k pages, unlike the iPhone chips.


I believe that is only for Rosetta, and A14 shares the capability but does not use it.
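For anyone who wants to verify this once the hardware arrives, here is a minimal sketch (plain Python, nothing Apple-specific) that prints the page size the current process sees; an x86_64 build running under Rosetta 2 and a native arm64 build may well report different values (4096 vs 16384), which would explain the apparent contradiction:

    # Print the page size visible to the current process. Run it once with a
    # native arm64 Python and once with an x86_64 Python under Rosetta 2 to
    # compare -- the specific values mentioned above are an assumption to check.
    import mmap
    import os

    print("mmap.PAGESIZE:", mmap.PAGESIZE)
    print("sysconf(SC_PAGESIZE):", os.sysconf("SC_PAGESIZE"))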


We don't know Big Sur's optimizations; it is possible they used a number of iOS techniques within it.


He'd probably say it's garbage, since all the reasons it is garbage remain true. Getting yet another garbage result hardly changes the argument.


Here is a video of what the other Linus thinks: https://youtu.be/ljApzn9YWmk


This is a pretty good single thread number. It's essentially the same as Zen 3. On the other hand, it's using 5nm rather than 7nm to get there, so 5nm Ryzen is likely to pull ahead in the not too distant future.

The more interesting thing is the power efficiency, which doesn't have that much impact on single thread performance because higher power CPUs don't actually use their entire power budget for a single thread. But that's an impressive multi-threaded score for that TDP. It gets stomped by actual desktop CPUs for the obvious reason, but it has better multi-threaded performance than anything with the same TDP. Though that's also partially because the low-TDP Zen 3 CPUs aren't out yet.

What I'd really like to see is some benchmarks that aren't geekbench.


Geekbench is designed to test CPUs in short bursts so they don't hit thermal throttling (this is by design; it is supposed to be a CPU test, not a cooling test), so I would be wary of the Air's results - at least until other benchmark results are available. The MacBook Pro's results, on the other hand, should be more representative as it has active cooling and can sustain heavy load for longer.


My maxed out 2018 air did 760/1596 on geekbench 5.0.3.

New MBA has a 1719/6967 on gb 5.3.0


GB5 scores for the M1 Mini and M1 MBP are linked in TFA.


I know, and they show the same numbers as the Air's CPU.

I'm just making sure everyone understands that it does not mean the Air will have the same real-life performance as the Pro or Mini. If you compare the top MacBook Air 2020 (Intel)[1] with the lowest MacBook Pro 2020 (Intel)[2], their results are almost identical, but their real-life performance was not even close - the Air starts to throttle after just a few minutes of work. At the moment there is no reason to believe that Apple's CPU will behave differently (after all, the fan is in the Pro and Mini for a reason).

[1] https://browser.geekbench.com/v5/cpu/4652268 [2] https://browser.geekbench.com/v5/cpu/4653210


GB5 scores are not comparable across different CPU arch. The scale is different.


Kind of remarkable for a laptop without a fan. (Infamous fan-hater Steve Jobs would be proud.)

Power/thermal management looks very good - low heat and long battery life without sacrificing performance.

Presumably the Air will have to throttle performance for some workloads, but not in this benchmark apparently.


It’s one run and it runs for a couple of minutes, so it’s not going to tell you much about thermal management.


It is honestly crazy how many times Jobs was able to force his personal choices into the designs of Apple products, and then force the entire industry to follow suit thanks to the influence of Apple. And I am not even mad; he was correct (in hindsight) multiple times - styluses, Flash, and so on. Even in death, he may be proven right about his dislike of fans.


Was Steve Jobs an infamous fan-hater? What did he do/say?



Yes, Apple IIs and early Macs had no fans because Jobs wanted them to be silent.


He was right in many of his extreme tastes, a fanless computer is so much nicer to work with. Many laptops are silent for common tasks now.


Yeah, he wasn't a fan.


If you still own any Intel stock it is probably a good time to dump it. Not only can they not compete with AMD, Apple has now started running circles around them all.

I wonder what the world is going to be like when companies own entire stack including all hardware (even things like cameras and displays) and applications (including app stores).

There is going to be no competition as any new player would have to first join an existing stack that keeps tight grip and ensures competition is killed off before gaining momentum.

So, basically, dystopian future with whole world divided into company territories.


>If you still own any Intel stock it is probably good time to dump it.

Until Intel stops making huge amounts of money, I'm not sure, as a company, they're in huge trouble. Apple isn't going to be selling the M1 to other companies and other companies have proven they don't have the mettle to spend what Apple is spending to make chips like this. Really, AMD should be a little worried since they have some counterparty risk getting all of their chips made by TSMC. At least Intel controls their own production.


> Really, AMD should be a little worried since they have some counterparty risk getting all of their chips made by TSMC. At least Intel controls their own production.

Chip fab is probably a short term risk but not a medium long term one, as the situation with Huawei has made it very clear to world governments that having competitive chip fab capabilities is incredibly important from a geopolitical power point of view. I would expect to see significant investment by various governments in the next ten years into addressing it.

(If you have a proven skillset in that area and no particular ties to the country you live in I would look into buying a yacht.)


Intel may or may not be controlling their production in the future; although it’s claimed to be for the short term, they’ve also started using TSMC.

source: https://finance.yahoo.com/news/intel-now-ordering-chips-tsmc...


Stock value is a view on the future of the company, not its current financial situation, relative to what the market already thinks that future is going to be.

By buying stock you say: I believe the company is going to do better than the market thinks it will.


Stock value is a view on whatever people want to think it is. If you look at pure financials, INTC is very undervalued. It's a dividend stock, it's trading at a PE of 9. Do we really think that Intel has lower growth potential than Exxon or AT&T?


And the market currently has Intel at a price/earnings ratio of 9 - pretty much the lowest of any tech company. If you have any semblance of hope for Intel, it's a solid buy.


> 'There is going to be no competition'

this is what some of the intricate systems related to technological progress and markets we're dealing with converge on, yes.

In these terms, I'd rather look at this with a different perspective: the platforms are becoming more mature.

Think of it as an organism; there is nothing natural about outsourcing the control flow of your own components. My intuition for natural design would be something along the lines of microservices (like our biological organs), organically defining the greater entity they themselves form (the company).

Apple is just one illustration of systems getting out of control. Think of international politics, or edgecases in financial markets. Our systems didn't escape us, because they were never truly under control.

Therefore, this isn't dystopian. It is dystopian from a reference frame within which you claim to have control, but you don't, and you never had. To exercise any control over technological and economic progress is _way_ beyond our individual scope, by a margin of at least one level of abstraction.

Instead, this is a transition from one system (one with decentralized and autonomous components (subcontractors et al)) to the next (one with organic components).

Within our current game, that feels dystopian, but that won't matter, because it won't be the same game anymore. Iteratively, that is.


The industry is not moving _this fast_. Intel still has many-many opportunities (and years) to catch up.

> If you still own any Intel stock it is probably good time to dump it.

The market already has the information you have, so it is unlikely that your evaluation of Intel's expected future profits is significantly better/more insightful that others'.


> The market already has the information you have, so it is unlikely that your evaluation of Intel's expected future profits is significantly better/more insightful that others'

This is religion


I am not saying the market is perfect, etc., just that looking at benchmarks and deciding that the stock must be overvalued because the benchmarks don't look good is an oversimplification.


"According to the analysts at Geekbench, Intel is doomed."


Except there's actual math to back it up. Doesn't mean it's true, but there's an argument other than faith.


Outside HN-like bubble circles, very few companies actually bother to buy AMD, ARM is no match for servers, and Apple is not going to sell their chips to third parties.

Intel can still make as many mistakes as they feel like, and AMD had better hold on to their game console deals.


This simply isn't true. I just spun up a new instance on Oracle cloud, and it came with AMD CPUs. If AMD is selling to cloud providers Intel is fucked.


I specifically stated outside the HN bubble, and AMD is worthless without access to Intel patents.


Oracle cloud is part of the HN bubble?

From what I gather AWS also offers AMD and ARM-based instances.

I don't see how it benefits the cloud providers not to offer these choices. No one benefits from a monopoly and they want to drive the cost of their services down.


I've been holding onto my late 2013 15" MacBook Pro for almost 7 years now and it's wild to see it get absolutely out-spec'd by the MacBook Air. It's been a long time, but I've always convinced myself that I'm not missing out on that big of a processor improvement. I do a lot of audio work with it, but even then it doesn't always feel like a slow experience.


My favorite model, the keyboard is heavenly.


I really do love it. I'm gonna miss it when I eventually have to retire it. It's been rock solid this entire time.


That MacBook has to be one of the best computers Apple has ever created. Mine has lasted this long with non-stop daily usage for years and travel around different countries.


Throwing in a comment in this thread to say the same. Incredible how I've used mine for 7 years and it seems new still. The only issue is that 8 gigs of RAM makes things like Docker or IDEs difficult, but I get hesitant to buy a replacement because of the year-to-year changes. I mean, why buy a new one if you have a machine that's been going strong for so long?


Yeah I’ve had two new models since but my 2013 keeps on ticking. Never had any issues and still running just fine. Night and day compared to the piece of garbage 2017 model I got.


Your 2013 would be at a huge performance disadvantage to the 2020 Intel Air and Pro.

Those are supposedly at just as much of a disadvantage compared to the new Apple Silicon Air and Pro.

That's where the shock comes in.


Can also vouch for this. I've been working with a late 2012 Retina MacBook Pro and it's been an indestructible workhorse for me.


Same here. My friends don't believe it when we work together at a cafe and my battery lasts about as long as their brand-new MBP's.

My only problem is the smears on the screen; otherwise I think I'll keep it for another 7 years.


Can somebody explain how the M1 is going to handle larger GPU tasks (rendering, encoding, etc.) with its memory bandwidth? The M1's total memory bandwidth is 68.2 GB/s (128-bit LPDDR4X-4267).

The AMD Radeon Pro 5600M's memory bandwidth is 394.2 GB/s (2048-bit HBM2).

https://www.techpowerup.com/gpu-specs/radeon-pro-5600m.c3612


It's not clear, but here's some knowledgeable speculation:

- All M1 models only have 1 Thunderbolt Controller, thus can only handle 2 Thunderbolt ports on all announced models so far.

- All M1 models only support 1 monitor, but up to 6K.

- No M1 model supports >16GB of RAM or 10Gb Ethernet.

All of the above seem like bandwidth limitations to save cost that a future "M1X" or "M2" or "X1" would be extremely likely to fix, and that's where you'll see the bandwidth increase.


The Mac Mini supports two monitors, one 6K and one 4K.

https://www.apple.com/mac-mini/specs/


If you count the display in the MacBook, that's the same number of screens.


So if the Macbook is closed can you drive two?


Theoretically yes (the M1 Mac Mini supports two external) but it's unknown whether the capability to disable the built in display is actually exposed.

You can disable the built in display on (some?) Intel MBPs via `sudo nvram boot-args="niog=1"` according to another poster. Whether this is supported on M1's remains to be seen.


> - All M1 models only support 1 monitor, but up to 6K.

Where did you find this? On the MacBook Pro 13 landing page I see the specs per Thunderbolt port, which state the display capabilities of each without restricting anything to just one monitor.

So I would expect one monitor per Thunderbolt port, leading to a total of 3 displays for the Pro ARM model: one internal and two external displays. I assume the restriction you refer to just applies to the Air model.


In the tech specs page, under "Video Support":

> Simultaneously supports full native resolution on the built-in display at millions of colors and:

>

> One external display with up to 6K resolution at 60Hz

Source: https://www.apple.com/macbook-pro-13/specs/


Ok thanks and this is really quite a bummer.


Is there a legitimate laptop purpose that requires 10GB internet? If so, how niche is that purpose?


More the Mac mini, which had built in 10G as an option in prior generations.

For laptops, you can use an external dongle. It has its uses, like for a NAS on a 10G network for video or other data-intense work.


I have about 20TB of data on several external hard disks / NAS boxes / machines. Due to some disk failures I had to move that data, and believe me, I wish I had 10Gb Ethernet so that transfers don't take literally days.
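The back-of-the-envelope numbers bear that out (a rough sketch ignoring protocol overhead, disk speed and everything else):

    # Rough transfer-time estimate for moving ~20TB over 1GbE vs 10GbE,
    # ignoring protocol overhead, disk speed, etc.
    data_bits = 20e12 * 8            # 20 TB in bits

    for link_bps in (1e9, 10e9):     # 1 Gbit/s and 10 Gbit/s links
        hours = data_bits / link_bps / 3600
        print(f"{link_bps / 1e9:.0f} GbE: ~{hours:.1f} h ({hours / 24:.1f} days)")

Roughly 44 hours (almost two days) at 1GbE, versus under 5 hours at 10GbE.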


Well, the older Mini that the M1 Mini replaces has 10Gb Ethernet. That being said, there's no modern MacBook that has 10Gb Ethernet anyway. Can't that port be added via a Thunderbolt 3 / USB 4 dongle?


It makes a huge difference for networked storage. For some rough equivalency (ignoring seek times, but also ignoring tcp overheads etc, total spherical cow territory)

- An 8x cdrom narrowly beats 10meg ethernet.

- 1x dvdrom narrowly beats 100meg ethernet.

- ATA133 narrowly beats 1gbit ethernet.

Original SATA is 1.5gbit, so 1Gbit ether bottlenecks us to 1999 storage speeds.


Ethernet ≠ Internet


You can buy an entire M1-based Mac Mini for the price of the 5600M upgrade over a base MacBook Pro (the 5600M is a $700 upgrade over the base configuration).

It's an apples and oranges comparison. The M1 GPU is an integrated solution targeted at the cheapest Macs. The 5600M is a discrete GPU targeted at $4000+ top-of-the-line Macs.

There's no reason that higher-end Apple Silicon machines can't also use discrete GPUs like the 5600M (after development, of course).


They were waving their hands around a bit saying that in addition to on-package RAM with high bandwidth, they have done optimizations inside driver, OS and APIs to eliminate 1 or even 2 or more RAM to RAM copies for common tasks. So maybe there is a higher effective bandwidth when comparing to Wintel machines. It was very vague. Would love to read a detailed article comparing the systems.


Intel and AMD theoretically also support zero-copy but I wonder if it gets used in reality since most apps are probably optimized for discrete GPUs.


68.2GB/s is a ton of bandwidth.

Actually, very, very few tasks on a desktop can stress a CPU enough to saturate that bandwidth on a single task.

Proper HPC programs written to have absolutely zero cache misses can. A JavaScript app eating 1GB of RAM to show some text and pictures cannot.
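For a sense of scale, a rough sketch of what a genuinely heavy streaming workload (uncompressed 4K 60fps video, 8-bit RGBA assumed) asks of that 68.2 GB/s figure:

    # Back-of-the-envelope: bandwidth needed to stream uncompressed 4K60 video
    # (assuming 8-bit RGBA frames) versus the M1's 68.2 GB/s memory interface.
    width, height = 3840, 2160
    bytes_per_pixel = 4
    fps = 60

    stream_gbs = width * height * bytes_per_pixel * fps / 1e9
    print(f"uncompressed 4K60: {stream_gbs:.1f} GB/s "
          f"({stream_gbs / 68.2:.1%} of 68.2 GB/s)")

That works out to about 2 GB/s, i.e. a few percent of the available bandwidth; it takes something like a GPU chewing on large textures or a tuned HPC kernel to actually saturate it.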


I asked about GPU workloads; the bandwidths are 394.2 GB/s vs 68.2 GB/s (5.78 times).

And even in raw GPU processing power, the AMD Radeon Pro 5600M is 5.274 TFLOPS vs the M1's Apple-rated 2.6 TFLOPS.

It's a downgrade in GPU processing power for the user.


Yes, but for computational, non gaming tasks, it barely is.


The M1 will work as well as Tiger Lake or Renoir which have the same bandwidth and acceptable (not great) GPU performance.


I am really looking forward to where this goes.

That said, I am definitely waiting for at least one generation to pass before I jump on the train.

I’ve been through this before. It will be great, but Apple is a master at smoke & mirrors. Things will not go as smoothly as the sizzle reels make it seem.


I think this will be their smoothest transition by far. Everything is compiled by the same tool chain with much higher level frameworks.


I think you’re probably right, but it’s still a massive change. Basically, repaving the highway while there are cars driving on it. Apple has done this a few times before (but we don’t talk about Copland in polite company), so I know it will end well.

I think their new architecture will be awesome. Personally, I look forward to being able to debug iOS Bluetooth apps in the simulator, and I think some of the new form factors will be pretty cool.

But I can’t help but notice the current dearth of AAA apps that are already universal.

A transition like this is a big deal; especially if you have an app with thousands of function points, as it requires a complete, top-to-bottom re-test of every one.

In the unlikely case that you don’t find issues, it will still take a long time. Also, most companies won’t bet the farm on prerelease hardware, instead using it to shake out issues; they will still need to test against the release hardware before signing off for general distribution.

Also, I have a couple of eGPUs that I use. I don’t think the new architecture plays well with them.

I’m hoping that the next gen will obviate the need for them. They are a pain.


I'm hoping that at least the basics will be there (POSIX shell, vim, a browser, etc.); I won't need anything else for a long, long time.


Like the grandparent comment, I would avoid this M1 Macbook since the developer ecosystem is still unsupported by it. Take this for example: https://github.com/docker/for-mac/issues/4733

Docker still doesn't run on Apple Silicon Macs, so the migration path is already disrupted here.


You say “still” as if Apple silicon Macs are more than a day old.


We can include the A12Z developer transition kit, which is an 'Apple Silicon Mac' that has been available since July.

The problem here is that at WWDC, Apple showed Docker Desktop running on an Apple Silicon system (suggesting that it at least runs), and here we are in November and it is still not running, nor is it known when Apple Silicon will be supported.

I don't see any patches in the Docker repositories for such support, so maybe Apple has a private fork for Apple Silicon. In any case it is not available to us and we don't know when it will be.


Developers were able to start working on Apple Silicon software several months ago.

https://www.apple.com/newsroom/2020/06/apple-announces-mac-t...


The DTK doesn't support virtualization. The final product might.


Docker and Boot Camp are about the only things that don't work, for fairly obvious reasons.


Personally, I feel like Docker is an intermediate product in the evolution between servers and cloud, and it's been massively over-hyped.

Many use cases, especially when it comes to microservices, can almost always be better served by something else.


Not everyone needs to run Docker.


Citation needed.


All of those are there.


Dell Inc. XPS 15 9575 vs MacBookAir10,1

https://browser.geekbench.com/v5/cpu/compare/4654605?baselin...

Opinion: pretty impressive story.


The Intel chip in that comparison was released in Q1 2018, almost three years ago now. How is this a helpful comparison?

https://ark.intel.com/content/www/us/en/ark/products/130411/...


The 2019 16” MBP (the one many pros will spring for) is still running the 9th gen intel. So a lot of people will either still have these chips, or will still be considering them in an upgrade.


This seems more relevant to me, 2019 16" MBP vs 2020 13" MBA https://browser.geekbench.com/v5/cpu/compare/4651583?baselin.... It is quite close! Not sure how significant it is that these tests were done on different versions of geekbench, so maybe caveat?


Not sure if this is in any way a useful comparison, but it's what I currently use and am looking to replace:

Mid-2015 15" MacBook Pro vs MacBookAir10,1 https://browser.geekbench.com/v5/cpu/compare/4643216?baselin...


This is definitely compelling. I’m still going to try holding out until the Apple Silicon 16”, but my computer is more and more reminding me how long ago 2015 was.


In the same boat. My 2015 is great but for the fact I do a lot of video and photo editing these days, and those apps push this machine to its limits. My fan runs at the highest speed all day.


A better comparison would be the new XPS 13 which beats it at single core but only has 4 cores to MacBookAir10's 8 https://browser.geekbench.com/v5/cpu/compare/4309589?baselin...


The M1 is a 4+4 big.little core configuration. It only has 4 fast cores, not 8. In a multi-threaded workload those little cores will still help, sure, but not nearly as much.

Hence why in that multi-core result the 4c Intel is way closer to the 8c M1 than it "should" be.


The cache comparison is especially striking. Wow.

I’d love to see one of these go head to head compiling a large project etc.


Note that this laptop uses Kaby Lake, which is based on the older Skylake design. Newer laptops use Ice Lake chips with the newer Sunny Cove cores and have improved single-core performance.


The MacBook has 8 cores vs 4 (hyperthreaded) in the Dell.


M1 doubled it on single core performance.


Single core scores are indicative too


Not really. It's 8 big/small cores, so only 4 are "fast" ones

Which makes it even more impressive.


MacBook Air with M1 vs. XPS 13 with Tiger Lake i7-1165G7:

https://browser.geekbench.com/v5/cpu/compare/4306696?baselin...

Single core similar and multicore 50% better for MacBook.


And the Dell is $500 more


To think I lived to see the day when you buy a macbook for value


If you were alive a decade ago, you already have. When the SSD transition started the MacBook Air was a surprisingly good value with a better quality SSD than almost any of the PC options at any price (integrated controller versus the overhead of a separate chip) and most of the sub-$1k competition had spinning metal.


Meanwhile they kept spinning metal on the iMac for a decade.


I am an anti-fanboy - I haven't bought Apple for the last 6 years - but I couldn't resist its value now. Sounds like a good time. Buying the M1 today is quite a good timing strategy, guesstimating that an M2 (or whatever the name) with 32GB+ will come in the next 2 years or so.


Wait for real-world reviews. The indications here are that Apple will catch up and offer good price/performance, if not better, but the jury is still out.


The XPS's CPU only has 4 cores to the Air's 8. Dell would be wise to ditch Intel for AMD so that they're at least on TSMC 7nm silicon, closer to the 5nm process Apple is using. Intel is in trouble.


Careful comparing core count, half of the Air's cores are power efficiency cores rather than performance.


So... I'd be feeling pretty silly right now if I bought the Mac Pro in 2019 for like $7,000. (Which I almost did!)

https://browser.geekbench.com/macs/457

M1 is comparable to baseline Mac Pro on multicore performance and better on single core performance. And several thousand dollars cheaper (and smaller).


There's still a lot to be said for software support, IO, RAM, etc.


Yeah, the Mac Pro supports up to 1.5TB of DRAM (vs. 16GB), it's got a bunch of slots and discrete GPUs, and it can run Windows and Windows VMs. I hope it doesn't end up as an orphan system like the 2013 "trash can" Mac Pro, which was an interesting system that got zero upgrades. Perhaps Apple will offer Apple Silicon upgrades to Mac Pro buyers.

Assuming they can build it (and they have implied that they can scale their silicon designs up in terms of cores, power, and clock rate), an Apple Silicon Mac Pro will be a pretty interesting machine.

If they wanted to, Apple could even bring back an Apple Silicon powered Xserve, or the legendary, mythical, modular desktop Mac (I know, now we're in the realm of pure fantasy, but one can dream.)


> Apple could even bring back an Apple Silicon powered Xserve

Given their performance/watt, this sounds like it could be potentially game changing.


It's clear to me that they will.

Amazon, Google, Microsoft each have a cloud offering.

The M1 is just what Apple needed to compete there.


I don’t know if the wider audiences would invest heavily enough in servers given Apple’s graveyard of products in that space.


Amazon AWS is already doing this with their own ARM cpus.


Yes, but Apple's ARM implementations seem to be better than ARM's.

The Gravitons are based on Cortex-A76 aren't they? Don't phones with that architecture benchmark similar to an Apple A10?


Also GPU: https://browser.geekbench.com/v5/compute/compare/1799092?bas...

The Vega II is even faster (but quite a bit more expensive).


There are very few reasons to buy the Mac Pro with the lowest CPU option. This here is the comparison to make and it crushes the M1's multithreaded score of 7,000 with a score of almost 19,000:

https://browser.geekbench.com/macs/mac-pro-late-2019-intel-x...


I suspect an Apple Silicon equivalent to Ampere’s Altra (which can go to 80-core today per socket and 128-core soon) would absolutely devastate these Geekbench scores on a tricked out Mac Pro.

If you want to make an apples to oranges comparison that’ll really be the one to make.


Great point. It also makes me wonder about how long the NVidia deep learning advantage will last.


If the Geekbench scores are to be believed, the M1 in the MacBook Air scores 7,500 multithreaded. So yeah, the 28-core Mac Pro is quite a bit faster (and can sustain that performance), but the performance of the Air looks extremely good.


It does look very promising. One can only imagine what future Mac Pros will be capable of. And thanks for the correction.


I'm curious, for Windows users, what can we hope for from ARM/AMD? It's kind of depressing seeing how well Apple and their ecosystem are coming along. Being able to run apps natively on your laptop sounds amazing. Android and Windows are far away from that level of cohesion. There's Phone Link, but running apps through it is clunky and slow.


I think there is also a moral angle that disturbs Windows fans to this as well: This is validation that Apple's heavy-handed walled-garden approach can, in fact, work and even sometimes beat an open ecosystem. That's freaky.


You’re kidding me. Are you seriously happy with the level of innovation coming out of x86? It’s a decades old pile of glue and skeletons. Intel has all but given up. AMD is out innovating Intel but at the end of the day the platform as a whole is aging and comes with a lot of baggage, spectre/meltdown, etc...

You can certainly choose to view this as a step towards bolstering the walled garden... but my money is on Apple wanting to innovate and knowing that they can. Vertical integration does a lot for efficiency, cost, etc... as well and not just keeping people in your walled garden.

People forget: Apple has been shipping their own silicon for years. If you had stellar cpus you could make in house, wouldn’t you rather use them instead?


Spectre affected Apple's chips just like pretty much everyone else, it isn't x86 "baggage".


Not really. Apple is one of the few companies that invests in this kind of cohesive experience. It didn’t come out of the blue. Doubt Windows does much in this regard given their atrocious UX


A lot of this dates back to the nineties. Companies like Dell, HP and Lenovo were happy to suck on the WinTel teat. Now it has mastitis they are going to suffer.


It says something, but for the first time in about a decade, I actually want to get an iPhone and ditch Android. This is coming from someone who used to rock all the Lumia phones. The last straws for me are USB-C and no notch. The next iPhone that has these updates, I'll end up purchasing. I'll likely miss the integration of Windows Phone Link, but I can live without that.


Next iPhone probably no charge port at all.

I was hoping for USB-C too, but I bought the 12 Pro anyway.


I think nobody has commented on this, but Apple has just made the Mac lineup unrepairable. They will need really high, almost perfect quality control.


Nothing they did this week made it more unrepairable than it already was. The newest MacBooks have had CPUs and memory soldered onto the motherboard for quite some time. And this doesn't make them unrepairable, because most motherboard issues don't come from a faulty CPU or memory, but from corrosion or other damage to the smaller discrete elements like resistors, capacitors or smaller chips, which can easily be replaced if you know what you're doing. It's not something that you can do at home without equipment, but there are a lot of repair shops that can repair even the newest MacBooks with a high success rate.


> but there are a lot of repair shops that can repair even the newest Macbooks with high success rate.

That is my point. Now EVERYTHING is in the M1 chip. Nobody will be able to fix it, not even Apple.


They barely fix anything on current gen hardware already. Pretty much any failure will result in them replacing the entire logic board. Seems to have been part of their business model for years now.


What's actually more interesting to me than Intel vs ARM is how much further ahead Apple is vs the competition. The snapdragon 8cx is about 1/3rd the speed in both Single and Multicore performance of the M1.

How is Apple so far ahead even with the same instruction set?


ARM laptops have been trapped in a vicious cycle of low performance -> low sales -> low R&D -> low performance. The A78C and X1 cores should help a lot but they still won't match Apple Silicon.

Apple is ahead because they have more money than Arm, Qualcomm, Samsung, etc. combined.


How big one's pockets are isn't usually the driving factor in who does best.

This feels like once again Apple is 5 years ahead of the competition, just like when the iPhone came out.


It might not be the usual driving factor but deep pockets are a powerful tool in the hands of people who know what they are doing, and friction for those who don’t have them.


They have incubated this chip arch in their iPhones for five years!


QCom focuses their capital on handsets. They could likely compete with Apple design, but they wouldn’t have a market for years. Also, they’d be competing as a commodity against Intel, AMD and AMLogic. In the semi industry, it’s not a good bet to invest only to become a 2nd source that eats into everyone’s margin.

Apple is a vertical. They own their market, so the investment has more predictable returns.


> How is Apple so far ahead even with the same instruction set?

Single customer. Apple can optimize directly for iOS workload and consider nothing else.

Intel sells into the general market and has to hit sometimes conflicting goals so that Dell, HP, Lenovo, etc. will all buy their chip.


Probably has something to do with the 5nm process.


I did some more research after commenting and found that while surely the 5nm transistors help this processor to run cool and stable, it's the packaging that really brings the speed. Everything being integrated right there means less latency, which likely means less need for caching, less stuff stuck waiting in RAM, etc.

I'm not a fan of Apple's domineering business strategies, but this SoC is impressive. I have to imagine AMD and Intel will follow up with something similar (a tightly integrated SoC aimed at higher performance applications).


Qualcomm and Samsung have access to 5nm.


It’s what happens when a trillion dollar company does vertical integration the right way instead of dicking around.

Intel, Qualcomm, Microsoft - all have to build products that work for the lowest common denominator. Loss of focus is a major problem.

Apple has a handful of products. One OS. One developer platform.

This kind of agility is extremely powerful. They can switch fabs whenever it makes sense. They can switch ISAs whenever it makes sense.

Contrast with Microsoft, that has to support so many hardware platforms. They’re not helping themselves with so many software frameworks - Win32, WinRT, .NET, MFC, WinJs? I’ve lost count.

Intel is handicapped, stuck on their own process nodes.

Qualcomm, while they’ve effectively captured the mobile SOC market, they too have the same problem. They can’t control what handset makers do. So they can only go so far.

Apple can make a single CPU core and mix and match that with variations. Things get a lot easier if you just have to deal with yourself e2e - even as far as retail sales.


How certain can one be that the M1 uses nothing but the ARM instruction set? They certainly aren't advertising it as ARM. They aren't selling it to other OEMs. One way to really put the kibosh on Hackintoshes would be to make macOS use nonstandard instructions. What's to keep them from adding instructions?


Nothing, and the chip does have a handful of proprietary instructions. But for the most part it's just standard ARMv8.5, which you can see because it runs ARM binaries generated using a standard compiler toolchain.


Any thoughts and experience around using the current (slow) Mac Mini as a web and Java dev box?

And any thoughts and expectations around using the (fast?) M1 Mac Mini for web and Java dev?

I'm right now getting frustrated with my company's lead times ordering a new HP ZBook laptop. (It's not like I particularly want the ZBook, it's just that's what devs get here.)

But I'm a remote home worker, and I'm thinking "M1 Mac Mini available next week. It's like 1/3rd the cost of a high-end laptop, probably outperforms it, and is silent...?"


Been thinking about this:

- openjdk is being ported to the new hardware but it's not there yet. I'm not sure if it works with the emulator; or how well.

- if you use anything by Jetbrains, like I do, you'll likely have to wait for them to use that and package up a new release. It's a resource hog as it is and emulation is not going to improve things probably; assuming that works at all.

- things like android studio that rely on virtualization for running simulated phones will likely need upgrades too; but since the emulated hardware is arm that should be doable.

- things like Docker, which most server-side developers like me rely on, won't work until Docker releases an ARM version. Then the next problem is going to be that the vast majority of Docker images out there are x86.

- If you are a web developer and want to test on something else than safari, you'll have to wait or accept worse performance of chrome, firefox, edge, etc. in emulated mode.

- pro hardware with >64GB: 16GB is a non-starter for me; I upgraded to that in 2014. I'd frankly want that much dedicated to GPU memory in a modern setup. If you are a pro video or graphics user, I imagine those are not strange requirements. If your workflow involves third-party tools and plugins that are x86, I'd wait a couple of years before even considering it. Long term, this could be great; short term it's going to be rough depending on what you use.

- Also nice would be for things like OpenCL (Darktable) and games to work. I have a few Steam games and X-Plane. I expect that Steam on the Mac is going to be decimated once more. With the last release, 32-bit games stopped working, and essentially all of the remaining games are x86-only. I look forward to seeing some benchmarks of how these run under Rosetta 2, but I don't get my hopes up. I'm guessing Apple is looking to unify the iOS and Mac gaming ecosystems instead and is actively working to kill off the PC gaming ecosystem on the Mac. This will be an App Store-only kind of deal. Likewise, if they finally make moves in VR, it will be ARM and App Store only.

So in short, for me it doesn't make sense to upgrade until most of that stuff is covered. None of it is impossible, but it's going to take some time. I would be in the market for a 16" x86 MacBook Pro, but of course with this transition, long-term support for that is very much going to be an afterthought. I might have to finally pull the trigger on a move to Linux.


I don't know what you do with Java, but here Windows 10 + Android Studio + an Android emulator consumes around 11GB of RAM.

So add dozens of tabs open in a couple of browsers, maybe a VM or two, and 16GB of RAM is going to be insufficient.


Man. Obligatory old-person remark, but seeing this kind of quantum leap in performance per watt makes me feel young and tingly all over again. If the M1 makes SIMD suck less, there are downstream effects. True, if that were /entirely/ the case, Radeon would have stolen the crown from Nvidia years ago.

But this is different. There's the convenience factor here. How far does the M1 have to go before commodity DL training is feasible on commodity compute, not just a GPU?


The 16GB max RAM still seems a head-scratcher. Not what I'd want for video editing, I suppose.


The three models Apple chose are all the consumer, low-end ones.

You will see much higher specs on the Mac Mini Pro, MacBook Pro 14/16 and Mac Pro planned for next year.


To think... this is the performance we’re seeing from their low-end model


I guess for those who want more RAM, the call is to hold off on M1-based MacBooks until this ceiling is raised. I’m guessing an Intel-based MacBook is a better call, except the resale price will suffer.


Timing is important here. The M1 is not going to give you 32GB+, and a 1-year gap doesn't sound right to me; the M1-to-M2 (or whatever the name) gap should be 2+ years.


What do you need more memory for with video editing?

Video editing's all done on disk isn't it? It's not like editing programs are loading a 20 GB file into memory?

Even just 8 GB has never given me any problems with video editing.

What am I missing here?


Editing programs are loading a ton into memory, though. Edits are done in memory before the final output is saved to disk. If you're viewing it on screen, it is almost certainly in memory, and if you're working with high-res video you could very well exceed 16GB. More memory means larger video files (longer clips or higher resolution) loaded into memory, which means faster and more efficient working.
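A rough sketch of why decoded high-res footage chews through RAM so quickly (assuming 8-bit RGBA frames; real editors also cache compressed and proxy media, so treat this as an upper bound on decoded-frame caching):

    # How many seconds of decoded (uncompressed) 30fps video fit in a given
    # amount of RAM, assuming 4 bytes per pixel.
    def seconds_in_ram(ram_gb, width, height, bytes_per_pixel=4, fps=30):
        frame_bytes = width * height * bytes_per_pixel
        return ram_gb * 1e9 / (frame_bytes * fps)

    for label, w, h in (("1080p", 1920, 1080), ("4K", 3840, 2160), ("8K", 7680, 4320)):
        print(f"{label}: ~{seconds_in_ram(16, w, h):.0f} s of decoded video in 16GB")

Around a minute of 1080p, a quarter of that at 4K, and only a few seconds at 8K.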


Kinda my thought exactly. Although most video editing software uses proxy files, so you don't have to load in the original hi-res footage.

It's just that... swapping data in and out of RAM is far slower than the CPU, right? So if the bottleneck is RAM, would it make sense to get the most RAM (i.e., an Intel Mac)?


The question is which professional video editing software is available for this new platform? Probably not much at the moment. So these machines will be perfect for those who do their work with a browser, email and not much more.

I'm using Tableau Desktop in my daily work, and until it is available on this new platform these Macs are not an option for me, even if they are 10x more performant. I guess there are a lot of professionals constrained in a similar manner, so we will see if the M1 is adopted in such scenarios at all.


> The question is which professional software for video editing is available for this new platform?

Final Cut Pro is one of the options for the new Macbook, on the item's page.


With unified memory, 8 CPU and GPU cores, super-fast I/O, and native Final Cut Pro, video editing will be just fine.


I've got 16GB in my 2015 MBP and editing 1080p in FCPX it never slows down at all.


I currently have the same MacBook, but everything seems to be breaking (keyboard, battery, screen, major OS upgrades fail, etc.) and it needs to be replaced. But 16 gigs doesn't seem very future-proof. I’m already having issues with some video codecs I can’t work with, for example .360 GoPro files. (I bought an iPhone SE to handle that!)


ARM is the future, at least in laptops.

Apple lead the fray again, it's incredible.


Apple is hardly the first laptop manufacturer to build an ARM-based laptop. Microsoft did it before them with the Surface back in 2012. However, it's not going to be as popular as Apple's laptops.

https://en.m.wikipedia.org/wiki/Windows_RT


But it seems that everyone else treated ARM as cheap commodity material, and only Apple put in enough effort to realize the full potential of both the instruction set and the chip design.


Apple is rarely "first" with anything (they were nowhere near the first with mp3 players, for example). But they are often first-in-class once they commit to an idea. And then we got the iPod after many other companies tried to also do that in the preceding years. ARM is not new, but Apple committing to this is going to be very disruptive, and ultimately a more impressive implementation, in ways the Surface never was.


I have a burning question: Can I use gcc, Python, etc. on the new M1-based MacBooks? I am thinking about getting one but not sure if I can use it as a development machine. How does the Rosetta translation layer work with Homebrew apps and other binaries?


They showed this during the previous transition announcement. They flashed up a screen of various open source tools that they’ve tested. It even included Blender.

Here is the image: http://www.cgchannel.com/wp-content/uploads/2020/06/200623_A...


I don’t have an M1 yet, but the things you mentioned work perfectly fine on my Developer Preview hardware, so I don’t see any reason they wouldn’t on the production hardware.


Yes you can.

For GCC targeting arm64 macOS, the dev branch is at https://github.com/iains/gcc-darwin-arm64/ currently.

For Rosetta, everything runs except virtualisation apps.
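If you want to check whether a given process is running natively or under Rosetta, here's a quick Python sketch (sysctl.proc_translated is the key macOS exposes for this; treat it as illustrative):

    import platform
    import subprocess

    # 'arm64' when running natively on Apple Silicon, 'x86_64' under Rosetta 2
    print("machine:", platform.machine())

    # macOS reports 1 here for processes translated by Rosetta, 0 for native ones
    result = subprocess.run(["sysctl", "-n", "sysctl.proc_translated"],
                            capture_output=True, text=True)
    print("translated:", result.stdout.strip() or "key not present (Intel Mac)")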


Brew would just compile the ARM version and it should work the same for the most part. Same as running Linux on ARM (like raspberry pi).

I’m not sure if it supports it immediately, but this isn’t a difficult change, so it will certainly come soon.


people have been saying that for over a decade.


A matter of time until Apple releases a ChromeBook competitor - a cheap plastic laptop for schools/students, similar in spirit to the iPhone SE, perhaps running on A14 instead of M1, and a 12" screen. This would be great for marketshare gains and to get more young people into the Apple ecosystem. Perhaps something that looks like the polycarbonate MacBook introduced in 2006?


Maybe reintroduce the iBook? Ha!


Yes, that's the name I couldn't recall!


There were talks of splitting Google and other huge companies due to their size (https://arstechnica.com/tech-policy/2020/10/house-amazon-fac...).

Wouldn't it be awesome if Apple's CPU/IC design segment became a separate company and sold these CPUs and maybe SoC by themselves?

I think that would make a big dent into AMD/Intel market shares. Since Apple's part/die should be quite a bit smaller than most of the x86 dies, the fab costs should also be smaller and so should the final price.


No, it wouldn't be awesome.

This thing ONLY EXISTS in the first place because of Apple's continual vertical integration push, and because other parts of the business were able to massively subsidise the R&D costs necessary to come up with a competitive SOC in an established market that's otherwise a duopoly. If their CPU/IC design segment were its own company, the M1 would never have seen the light of day. Period.

Furthermore, this chip is not meant to be a retail product. It's optimised for the exact requirements that Apple's products have. The whole reason why they're able to beat Intel/AMD is because they don't have to cater to the exact same generic market that the established players do, but instead massively optimise for their exact needs.

I genuinely don't understand how anyone who wishes to break up Apple can fail to see these things.


>This thing ONLY EXISTS in the first place because of Apple's continual vertical integration push, and because other parts of the business were able to massively subsidise the R&D costs necessary to come up with a competitive SOC in an established market that's otherwise a duopoly. If their CPU/IC design segment were its own company, the M1 would never have seen the light of day. Period.

Eh, that's some very biased thinking.

In the real hardware world, both mechanical and electrical, I can approach a company and say "we want something with these specs, we'll buy 10 million pieces per month, what can you do for us?", and that kicks off R&D efforts after some basic contractual agreements commit both parties.

You know, exactly what Microsoft did with AMD when they commissioned unique SoC designs for their consoles, with a host of never-before-implemented features such as direct GPU <-> SSD I/O.


> This thing ONLY EXISTS in the first place because of Apple's continual vertical integration push

This seems pretty grounded.

> The whole reason why they're able to beat Intel/AMD is because they don't have to cater to the exact same generic market that the established players do, but instead massively optimise for their exact needs.

I'm less convinced of this. Their exact needs seem to be making laptops... and so these chips would make interesting candidates for other laptops, if split off from Apple.

It's never going to happen, and an independent company might struggle for R&D money, but if these prove to be better laptop CPUs there is a market there.


But they're not just making generic laptops. They're making Macs.

Everything from the memory model, to the secure enclave for TouchID/FaceID, to countless other custom features, are parts that other SOCs do not need to have present on the die, and cannot optimise for.

For good or bad, this is truly a piece of engineering that could only have come out of Apple.


> Update: There's also a benchmark for the 13-inch MacBook Pro with M1 chip and 16GB RAM that has a single-core score of 1714 and a multi-core score of 6802.

It doesn't make much sense for the Macbook Pro to have a lower score than the Macbook Air.


I noticed that too. But it’s one score for the Pro versus four for the Air.

Assuming it is sampling error?


It's also possible that the fan comes into play more with a sustained load. I believe their clocks are the same, but I'm sure the Air has to start throttling at a certain point once it gets hot.


Isn't the chip the exact same in these devices? The Air's scores are probably going to tank massively when thermal throttling kicks in. There should be a decent performance gap between these machines then.


How am I supposed to interpret this? A MacBook Air surpasses my i7-8700k in single and (almost) multi core performance?


Yes, in fact, the A14 (iPhone 12) already surpassed most Intel chips: https://images.anandtech.com/doci/16226/perf-trajectory_575p...

Intel is now #3


A modern mobile CPU with a TDP of 6 watts is beating a modern desktop CPU with a TDP of 125 watts? Is it just me or this seems too good to be true?


TDP lost meaning years and years ago, and power usage isn't linear. The extra 100-300MHz at the top end has a huge impact on power.

Check out for example the per core power charts that Anandtech does: https://www.anandtech.com/show/16214/amd-zen-3-ryzen-deep-di...

Compare for example the 1-core power numbers between the chips. The 5600X 1-core result is 11W @ 4.6GHz, whereas the other two chips boost higher and hit 4.8-4.9GHz 1-core turbos, but it costs 17-18W to do it. A huge increase in power for that last 1-2% of performance. So you really can't, or shouldn't, compare more power-conscious configurations with the top-end desktop, where power is effectively infinite and well worth spending for even single-digit percentage gains.

And then of course you should also note that the single-core power draw in all of those is vastly lower than their TDP numbers (65w for the 5600x, and 125w for the 5800x/5900x).
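As a crude illustration of why those last few hundred MHz cost so much: dynamic power scales roughly with frequency times voltage squared, and voltage has to rise to hold higher clocks. The numbers below are made up for illustration only, not measurements:

    # toy model: dynamic power ~ C * V^2 * f (C dropped since we only compare ratios)
    def rel_power(freq_ghz, volts):
        return volts ** 2 * freq_ghz

    base = rel_power(4.6, 1.20)     # hypothetical 4.6 GHz at 1.20 V
    boost = rel_power(4.9, 1.40)    # hypothetical 4.9 GHz at 1.40 V

    print(f"clock gain: {4.9 / 4.6 - 1:.1%}")      # ~6.5%
    print(f"power gain: {boost / base - 1:.1%}")   # ~45%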


>https://images.anandtech.com/doci/16214/PerCore-1-5950X.png

Yeah comparing TDP is meaningless even within the same processor. The 4 core workload in this table uses 94W and the 16 core workload uses 98W. There is also an anomaly at 5 cores where the CPU uses less power than if it only used 4 cores.

If you tried to derive conclusions about the power efficiency of the CPU you would end up making statements like "This CPU is 3-4 times more power efficient than itself"


It's a mobile CPU with many silicon advantages (the widest decoder in the industry, memory physically closer, the deepest re-order buffer of any CPU, and much more) plus a sane ISA and an optimized OS. So yeah, you're seeing the benefit of Apple's integration. That's why even the Anandtech piece calls that graph "absurd": it seems unreal, but it's real.


Gotta be frank because it's not getting through: you're jumping way ahead here. Every time one of these threads has happened, there's an ever-increasing number of people who vaguely remember reading a story about Apple Geekbench numbers, and therefore this one is credible too - I used to be one of those people. This has been going on regularly for 3-4 years now, and your interlocutor as well as other comments on this article are correct - comparing x86 versus ARM on Geekbench is nonsensical due to the way Geekbench eliminates thermal concerns and the impact of sustained load. Your iPhone can't magically do video editing or compile code faster than an i5.


My comment, and this specific thread, isn't even related to GeekBench. The graph I linked used SPEC instead of GB5. The gigantic architectural deep dive over on Anandtech even includes a discussion on the strengths and limits of both benchmarks, and how they make sense based on further micro-architecture testing.

The reason that graph doesn't include the A14 Firestorm -> M1 jump was simply timing. We know the thermal envelopes of the M1 and the cooling designs. We now have clock info thanks to GB5. So yes, the data is pretty solid. No one's saying that the iPhone beats the Mac (or a PC) on performance when you consider the whole system, just that the CPU architecture can and will deliver higher performance given the M1's clock, thermals and cooling. Remember that the A14/M1 CPUs are faster at lower clock speeds.


Well, we have this evidence so far, on a phone that has no active cooling: https://twitter.com/tldtoday/status/1326610187529023488


That's comparing a hardware encoder to a software one, unfortunately, as the replies note.

It's unfortunately drowned out by the CPU throttling scandal if you Google it, but it's well known in AR dev (and if you get to talk to an Apple engineer away from the stage lights at WWDC) that you have to proactively choose to tune performance, or you'll get killed after a minute or two due to thermal throttling.


This raises the question of just why the Mac is doing software encoding—I think the hardware it's running on should have two compatible hardware encoders, one on the CPU and one on the GPU. Is the software being used incapable of hardware encoding? Does it default to software encoding because of its higher quality per bit? Was it configured to use software encoding (whether ignorantly or deliberately)?


Video encoding is generally done on CPUs because they can run more complicated video encoding algorithms with multiple passes. This generally results in smaller video files at the same quality. As you increase the compute intensity of the video encoder you get diminishing returns: 30% lower bitrate might need 10x as much CPU time. That tweet says more about the type of encoder and the chosen encoder settings than anything about the hardware.

Imagine going on a hike up an exponential slope like 2^x. You climb up to 2^4, come back down, and repeat this three times, so you have walked 12km in total. Then there is an athlete who climbs all the way up to 2^8. He says he has walked 8km and you laugh at him because of how sweaty he is despite having walked a shorter distance than you. In reality, your total climb of 3 x 2^4 (48) is nowhere near his 2^8 (256). The athlete put in far more effort than you.
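If you want to see the hardware-vs-software gap yourself, here's a rough sketch driving ffmpeg from Python. h264_videotoolbox (hardware on Macs) and libx264 (software) are standard ffmpeg encoder names, but whether your build includes them varies, and input.mov is just a placeholder clip:

    import subprocess
    import time

    def encode(codec, outfile):
        # times a straight transcode of the placeholder clip with the given encoder
        start = time.perf_counter()
        subprocess.run(["ffmpeg", "-y", "-i", "input.mov", "-c:v", codec, outfile],
                       check=True, capture_output=True)
        return time.perf_counter() - start

    print("hardware (VideoToolbox):", encode("h264_videotoolbox", "hw.mp4"), "s")
    print("software (x264):", encode("libx264", "sw.mp4"), "s")

Expect the hardware encoder to be several times faster and the software one to produce a smaller file at comparable quality, which is the trade-off described above.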


It is true. Note that a single core can only use ~20W so high TDPs only matter for multicore.


5nm vs 14nm is the easiest explanation.


It's also the most wrong explanation. The actual performance/efficiency gap between those processes isn't that drastic. The M1's power efficiency comes from better IPC, such that it just doesn't have to clock as high to be competitive.

That's why the A14 only runs at a 1.8GHz base and 3GHz boost. That's how it keeps power consumption low. And similarly, Intel pushing 5GHz is why it has high power consumption.

TSMC's 5nm will have a raw transistor performance-per-watt advantage, but it's not huge.


What's up with these iMacPro results with AMD Ryzen processors? https://browser.geekbench.com/v5/cpu/4644694 Thought Apple was an Intel-only shop until Apple Silicon.


Probably hackintosh builds


Hackintosh build. Look up Acidanthera Mac.


How long does Geekbench take to run? Long enough to reach thermal equilibrium?


Only a few minutes. And likely not.

On my 10980xe it will cause the CPU to reach peak temperatures for 30s or so but only during specific workloads. Typically those utilising AVX-512.


Geekbench is designed to measure CPU burst speeds, not to measure cooling systems. By design it does not run long enough to test the thermal situation.



The one you shared looks crazy overclocked or something (it says 4.7 GHz). Almost all other tests of the i7-1165G7 show single-core scores in the range of 1500. This is more representative of the same chip in the same laptop: https://browser.geekbench.com/v5/cpu/compare/4648891?baselin...


The Dell XPS with an i7-1165G7 processor has a base price of $1,499. This includes a 512GB SSD, so it compares to the MacBook Air's $1,199 price with the same SSD. With that MacBook Air configuration, you also get a higher-resolution screen.


Apple’s SSDs tend to be faster too, but also soldered in along with everything else.


Is this arguably comparing a 28 watt part to a 10 watt part?


We don't know the boosting behavior of the M1, so we have no idea how much power it's pulling during Geekbench's short burst.

Also the i7-1165G7 is a 12-28w part, configurable by the OEM. I'd assume the XPS 13 is running it at top spec, but that'd also need validation.


I wonder if this is good enough that nobody will complain about x86 emulation performance. Although Apple will likely remove x86 emulation support in the OS in about 5 years.


I'm looking forward to seeing what the top end of this chip can do. 64 cores ought to be nice! 32 cores at least.


And I think it's safe to assume that is in the works for the iMac Pro and Mac Pro, it's just a question of when it will see the light of day.


The Mac Pro isn't a high volume unit. So it's unlikely that Apple will spend the money developing huge, workstation class CPUs.


Interesting question is whether they go with 2x/4x M1 or have a specially designed M2.


It'll definitely be a different chip either way. To have 2x or 4x M1s would require interconnects between the CPUs for sharing RAM and general communication, which aren't there on the M1 currently. So at a minimum you're talking about a rev of the M1 to add such interconnects. But that all seems very unlikely. If they do go that route, it'd almost certainly be something like AMD's chiplets instead of straight "multi-socket".


I think M2 would be next year, no? So maybe M1X or M1S or M1P or something like that.


Air vs Pro is the benchmark to look for.

I'll be interested to know how much active cooling and binning really do for rustc or LLVM compilation times.


I mean the only relevant difference I see is that the Pro comes with a fan. Would like to see that benchmark too.


That is what I mean by active cooling.


I really, really hope that in real-life scenarios this will be much faster than Intel CPUs, in case a wake-up slap from AMD won't do. Maybe Intel will swallow its pride and design a new CPU for TSMC's process.


If they haven’t woken up by now, when will they?


These benchmarks are impressive. The non-upgradable RAM capping out at 16GB is very disappointing though.

I'm hoping the rest of the PC market benefits from this increased market for ARM compatible desktop software. ARM has less of an issue with patent encumbrance and more competition. Desktop CPUs could get a lot cheaper in the future if ARM Windows and ARM Linux get more use as well as Mac.

(In case you didn't know, you can run ARM Windows now (sort of unofficially, you'll need to google)! App support is a bit spotty, but it can also run 32 bit x86 apps through a translation layer (why it's limited to 32 bit I don't know, I guess it's easier to do?))

Edit: The Switch is ARM too, so there's a reason for some AAA games to come to the otherwise small market too. (For traditional console/PC games, that is; I'm aware mobile gaming is a huge market, it's just not one I find produces much quality output.)


Impressive, but pretty unsurprising given the A series. 100 points higher than the A14 is a less than 10% improvement.

From Apple's somewhat uninformative slides, we should expect peak power draw around 15W? The A series draws half that for >90% of the single-core performance, so scaling seems on target.


Really impressive. I was skeptical from just the Apple-released info, but it seems like ditching x86 for ARM really does mean that much performance increase per watt. I'm excited to see how this plays out in both the processor and OS space over the next few years.


Reposting from other thread:

So can anyone comment on how we should interpret these benchmarks:

a) Across the various Mac models (i.e. why buy a pro vs the air vs the mini). If they all benchmark the same, what am I paying the difference for?

b) The M1 chip vs a ryzen desktop (naively interpreting this its punching up near a 5950X, which seems TGTBT?). How do these chips compare against the larger TDP competitors if portability and batteries aren't an issue?

b2) Compared to the AMD mobile chips? (i don't have much insight into these)

c) Someone else on here posted a Tiger Lake Intel chip benchmark beating the M1. That makes even less sense to me, because then there's an Intel mobile chip beating the top-of-the-line Ryzen desktop?

Help out an honestly confused individual.


I think those numbers are peak performance and not necessarily sustained performance, but I don't know enough about Geekbench to be 100% sure.

The pro and the mini have fans so they should be able to maintain these speeds for longer than the air which will presumably have to throttle down as it heats up.


My 15" MBP (2019, 2.4 Ghz, 32 Gb, 560x 4 Gb) is probably the most disappointing machine I've ever owned. The keyboard is meh, the battery isn't great and it's no where near as performant or efficient as it should be for the price. My 5.7k AUD laptop shouldn't sound like it's taking off to Mars because Firefox with a few tabs has been open too long. Heaven forbid I try and compile or transpile anything.

These types of results make me hopeful for the future of Apple hardware. I've always been a huge fan of OSX, but I was pretty sure this MBP would be the last Apple laptop I would buy. Looking forward to being wrong.


Is geekbench accurate at comparing across different architectures?


Geekbench tests everyday workloads like AES, PDF rendering, and HTML5 parsing. If the score is higher, those daily tasks are faster too.

IMHO it is reliable across different architectures/CPUs/OSes.


If anything, I feel like many tests are based on special instructions like AES, which don't directly translate to other tasks. But for most of the tasks I don't think it's possible that the processor is only better at those; the performance of most programs should correlate with the Geekbench score.


ARM has AES instruction extensions, and IIRC iPhones have had them for some time.
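Both AES-NI on x86 and the ARMv8 crypto extensions get picked up by OpenSSL-backed libraries, so a rough throughput check is easy to run on either architecture. A sketch, assuming a recent version of the third-party cryptography package is installed (illustrative, not a rigorous benchmark):

    import os
    import time
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key, nonce = os.urandom(32), os.urandom(16)
    data = os.urandom(64 * 1024 * 1024)          # 64 MB of random input

    encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    start = time.perf_counter()
    encryptor.update(data)
    elapsed = time.perf_counter() - start
    print(f"AES-256-CTR: {len(data) / elapsed / 1e9:.2f} GB/s")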


Generally, yes. The benchmarks are calculating/computing the same things on both architectures, and the various sub-benchmarks are based on real-world non-trivial computational problems rather than microbenchmarks that are easily manipulated by instruction set differences.


Unless they are written in optimised Asm I would not compare across architectures, because there is considerable leeway in compilers and my experience has shown that a good human can often beat a compiler solidly --- at least on x86.

https://www.realworldtech.com/forum/?threadid=185109&curpost...


How often is software written by a good human?


Couple one of these with a good USB-C hub and a cloud-based Windows 10 VPS, and you have the future of professional workstations. Though, how good are these hubs nowadays? I'd want 1x USB-C power, 1x Thunderbolt USB-C passthrough, 2x USB-A, 1x HDMI and 1x gigabit Ethernet in a portable format. Maybe throw in an SD card slot and a sound card with a separate microphone port, and you'd have pretty much the universal I/O hub.


That's an interesting take on it. I guess with 5G around the corner we will see MacBooks with 5G integrated, and Remote Desktop won't be a pain in the ass. Maybe that's another reason Apple made this move.


Maybe one of these Hyper hubs is for you: https://www.hypershop.com/collections/usb-c-hubs


Thanks! Looks promising.


Forgive my ignorance, but is it possible that the decades of legacy baggage holding up the x64 instruction set were simply bogging it down?

I just ordered the new Apple Silicon MacBook Pro and I fully expect to record a full album on it using Logic Pro X. As presumably Apple rewrote it from the ground up for the new silicon, I expect to be absolutely blown away.


Can someone ELI5 why this new M1 chip would be better than decades of Intel's experience?


Well the M1 is not some "new" and "overnight" phenomenon. Apple has been designing its own ARM chips for iOS for a decade now, and with great success. The iPad Pro's ARM is still faster than at least 90% of all laptops made by any company. Moving that into their desktop/laptop line made a lot of sense and leverages many years of research and production experience.


Apple pays a lot better and can therefore get the best engineers. Having experience isn't very helpful when the competition can just poach it by paying salaries your organization can't afford.


Are these benchmarks run long enough for CPU throttling to come into play?


It's funny - x86 has been the king of computing for basically my entire life, and we're finally seeing that change. This is kind of like a throwback to the early days of innovation in the CPU space, before everything ran on the same ISA. I'm both nervous and excited.

Excited because x86 has been stagnant for years. I'm not an EE, but my understanding is that it's a pretty messy architecture with lots of "glue and duct-tape" fixes.

Nervous because, although x86 had flaws, the fact that pretty much everything ran on it allowed for more open environments and development practices and fewer walled gardens.


x86 is an extremely messy ISA, but what about it has been stagnant? New instructions are added pretty much every year.


I guess I shouldn't say that development on it has been stagnant - certainly engineers at Intel and AMD must be doing something - but performance improvements have stagnated, especially compared to the rapid improvements we're seeing from Apple in the ARM space.

I am not at all an expert in this field so my remarks are based on my own observations and performance benchmarks from experts in the industry. For example, this graph from Anandtech shows the stagnation (or at least slow improvements) in x86 performance gains while performance gains from Apple have been massive.

https://images.anandtech.com/doci/16226/perf-trajectory.png


What are current viable alternatives to the M1 MacBook (Air and Pro) laptops in the PC world?

I'm looking for those 2% computers "in the same class" (from the apple keynote 2 days ago) that are faster than these M1 laptops. From the looks of it they surpass their Apple store siblings, but how about x86 PC laptops? I'm currently shopping for my next dev machine, in the lighter side of the form-factor.

I've seen some name-dropping here and in other threads, like the HP ZBook and Lenovo, but I'm completely out of the loop since I moved to a fully specced i7 2014 MacBook Air back in the day.


The ASUS ROG Zephyrus G15 GA502IV-AZ115 is likely to be faster. It is cheaper than the Macbook Air M1 with 16GB/512GB, too (in Germany: 1465€ vs 1588€). It has an AMD Ryzen 7 4800HS CPU and a RTX 2060 Max-Q GPU.


But that's a gaming laptop. Macbook Air is a "notebook" laptop, aka "ultraportable". Also the G15 Geekbench 5 is 1100 for single core, which is like 35% slower than the single core performance of the M1. Obviously for gaming benchmarks, then that's a different story...

I'm starting to wonder if Apple's "faster than 98% of PC laptops" is an understatement, at least within its class.


Also the resolution of the screen is a totally different beast.


For top performance you're looking at high end intel H series laptops (https://www.ultrabookreview.com/20056-core-i9-portable-lapto...) or Ryzen 4000 H series laptops (https://www.ultrabookreview.com/36004-amd-ryzen-7-4800h-lapt...). They seem to basically match the M1's benchmarks though, not exceed it. You will not find anything as thin and light as a macbook air with that performance in the x86 line-up, as those CPU's are power-hungry beasts that necessitate big battery and cooling setups.


This is amazing! My wife just got a new MacBook Air delivered a few days ago. The courier is coming to pick it up tomorrow morning and we'll be getting a full refund, and looking forward to ordering the M1 version.


I am wondering if M1 is winning many of these due to main memory latency?

I think Geekbench 4 reported memory latency, but it's not in v5. Does anyone know of a benchmark of this directly?
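Not that I know of in GB5 itself. As a very rough DIY proxy you could compare sequential vs random access on both machines; note this measures random-gather throughput rather than true load-to-use latency (a proper latency test would pointer-chase in C), so treat it as a sketch:

    import time
    import numpy as np

    n = 1 << 26                                  # 64M int64 elements, ~512 MB
    data = np.arange(n, dtype=np.int64)
    rand_idx = np.random.permutation(n)

    t0 = time.perf_counter()
    _ = data.copy()                              # sequential, bandwidth-bound
    seq = time.perf_counter() - t0

    t0 = time.perf_counter()
    _ = data[rand_idx]                           # random gather, latency-sensitive
    rnd = time.perf_counter() - t0

    print(f"sequential {seq:.2f}s, random {rnd:.2f}s, ratio {rnd / seq:.1f}x")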


Are we moving into a world where OS vendors will try to do vertical software and hardware integration? Because it seems that only Apple is doing it and it is paying off.


*back into a world


All those thinking of buying the new MBA, are you not concerned about availability of applications? Not all x86 applications will have an ARM alternative, yet.


Rosetta 2 sounds pretty good

> fun fact: retaining and releasing an NSObject takes ~30 nanoseconds on current gen Intel, and ~6.5 nanoseconds on an M1

> …and ~14 nanoseconds on an M1 emulating an Intel

https://mobile.twitter.com/Catfish_Man/status/13262384342355...

I won't be replacing my workstation this weekend but then, I won't be updating it to Big Sur just yet either. I am getting a new personal machine sometime in the next year though I don't imagine I'll be buying an Intel Mac.


I had access to a DTK, and know even more devs who have one. Even that machine could run essentially all x86 software just fine with Rosetta (including things like dynamically loaded plugins). The only thing missing is virtualization, but that's coming soon (Docker & Parallels are working with Apple).


Nope. This transition will be fast. Not like PPC -> Intel at all.

Everyone is on Xcode these days and using much higher level frameworks. Adobe and Microsoft already announced early 2021 availability of native binaries.

Besides, Rosetta 2 is even more impressive than the original was. I’m betting it will be a breeze. 6 months in and almost everything will be native.


I haven't encountered any problems running x86 programs on the DTK. Performance is fine (with a slower CPU than the MBA) and everything I tried running worked. I'm sure someone will hit problems once a few orders of magnitude more people are using it, but most people won't.



Several benchmarks have shown that popular x86 apps running in Rosetta on the M1 still run faster than running on native x86 on other machines, so it's probably not going to be an issue.


I wonder what the late-2019 16-inch MacBook Pro gets on the single-core score. Not having the same devices in both the single-core and multi-core comparisons makes me sceptical.


No need to wonder [1], it's right around 1070

[1] https://browser.geekbench.com/v5/cpu/search?q=MacBook+pro+16


Anyone know if Linux can be installed on these, or has Apple gone full lockdown? I'm guessing custom graphics drivers will need to be written...


The bigger question is, will Windows also shift to ARM now that raw machine performance is miles behind the Mac at similar pricing/weight/battery life/noise levels?

Last I read, the Windows ARM version isn't ready for everyone yet, not to mention there's no transition kit at all to help everyone move over.

Why would anyone use Windows anymore, unless your intranet site needs IE?


Well, yes. Microsoft has already announced they're working on an x86-64 to ARM emulator (i.e Rosetta 2 equivalent)[1] and they have partnered with Qualcomm to make ARM CPUs for their surface line up[2]

[1] https://www.extremetech.com/computing/315733-64-bit-x86-emul...

[2] https://www.microsoft.com/en-us/surface/business/surface-pro...


Windows has tons of stuff Macs don't have. Unless you are speaking about development machines.


One interesting little side aspect of Apple Silicon and Apple's war against Epic Games is going to be Unreal Engine.

I can see Unreal Engine not being available for Apple Silicon for quite a while. During the presentation of the new notebooks, Apple showed quite a bit of gaming. If there are no Unreal Engine titles for Apple Silicon, this could hurt them.


The case between Apple and Epic Games is about the App Store on iOS. Epic can sell games on macOS without having to give Apple 30%. I doubt investors are on board with boycotting Apple as a whole.


I've got an iMac Pro from 2018, and thought it would be interesting to compare my Geekbench scores.

Single-core: 1015, multi-core: 7508.

The M1 is far ahead of the 3.2GHz Xeon in my machine in single-core, and I suppose having double the cores (vs the M1's high-performance cores) helped me on the multi-core side. The fact that I paid 5 grand vs a ~$1,500 portable is not lost on me here...


Despite my grumbling at the lack of ram on the MBP, I might just have to buy the Air in honour of my 1988 Archimedes.


I think I'll wait until some real-world testing can provide definitive results. I would be very happy if these results translate; I just don't believe Intel and AMD engineers are so inferior to Apple's that Apple could pull this off with a known CPU architecture without some smoke and mirrors.


If the M1 mops the floor with everything else, why didn't they put it in the high-end 16-inch MBP too?


I think we have to assume something else is coming. Certainly something that supports >16GB of RAM, and something that has more I/O.

While we're at it, we may as well add a bunch more cores...


It also needs a proper application for marketing.

Something “previously not possible on a portable” specific to Apple’s other strengths.


Portable? I guess they will put something 'bigger' than M1 in the new 16" MBP, but what I'm really interested in is what they put into the new iMac Pro and eventually Mac Pro.


That must be it. They’re building a few bigger SoC’s


It does on performance, but RAM and I/O are still very constrained (one external display on the laptops, max 16GB RAM). We'll see. If they replaced all products with an M1 version, they'd be diluting their pro brand, then un-diluting it once they figure out the pro features?


Unless you happen to need a new MacBook right now, I think the right move for most people is to hang on to their existing MacBook another year or two (get a battery replacement if necessary).

A future MacBook with an M2 chip will be an even better buy, and software availability will be even better.


There will be always something better next year. The general advice I give is - wait as long as you can before you buy and then buy the best version you can afford.


Not all annual improvements are equal. The current M1 laptops have some real disadvantages- old industrial design, only two Thunderbolt ports, limited RAM, and limited software support for many apps. There are real compromises to switching to an M1 device as your main machine right now (unless you also have a separate x86 device).

MacBooks are due for a new industrial design, so it makes sense to wait. If you absolutely need a new computer, that's a tricky place to be right now.


That's true, but some people need the larger screen and discrete GPU. I personally run Boot Camp Windows all the time (for development as well as gaming with an eGPU) and I just run macOS for compiling my iOS apps. ARM Macs won't be able to use an eGPU, it seems.


Hardware is able to use an eGPU, but Apple decided to not compile the Radeon drivers for arm64 macOS at least for now.

Hopefully they'll reverse that decision. (or their GPUs in the higher end machines will have to be really good)


I'm assuming (for now) that discrete GPUs will still be featured in higher-end machines, so the Radeon drivers will appear over the next couple of years.


I think Apple is done supporting other people's GPUs. Their graphics are pretty good; they can match Nvidia and AMD with TSMC's help if they want to.


There is still the PCIe bus and Thunderbolt; technically it's still possible to connect an NVIDIA/AMD GPU to Apple Silicon, but we need enough lanes (x8 or x16) and drivers.


Looking at Geekbench charts, I see that my Samsung phone has a faster processor than my $3000 gaming rig, and it sure as hell draws less power... so why don't datacenters just use phone chips instead of Xeon and Epyc servers? What am I missing?


Nuvia is a startup that is planning to do just that https://nuviainc.com/blog

You need to be competitive on single thread performance to have a chance at datacenter. Amdahl's law is still very relevant. Up until very very recently, the CPUs were not up to par.


Nice charts ... so performance per watt is indeed 10x better for S865 vs the Ryzen.


Probably due to overheating. Your phone can do bursts of processing every now and then, but if you let it execute heavy compute operations, the phone would probably overheat and shut itself off.


So would a Xeon if you pulled the heatsink off.


So pull the insides of the phone out, attach a heatsink ... heatsinks cost less than LCD screens I reckon.


Are you sure that you compared results from the same major Geekbench version? (v4 and v5 have quite different results).


I'm just looking at the charts, v5


Apple must have lost it when they saw the M1 benchmarks: they delayed their data center for years to fill it with their own server chips. https://businessrecord.com/Content/Default/All-Latest-News/A...


Because geekbench results are not comparable across the platforms/OS. The scale is not consistent.


Cell phone plans cost an arm and a leg!


I'm super impressed by those M1 CPUs. But it's so sad that the only way to get them is buying one of the few Apple options. I wish that Intel, AMD, Nvidia, Qualcomm et al catch up quickly. We really want competition in this field.


There's still a lot of work to do on these chips, but these benchmarks give hope and I'm curious to see how much more speed Apple can actually squeeze out of the M1 now that they can do full vertical integration with the OS.


Cries in 6-core Ryzen 5 3600... https://browser.geekbench.com/v5/cpu/4656718

The Air is basically 50% faster... incredible.


It's interesting that the Mac Mini is clocked slightly lower than the laptops.


There seems to be only one public test of the Macmini9,1. Sometimes GeekBench is not accurate on the clock speeds. Take the clock speed with a little skepticism until more results are posted. I can't see why the system with the best cooling would be 3.04 GHz instead of 3.2 like both the MacBook Pro and Air.


Yeah, that's odd. I figured it would be clocked higher as it can probably have the best cooling of all of the M1 devices.


It might be a lower-binned part that can't run as fast. Assuming the best chips will go to the Pro.


Has Geekbench for Mac been ported to ARM or is it running in Intel emulation mode?


Looks like it has (as of 6 hours ago) https://www.geekbench.com/blog/2020/11/geekbench-53/


The text under score shows "macOS AArch64", I guess it runs on ARM natively?


> I still feel that neural networks and genetic algorithms are far underused methods of solving problems. (Especially the latter, which don’t require training.)

Funny how he is more ambitious about genetic algorithms.


Are there any manufacturers working on similar high-performance-per-watt CPUs for PC, i.e. Win10 on ARM? I know that ARM servers are a thing but it all seems very quiet on the laptop/desktop front.



This must have been enough to convince people to buy out the first shipment. Apple.com was showing me an 11/17 estimated arrival at checkout. Now it's showing 11/27-12/4.


Are the Geekbench implementations on x86 and ARM identical enough to be compared?


Anyone have thoughts on where these benchmarks might be coming from? Part of me wants to put in my order today, and part of me wants to wait until after the YouTube reviews.


At this point you would expect key developers like Microsoft, Adobe, OmniGroup etc to have retail hardware for testing.

And whilst they are under NDA you do often see them running anonymous benchmarks like Geekbench.


There are also review units certainly out there


Ah, I didn't realize devs got anything other than the developer kit up until the release.


Since the 10,1 is not selling yet, where are these numbers coming from? Are Apple employees publishing benchmarks? Or are randos on the internet just uploading fake numbers?


Review units


I think the really cool thing is that this would theoretically rank the M1 as #1 on the Geekbench Single Core leaderboard, above both AMD and Intel's latest offerings!


Looks like I’m getting a new Macbook air instead of MacBook Pro


If graphics performance also gets a good boost, Mac minis could become a more versatile PlayStation/Xbox alternative! Assuming the games are there, of course.


There are questions about both GB4 and GB5 results not being comparable across different architectures and operating systems. This should be discussed more.


So we should be seeing some serious discounts on 13" Intel Macbook Pros soon. Any estimates on the May 2020 models once the market responds?


2019: "I'll build a Hackintosh, its cheaper & faster"

2020: "I'll build a Hackintosh, its slightly cheaper"


Could it be that a batch of CPUs that should have had subpar performance ended up having good performance due to luck?


How much trouble would I get into if I use the new MacBook Pro for software development (i.e. Python, Java, ReactJS)?


Does anyone know how fast the unified memory is? Is having it on-package likely to mitigate some of the lack of cache?


I am confused about the new MacBook Pro vs MacBook Air: they have the same specs, so are we simply choosing the form factor?


We don’t know yet, there could be some binning involved.

They could be identical, and the choice is just different thermal envelopes. Active vs passive cooling can make a huge difference for some work loads.


I'm still in denial. I feel like there's just no way it's possible without huge compromises.


I think the thing you have to look at is the velocity of Apple's improvements to see the real story.


They are going to sell these things like cupcakes... why buy a PC laptop when these are around?


Benchmarks came in and I stared at an older 2013 MBP 15” and decided, “Yup! Time to upgrade”. Tons of others at my work are looking at these with drool coming from their mouths. 2020+ is going to be the decade of extreme-computing on ARM I guess.


Because they're expensive and the best selling laptops are half the price?


You’re right... these M1 MacBooks ARE half the price of a high end pc laptop AND are 4x-12x the performance. :D


Because pc laptops can run Linux.


Is there a common linear algebra benchmark?

Would love to see a non-Intel chip + OpenBLAS beating out Intel + MKL


Why would you use OpenBLAS rather than Apple's Accelerate.framework? Since you're already using the Intel MKL, the only fair comparison is Accelerate.framework.
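For a quick cross-machine check, timing a dense matmul through NumPy is a reasonable first pass, since NumPy dispatches to whichever BLAS it was built against (OpenBLAS, MKL, or Accelerate). A minimal sketch:

    import time
    import numpy as np

    np.show_config()                   # shows which BLAS this NumPy build links against

    n = 4096
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)

    start = time.perf_counter()
    a @ b
    elapsed = time.perf_counter() - start
    gflops = 2 * n ** 3 / elapsed / 1e9    # a dense n x n matmul is ~2n^3 flops
    print(f"{elapsed:.2f} s, ~{gflops:.0f} GFLOP/s")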



In browsing the web wirelessly! What about running docker, kubernetes, brew install?


So on a cutting edge process, Apple Silicon is ~5-10% faster than the AMD offering.

Interesting times.


I am probably going to get one.

How much thermal paste are they planning to use on these things?


The thermal paste is there only to improve heat transfer between two imperfect surfaces. The real question is what the heatsink on these puppies will look like.


The Jim Keller effect. He was involved in architecting both Ryzen and this.


I can't believe HNers believe sales pitch benchmarks.


Would love to see it compared against surface books


Assuming I’m reading it right, it isn’t in the ballpark.

https://browser.geekbench.com/v5/cpu/search?utf8=&q=Surface


Whoa! Fanless with a 3.2GHz base clock is really good.

So now they only have to fix their OS and their keyboard, or just revert to the versions from 8 years ago.


How much does that on-package memory impact these scores?


Confirmed. Apple chips are the future of gaming!


Should I sell my 2020 intel air and upgrade?


Air to Air seems like a no-brainer from where I'm sitting. Maybe give it a couple months to see if some obvious issue surfaces.


Considering the Intel Macs spend their whole lives thermally throttled, I'm not surprised by the numbers.


Now show some non-synthetic benchmarks


Isn't max 16GB ram a downer?


For the target audience of the Air and the 13" MBP, no, it is not a downer. These are not flagship machines (despite that they are performing well against existing flagships).


They said it was faster than 98% of laptops. I think the only thing faster right now is AMD's Zen mobile chips.


I wonder how the machine learning performance of the integrated GPU compares. Does anyone know if any of the ML libraries support the new architecture? AFAIK most libraries don't support anything other than Nvidia. Any tests on training models? Or is it mostly inference?


Is Geekbench that trustworthy? I always see them saying that new iPad CPUs are faster than the latest x86 desktop ones, but they are different architectures so I'm not sure the scores are comparable.


Oh wow


But can it run Crysis?!


RIP Intel/AMD.


This idea of “actual professionals” that always comes up in response to apple’s “Pro” moniker amuses me to no end.

Everybody throws the term around and no two people have the same definition! What in the world is an actual professional? There are professional journalists that just need a browser and text editor. There are professional programmers working on huge code bases in compiled languages that do need a beefy machine, and there are professional programmers that just need a dumb terminal to ssh into a dev machine in the cloud.

And then of course what the largest subset of people seem to mean is professional video editors or content creators. What percent of the working population are video editors? Some tiny fraction, how did that become the default type of professional in the context of talking about computers?

And then a lot of the things that people also complain about, like how replacing the wider variety of ports with USB-C or Thunderbolt is contradictory on a "professional" machine, also don't really make sense. Professionals can use dongles like anyone else. In fact, many professionals will have more specific needs that require a dongle anyway; for instance, a built-in SD card reader doesn't help a professional photographer using CFexpress cards.


> What in the world is an actual professional?

I would say that generally, a "professional" user of pretty much any tool, is someone for whom the tool's quality is a constraint on their professional productivity.

A professional paint user is an artist. A professional telescope user is an astronomer—or a sniper. A professional typewriter user is a stenographer. A professional shoe user is an athlete.

In all these cases, it's the quality and innovation in the tool, that's holding these professionals back from being even better at their job than they already are.

Also, take special note of the case of the stenographer: professionals often require special professional variants of their tools, which trade off a longer learning curve for a higher productivity ceiling once learned. A stenographic keyboard takes years to learn, but all non-stenographic keyboards cap out at 150WPM, while stenographic keyboards allow those trained in their use to achieve 300+WPM.

And to make one more point: a professional car driver isn't a race-car driver. A professional car driver is a chauffeur. Rolls-Royce's cars aren't famous for how luxurious they are to drive; they're famous for having all the amenities needed by professional drivers — chauffeurs — to allow them to efficiently cater to their clients' needs. Limousines are the same kind of "professional tools" that stenographic keyboards are: they increase chauffeuring productivity.

> How did [video editing] become the default type of professional in the context of talking about computers?

Because all tech vloggers and most tech pundits — the people who review tech — edit videos as part of their jobs, of course ;)


I'm a professional programmer... hardware constraints aren't exactly the limiting factor when I use vim all day.


I've been around software development for over 20 years.

Never met a single engineer who didn't want faster build/test cycles.


Except a lot of people work on remote servers. My work MBP is nice, but really it’s just a terminal to an 80vcpu server.


You missed their point slightly. You wouldn’t be one of the users that is limited by the constraints of that tool.

If I’m a professional driver, but my passenger likes being discreet for example, then maybe I drive a Camry instead of a Rolls Royce. In your case, you probably don’t need a professional-grade laptop.

Me, however, also a professional programmer: I run about 10 Docker containers, a big IDE, and lots of other hungry programs. I am definitely less limited when my computer is faster.


I envy writers in a sense. Their job doesn't require fast hardware so the computer is always waiting for them.

Programmers have to deal with compilation wait times, lack of RAM slowing down workflows, network latency, etc. We have to wait for the computer to do their job.

I await the day when I won't have to deal with this and computers will be as fast as our train of thought (human input latency notwithstanding). But I won't hold my breath.


It depends on the project. E.g. if you ever have to compile the Linux kernel or Firefox, you'll be wishing for a beefy CPU.


A professional user of a given tool. It's a qualified phrase.

You can be a professional, without being a professional user of all your tools. In fact, for any such tool, there's probably only one or two professions that are professional users of that tool specifically (i.e. where that is the tool that constrains their productivity.)

Many professions aren't constrained by any tools, but rather are constrained by human thinking speed, or human mental capacity for conceptual complexity. These people aren't "professional users" of any tools. They're just regular users of those tools.

So, to sum up — when a tool is described as being "for professionals", what that means is that the tool serves the needs of people who are members of a profession whose productivity is constrained by the quality of that tool. It doesn't mean that it's for anyone who has a profession. Just people who have those professions. They know who they are. They're the people who were frustrated by the tool they have now, and for whom seeing the new tool elicits a joy of the release of that frustration. An "ah, finally, I can get on with my work without [tool] getting in my way so much."

-----

Programming is a profession that is most of the time constrained by thinking speed. (Although, some of the time, we're constrained by grokking speed, which is affected by the quality of the tools known as programming languages, and sometimes the tools known as IDE code-navigation.)

Very little time in a programmer's life is spent waiting for a build to happen, with literally no other productive tasks that they could be doing while they wait.

(Someone whose role comes down solely to QA testing, on the other hand, tends to be a professional user of CI build servers. Faster CI server? More productive QA.)


You don't compile your code?

Obviously it depends greatly on what kind of software you are writing, but at all my dev jobs, I eventually end up waiting for some amount of time while the CPU heats the room up.

Notably, VIM is inevitably more gentle on my CPU than VSCode and similar.


You could comfortably develop, say, a large e-commerce site on a 7-year-old MacBook Pro, so I'd assume an M1-based Air would be equally fine.


That's fair. I think a lot depends on your stack and the specific project. Right now I spend a good chunk of time waiting for builds (usually 10-20 seconds) and my previous job was worse. Prior to that, not so much.


What if you wanted to drive a few more monitors for logging, monitoring, and docs, or have a local k3s cluster running for development, or run some exploratory scripts on large data sets, or build a large project from source? Professional programming is a vast and varied field.


You just came here to say you use VIM.


Esc key vs touch bar does not influence how you can use vim?


What are those Rolls Royce features needed by chauffeurs?


A Rolls-Royce isn't a tool for the chauffeur; it's a tool for the passenger. It's a tool for broadcasting that the passenger is wealthy enough to own it, and it enables more conversation options with certain NPCs ;p


Absolutely incorrect. Watch e.g. https://www.youtube.com/watch?v=TBIkJSpqcFs

Rolls-Royce’s cars are designed for use as service cars, in corporate/government motor-pools. They’re essentially the ultimate Uber car. Rich individuals are actively discouraged[1] from buying them to drive themselves. (They’re actually kind of crap for driving yourself in!)

If a rich individual owns an RR model, it’s always because 1. they have retained the services of a professional chauffeur, and 2. the chauffeur has requisitioned one, to use to serve their client’s needs better.

Ask any ridesharing-service driver who worries about customer-experience — the ones that deck out the back with TVs and the like — what they wish they were driving.

A few example features these cars have:

• a silent and smooth ride allowing for meetings or teleconferences to occur in the back seat (this “feature” is actually achieved through many different implementation-level features; it’s not just the suspension. For example, they overbuy on engines — or rather, overbuild[2] — and then RPM-limit them, so that the engine never redlines, so that it’ll never make noise. They make the car heavy on purpose, so that the client won’t even feel speedbumps. Etc.)

• A set of automated rear seat controls... in the front. You know who’s coming, you set the car up the way they’re expecting, quickly and efficiently. This includes separate light and temperature “zones”, in case you have a pair of clients with mutually-exclusive needs. Yes, you can deploy a rear side window-shade from the driver’s seat (presumably in response to your client saying they have a migraine or a hangover.)

• An umbrella that deploys from the driver’s door. This is there for the chauffeur, so they can get out first, have an umbrella snap into their hand, and then use it to shield their client from the rain as they open the client’s door.

• A sliding+tinting sound-isolation window between the front and back, controlled by the client in the back; but then an intercom which the front can use to communicate to the back despite the isolation window — but only one-way (i.e. the front cannot hear the back through the intercom.) Clients can thus trust that their chauffeur is unable to listen into their private conversations if they have the isolation window up.

• A lot of field repair equipment in the boot. These cars even have a specific (pre-populated!) slot for spare sparkplugs; plus a full set of hand-tools required to get at the consumables under the hood. The chauffeur or their maintenance person is supposed to populate this stuff when it’s been used; such that the driver is never caught without this stuff in the field; such that—at least for most problems the car might encounter—the car will never be stalled on the side of the road for more than a few minutes.

Etc etc. These cars (from which most "limousines" are cargo-culting the look, without copying the features) are built from the ground up to offer features for chauffeurs to use to serve client needs; rather than to offer features clients use to serve their own needs.

Which is why these cars are expensive. They’re really not luxury items (as can be seen by the fact that they retain most of their value in the secondary market), but rather:

1. it’s just expensive to build a car this way, because these use-cases, and the parts they require, are somewhat unique;

2. the people who buy these cars — who are by-and-large not individuals, but rather are businesses/governments with a motorpool component — are willing to pay more to get something that can be used at sustained load for decades with low downtime and high maintainability; to serve many different clients with varying needs, changing configuration quickly and efficiently; and to offer a smooth and reliable set of amenities to said clients. In other words, motorpools buy these Rolls-Royce limos instead of forcing regular sedans into that role, for the same reason IT departments buy servers instead of forcing regular PCs into that role.

—————

[1] RR did build their Wraith model so they could actually have something to offer these people who wanted a Rolls-Royce car to drive themselves. But it’s really kind of a silly “collector’s model” — most people in the market for a luxury coupe wouldn’t bother with it. It’s just a halo product for gearhead collectors with RR brand loyalty.

[2] Rolls-Royce Motors, the car maker, is actually owned by their engine manufacturer, Rolls-Royce plc. RR plc exists to engineer and build engines and turbines for these sort of server-like high-reliability low-downtime SLAed use-cases, as in planes, rockets, power plants, etc. RR plc went into the car business for the same reason Tesla did: as a testbed and funding source for their powertrain technologies.


I admit that my experience with RR is about the same as my experience with deep space exploration (i.e. zero), so your comment was quite an entertaining read. Thanks ;)


My friend's brother runs a landscaping business purely off his iPhone. He asked me for advice on how to get his iPhone apps onto a laptop, like with an emulator or something. This was about a year ago, when the ARM Mac stuff was just rumors, and I mentioned that it wasn't possible.

Now I can say: hey, if you get one of the new MacBooks / Mac minis, you can run your iPhone apps natively. He will probably be one of their first customers.

I wouldn't be surprised if there was a huge untapped 'professional' market: not the traditional professional market, but the people who want something as simple and familiar as an iPhone, on a desktop.


Why bother with all that? Isn’t it easy enough to pair a keyboard and mouse to an iPad?


Only since the most recent version of the iPad software; before that, connecting a mouse was either impossible or went through a hacky accessibility system.


Interesting. Just out of curiosity, can you let us know which apps he uses? I am guessing a CRM is one of those.


I’ll take a stab at the problem: an “actual professional” in this context is someone who makes their money by using the computer.


See, I think that is very far from the definition a lot of others have in mind. By your definition, probably 95% of professionals don't need any more power than a macbook air.


That does include practically every white collar occupation at this point.


I've seen a screenshot of someone in a consultancy role, an obvious professional by the dictionary definition, trash-talking hard and mean on some software developers for demanding computers that can build software, which has no tangible merit in their eyes compared to PowerPoint or Excel performance. There might be something interesting in the divisions across different types of "Pro".


It seems like a reasonable definition to me as long as it's clear that "the computer" means that specific computer.

e.g. In this definition, if you use a PC at your office and a laptop at home for non-work stuff, your work PC is being used by a "professional" and your home laptop isn't.


I’d extend it to mean someone for whom the non-pro model is less sufficient (for example, they are using dev tools or video editing as opposed to Excel, Word and email).


That would exclude many programming positions, though, since really the old Air is perfectly fine for most SWE positions. Not optimal, but you can run iterm2, you can run a web browser, and you can run vim.


I don’t think it’s necessarily good that people say Word or Excel is non-Pro tasks fit for cheap and “lite” computers.


It’s a generalization, of course there are exceptions


> someone for whom the non-pro model is less sufficient

Given how powered up the Air has become, this is a thin envelope.


Gruber addresses this in his latest post [1]. Short answer: Pro usually means nicer, not necessarily for professionals only.

Think about it: the M1-based MacBook Air has better CPU and graphics performance than Intel-based laptops—including Apple’s—that are targeted at professionals.

[1]: https://daringfireball.net/2020/11/one_more_thing_the_m1_mac...


Of course there are specific niches, but it's pretty easy to answer in general: 'actual professionals' need some combination of two criteria - capacity and reliability/durability/usability. A device which can handle 'large' loads - be it CPU-bound, memory/storage-bound, or even screen size bound, and can be worked hard for 6-8-10-12 hours per day, 5-6-7 days per week without either the machine or the user breaking.

The point is not a professional using a computer, it's a professional computer user - someone who doesn't just use a computer to do their work, but someone for whom the limits of the computer are the limits of the work they can do.


To be fair, it's pretty outdated terminology.

It comes from an era when computers were so genuinely slow that almost anything - even laying out a page in PageMaker, or setting up a single-track song in a DAW - came with tons of render lag. In the very early days, even spreadsheets would take significant crunch time to run their calcs. This meant that most work done on computers, with anything larger than a "toy/hello-world" dataset, was going to be painfully slow. That's why they called it "professional" - because you were actually using datasets large enough to burden the beast: actually balancing a whole company's finances with a spreadsheet, rather than tallying up a little 10-item list.

I want to emphasize that "majoritarian" aspect of it - it wasn't just a few specific kinds, it was a majority of ALL kinds of work.

And that's changed.

We're now at a point where only a tiny minority of tasks done on computers actually have throughput limits based on the computer rather than the operator.


It's just a name, if you need long battery life, bigger screen, dedicated gpu, etc. then you choose appropriately. Who cares what the Pro means.


> What percent of the working population are video editors? Some tiny fraction, how did that become the default type of professional in the context of talking about computers?

They get outsized influence because they're the ones that make the shiny youtube reviews (using their video editing skills).


In my experience, every teacher I know for the last 6-8 months.


Here is my definition:

In this context, a professional is someone whose productivity is limited by the power of the computer. A developer compiling large code base on their computer. A person doing video editing. These are examples. And maybe “professional” is the wrong term, but I think that is what people are aiming for when they use that word in this context.


In my experience in IT support there are very few people where this is actually the case.

For most people, their ability to use the computer is the blocker. This also includes the ability to make sane file format/compression decisions if they work with graphics or video.


We say the 'pro' in MacBook Pro hasn't stood for professional-grade equipment in at least a decade; rather, it stands for equipment for someone who considers themself to be a professional.

People also regularly misplace their importance and prevalence in industry. For instance, you see more Linux than Mac in the big color and VFX houses.

It's important to trust and value your tools, which is how prosumers generally feel about their Macs, and they do make nice frontends for the computers that perform the actual work.


"Pro" is really just a marketing misnomer for "premium".


I, for example, use the computer to develop ERP systems. I always have around 10 Docker containers running with different versions of the ERPs. I also need to run SQL Server natively on my machine, plus VS Code, Chrome, a few Excel files, Teams all day long, and Visual Studio from time to time.


YouTube has grown by triple digits. Those video "content creators", from people just messing around to professional filmmakers, are a huge opportunity for Apple to secure mindshare with. Billions of views on YouTube. Billions of videos too.

I suspect the M1 powered Macs will be hugely successful and very useful for multiple types of users.


I don't see these being very useful for video editing with RAM maxing out at 16GB. Maybe 1080p, but not 4K.

(And the small storage, but that can be remedied with a NAS or other external storage, and then fast local scratch space only needs to be big enough for a couple of projects at a time)


They have shown impressive performance with 6K and 8K ProRes footage; even iPad Pros and the latest iPhones are pretty incredible when it comes to editing 4K video and exporting it faster than the latest MacBooks. With Thunderbolt it's also pretty easy to add high-performance external storage.


I assume there's some clever caching from fast storage + prerendering lower res footage to scrub through?

I was going off normal recommendations for video editing on x86 desktops/laptops. But it makes sense they'd go the extra mile on the software end to make it work on phones/devices with less RAM.
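
Proxy workflows are a common way this gets handled: the editor transcodes camera originals into small, easily-decoded proxy clips for scrubbing, and only touches full-resolution media at export time. A minimal, generic sketch with AVFoundation (not how any particular app does it; the source/proxy URLs are hypothetical placeholders):

    import AVFoundation

    // Sketch: write a 960x540 H.264 copy of the source clip to scrub against,
    // keeping the full-resolution original untouched until final export.
    func makeProxy(of sourceURL: URL, at proxyURL: URL) {
        let asset = AVAsset(url: sourceURL)
        guard let session = AVAssetExportSession(asset: asset,
                                                 presetName: AVAssetExportPreset960x540) else {
            print("Preset not supported for this asset")
            return
        }
        session.outputURL = proxyURL
        session.outputFileType = .mov
        session.exportAsynchronously {
            if session.status == .completed {
                print("Proxy written to \(proxyURL.path)")
            } else {
                print("Proxy export failed: \(String(describing: session.error))")
            }
        }
    }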


People are regularly doing 4K video editing on their iPhones and iPads.

In particular with Luma Fusion.


> What percent of the working population are video editors?

Most of the working population could get away with using basically any computer - if you're marketing a computer as 'Pro' you're talking to a specific subset of that. (Or you're just banking on it working as aspirational marketing and don't actually care about professionals)


”Pro” doesn't implicitly mean ”Pro Video Editor”.


I wasn't specifically referring to video editors (although the comment I replied to was), just anyone that needs/benefits from more powerful hardware to do their job.

I was just calling out the idea that professionals being a small portion of literally everyone is a useful argument to make. How many markets aren't?


> What percent of the working population are video editors? Some tiny fraction, how did that become the default type of professional in the context of talking about computers?

I think it’s because YouTubers like linustech etc. are the type of people who review these laptops; they live in a bubble where only “real” professionals are those who edit videos.


Pretty sure content creators are the core of the Pro series. It's not uncommon to have multiple design apps open at the same time, along with team apps, multiple browsers each running multiple apps, and multiple monitors.

And that's just web design.

Now open your eyes and look around your room. Everything you see was designed by someone, most likely on a Pro.


Unlikely. Most CAD users are on Windows.


Regardless of the "Pro" meaning, my 2014 MBP is significantly faster than my top-end 2018 MacBook Air. The MacBook Air was frustratingly slow. The most noticeable place was switching apps (like Cmd-Tab) and having it not keep up, but there were plenty of other places.

Pro to me = Provides more power. That's it.

Looking forward to the next MBP


I share an office with quite a few “professional users” I guess you’d call them. There are directors and 3D graphics people using mac pro towers and I’m sure they’d fit the definition, in that performance is a significant bottleneck for them and their output is tied to their hardware.


(We detached this subthread from https://news.ycombinator.com/item?id=25065664.)


I spend 1/3 of my workday waiting for Xcode to build.


> What in the world is an actual professional?

Well, that is an easy question - actual professionals are people involved in making, marketing and distributing porno


Dang, you do a very nice job of moderating here. Light touch approach, helpful guidance and enforcement when needed. Just wanted to let you know it's appreciated!


To be honest, I think that these comments evidence a failure of the HN website UI.

Wouldn't it be easier to just create standard pagination links at the top of the comments page, rather than manually forcing a mod to post this "list of pages" comment on each and every article?


I think dang would agree.

1) There is a More link at the bottom of the page; Dan started making these comments because the UI is too subtle and a lot of people miss it.

2) I believe that he’s said they are working on performance improvements so that all the comments can be shown on a single page to solve the problem completely.


Oh yes but there are two failures: (1) the UI doesn't display pages clearly; (2) the software is too slow to render entire threads. #2 is the real problem, and I don't want a temporary workaround to become too easy.

The real prize is getting the software fast enough to just render entire pages again, which is what it did for most of HN's existence.


Yes, similarly I’ve started to notice that other users never see that I’ve replied to their comments, since there are no notifications. A bit sad, since sometimes quite a bit of effort is put into the replies.




Oh, this one is great! I was previously running hnreplies through kill-the-newsletter.com!


Thanks to both of you


Hmm I wonder if dang's message might already be automated rather than manual?


Thank you! and sorry to proceed by detaching your comment, but it's best for this subthread not to be at the top. (I could collapse it, but think of the non-JS users...)


Mr. Gackle is in large part what has kept HN from degrading into just another Internet forum.


As a Raspberry Pi enthusiast, what *most* excites me about all of this is that all software has to run on ARM going forward.
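
One concrete consequence at the code level: architecture-specific branches that used to assume x86_64 now need an arm64 path (or a portable fallback). A trivial, generic Swift sketch of the compile-time check, just for illustration:

    import Foundation

    // Compile-time architecture check; code that hides SIMD or other
    // x86_64-only paths behind a flag now needs an arm64 branch as well.
    #if arch(arm64)
    let builtFor = "arm64"    // Apple Silicon Macs, iPhones/iPads, 64-bit Raspberry Pi OS
    #elseif arch(x86_64)
    let builtFor = "x86_64"   // Intel Macs and most PCs
    #else
    let builtFor = "something else"
    #endif

    print("Built for \(builtFor)")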


[flagged]


What a horrible comment, and a largely false one.

There's no record of Jobs being anti-vax (your comment is already the fifth Google search result for "Steve Jobs anti-vax", and the top four have nothing to do with him being anti-vax).

As for "eschewed almost all forms of modern medicine": completely false. He delayed surgery for his cancer - which was of a form that was known to be slow-growing and not especially lethal - for just 9 months, then decided to have conventional surgery. This is not "eschew[ing] almost all forms of modern medicine". It's just delaying for a relatively short period.

He later spoke of his regret about this delay to his biographer, so others would be warned against doing the same thing [1].

Even still, he lived for another 8 years after the diagnosis and was in good health for most of it.

Some experts dispute that his approach likely caused his death, and even suggest it might have extended his life [2].

[1] https://www.forbes.com/sites/alicegwalton/2011/10/24/steve-j...

[2] https://www.livescience.com/16551-steve-jobs-alternative-med...


Well I mean yes, that's what we are doing right now because they are unrelated

The fan-less nature is a great design pattern


> very treatable form of cancer

Nope, he pretty much died right on time in terms of median life expectancy for his form of cancer: https://pancreas.imedpub.com/prognostic-factors-in-patients-...


Nope, that study is about survival of patients with his form of cancer and hepatic metastases. As far as I'm aware the cancer in Jobs was detected long before hepatic metastases.


This analysis disputes that, indicating that the metastases not only may have been present, but also that waiting 9 months likely didn't impact anything. Of course it's based on probabilities, so his individual circumstances may have been different, but it's certainly sufficient to avoid the harsh determination that the only reason he died was due to his own hesitation for modern treatments. Personally I think that delay was misguided as well, but it looks very possible that the final results would have been the same:

https://centerforhealthjournalism.org/blogs/2011/11/10/what-...


Those seem unrelated to me?

– a pro-vax fan-liker


Back in the 1980s I used to spend a fair amount of work time in rooms with a VAX and can attest to their powerful and noisy fans. If you were a pro with a VAX, you had better have been a fan liker.


I’d like to say that I enjoyed first learning machine code on a VAX due to its orthogonal instructions and large fans, but it wouldn’t be true, because I learned on a MIPS machine.


I don't think VAX was the intended meaning in anti-vaxxer.


It’s a joke son, it’s a joke


I figured, but honestly wasn't sure. And either way, the VAX certainly had its share of anti-VAXers.

Personally I never much liked VAX myself, but that was primarily because my first experience of it was with VMS, and I'd previously used Unix. The difference was jarring.

Later in my career, I had no choice but to use VMS on an Alpha cluster, and grew to really appreciate it.


Go ahead, buy into that and before long Apple will lock the OS down so tight you will only be able to install software from the app store.

Apple, it's been good knowing you, time to move on.


I don’t see Apple going further on this front than what they’ve already done.

They don’t want to discourage developers from using their machines, because guess who works on macOS? Developers using Macs.


Been inside an apple store lately? They no longer need to care about developers.


That will eventually happen and they will claim 'security'. They'll make it harder to install 3rd party software. Indie devs are going to suffer because of this tight control.


Exactly. Their revenue model going forward is to have the millions who have passed, and will pass, through the Apple Stores downloading all their apps from the App Store. Kaching!


What about this new hardware has anything to do with the App Store?


Why do they call it Apple Silicon, when the core of the chip was designed by ARM and it was fabbed by TSMC?

Can't any other customer of ARM do the same thing, in principle?


Apple aren't using ARM's core designs, they're just licensing the architecture.


Are you sure about that? I didn't see this mentioned anywhere in the news, or on the M1 Wikipedia page.


Considering their A12 chip was designed by them, I think it's safe to say the M1 is too. Not only that, the first sentence of the Wikipedia article states that the M1 is designed by Apple.


In this day and age of Electron apps, I would say that RAM matters as much as or more than CPU performance, but seeing these results completes the story. The M1 is good and has the potential to power pro laptops.


I have seen an interesting point raised elsewhere: macOS on ARM will be able to run iOS apps (assuming the author doesn’t opt out). That means a lot of the uses of Electron or similar may no longer be needed - Slack, Spotify, etc. Some of course will still be Electron; there’s no native version of VS Code, for instance. But it could work out better than expected...
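
For developers shipping those iOS apps, there is a runtime check for this situation - a minimal sketch using only the public ProcessInfo API (iOS 14 / macOS 11 and later):

    import Foundation

    // Inside an iOS app: detect whether we're actually running on a Mac
    // (Apple Silicon, macOS 11+), e.g. to relax touch-only UI assumptions.
    if #available(iOS 14.0, macOS 11.0, *) {
        if ProcessInfo.processInfo.isiOSAppOnMac {
            print("iOS app running on a Mac")
        } else {
            print("Running on iPhone/iPad (or as a regular Mac app)")
        }
    }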


With the storage I/O right on the SoC, I can imagine swapping will feel especially snappy as well.

Although it's worth remembering the M1 is just Apple's CPU for their low-end machines. They haven't announced the high-end 13" & 16" MacBook Pro, iMac or Mac Pro yet.

Also, I know Electron has a history of high memory use, but right now my Slack client is running at 500MB and my VS Code windows (3 projects) at < 1.5GB.

And even though my current MBP16 is spec'd at 64GB RAM, I don't think 16GB would feel all that slow these days.


I agree, I would have probably bought the pro if it had a 32GB option.


Nobody ever saved a dime by switching to the apple ecosystem. Whatever real innovation lies within this silicon will be available to the normal PC ecosystem in no time.


My Mac products (MBP and iPhone) usually well outlast devices I've owned that were Windows or Android by a factor of 2x. So I'd say I definitely saved more than a dime by using them.


Said Android developers about the iPhone 4s...


And this test was very likely not a native test, but one run under Rosetta translation. Kudos to Apple. This is the highest single-core performance of all processors, including the highest-end desktop processors from Intel and AMD: https://browser.geekbench.com/processor-benchmarks


These are native. The ARM/Fat Binary release was today on the App Store.


Ah, I stand corrected. This was a native test.
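
Incidentally, a process can check at runtime whether it is being translated by Rosetta 2 via the sysctl key "sysctl.proc_translated" - a small Swift sketch, just for illustration:

    import Foundation

    // Returns true when the current process is running under Rosetta 2
    // translation, false when native (or when the key doesn't exist,
    // e.g. on older macOS versions).
    func isTranslatedByRosetta() -> Bool {
        var translated: Int32 = 0
        var size = MemoryLayout<Int32>.size
        if sysctlbyname("sysctl.proc_translated", &translated, &size, nil, 0) == -1 {
            return false
        }
        return translated == 1
    }

    print(isTranslatedByRosetta() ? "Running under Rosetta 2" : "Running natively")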


If Apple can create this magic with ARM, I wonder what they can achieve using RISC-V.


Probably much worse.


I see a lot of people debating the finer details of benchmarks and trying to argue whether or not Intel or AMD are still ahead.

But consider:

1. Apple just introduced their “starter” chip for everyday consumers. They would not WANT that chip to smoke their top-end Intel Macs, as it would cannibalize margins until the more powerful chips are ready.

2. This is a “two year transition,” with a team that has been shipping a meaningfully better chip every 12 months for about a decade.

From those two observations, I would expect that we’ll see an M1x or M2 chip in the next 12 months which nudges the 16” MBP to 20% or so better than what we’re seeing today, and 12 months after that the transition is completed with the introduction of the M3 series, where there’s an M3, M3x, and M3z for the low, mid, and max performance machines.

And when that happens, I expect the max-performance M3z is going to smoke anything else on the market.

This is not the first time Apple has had a “five-year lead” on the industry, and I wouldn’t be surprised if it takes Intel and AMD some time to catch up.

And frankly we should all be thrilled about that, because more competition and innovation is just going to accelerate our entire field. I can’t wait to see all the new stuff this will power over the next decade :)



