From the article's conclusion... "The M1 undisputedly outperforms the core performance of everything Intel has to offer, and battles it with AMD’s new Zen3, winning some, losing some. And in the mobile space in particular, there doesn’t seem to be an equivalent in either ST or MT performance – at least within the same power budgets."
This is the first in-depth review validating all the hype. Assuming the user experience, Rosetta 2 quirks, first-generation pains, and kernel panics are all in check, it's amazing. At this point I'm mostly interested in the Air's performance with its missing fan.
The unsung hero here is TSMC and its industry-leading 5nm fabrication node. Apple deserves praise for the M1's SoC architecture and for putting it all together, but the manufacturing advantage is worth noting.
Apple is essentially combining the tick (die/node shrink) and tock (microarchitecture) cadences each year, at least for the past 2-3 years. The question, perhaps a moot one, is how much of the performance gain can be attributed to each. The implication is that the % improvement due to the tick is available to other TSMC customers, such as AMD, Qualcomm, Nvidia, and maybe even Intel.
We'd have to wait until next year (or 2022), once AMD puts Zen4 on 5nm, to see an apples-to-apples comparison of per-thread performance. But of course by then Apple will be on TSMC 3nm or beyond...
Worth mentioning is the insane and disruptive technology making TSMC's 5nm possible. ASML and its suppliers have built a machine[1] that has a sci-fi feel to it. It took decades to get it all right, from the continuous laser source to the projection optics for the extreme ultraviolet light[2]. This allowed photolithography to jump from 193nm light to 13.5nm, very close to X-rays.
The CO2 laser powering the EUV light source is made by Trumpf[3].
Edit: More hands-on video from Engadget about EUV at Intel's Oregon facility[4]
Thanks for the great links / resources. Those machines look insanely complicated. I can just imagine how they get shipped to Taiwan and elsewhere (they apparently cost $120M each in 2010 [1]).
A bit off-topic, but I've always found it amusing that a form of lithography, of all things, has fundamentally powered our tech revolution for decades. Especially after a girl I knew learned lithography in an art class; watching her do it in a primitive form inspired me to read about its history in art and professional uses (signage, etc).
That, combined with vacuum tubes (which also rank high up there in the revolution), makes two things I wish to one day learn how they really work. Not just surface-level nodding along.
They say “decades in the making”: when did it first become viable, and then how long did it take to master the process and become confident enough to mass-produce consumer goods with it? I'd love to see a timeline with milestones.
Apple has long cultivated second-source partners for cost reduction. But most of their CPUs are made by TSMC right now.
I'm wondering whether Apple will find another semiconductor fab partner (they tried building the A9 at both Samsung and TSMC, but the Samsung version seemed to have heat issues) or stick with TSMC.
That conclusion is quite misleading, in my opinion.
They write "outperforms the core performance" and the keyword here is "core". What they mean is that if one had a single-core Zen3 and a single-core M1, then the M1 would win some and lose some.
But in the real world, most Zen3 CPUs will have 2x or more cores, and can thus be 2x to 4x faster in multithreaded workloads.
So what they mean to say is that they praise Apple for having amazing per-core performance. But it kind of sounds as if the M1 had competitive performance overall, which is incorrect.
The Zen3 processor that they are comparing it to is the 5950x - the fastest desktop processor with a TDP of 105W. The entire system power of the M1 mini under load was 28W.
What the article is pointing out is that the mobile low-power version of the M1 (as the mini is really just a laptop in a SFF box) is competitive with the top-end Zen3 chip; the benchmark gap is smaller than 2x.
We don't know yet how far the M1 scales up; e.g. a performance desktop will presumably have a higher TDP and probably trade the integrated GPU space for more CPU cores. But we don't know if/how this will translate into performance gains. Previous incarnations of the Mac Pro have also used multiple CPUs, so it is not yet clear that "in the real world, most Zen3 CPUs will have 2x or more cores".
> The Zen3 processor that they are comparing it to is the 5950x - the fastest desktop processor with a TDP of 105W. The entire system power of the M1 mini under load was 28W.
This is a very misleading statement. They primarily used the 5950X in single-core tests, and in those tests it doesn't come remotely close to 105W. In fact, per Anandtech's own results[1], the 5950X CPU core in a single-core load draws around 20W.
Take the M1's 28W under a multi-threaded load: that's probably somewhere in the neighborhood of 4-5W/core for the big cores (single-core was ~10W total, ~6W "active"; figure clocks drop a bit under multi-threaded loads, and the little cores almost certainly draw much less power, particularly since they are also much, much slower). In multithreaded loads the per-core power draw on a 5950X is around 6W. That's a _much_ closer delta than "105W TDP vs. ~28W!" would suggest.
M1's definitely got the efficiency lead, but it's also a bit slower, and power scales non-linearly. It's an interesting head-to-head, but the 5950X's 105W TDP number is fairly irrelevant in these tests; it's not really playing a role. It's just about as irrelevant as the fact that the 5950X has 4x the big CPU cores, since it was again primarily used in the single-threaded comparisons. Slap 16 of those Firestorm cores into a Mac Pro and bam, you're at ~60W. Let it run at 3.2GHz all-core instead of the ~3GHz it appears to run now, since you've got a big tower cooler, and that's ~100W (6W/core @ 3.2GHz per the Anandtech estimates, times 16). That would be the actual multi-threaded comparison vs. the 5950X if you want to talk about 105W TDP numbers.
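To make that scaling arithmetic concrete, here's a quick back-of-envelope in Python, using the rough per-core estimates from this comment (not measured values):

    w_per_core_3ghz = 4    # est. W per Firestorm core at the ~3 GHz all-core clock
    w_per_core_32ghz = 6   # est. W per core pushed to ~3.2 GHz (Anandtech's estimate)

    print(16 * w_per_core_3ghz)   # ~64 W: 16 big cores at ~3 GHz ("you're at ~60W")
    print(16 * w_per_core_32ghz)  # ~96 W: approaching the 5950X's 105 W TDP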
Critically though the M1 is definitely not a 10W chip as many people were claiming just a few days ago. You're definitely going to see differences between the Air & 13" MBP as a result.
> This is a very misleading statement. They primarily used the 5950X in single-core tests, and in those tests it doesn't come remotely close to 105W. In fact, per Anandtech's own results[1], the 5950X CPU core in a single-core load draws around 20W.
It would seem that the switching of AMD chips in the various graphs has caused some confusion. I was referring to the "Geekbench 5 Multi-Thread" graph on page 2. This shows a score of 15,726 for the 5950X vs 7,715 for the M1, which is about 2x. I do not see any notes that the benchmark is using fewer cores than the chip has available.
I don't follow your argument for why it is misleading to characterize the 5950X as a 105W TDP part in this benchmark. Could you expand a little on why you believe this is misleading? The article that you have linked to shows over 105W of power consumption from 4 cores up to 16.
Edit: I put the wrong page number in the clarification :) Also, I see later in the linked article that the 15,726 score is from 16C/32T.
If you're referring to the single time the 5950X's multi-threaded performance was compared, then sure, the 105W TDP is fair. But you should also be calling that out, or you're being misleading: the majority of the 5950X numbers in the article were single-threaded results, and it did not appear in most of the multi-threaded comparisons at all.
But in multi-threaded workloads it also absolutely obliterates the M1. Making that comparison fairly moot (hence why Anandtech didn't really do it). It's pretty expected that the higher-power part is faster, that's not particularly interesting.
It's really not clear what you are trying to argue here. The number of single-threaded benchmarks is irrelevant to this point: when the M1 was compared to the 5950X in a multithreaded comparison:
* The 5950X was 2x faster
* The 5950X was using 4x the power (28W system vs 105W+ for the processor).
* The M1 only has 4 performance cores, the 5950X has 16.
Even counting the high-efficiency cores as full cores, the M1 with 8 cores provides 1/2 the performance of the 5950X with 16 cores, i.e. it implies that even the lower-performance cores are providing as much as the 5950X cores.
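A quick per-core sanity check on that claim, using the Geekbench 5 Multi-Thread scores quoted above (a rough sketch that counts all 8 M1 cores as equal):

    m1_score, r5950x_score = 7715, 15726
    print(m1_score / 8)       # ~964 points per M1 core
    print(r5950x_score / 16)  # ~983 points per 5950X core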
That is certainly not the 5950X obliterating the M1, as the article stated (and was the quote that started this thread) the M1 is giving the 5950X a good run for its money. If you think otherwise could you provide some kind of argument for why you think so?
The 2x number you're claiming was only for geekbench multithreaded, which was the only multithreaded comparison between those two in the Anandtech article. You're trying to make broad sweeping claims from that one data point. That doesn't work.
Take for example the CineBench R23 numbers. The M1 at 7.8k lost to the 15W 4800U in that test (talk about the dangers of a single datapoint!). The 5950X meanwhile puts up numbers in the 25-30k range. That's a hell of a lot more than 2x faster. Similarly in SPECint2017 the M1 @ 8 cores put up a 28.85, whereas the 5950X scores 82.98. Again, a lot more than 2X.
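Working out the ratios from those quoted scores (the 5950X Cinebench figure below is the midpoint of the 25-30k range, an approximation):

    m1_cb23, r5950x_cb23 = 7833, 27500
    m1_spec, r5950x_spec = 28.85, 82.98
    print(r5950x_cb23 / m1_cb23)  # ~3.5x
    print(r5950x_spec / m1_spec)  # ~2.9x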
This is all ignoring that 2x the performance for 4x the power is actually a pretty good return anyway. Pay attention to the power curves on a modern CPU, or to what TSMC states about a node improvement: for 7nm to 5nm, for example, it was either 30% more efficient or 15% faster. Getting the M1 to be >2x faster is going to be a lot harder than cutting the 5950X's power consumption in half (a mild underclock will do that easily, which is how AMD crams 64 of these cores into 200W for the Epyc CPUs, after all). But nobody cares about a 65W highly-multithreaded CPU either; that's not a market. Whatever Apple comes up with for the Mac Pro would be the relevant comparison for a 5950X.
Calling me obtuse doesn't add anything of value to the discussion. I was pointing out the multithreaded benchmark in response to the claim that there were none. Read kllrnohj's response that is the sibling to your comment to see how to make a point effectively.
That's not the obtuse part. The obtuse part is ignoring all the other multicore tests on the same uarch, then saying the 5950X compares unfavorably while ignoring that the 5950X's single-core Geekbench score doesn't scale like any of the other tests (it's much lower relative to the rest), then taking the one test where the 105W TDP is actually relevant as representative of all the other comparisons and claiming the M1 is generally being compared to a chip with a 105W TDP, when in all multicore tests except two it gets compared to the 4800U (which beats it with half the power consumption).
At the scale of the article, it's not getting compared to the 5950X in anything but single-core performance, with one exception. And the claim that it's generally being compared to a 105W TDP part is also false: in the multicore comparisons, where total TDP makes sense, it's getting compared to parts with half or 150% of its TDP, and losing.
In reality, it's getting compared to a 6-7w core, and to 15-45w chips.
Yeah, I find it incredibly tiring how everyone said "it's both faster and more energy efficient" when the benchmarks have shown something far more obvious and boring: you can make ARM chips that are just as fast as x86 chips, and they will end up consuming roughly the same amount of power during heavy calculations but much less at idle. The fact that ARM is king in idle power consumption isn't a surprise; it's ARM's bread and butter.
All the wishful thinking was wrong but that doesn't mean ARM is doing badly.
I wasn't trying to insult you; I was just trying to say that that interpretation was so far off that it seemed to come from a biased reading, which I'm a bit tired of in these threads, where people are acting like it's the best thing since sliced bread when it's obviously just another competitive chip.
That being said, I probably should've phrased it differently. I wasn't aware that word had such a connotation in English; in my mother tongue it just means a narrow interpretation.
An author who deliberately switches which chip to test in different versions of the same test in order to paint the desired picture isn't much different from one who literally makes up the numbers. The whole article ought to be flagged and deleted.
The M1 has a lot of great things about it and I'm excited to see what it can bring. Intel needs to be humiliated by something great, as a reminder that they have been crap for a long time.
But... other than ST performance, multi-core CPUs don't scale linearly. At 16 cores, core-to-core communication takes a hit that isn't nearly as bad at 4 cores.
> This is a very misleading statement. They primarily used the 5950X in single-core tests, and in those tests it doesn't come remotely close to 105W.
That’s true but keep in mind this is the power going into the AMD CPU only. The power number measured for the mini was the entire system power pulled from the wall, so that 28W included memory, conversion inefficiencies and everything. That’s crazy.
Actually a significant amount of power, maybe around 20 W, is consumed by the I/O die, which consumes a lot because it is made in an old process.
In 2021, when laptop Zen 3 is introduced, it will have much better power efficiency, being made entirely in 7 nm.
Of course, it will still not match the power efficiency of M1, which is due both to its newer 5-nm process and to its lower clock frequency at the same performance.
> which consumes a lot because it is made in an old process.
And also because it's doing a lot. Infinity fabric for the chiplet design isn't cheap, for example. A single-die monolithic design avoids that (which is why that's what AMD did for the Zen2 mobile CPUs).
TDP is a useless marketing figure. Anandtech measures the AC power consumption of the Mini, which is a good measure, but it is not comparable against CPU TDP, because TDP has at best a tenuous relation to actual power draw [0]. A better comparison would be ARM Mini vs Intel Mini AC power draw, plus a similarly spec'd AMD system for good measure. Unfortunately, unless I missed something, the article only measured AC power draw from the ARM Mini.
The M1 is certainly more power efficient than Intel or AMD for the average user, but as far as performance per watt, we cannot make any judgements with the data we have.
Yes. I'd like to see a decent-ish Ryzen APU such as the 3400G up against one of these as well.
I did notice that the Cinebench score for the M1 is only about 10% higher than my Ryzen laptop (T495s), which is laughable as it's a 3500U and the whole thing cost me £470 new!
Yeah, forgot about that. Everything else being equal (ostensibly), the M1 Mac Mini is $200 cheaper than the crappy Intel i5 Mac Mini, more if you upgrade the Intel CPU.
As an owner of a decked out 2019 Mac Mini, in hindsight I made a shitty purchase decision.
No matter what the purchase, I always force myself to stop comparing for a bit of time after the purchase. By the time I pull the trigger, I have shopped and compared as best I can. Inevitably, as soon as I complete the sale, one of the places I was looking will have lowered the price or released the next-gen.
I bought what I thought was a 2020 Mac Mini in April direct from Apple. The only significant difference on paper was that the base model came with 128GB for the 2018, 256GB for the 2020.
As it turns out, that's true: About This Mac says "Mac mini (2018)" even for the 2020.
I replaced the 8GB base RAM with 32GB of aftermarket and have been thrilled with it. But then I was coming from a 2018 MBP 4-Thunderbolt with only 8GB and the fan noise with it drove me nuts.
I got the i3 because I thought the CPU wasn't the weak point, the RAM was. And so far, for me, that's held up.
I actually just bought an Intel Mac Mini to run MacOS VMs with using ESXi. I expect it will be quite a while before stable Mac VM support is available for Apple Silicon Macs.
Yeah, you did. Why would you buy something you don't need? It doesn't even matter whether the Mac Mini with Apple Silicon existed, or whether from now on the only computer Apple sold was a Mac Mini.
Okay, let's be serious. You bought the x86 Mac Mini because you wanted an x86 Mac Mini, not because you wanted to make perfect purchasing decisions with infinite foresight. A lot of software is broken on the M1 Mac Mini, so you made the right decision at the time. It's entirely possible that you would have regretted buying the M1 Mac Mini.
There’s this word that we use in poker: “resulting.” It’s a really important word. You can think about it as creating too tight a relationship between the quality of the outcome and the quality of the decision. You can’t use outcome quality as a perfect signal of decision quality, not with a small sample size anyway. I mean, certainly, if someone has gotten in 15 car accidents in the last year, I can certainly work backward from the outcome quality to their decision quality. But one accident doesn’t tell me much.
I looked up multi-core Cinebench R23 and the AMD 2990WX comes in at 33,213 vs. 7,833 was given for the M1 in the article.
Apple markets this as a "Pro" device for professional video editing. That's why I believe it is fair to take their word and compare it against my other options for a professional video editing rig. And in that comparison, which Apple has chosen itself, the M1 comes out woefully inadequate at a mere 24% of the performance.
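For reference, the 24% figure falls straight out of the quoted Cinebench R23 multi-core scores:

    m1_score, tr2990wx_score = 7833, 33213
    print(m1_score / tr2990wx_score)  # ~0.236, i.e. roughly 24%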
Of course, for a notebook, the M1 is amazing. But I feel irked that Apple and Anandtech pretend that it's competitive with desktop workstations by having such a misleading conclusion about it being on par with Zen3 - which it clearly isn't.
> Apple markets this as a "Pro" device for professional video editing. That's why I believe it is fair to take their word and compare it against my other options for a professional video editing rig.
That's ridiculous. Threadripper has 8 to 16 times as many cores, runs on hundreds of watts of power and such a CPU alone costs the same as several Mac Minis. Them claiming you can use it for video editing doesn't mean you can expect that a 1.5 pound notebook will measure up to literally the biggest baddest computer you can buy.
He knows it's ridiculous, but you're going to see a large group of people who hate Macs take this turn of fortune quite poorly. My hope is that it really puts pressure on Intel to start firing on all cylinders, but who knows? A MacBook Pro 16 with higher clocks and more GPU cores would be a really hard system not to buy.
I wonder what is going on at Intel. A resurgent AMD has more or less surpassed them in CPU offerings already, and now so has Apple. They have fallen so far. Can it just be institutional complacency? I don't get it.
> Apple markets this as a "Pro" device for professional video editing.
No they don’t. Claiming something is capable of video editing and marketing it as a video editor are two very different things.
The 3 Macs introduced this week are Apple's lowest-end devices, 2 of which still have 'big brother' Intel versions for sale today.
If you’re truly ‘irked’ that the lowest-end, lowest power, first release devices aren’t comparable in performance to the highest end desktop chips, then you’re putting the wrong stuff in your coffee.
> Apple markets this as a "Pro" device for professional video editing
No, they don't. Because Apple keeps raising the ceiling on low-end devices like the 13-inch MacBook Pro, in many respects it's more performant than a high-end laptop or desktop Mac from just a few years ago.
Wait, wait, wait, you might be saying, the MacBook Pro is pro. But as I’ve written numerous times, pro, in Apple’s product-naming parlance, doesn’t always stand for professional. Much of the time it just means better or nicer. The new M1 13-inch MacBook Pro is pro only in the way, say, AirPods Pro are. This has been true for Apple’s entry-level 13-inch MacBook Pros — the models with only two USB ports — ever since the famed MacBook “Escape” was suggested as a stand-in for the then-still-missing retina MacBook Air four years ago.
You can do toe to toe best of the best speeds and feeds...
But I think the broader strategic outlook is: yes, the M1 loses on a few benchmarks, but it gets into the ballpark of monster rigs costing multiples of its price and power. Is this not the whole picture of the Clayton Christensen disruption curve?
The other point is - Apple's Logic and Final Cut software are probably optimized for the M1, and they can likely achieve much of the capabilities of the monster AMD rig for a fraction of the cost/power budget.
They'll be 2-4x faster in some multicore tasks. CPU benchmarks specifically break out single-core performance as a separate metric because, as of 2020, a lot of everyday work is single-core bound (stuff like 3D graphic design, video editing, or compiling large codebases is not "everyday work").
Not to mention that even multicore tasks don't usually scale perfectly linearly, due to overhead. And the biggest Ryzen processors are usually in desktops, a market Apple Silicon hasn't entered yet.
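The non-linear scaling point is essentially Amdahl's law. A minimal sketch (the 10% serial fraction is an illustrative assumption, not a measured figure):

    def speedup(cores, serial_fraction=0.10):
        # Amdahl's law: the serial portion of the work gets no benefit from extra cores
        return 1 / (serial_fraction + (1 - serial_fraction) / cores)

    print(speedup(8))   # ~4.7x, not 8x
    print(speedup(16))  # ~6.4x, not 16x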
For most everyday work, a Raspberry Pi is fast enough, so that's not even an argument. And the Raspberry Pi 8GB is 10x cheaper; there are mini desktops starting at $250 that will do everyday work.
If you throw in "everyday work", then we have passed the need for new chips altogether.
That's a bit of an overstatement. Booting from SSD instead of SD card gives an enormous uplift in performance. I have yet to hear of a Pi 4 that couldn't overclock to 2GHz, a pure uplift of about 33% over the stock 1.5GHz. Moving to 64-bit PiOS gives another double-digit jump in performance too. Not record-breaking, but not unusable either.
Passively cooled first generation macbook air chip isn't quite as fast as an absolute monster grade PC Ryzen chip on its 3rd generation. Color me shocked.
I think you're just trying your hardest to convince yourself that these chips aren't competitive.
IIRC the biggest Zen3 mobile CPUs are 8-core, so they'll have at most 2x the cores. And that's ignoring the low-power cores on the Mac, which probably still count for half a core each.
AMD is likely to be faster in multicore overall, but not by much it seems.
There are no announced or released Zen 3 mobile CPUs at this time. You are correct in that the Zen 2 mobile CPUs currently top out at 8 cores, and up to about 54W TDP - the top CPU is the "45W" Ryzen 9 4900H which can be configured up to about 54W by the OEM. We might see Zen 3 mobile early in 2021.
That new MacBook Pro replaces the low end of the Pro line which had a slower CPU and only 2 ports.
I would expect that, when Apple brings out their next iteration of chips, they would target the higher end of the Pro line with more cores and ports along with higher RAM capacities.
On performance, ya I agree. Although, they basically doubled the battery life over the previous generation so that alone might be worthwhile for some users.
I think we'll see an additional higher end Macbook Pro 13" when they start to release Apple Silicon models with discrete GPUs.
Single thread performance still matters a lot for personal computer use. It’s not everything, normal people do benefit from some degree of parallelization, but there’s a reason all of the major PC chip designs continue to push single thread performance even as that becomes more difficult. Most end users see more benefit from those improvements than from more cores.
The M1 has a big.LITTLE design with only half of the cores being performance-oriented, so if there is a gap, it would almost always be in Ryzen's favour.
That’s not how CPUs actually operate though, outside of some very narrow tasks. If you actually regularly max out your CPU, you already know that and wouldn’t touch a Mac mini no matter what chip is in it.
Chrome with 20-50 tabs, and my Intel MacBook can be used as a blow dryer. Assuming Chrome's power needs don't change, it seems the only way an M1 can control overheating is to throttle down, slowing everything down. Curious how M1 machines feel during day-to-day usage.
The reviews I saw said that using Chrome gets good battery life, but if you want great battery life you need to use something like Safari.
I switched to Safari a few years ago, and I couldn't be happier. Chrome's performance and battery life are atrocious. I only use Chrome when I need something specific from it.
I saw that comment in a couple different places. Presumably Chrome is running through Rosetta 2, whereas Safari is native to the M1. I imagine once Chrome is available natively, performance will be somewhat better, though probably still not as good as Safari.
On the new machines, yes, but none of the people you read about before today had AS machines; they were all comparing Chrome and Safari on Intel, so it's unlikely the status quo will change once Chrome is native on AS.
Actually, I think it will. The M1 chip accounts for a much smaller proportion of the laptop's total power draw (compared to, say, the LCD, SSD, etc). If the new MacBook Air idles at 6W and runs Chrome at 10W with no fan, people are barely going to notice. That's a big difference compared to an Intel machine running Chrome at 35+ watts.
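Back-of-envelope on those wattages (all hypothetical figures from this comment, not measurements):

    air_idle_w, air_chrome_w, intel_chrome_w = 6, 10, 35
    print(air_chrome_w - air_idle_w)    # +4 W over idle: barely noticeable, stays fanless
    print(intel_chrome_w - air_idle_w)  # +29 W: fans spin up and the chassis gets warm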
1. Yes, it does. I use AdBlock Pro.
2. Yes, it does. I've been using Safari as my primary browser as a Rails developer for at least the past decade and have always found the developer tools at least adequate. I don't use the developer tools on other browsers heavily, so I don't know if I might be missing something.
[Edit] I'm wrong about this: "AdBlock Pro no longer exists for Safari (in the form of an "official" extension)." It still exists, as "AdBlock Pro for Safari" developed by Crypto, Inc., but was not listed on Apple's extension site for some strange reason: https://apps.apple.com/us/story/id1377753262
The listed adblocker is: "AdBlock for Safari" developed by BETAFISH INC, which offers in-app purchases including "Gold Upgrade" which "unlocks" some basic features that gorhill's uBlock Origin already has for every other browser.
I have no trust in an ad blocker extension (which has access to any site you visit) published by an entity in the cryptocurrency domain. An ad blocker is the best way to hide malware that steals money.
I used to run Safari on my mac and it was the best thing in the world:
- It integrated perfectly with the OS
- It saved battery like heeeeell
- It integrated natively with Airpods and media keys
- It clearly had worse performance than Chrome and a couple of incompatibilities, but it was perfectly acceptable
- I could run most of my extensions, namely uBlock Origin, HTTPS Everywhere and Reddit Enhancement Suite
- The native PiP (before it was on any other browser) was AMAZING
I had been a diehard Chrome user since it came out (with the comic book!) on Windows, Linux and macOS. I got fed up with how slow it was becoming and how it was running my fans all the time.
Unfortunately, two things happened that made me quit Safari:
- I found some weird bug wherein whenever I typed an address in the address bar it would always slow down to a crawl
- Apple deprecated and abandoned old extensions, so I lost most of my very valuable extensions, with emphasis on uBlock Origin and Reddit Enhancement Suite. I could live with a different adblocker (I saw AdGuard at the time), but I could not live without RES. No way.
So I left Safari and have since moved to Firefox. It seems almost as fast as Chrome, has nice integrations and features, but it's no Safari. It still drains my battery and has issues. Firefox has since progressively added PiP (even if it's not native) and support for media keys, which was a godsend, so that's nice.
I'd like to get back to Safari. It would be amazing. Do you know if there is any way for me to get what I used to have back? uBlock Origin (or something with compatible filter lists and custom rules) and Reddit Enhancement Suite?
Safari is migrating to a new system of extensions that will make it much easier to port from Chrome. However, I understand it still requires Xcode (which non-Mac folks can't run) and a developer license (which not everyone wants to pay for). I hope to bring my Chrome extension to Safari, but honestly it's not a priority because most people who install extensions are not running Safari (when you consider that most people are not on Mac, and a large chunk of folks on desktop Safari are there because it's the default — and therefore would not likely install extensions).
I particularly like Chrome profiles. I have a few profiles with their own bookmarks/histories/open tabs/etc. For example, one of my profiles is "Shopping". Another is "Work" and yet another is "Social Media".
Context switching profiles at a macro level - as opposed to intermingling work/shopping/social - is beneficial to me.
When I switch over to "Shopping", I have my tabs on whatever purchase I'm researching open. I can drop the whole project for a few weeks and resume it later right where I left off. None of it can bleed over into my "Work" profile. I like the separation. Helps keep my head clear.
Firefox has something like this called containers. The best example is one for Facebook, where any call to any Facebook server only works in the Facebook container. It has similar setups as well: Work, Home, Commerce, etc.
I switched from Chrome to FF as my daily driver, and miss being able to have multiple simultaneous instances with different proxy configs (via a --proxy-server="socks4://localhost:####" command line flag).
FF, as far as I know, does not have a way to do this as easily; you have to spin up different profiles and click through each one to configure it.
I still have chromium around for primarily this reason.
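For what it's worth, this can be scripted around. A minimal sketch (assuming a `chromium` binary on PATH; the profile names and ports here are made up): each instance needs its own --user-data-dir, otherwise Chromium hands everything to the already-running instance instead of honoring the new proxy flag.

    import subprocess

    proxies = {"work": 1080, "personal": 1081}  # hypothetical SOCKS ports
    for name, port in proxies.items():
        subprocess.Popen([
            "chromium",
            f"--user-data-dir=/tmp/chromium-{name}",      # separate instance state
            f"--proxy-server=socks4://localhost:{port}",  # per-instance proxy
        ])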
Not quite the same. I want to set-up and tear-down entire macro groups of windows and tabs while keeping others active.
Opening my 'Shopping' profile brings up windows and tabs from where I left off. Same with "Social". When I don't want distractions, I just close those profiles. No notifications, no updates, etc. I like the separation.
Simple Tab Groups [0] + Multi-Account Containers [1] are my workflow for that exact case. Simple Tab Groups hides the tabs based on the group you're in and the Multi Account Containers can keep them segmented from a contextual standpoint.
I can't stand Chrome either and so I've been using these two together for about a year now I believe. Using a naked version of Chrome is jarring given my browser feels like it fits how I use it being setup like this.
I don't use Chrome, so I don't know what Chrome profiles are like. But Firefox also has profiles. Launch it with the -P option to open the profile manager and create additional profiles, besides the default one. Each profile is an entirely separate browser state: settings, tabs, cookies, storage, cache, etc. You can use them simultaneously. (This has existed for as long as I can remember... since 0.9 and probably back to Netscape?)
Speed mostly, though the last time I tried out Firefox seriously was over a year ago, it was noticeably much slower on pages (ab)using lots of javascript.
I have a 2017 MBP (base, no TB) and found that Chrome made my fan rev like crazy. A friend told me about Brave and I tried it out. Now my fan only kicks on when I'm doing serious work. I know some folks don't like Brave for various reasons, but I love it because my MBP is almost always silent.
Wow, I never knew about the print view. It's way more readable, and the lack of comments section makes it quite fast to load despite the very long page.
The tiny drop-down menu in the default view is very hard to discover and quite annoying to click on (many other review sites, like Phoronix, have similar annoying drop-downs).
Yes, but with a big huge button "NEXT PAGE" or something like that. Look at the number of comments here that didn't even notice there's more content other than the 1st page.
As for kernel panics, with iOS likely sharing most if not all of its kernel code with macOS, I'd be surprised if Apple hasn't had a macOS build for iPhone hardware since before they released the first iPhone.
> At this point I'm mostly interested in the Air's performance with its missing fan.
I think the clear story here is that the Air will definitely be slower than the rest over time. This isn't a 10W SoC, clearly, so it can't run at its best while being passively cooled.
How it behaves when throttled will be interesting for sure though.
I'm seeing reddit reports of gaming throttling more quickly than that, which would make sense, since the GPU is going to sit at or near 100% pretty easily with a game while you're still seeing a decent CPU load.
The MacBook Air is probably the worst machine you can think of for gaming. Even without the M1 you won't have a great time. It's the perfect machine for students, because it can last all day in a browser and Word.
Yeah, me too. I expect there will be another release sometime next year with an updated "package". It is cool that they put it in the old Air chassis, but I think I can wait for a better camera and overall package. By then, all the benefits and issues that come with it will be clearer.
I wouldn't mind a plastic edition of a MacBook with the M1. Aluminum, and metal overall, isn't the best for everyday use. I prefer the "warmer" feel of plastic, like a ThinkPad for example.