Man, the M1 Macs are the first piece of tech in a while that I’ve felt myself actively pining over. They just seem… really goddamn fast. Everybody talks about it so glowingly.
Hoping I can pick one up soon. I figure the Pro is probably the right move for a developer workload, although I do like the size of the Air.
I got an Air last spring, and yeah--it is really quick. Though I will say, my workload didn't see the sort of absurd speedup that some reviewers rave about. I also still don't have a comfortable dev experience on the M1, because the world just isn't ARM-centric (yet). It's a great secondary computer, but unless you're an Apple developer, you should expect friction.
I recently built a 12th-gen Intel workstation for my main dev machine, because x86 is still the path of least resistance for the type of work I do. It's got a ton more cores, a higher memory ceiling, and faster storage than the MacBook. I'd choose this over a Mac Studio, too, because I built it for a third of the price. I love my Macs, but I'm enjoying Linux (again, for the type of work I do). Different strokes, etc.
> Though I will say, my workload didn't see the sort of absurd speedup that some reviewers rave about.
Same here. It’s a very efficient laptop, but the way reviewers repeated the Apple marketing hype made me expect something even faster.
It’s almost as fast as a modern AMD/Intel desktop, which is really amazing in a laptop form factor. But if you’ve been using anything other than last-gen Macs to compare against, the M1 feels more like catching up to current performance standards as opposed to the media narrative about being faster than anything else out there.
Same goes for the graphics. It’s really impressive for a laptop but the slides about the Mac Studio performing like a 3090 are a joke. At least the media has started to call Apple out on some of the exaggerated GPU claims.
> But if you’ve been using anything other than last-gen Macs to compare against, the M1 feels more like catching up to current performance standards as opposed to the media narrative about being faster than anything else out there.
I don't know. The M1 Macs (we have two of them now) accomplish what Intel/AMD spent 2013+ trying and largely failing to do - maximizing battery life without compromising on performance. The suckers are fast. They're not the fastest pieces of hardware out there, but they're certainly within striking distance.
They do this without having a bulky chassis, multiple fans running constantly, or needing to be tethered to a power cord. They've been available for close to a year and a half now(?) and, unless I've missed something, Intel/AMD still don't have a viable competitor.
Now, I completely agree that the software environment still hasn't completely caught up. But for my use case that's ok - give me a working shell terminal, web browser, and word processor, and I'm happy. I'm also lucky enough that I have a very well equipped remote system that I can push my actual work to. If I had to use my laptop for development, it may very well be a different story, but that's the trade off you make for being an early adopter.
> Same here. It’s a very efficient laptop, but the way reviewers repeated the Apple marketing hype made me expect something even faster.
You make it sound like the reviewers mindlessly regurgitate Apple’s material. Actual proper reviews like Anandtech’s have numbers very close to Apple’s figures.
Now, you get diminishing returns in a lot of daily tasks so perceived performance is not proportional to benchmark scores (a 2x speedup when you open an application is not very meaningful when it took 0.5s anyway). But it does not mean that the reviews were inaccurate.
> But if you’ve been using anything other than last-gen Macs to compare against, the M1 feels more like catching up to current performance standards as opposed to the media narrative about being faster than anything else out there.
More like leapfrogging. Which is fine, AMD and Intel are doing it all the time and the world is not ending for either company.
The narrative is that it is faster than anything else at equivalent power consumption. Sure, you can get overclocked i9s that are faster, but not in a laptop with a ~1 day battery life.
> Same goes for the graphics. It’s really impressive for a laptop but the slides about the Mac Studio performing like a 3090 are a joke. At least the media has started to call Apple out on some of the exaggerated GPU claims.
We’ll see. As usual, the truth will be in the numbers.
It’s not completely absurd that something larger than a 3090 with vastly better memory bandwidth could be competitive. The M1 is going to be worse at things like ray tracing, most probably, but the 3090 itself isn’t any more magical than the M1.
I feel that the M1 still has the reality distortion field working for it. But on the other hand I just bought the new iPad Air (my original Air still works fine but I thought 10 years was enough). The fact that a decent pro-laptop-class or mini-desktop-class CPU fits in a tablet with no cooling and gives me over 10hrs of useful work is kinda crazy.
> I also still don't have a comfortable dev experience on the M1
Interesting. What is it that you dev?
I'm not an apple developer at all, but my dev experience with both Java and Go has been pretty frictionless.
My Air is my secondary machine, but that's only really because my workstation is a 16-core/64GB AMD machine and does our product build and test run in about half the time the Air does - which itself is half or less the time previous laptops took. But in the few months that this machine was in transit, the Air did a great job.
> I'm not an apple developer at all, but my dev experience with both Java and Go has been pretty frictionless.
Same. Java, Go, and web dev is my every day. Not exactly a fair comparison, but my first M1 MBA replaced a 2017 MBP, and it cut my Java test suite times in half. 16GB wasn't quite enough RAM for all I do, so now I'm on a 64GB M1 MBP.
I've thought about building a Ryzen desktop and just running linux, but I don't want to split between two machines and need to be mobile.
I'll chime in here. I have the same experience (Julia, R, JS, and a little bit of C/Python). I have an M1 laptop that I use constantly, but I'm lucky enough to have the opportunity to remote into my working environment.
If I had to use my M1 laptop as my main development machine, I would think very hard about trying to pick up a T14 or something similar.
Library compatibility issues, in particular with Julia and R.
It's gotten much better over the last year, but I still run into enough problems that I try to keep all of my actual development work on a remote machine.
A few years ago I was using R on OpenBSD. OpenBSD somewhat aggressively removes deprecated features as compared to Linux and macOS. It was really eye-opening to see how bit-rotted some parts of the scientific / numeric ecosystem are. I ended up submitting some trivial PRs deep in the stack for use of system calls that have been deprecated but not removed from Linux for a decade or more.
I ran into some of these. Many npm projects that use native plugins to node don't (or didn't) work on M1. Had to spend a bunch of time finding alternatives. For my needs though things are now working.
Have you tried AMD? I don't know if they're suitable for your work, but I personally think AMD is great. Works well with Linux. They're definitely more open than Intel.
And I felt (perhaps incorrectly) that you get more horsepower for your buck with AMD.
I always buy AMD, but for another reason altogether. The processor market could very easily become a single-player monopoly. Maybe less so today, but that was certainly the case for a very long time. Thus, I always try to buy from the second- or third-place player in the market. Diversity in hardware, just like software or biology, is good for the environment as a whole.
Love me some AMD, running them on all my machines currently.
However, on desktop, where power usage isn't an issue, Intel 12th-gen performance has pulled ahead of AMD. Theoretically on laptops too, but I suspect they have the common problem of powerful Intel CPUs in laptops: the cooling just can't hack it and they throttle, making it pointless to get top-end Intel. So I believe AMD is still better on laptops presently.
I don't disagree, but it is worth pointing out that Alder Lake released Nov 2021, one year after Zen 3 (Nov 2020). It'll be interesting to see what happens in the fight between Zen 4 and Raptor Lake. Right now feels like an awkward time to build a PC.
Add in ATX 3.0, ATX12VO 2.0, the new ATX12VHPWR connector, the I_PSU% feature, the beginning of PCIe 5.0, and the beginning and relative immaturity of DDR5…
… right now I say to wait until the end of 2022 for a build. Let Zen 4 and Raptor Lake come out. Then realize Zen 5 and Meteor Lake are likely another very real step above that the following year, though at least you’ll be able to take your new PSU, memory and GPU along with you.
Never mind the reality that Apple is just getting started.
Granted, you can’t wait forever. I did a build last year, and now get to see how things shake out as all the new stuff comes available over the next few years.
That would be a departure from the norm for Intel.
Do you have any sources? I was unable to find any info on Meteor Lake and board compatibility.
That said, AM4 is a dead end socket, so building right now is a coin toss between the 12900k and the 5950x as far as price and performance. I'm not sure the 13900k is going to be much of an upgrade.
I just bought a 12400, mostly because it was 15% cheaper than the 5600x and it comes with an iGPU. You're right that it's unclear whether 13th gen will be much of an upgrade, or whether 14th gen will even be possible on this socket.
> It's a great secondary computer, but unless you're an Apple developer, you should expect friction.
I moved 100% of my python web development to my M1 Max. Everything is running native. Zero downside. I run Debian ARM in Parallels VMs for my test suite, and deploy to both Graviton and Intel AWS instances.
The real beauty of the M1 architecture isn’t the raw speed, but the computing power per watt. This allows Apple to throw a decent bit of wattage at it for solid performance, and very little wattage for perfectly acceptable performance (and amazing battery life). But it’s not blisteringly fast.
Would you be willing to share the details of your PC? I’d really like to build a custom Linux machine with the latest Intel CPUs, but I’ve never built a PC before and I don’t know which components are Linux-friendly.
I did no specific compatibility research, because it’s more than likely going to work out of the box with distros like Ubuntu and Fedora, and you can do some work to get things going on distros with less hand holding.
It’s an i9-12900k on a Z690 motherboard (tons of these, I got a Gigabyte model). I got a terabyte of nice NVMe and 64GB of memory; made it out the door for about $1600 with a case, taxes, and everything. I didn’t buy a graphics card because I don’t need to drive anything other than some text editors and browser windows, and it’s really not a buyers’ market right now. I’ll probably pick up something in a year or so.
If you’ve never built a computer, I highly recommend it! It’s really not difficult, though it can take time and it can be frustrating. Really rewarding when you boot it for the first time, though.
This is extremely helpful, thank you! This is pretty much the sort of config I'd want (fast CPU and lots of RAM). I'm glad to hear you've had a good experience with this so far! :)
Historically network and graphics drivers (and maybe sound?), and sleep mode were the things to check for Linux support. These days, all three major graphics manufacturers have Linux drivers, and I haven't heard any problems with networking for a long time. Sleep is probably still an issue... And most everything is already included for you (networking, sound, graphics) on the motherboard. So just check how good Linux support is for the motherboard you'd like to buy (and any PCI cards you'll add, if any).
Someone can correct me if I’m mistaken, but you should be fine with most parts for most Linux builds. The main thing I would say is if you go with a dedicated graphics card go with Nvidia as their driver support is much better (though I’ve used AMD cards with no issue on Linux also). Outside of that and maybe the rare motherboard feature that doesn’t work quite as you expect (and noting that pretty much all first party software monitoring/control tools are built for windows) you should find little issue with making a build suitable for Linux.
I have two PCs at home, one with an AMD card and one with an NVidia card. I'd characterize the NVidia card on Linux as "Windows-level hassle" (i.e. small, but present) while I think about the AMD card about as often as I'd worry about USB mouse compatibility (i.e. haven't given it a thought).
EDIT: small edge to AMD for graphics cards, but both are fine if using a distribution like Ubuntu
Yeah, like I said with my own anecdote I also didn't experience any issues w/ AMD graphics cards. Actually after searching a bit maybe I had that backwards and it was actually Nvidia that was the problematic one. I haven't had problems with either, but have read about problems and I thought it was AMD, but perhaps I was mistaken.
That's really good to know! I assumed that it was generally hit-or-miss. I'm relatively new to using Linux (Ubuntu 20.04) as a daily driver and would like to keep things as turn-key as possible. At least with laptops, you seem to only get that if you're selective with which configuration you purchase.
Laptops are definitely more hit or miss given the proprietary nature of each manufacturer's layout. Because PC building is so common on desktops, you don't have to worry about idiosyncrasies to nearly the same degree. If this is your first build you should just expect that some things escape your purview and thus you may have to troubleshoot a thing or two, but there's nothing fundamental about Linux and desktop hardware that precludes a turn-key experience.
Awesome. I’m okay with a bit of tweaking to get things going, I just want to avoid a consistently poor user experience or instability. It sounds like a sensible custom Linux PC build should be (relatively) smooth!
I agree with several siblings that if you're building a desktop the only thing you really need to think about with compatibility is graphics card, because the driver situation is complicated.
For best compatibility (and lots of other reasons) I recommend Fedora - it usually has a much more up to date kernel than most other distros.
Yeah, I'm completely satisfied. The M1 Pro is definitely the best computer for the money I've purchased, despite it costing an arm and a leg to get more RAM and SSD space.
For my workload (music production), and despite still having to run Ableton through Rosetta, it can handle roughly twice as many tracks as my 2018 MBP (which luckily is _not_ something I purchased). All my previous "problem" projects now run perfectly. And this is all while running completely silently and barely getting warm compared to my 2018 MBP, which runs fans full blast and gets super hot if I so much as glance at Ableton. Can't say I miss the hiss of the fan when I'm trying to mix tracks.
If the developers of plugins I use all the time ever get their ducks in a row and finally get around to updating them to support native ARM (that's the current limiting factor), I imagine it will be even better.
Kind of embarrassing that it's taking some devs so long.. Steve Duda managed to get Serum updated within a couple months of the original M1 release.
> Kind of embarrassing that it's taking some devs so long.. Steve Duda managed to get Serum updated within a couple months of the original M1 release.
This is seriously overstating things.
Steve has spent years continuously optimizing his code base. It was already clean and relatively free of cruft compared to code bases of similar age.
If you are developing on something like JUCE and don't have an extensive amount of optimized assembly or AVX instructions to deal with, yeah, porting is fast. Likewise if your suite uses a common framework (ala MeldaProduction, FabFilter, uHe, etc.).
NI's code base is not clean, and they have the organizational problem of developers coming, developing a product, and then leaving, orphaning the code base. Brian Clevinger is doing his own thing now with Rhizomatic, so good luck ever seeing Absynth native. But NI has maintained active development of Kontakt because it is their cash cow, and thus it was their first native release. But Reaktor? Massive? Massive X? Nowhere in sight.
Further, like many developers, they have the added problem of VST3, which is a royal PITA to get right. Since Steinberg is trying to pull the rug out from all native VST2 development on M1, many larger developers like NI don't want to risk the potential lawsuits. They also have no choice but to push out native VST3s, since Cubase 12 does not support native VST2s.
So there are a lot of economic and technical pressures at work that you have to take into account.
I think I definitely misspoke here - I really didn't mean the devs so much as the companies that employ them. I know full well how difficult such a transition could be. Native Instruments has a ton of resources, and you'd think at the very least their most recent "flagship" synth Massive X would have seen an update by now.. which definitely points to organizational cruft more than anything. Most of the plugins I use that are created by smaller teams have already been updated.
The fact remains though that a loooot of producers use Macbook Pros, and I'm assuming many will be upgrading to M1s within the next couple years. I'm genuinely curious when the pressures will actually force these large organizations to take the transition seriously.
This is why I am very hopeful for the future of CLAP. It will give all developers a common format to target and test that can be wrapped in VST2/3/AU/AAX relatively easily. Since it defines an ABI rather than just an API, it is not subject to the vagaries of any one company's proprietary idea of how plugins will work. This will also make porting to new architectures easier.
VST2 used to be the standard development and testing target, which was then wrapped to other formats. But all of the developer workflows built around it are now at risk because of Steinberg's asshattery. I know for a fact that this has delayed a number of plugin releases, as developers have to put time into refactoring their code around a new standard target. Thank God for u-He and Bitwig leading the way here. I've tested the Surge XT CLAP build in Bitwig and it just works.
> Kind of embarrassing that it's taking some devs so long.. Steve Duda managed to get Serum updated within a couple months of the original M1 release.
Is it? Steve Duda has no choice, his lunch is getting eaten by Matt Tytel and the folks at Native Instruments. If he wants to keep selling his $150+ plugin, he'd better stay on the cutting edge.
For everyone else though, I find it hard to blame them. Overnight you get a complete architecture change that you need to buy test hardware for, test-compile for ARM, find out what breaks, source new ARM-compatible libraries for what did break, re-write some/all of your codebase to account for these changes, profile the performance difference, re-evaluate if the native version is worth it, then set up a testing and CI pipeline for a second architecture. Since most of these plugins are written with the notoriously fragile JUCE framework in C++, I can see why it's not just an overnight task to get it working on Apple Silicon unless you drop everything and make it your top priority.
> For everyone else though, I find it hard to blame them. Overnight you get a complete architecture change
Except it didn’t happen overnight, Apple announced it 6 months in advance. There has been affordable hardware available for porting since june 2020, almost 2 years ago. If a developer hasn’t gotten around to it by now, I doubt it is a priority for them, let alone their top priority.
Duda.. getting his lunch eaten by Native Instruments? Vital I understand (a sort-of-free Serum-like synth is certainly a competitor).. but Native Instruments hasn't done anything interesting in the VST space in years. Odd to throw that in there.
My point is that companies like Native Instruments have vastly more resources and developers than one individual developer, and still very few of their supposedly flagship products are M1 compatible. It's been a year and a half, and Massive X still isn't updated. You'd think they would toss at least one developer at it. I guess that maybe points to organizational rot more than anything (I guess Massive X generally being an outdated flop is also evidence of this), but the point stands.
I'm currently testing an M1-based Air to see if I'd like to purchase something similar. I can confirm this for sure; the project files that made my current, not-even-old Windows laptop with similar stats choke are running like a dream even on this Air.
BTW, if you have a license for Live 11, version 11.1 forward has a Universal build, so it can run natively on M1 Macs rather than through Rosetta. [0]
The parent commenter's problem sounds like it's third-party plugins rather than Live itself.
FL Studio also has a build for ARM, but it runs each non-ARM plugin in a wrapper which somehow uses an entire CPU core. After loading more than a couple of plugins I see something that is otherwise unheard of on my M1 Macbook - 1000% CPU usage and fans at full speed.
So I stick to running the app via Rosetta - higher-than-average CPU usage in general, but it means my plugins behave.
Also, you can't compare I/O work on macOS to anything else; its fsync() is effectively a no-op. When the file cache can stream to disk at a delay, I/O always looks super fast.
We've compared Docker performance between Intel and M1 and the M1 falls quite a bit behind in response times. We can't do exact comparisons due to CPU differences, but for Docker I can't say it is in a good state right now (yes, we enabled the Docker filesystem beta enhancement for M1). This is for x86 containers at this time.
It is much better than it was not so long ago due to actually working at all, and it appears there is much more room for optimization, so the comparison is not over yet. But as of right now, it isn't that great.
That is a strange way to put it. Rosetta 2 (and orig.) is not virtualization, it is emulation, though there is a half-decent argument that Rosetta 2 (and orig.) is not emulation either -- it is recompilation.
It’s more complex than that and the specific point being made is accurate - Rosetta 2 doesn’t support virtualisation in the sense that it cannot perform translation in the context of virtual machine.
If you run an x64 Mac binary on the host, Rosetta (simplified) translates the code to arm64 then runs it. If you run a Linux VM and execute an x64 Linux binary or container inside it, Rosetta can’t help you. You’ll be running inside a big emulator - in fact, Docker Desktop uses QEMU under the hood I think.
More accurately, Rosetta doesn't support kernel extensions, and it will not run software that virtualizes x86 machines.
But it's not as if you can't run a virtual machine native to the platform. Qemu works on M1, Parallels works on M1 (ARM VM only) and there is a preview for VMWare's Hypervisor that will run even Windows 11 on the M1, but the VM software is native code (but not necessarily the system in the VM), so Rosetta isn't used.
It’s worth noting explicitly what others have said - if you are doing what you describe, you are running your Linux containers under QEMU.
The performance of this will be terrible; there is likely little room for optimisation and you should not expect this to get any faster in the future.
Using aarch64 containers changes the game entirely. Common base images like Ubuntu are already multiarch so will just work out of the box. But this has obvious downsides and won’t be a suitable solution for everyone.
It doesn't make much of a difference. macOS Docker performance is heavily limited by really poor volume mount performance.
In one of our projects the benchmarks look like this to run our test suite for a web app using volume mounts:
- 2020 Intel MBP (10th gen CPU, 32GB of memory, SSD): 37 seconds
- First generation M1 MBP: 31 seconds
- WSL 2: 3 seconds
The WSL 2 box is an old workstation I have from 6-7 years ago with a 3.2GHz i5 CPU, 16GB of RAM and one of the first SSDs. All in all it was about $800.
The WSL 2 box is so fast for Docker because the volume performance is pretty much as good as native Linux. It's really fast.
'In the real world' it isn't just Apple switching to ARM. I've been moving AWS workloads to Graviton for the cost savings. More and more default containers have both x86 and ARM available.
Well, then you're probably using fairly exotic stuff, because most widely used base images nowadays have an ARM build, and for the rare cases where this is not true (usually internal company stuff) you can build it yourself.
It's also unfair to use MacOS instead of an operating system with proper container support, but sometimes "fairness" doesn't matter as much as "how the machine is going to be used"
It's actually pretty impressive how much faster Docker builds containers on M1 running in aarch Linux VM vs on Docker Desktop. But to your point about "how the machine is used", I've encountered very few devs who care.
I had a top of the line 2019 MBP docked in a dual-display setup. A YouTube video, a video call, any sort of slightly more demanding workload and the fans would be on full blast. Switch over to the battery and it’d last 2 to 4 hours for typical web dev work.
I picked up an M1 Pro last week and it really has lived up to expectations. It’s silent no matter what I throw at it, far faster, and after 5 hours of use I still have 76% remaining.
The new MBP is as game-changing as everyone claims.
It is the first laptop I can use in my lap for development without frying my balls in a long long time, I expect a jump in the fertility of software engineers in the next years.
I fully believe the M1 is game-changing, but that sounds like you're comparing it to an atrocious alternative. I have an HP Spectre x360 from 2018. The fans don't spin up unless I'm running heavy computation, and on battery, after 4 years, it still lasts about a workday.
I agree entirely. My point of reference is a laptop that was, in hindsight, a total disappointment. I'm bought into the Apple ecosystem though and it was the best Apple laptop that money could buy 3 years back.
I did look at Windows alternatives at the time but ultimately decided that I wanted to remain on Mac.
Sitting on a 2019 MBP now, I agree - this computer has been a disappointment. I'm tied to my charger for any days with more than 3 hours of planned use time.
> I picked up an M1 Pro last week and it really has lived up to expectations. It’s silent no matter what I throw at it, far faster, and after 5 hours of use I still have 76% remaining
That's what I expected too. I was really disappointed that after a two-hour Teams call (browser) the battery was at 50%. I know Teams is a dumpster fire, but still, my multi-year-old Ubuntu XPS 13 did better than this.
I wound up in a bit of analysis paralysis figuring out what to replace my old 2015 13” MBP with. Like a lot of others I looked at the Pros, and went down the “well if I’m spending x, then I might as well spend a little extra to get so much more” route.
Long story short, I ended up buying the smallest Air available with no upgrades, and I’m so happy that I did. Unless you’re doing some serious heavy lifting it’ll be fine, and at its small price point it’s not something I worry about breaking. That last part may be an ADHD thing, but it does add to the experience of using a laptop around the house where you also have a cat and a 3 year old.
I was on the fence about the 8GB of RAM, but it frankly outperforms my work T14s ThinkPad, which has 32GB, for any sort of programming I do on it.
The only real downside is the lack of the MagSafe charger, but it holds power so well that it’s rarely in the charger when I’m using it. So my advice would be to get the smallest air, unless you know why you’re getting something more powerful.
I had one of the original M1 Pros. Upgraded to a fully loaded 16” Max. I notice no difference in performance. The extent of my “hardcore” use is having dozens of Safari tabs open.
If/when MacBook Air gets the M2 redesign (hopefully with the super nice screen), that will be the no brainer purchase of the decade.
I have moved from a 13" M1 Air to the 16" M1 Pro only because I wanted a bigger screen for traveling. I feel some buyer's remorse because it was more than twice as expensive, and I don't really care about the extra speed or memory (yet?). It's also really heavy, and I don't love its looks.
What I'd absolutely buy is a 16" MacBook Air, like an Apple version of the LG Gram.
Edit: For comparison, the 16" LG Gram with a discrete Nvidia GPU is exactly as light as the 13" M1 MacBook Air, and supposedly it has great battery life. And all those ports...sigh...
You wouldn't have found much performance difference downgrading either, if that's your entire use case. Why did you upgrade a machine that was already good enough?
> but it does add to the experience of using a laptop around the house where you also have a cat and a 3 year old.
I got a 2019 MBP and my 3 year old spilled water on it. The repair cost more than an M1 Air. My next computer will be a cheap air I think. I won't feel bad upgrading early or if something happens to it.
You only need the Pro if you have some very parallelizable workloads (large compilations, or heavily threaded long-running stuff), or you need RAM > 16GB. In regular use or development I know I don't really touch all the extra cores developing web stacks. When I need to run VMs for mobile development I kind of miss having more RAM available; I feel like I have to close all my extra applications. An Air/Mini with 32GB would be perfect for me.
> You only need the Pro if you have some very parallelizable workloads (large compilations, or heavily threaded long-running stuff), or you need RAM > 16GB.
Or you want a large screen, or IO, or more than one external display, …
There’s a billion reasons to want/need the pro (unlike the max).
Yes yes yes. I think Apple is scared to cannibalize the 16" Pro, although it's already quite differentiated. Pro is quite focused on media workloads rather than pure performance/dev workloads. I can see some dev workloads needing the pro, but most would be great on the base chip.
With the level of performance on the M1, I'll take the battery life and less weight on the Air over more cores. I hope the next gen base chip (M2?) will have more display output and more RAM, and the MBAir line up will have a 15/16" size (with the Pro speakers).
Lots of corporate users here who love their Macs but don't need the media features. 13" Airs are lovely, but plenty of us would go for a bigger screen if it was an option.
I bought the Air as I had a bit of a deadline for getting a new machine and could not wait for the 14 or 16 inch models to be released.
The Air is silent, it's powerful enough for fairly serious dev workloads, it has a battery that lasts longer than I've ever seen before and it can run multiple displays if you get a compatible displaylink dock.
If you like it, there's little reason not to get one AFAICT. Make sure to grab the 16GB of RAM though!
I'm not a 'product' person... but the MacBook Air M1 is a GAME CHANGER, with no fan whatsoever and very little heat generated. It brings me joy every day to use it, and that rarely happens with a physical device.
It's the first 'laptop' that I can comfortably have on my lap for hours on end.
Just a heads up with this: if you want one, make sure you get an off-the-shelf config if you can. The turnaround times for replacements and repairs are likely to be bad on non-stock configurations.
This pushed me to the 14” MBP with 16GB and 1TB. You can literally get it swapped out same day at apple stores if anything goes wonky.
If you don't need more than 16GB of RAM the Air will more than likely be more than good enough for you.
The only reason I returned my Air and waited for the MacBook Pro I'm typing on now is I made the mistake of loading some games on the Air - and one game I love suffers due to my mod addiction and can really use more RAM.
And that's something that many people don't seem to realize - if you buy from Apple directly they have a two week no questions asked return policy. Obviously the machine has to work and you need to return all the parts, but if you aren't sure you could always get an Air and you have two weeks to make up your mind if it's sufficient or not.
Man, it was SO hard taking it back and waiting 9 months before the MacBook Pros came out. So be careful - once you taste an M1 Mac it's almost impossible to go back to anything else :) At least if you decide the Air isn't for you after all, you'll only have to wait at most a few weeks.
I have the air and the pro. The air is plenty fast but can be difficult when you have large RAM consumption use cases such as doing front end workloads. It also can have some visual lag if you’re plugging into a larger monitor.
The pro, though very fast, can also hang when leaving too many apps open. But I did leave way too much running. It feels blazingly fast.
You should really think about the workloads you will run day to day. And also consider whether you’re docking it. If you are, the Pro is a much safer bet, and the screen brightness and size are less relevant.
So the tl;dr is that the 64GB of RAM is the big differentiator. The other is plugging monitors into the Pro.
Anecdata but I haven't seen any display lag from the Air M1. 4k + internal display on the 8GB works fine. 4k + internal + iPad Sidecar works fine on the 16GB.
I'm also doing FE workloads, 8GB is nearly there but the swap is still fast. I still have some swap usage on the 16GB but mostly because of Chrome tabs rather than workloads.
I have a bug where, when I'm using Sidecar (and I think sometimes my hotkey remapper triggers it too, somehow), the WindowServer just spirals up to a whole CPU core and like 4GB of RAM... if I restart the process, all is good until the machine sleeps, or a display reconnects.
I got a MacBook Pro with the Max CPU and it’s made such a difference in my day-to-day Kotlin work, and the battery life is stunning considering how high the CPU usage is when working in IntelliJ.
While this is AMD-focused, you can scroll down to see "Server Shipments by CPU Type" and get a feel for how Intel is doing currently, and how quickly the market share changes. (Pretty good, and slowly.)
Of course, not being on top in efficiency should light a fire under the market leader.
> Given i develop on a ARM Mac, i’d like to deploy to an ARM server.... Amazon have been trying to solve this issue (and many others) by developing a line of ARM servers call Graviton."
Just remember, containers don't just run everywhere. Your team will need to devote time to understanding multi-arch images and builds. If you're a python/data science team, I hope you have someone on your team that actually knows how to debug these problems. Otherwise you'll be using your $3k Mac as a thin client to an x86 resource to do your development work.
What are the reasons for deployments to be more complex by default between machines?
Incredibly large and complex programs are deployed on user devices all the time, and most of the time all that's needed is a "Next, Next, Next" or a copy to a folder.
Yeah, sure you may need a specific runtime to be installed but that can be achieved through a deployment script too. Remember installing DirectX when installing a new game?
Sure, there are cases where a specific app may need a specific version of it and another app may need a completely incompatible version, but this is more of an edge case.
Containers are definitely very powerful in some cases, but I find it very distasteful to make EVERYTHING into a container. It's not like one app is deployed on a server with a custom version of Linux made by North Korea and the other one on a Nintendo. Most of the time, it's the exact same environment everywhere, or it is a few commands away from being exactly the same.
Containers can run anywhere you have emulation (Docker, Podman, etc. support it) but it’s not like it’s hard to build them, either - that’s the point. I was curious how much of a problem this would be but in practice it’s been something I’ve needed to care about only a couple of times (e.g. Java 8 has some race conditions under emulation causing the JVM to panic).
That's a good point. I have started building x86 images on my M1. It's slooow. I don't have precise benchmarks, but think 3-5x longer build times. As far as running the application, well, if you're CPU bound, you're not gonna have a good time.
Oh, sure — basically for me it ends up being the case that most of the containers are either native or ephemeral enough that I don't care, and the ones I notice are a quick build away. It's definitely not perfect — I'm hoping GitLab CI lands integrated buildx support soon — but it's been pretty manageable and I think it's a fair tradeoff for everything else being faster & twice the battery life.
In my mind, "decent screen" is retina-quality display, which limits things. I doubt a cheap Chromebook will have one. The Chromebook Pixel has a 13" retina display option, but why not get an M1 Air and get all the functionality of a real, mainstream OS. (Unless Chromebooks have expanded from minimal Linux using a browser?)
Personally, 15" is about the minimum screen I'll accept if I'm going to do actual work on it, and at that size I expect the choices to be sparse to none. Certainly not cheap.
Yea it can be picking your own poison in some respects. My preferred workflow is something like spinning up a dedicated dev VM within your provider, install a standard image, then use a local text editor that's saving files over ssh or sftp.
I have found virtually no one has problems with "well I can't get this to run on my machine and here are the n undocumented quirks". Everyone is using a homogeneous environment with local IDE help.
This is in many respects similar to the idea of GitHub codespaces and other remote dev environment tooling that has started to pop up.
While I haven't used any of them, I think the general principle makes a lot of sense. You can also develop on very light local hardware, while taking advantage of an elastic resource depending on what you're doing.
“ Lets start with the M1 Mac mini. It’s a nice baseline as it’s the Mac i was developing on up untill recently. The Mac mini completed the tests in 45.13 seconds across 8 cores.
…………
The AWS Graviton instance (c6g.metal to be specific) delivered a score of 14.63 seconds across its 64 cores. Around 68% faster than the Mac mini.”
“Faster” would be a matter of rate.
So if we have times and we know D = R*T, we’d have:
(D/T2)/(D/T1) as the speed up of R2, which is just (T1/T2), or 3.08. If we were to say how much “faster” c6g.metal is than a Mac Mini, wouldn’t we say 208% rather than 68% if the speed of the Mac Mini is our baseline? Am I missing something?
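To spell out my arithmetic with the article's numbers (45.13 s on the Mac mini vs. 14.63 s on the c6g.metal):

$$\frac{R_{c6g}}{R_{mini}} = \frac{D/14.63}{D/45.13} = \frac{45.13}{14.63} \approx 3.08$$

so the Graviton box runs the suite at about 3.08x the Mac mini's rate, i.e. roughly 208% faster (or 308% of the baseline rate). The article's 68% figure is the reduction in wall-clock time, $(45.13 - 14.63)/45.13 \approx 0.68$, which is a statement about time saved rather than about rate.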
I'd actually go with "208% faster than the M1", since the "baseline" of the M1 is 100%. Alternatively, "308% of the rate of the M1" since that explicitly includes the baseline.
I would say 2 times faster or 3 times the speed of the Mac mini. Faster implies a delta already; I can say X is 0.5x faster than Y and it means its speed is 1.5x that of Y.
Common usage doesn’t mean it’s correct or clear. Yes, a lot of people say 2 times faster but it isn’t. The whole point of this comment thread is just that.
I highly recommend this blog post, which makes the same point among many (great) others: https://sled.rs/perf.html
> When we speak about comparative metrics, it is also important to avoid saying commonly misunderstood things like “workload A is 15% slower than workload B”. Instead of saying “faster” it is helpful to speak in terms of latency or throughput, because both may be used to describe “speed” but they are in direct opposition to each other. Speaking in terms of relative percentages is often misleading. What does A (90) is 10% lower than B (100) mean if we don’t know their actual values? Many people would think that B is 1.1 * A, but in this case, 1.1 * 90 = 99. It is generally better to describe comparative measurements in terms of ratios rather than relative percentages.
> The phrase workload A is 20% slower than workload B can be more clearly stated as workload A was measured to have a throughput of 4:5 that of workload B. Even though many people will see that and immediately translate it to “80%” in their heads, the chances of improperly reasoning about the difference are lower.
You can't really deduce anything without knowing more about the test suite: How much network and file system does it do? How well can it use the full core count?
Faster is used both in the context of time spent and rate. In sports, for example, we say someone was x seconds faster across the course all the time. There really is no definition that ties faster only to rate, so I would say this is just pointless nitpicking. It's like complaining that people use higher/lower when talking about price.
It's worth highlighting that the M1 series has efficiency and performance cores. For our test suite, the efficiency cores struggle and can tank the whole test run.
We set our node test suite (jest) to use the number of performance cores, not the total count (which is the default).
That actually sped up the test process, particularly on the M1 which is 50/50 efficiency and performance cores. It's less pronounced on the M1 Pro, but still a benefit.
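For anyone who wants to replicate this, here's a minimal sketch of a jest config that does it. It assumes macOS exposes the performance-core count through the `hw.perflevel0.logicalcpu` sysctl (that's how it looks on our Apple Silicon machines, but verify on yours) and that your project can load a TypeScript config (e.g. via ts-node); everywhere else it falls back to jest's usual default.

```ts
// jest.config.ts - cap jest workers at the number of performance cores.
// `hw.perflevel0.logicalcpu` is assumed to report the P-core count on
// Apple Silicon Macs; on other machines we fall back to "all cores minus one".
import { execSync } from 'node:child_process';
import os from 'node:os';

function workerCount(): number {
  if (process.platform === 'darwin') {
    try {
      const pCores = parseInt(
        execSync('sysctl -n hw.perflevel0.logicalcpu', { encoding: 'utf8' }).trim(),
        10
      );
      if (Number.isInteger(pCores) && pCores > 0) return pCores;
    } catch {
      // Intel Macs / older macOS versions don't have this sysctl; fall through.
    }
  }
  return Math.max(os.cpus().length - 1, 1);
}

export default {
  maxWorkers: workerCount(),
};
```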
99% of reviews of the M1 talk about video editing and music production. As a programmer I care more about RAM, storage, a non-reflective display, a lightweight laptop, a long-lasting replaceable battery, a great keyboard and trackpad, and self-service repair. Give me a mediocre CPU with all of these, call it Z1, and I'm a happy programmer.
Video editors and music producers are using far more ram and storage than virtually any programmer out there, and loads of them are traveling with their work and draining batteries faster than anyone else.
Asking them whether a laptop’s hardware is good is like asking a power lifter whether a gym has good equipment.
The point is the M1 is extremely fast at video editing because it has domain-specific accelerators (DSAs) designed for those workloads. So using them as a proxy for general-purpose workloads is just bad.
Not a lot of music producers actually use that much RAM (speaking as someone who has run a multi-machine music-making setup before). It's only the large sample libraries that do, and you need multiple instances running to really eat it up.
All our devs need more than my current music-making setup, and I am thinking of going to 128GB because I do dev work around video.
Does music actually require that much RAM? How big is an uncompressed music stream? Bigger than 1 or several VMs running at the same time, a Java IDE, a database browser, etc.?
I would argue that from a technical perspective a programmer would actually care less about most of those features than a video editor or music producer.
The reason you care about those features isn’t because you’re a programmer specifically, it’s because you happen to have a personal preference for those things.
- Why should a programmer care about the display really at all? Text can easily be displayed with very high contrast. Matte or glossy shouldn’t matter very much.
- Why would a programmer ever need more RAM than an Adobe Creative Suite or Blender user? I do all my programming on a system with 8GB of RAM and my biggest memory bottleneck is Chrome.
- What would a programmer need a lot of storage for? Text?
- Everyone needs long battery life, and the M1 is industry-leading in that regard for basically all tasks. Why does a programmer need to remove the battery if it lasts beyond a full work day?
- Self-service repair is important for people who know how to do that. While I agree that computers should be built with repairability in mind, and that Apple scores poorly in that regard, programming itself is a software skill, not hardware. You don’t have to know how to repair or build a computer to be a programmer, and if you don’t know how to do it then your ability to do your own repair is less important. I.e., if you have to pay someone a labor cost make upgrades and repairs, you might not prioritize that aspect of the system.
One of the best programmer laptops on the market is the MacBook Air. You can regularly buy it for under $1000, the battery goes for 12+ hours real world usage, it has no fan and stays cool, it’s got a great keyboard and trackpad, it’s faster than laptops that cost hundreds more, and it’s very light and portable with a small power brick.
8GB RAM eh... multiple VMs running, VS Code, a few browser windows (many tabs), Cypress, make sure Spotify is going. I'm sitting at 22GB right now... and that's before firing up Android Studio.
Replicating prod environments with lots of data means large VMs in my case.
Seriously good screens are lovely.
Different needs for different devs but I for one am very happy with maxed out machines, and have no qualms in upgrading :)
Why do people keep spreading this myth? There is NO magical memory compression. If you need 32GB on an Intel Mac you will likely still need 32GB on an Apple Silicon Mac.
I wish it really did work that way or I'd still have an M1 MacBook Air - but it doesn't. If you really need RAM there is no substitute for having the proper amount of RAM.
And don't even bother with the "but it swaps fast" argument with machines that have SSD chips *soldered onto the motherboard*.
> If you need 32GB on an Intel Mac you will likely still need 32GB on an Apple Silicon Mac.
I'm comparing non-macos operating systems to macos.
I'm not comparing m1 vs intel, though the M1 brought some improvement in this regard, most of the ram magic was already in place since around MacOS Leopard I think?
> I wish it really did work that way or I'd still have an M1 MacBook Air - but it doesn't. If you really need RAM there is no substitute for having the proper amount of RAM.
That's like saying "calories in calories out" and then only eating vegetable oil for maximum savings on food.
If you need to load a 20GB file into memory, yes you'll need the 20GB of physical RAM, but for /multitasking/ 8GB in macOS really is equivalent to nearly 16GB in Windows.
Programmers whose work isn't writing UNIX CLIs and daemons, and who are responsible for the applications that those users expect to find on the Apple ecosystem.
I do dev work that involves the Adobe creative suite so I need to run that plus a full development environment. I also need a lot of storage, because I work with video, I care a lot about the display because I am staring at it all day - but also I want to check visual artifacts. What works for you does not work for me.
I also have an M1 MacBook Air, and it’s a great laptop, but far from the best programmer's laptop for all use cases.
Are there any non-Apple ARM laptops that can reliably run "normal" Linux distros (e.g. Fedora, Alpine, Debian derivatives)? Currently the only options on my radar are the Pinebook (and Pinebook Pro) and MNT Reform.
AFAICT, The Pinebook Pro can run "normal" distros. Fedora and Manjaro are supported directly, and Armbian has Ubuntu images. It also looks like Arch is somewhat supported.
Now, reliably on the other hand... that's another story. I still have hardware issues surrounding booting, but getting an OS installed isn't an issue anymore.
And there's been some models that are pretty much nightmares (basically 2016-2020) even years later. I think their point stands. Some have been okay and others near useless.
To clarify, I meant to make a point against the "has never been a good choice" claim. I agree the relative rarity of good Linux Mac models is not ideal, hopefully the current Arm model Linux activity will bring improvement.
If the application and test suite are built in Node.js, I'm really not sure why there are architecture issues to begin with. Especially if the test suite ran without alteration on the ARM machines, then I don't see why you'd get different results with running the tests on Intel vs ARM.
> I installed node v16 on both my MacBook and the cloud instance, installed our app and then ran the full end to end test suite.
Given this statement, it doesn't seem like there should be any cross-platform issues at all here (even in included npm packages). It doesn't sound like any of the tests are arch specific.
But I don't use Node.js very often, so maybe I am missing something?
The app itself is not ARM specific, but the base Docker image and its dependencies (Debian + libraries + nodejs binaries...) are platform specific, so they can't build that image on a different platform than their target.
See that, for the same tag (version) you have images with different hashes (hence content) for different architectures:
That makes no sense to me. It doesn’t matter what the base image is — the application and test suite are platform agnostic.
So, why not build two different Docker images that are the same except for arch? It seems that the problem isn’t the difference between dev and production environments, but the way Docker is being used. Sure, run an arch specific test before moving something to production, that makes sense. But for development, just use the Docker image that matches the dev arch. This seems like a much easier problem than the author is making it out to be.
Before Docker, I don’t think this would have been an issue.
It makes sense if you take into account that you're packaging the whole stack in a container. It's the same as when you make 'golden images' for a VM with Packer or whatever tool is available: you end up with a platform-specific image.
But that idea of yours isn't far-fetched at all, and it can be done using a Docker multi-stage build: you build your application on whatever platform you have (either locally or in CI), and then you import your application layer into an ARM64 image on one side and an AMD64 image on the other.
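To make that concrete, here's a minimal sketch of the pattern as a Dockerfile for buildx. Everything in it is illustrative (image names, paths, and the assumption of a Node app with pure-JS dependencies - native modules would need to be installed in the target-platform stage instead):

```dockerfile
# Build stage: runs on the builder's own architecture, since the compiled
# JS output is platform-agnostic (assuming no native npm modules).
FROM --platform=$BUILDPLATFORM node:16 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Runtime stage: assembled once per target architecture, pulling the
# matching arm64 or amd64 base image for each.
FROM node:16-slim
WORKDIR /app
COPY --from=build /app/dist ./dist
COPY --from=build /app/node_modules ./node_modules
CMD ["node", "dist/index.js"]
```

Built for both platforms with something like `docker buildx build --platform linux/amd64,linux/arm64 -t your-registry/your-app --push .`, which produces a single multi-arch tag the same way the official base images do.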
We still aren't very used to multi-platform development environments, so the tooling still isn't perfect (at all).
In this case, if there are binary dependencies besides Node.js, they would be rebuilt on the box when 'npm install' is run. I've never seen a setup for cross-compilation in this case, though I don't think you really need it.
Also you can run x86-64 and arm64 VMs side by side with UTM on the M1’s. That’s what I am doing. Although I don’t need to use the x86-64 one these days as Debian/arm64 is fine for my needs.
Continuous integration also needs a server to run on. The point is, if you have a dev laptop running M1 ARM and a prod server running Intel x86, you need something in the middle to build/test on unless you want to cross-compile on your laptop and yeet it to production.
Graviton 2 instances are really amazing in my experience. I can't believe every other major cloud player (Google, Azure) still don't have any ARM instances available. Get with the times folks.
Given Google have made their TPUs for machine learning and Tensor processors for phones it may be possible they are investigating it. AMD is killing it though from the server side so it's hard to justify your own CPUs right now.
Nothing is stopping GCP from buying Ampere servers and renting time on them. AWS made a great move in the space to get ahead of it, but there are totally viable commercial options: https://amperecomputing.com/
And the test was for under a minute so even with additional start up and tear down time, it’s peanuts to run this type of thing using on-demand instances.
The price comparisons are odd/wrong, because it appears he simply calculated the AWS price for a year of that machine. But on the one hand he didn’t need to run the AWS machine for a year, and on the other hand, he owns the Macs for life, as he pointed out. If you wanted to calculate the cost of having the AWS machine always available to you like the Mac, you’d probably at least want to use three years, which is a fairly normal timeframe for a business to write down the cost of a machine.
The benchmark itself has some issues; namely, the unit tests are run on a single thread but the end to end tests are merely parallelized one per thread. I would guarantee he's not getting 100% CPU utilization on the M1 machine if those tests are anything like the ones I see. BDD/E2E tests have a lot of waiting for buttons to load on pages, clicking/moving the cursor, and waiting for backend results. A proper test would simply run _all_ the BDD tests in parallel to properly max out the CPU and let the OS handle the thread switching.
> each Apple CPU core is performing twice as quickly as each AWS Graviton core. It’s 1.5x slower than the AWS server, but it’s ~6x cheaper. Thats remarkable value and makes me very excited for higher core count ARM Macs in the future.
This isn't necessarily true. I'd be surprised if the CPU was the single bottleneck. Lots of other factors, like memory and the disk, are going to make your tests run faster or slower.
The company that I have built develops mostly for amd64 (Linux), because our customers run things on amd64 - mostly Kubernetes and OpenShift. Kubernetes does exist on arm64, but OpenShift does not. So to test/develop I sometimes have to go back to my MBP 16" i9 2019 (top specs). I use two 5K LG monitors, and the development experience just isn't that great on Intel, especially after switching to the M1 Max: on the Intel machine I hear the fans, see glitches in the window manager, and see constant CPU throttling, because the laptop runs hot driving those two 5K monitors with its GPU and CPU.
I'm quite impressed with the M1 to the point that I'm considering my own MacBook, not to mention familiarity with it from work etc...
But the pricing is driving me crazy. OK, the MacBook Air base model can be had for a reasonable price, but quite modest upgrades to 16GB/512GB RAM/SSD effectively increase the price by about £550 (from ~£850 to £1400 for the cheapest available comparable units from trustable retailers), or £400 on the Apple store. At which point it seems worth considering the £1800-1900 14" Pro model, since basically everything is a lot better on it. But I guess that's the point.
I suppose one way of looking at it is that the base air is discounted because the market sees it as something with a short shelf life and the 16/512 model is paying the full Apple premium for something that is actually likely to still be useful for anything more than being a glorified Chromebook in 3 years. Assuming, you know, the screen doesn't crack, if that's a real problem.
I mean even base models will normally be fine for 4-6+ years depending on your work. Also Apple computers hold their value unlike any other computer in the industry. Go look on eBay or similar for what 4-5 year old MBPs are selling for, I think you'll be surprised.
Well this is what's tricky for me. I'm into software development, etc. but I also have a desktop which is suitable for such work. That's fine.
But I'd like it to at least be suitable for light development work as a more portable device for the foreseeable, so 8GB feels like it's already being pushed on my existing Ubuntu laptop (which is ~4 years old). But it's hard to justify such a price increase when it would still ultimately be a secondary machine.
You know, I found a refurb Pro 13" with 16GB RAM and 512GB of storage for the equivalent price of a new Air with that same config. I realise it's basically the same machine with a fan, extra graphics core and touch bar... but in combination with your comment and my own doubts I figured I'd go for that as it seems like the best compromise I've come across. Hey, at least it has 'pro' in the name.
Embrace cloud-native development: enjoy the CLI, vi, and Emacs, and don't try to replicate the missing part of what used to be the UNIX development experience, namely development servers with thin clients.
I guess I’m confused about who this article is for. Graviton AFAIK is available for cloud compute on AWS. M1 MacBook Pro is a developer’s machine.
There are different aspects of performance that’s important depending on who is inquiring.
1) AWS cares about performance/watt (operating expenses) and performance/$ (capital expense)
2) As a user, if I provision an EC2 instance, I have no interest in hardware costs or power consumption. I just care about performance/$ (EC2 instance pricing).
So raw performance is basically a moot point here. There are some non-linear concerns about scaling (Amdahl’s law), but generally, compute boils down to an even more abstracted picture, something like requests/$, etc., that’s specific to your workload and company.
You can get Mac minis at many cloud providers (expensive at AWS, cheap at Hetzner). They weren't designed as servers, but are good value for many server use cases.
I love how every time someone wants to say the M1 is fast, they actively avoid comparing it to Intel or AMD, except in some nonsensical way like "single core performance"
Go buy an M1 machine, use it for a week, and return it. Except you won't return it, because you'll recognize it's amazing beyond its benchmarks.
And single core performance can't be discounted - turns out, it's often a huge component of user experience. Lots of software out there that hasn't been, or can't be, multithreaded.
Literally the only thing an M1 MacBook would be useful to me for is web browsing. The hardware is totally wasted by the OS. I have an Intel MacBook Pro 16, but the M1 MacBooks have an impressive GPU that's only useful for extremely niche applications. If Apple ever added Vulkan support, which would quickly allow Proton/Wine/DXVK to work, it would be something to consider.
There are so many issues with the current implementation that I think saying "it runs" is a bit of a stretch. I'll give it to you when you have a hardware-accelerated desktop; until then, "it runs" Linux as much as a toaster SoC can "run" Linux.
Sounds like you haven’t used Asahi at all. The software rendering is actually very functional and perfectly usable for programming. It’s just multimedia/gaming where it fails.
For me it seems to be the latency rather than raw speed. It's like how a sprinter can get off the line far quicker than a muscle car even if it's not going to win over a mile. For 90% of what I do, the snappiness of things on the M1 beats raw high end performance. It feels like when I first got an SSD.
I did that. When I bought our first M1 Mac, it was faster in multi-core Rust builds than my Ryzen 3700X workstation. I have since upgraded to an M1 Pro and a Ryzen 5900X. The 5900X is faster (12 cores vs. 8 performance cores and 2 efficiency cores), but not by much. For example, building one of my Rust projects:
If we count the two efficiency cores as being as powerful as a single performance core, the per-core performance is on par ((9/12) * 16.25 = 12.1875).
In other tasks, the M1 Pro blows the Ryzen 5900 away. For example, the M1 Pro reaches 2685 GFLOP/s in matrix multiplication, while the Ryzen 5900X peaks at 1555 GFLOP/s [1].
Oh and all of that while the Ryzen CPU alone consumes 120W during builds and needs a big loud fan (Noctua), while the MacBook Pro is inaudible and runs on battery. The MacBook I can throw in my backpack and use everywhere. The best computer is the one that's with you.
(And no, mobile Ryzen doesn't even come close. I had a ThinkPad with a high-end Ryzen CPU last year, and builds were much slower than even the vanilla M1.)
The point here is not so much the M1 side, but the Graviton side - if you're developing on an M1-based Mac and comfortable deploying to ARM stuff anyways, you may as well go for it.
I've seen tons of such comparisons over the last year. Also, I don't consider single-core performance nonsensical since it correlates highly with some real-world uses like Web browsing. Intel is currently winning in single-core performance, so if people were cherry-picking benchmarks to show the M1 in a good light they'd choose something else.
Intel with decades of experience is barely "winning" against Apple's first desktop chip.
Yeah, slight disparity there!
And that's completely discounting the whole power thing. If Apple were ever to develop a desktop chip that chucked efficiency out the window and focused on raw performance by throwing as much power and cooling as possible at the chip? Things would probably get spicy indeed. If only Apple had a desktop chassis optimized for maximizing power and cooling. Oh wait, they do with the Mac Pro - and it's the only Mac not yet migrated to Apple Silicon.
I'm sure Intel is sleeping well with their "lead" :)
> Intel with decades of experience is barely "winning" against Apple's first desktop chip.
That's because Apple's benchmarks are against seven year old Intel chips.
Wait'll you look at a benchmark made by a skeptic.
.
> If Apple were ever to develop a desktop chip that chucked efficiency out the window and focused on raw performance by throwing as much power and cooling as possible at the chip? Things would probably get spicy indeed.
Sure thing. Call us when they do.
.
> I'm sure Intel is sleeping well with their "lead" :)
Well, no. They have real competitors, like AMD and TSMC and so on.