I've tried one of the preview builds as part of their Developer Preview Program[1] and it's super fast when you're running an arm64 image, as opposed to an amd64 image, which will automatically be run via qemu (and therefore incur some overhead). By "super fast", I mean I was seeing speeds comparable to running the commands natively on the M1 Mac (npm install and a heavy Gulp build).
You can use `docker manifest inspect`[1] to see all supported architectures, and you should be able to see them on Docker Hub[2] as well.
EDIT: If you're asking specifically if you can tell what architecture a running container is using, a simple `docker inspect CONTAINER_ID` should show it.
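For example, something like this (nginx is just an illustrative image; the exact output shape can vary a bit between Docker versions):

    # list the platforms a multi-arch image was published for
    docker manifest inspect nginx:latest | grep -A2 '"platform"'

    # check the OS/architecture of an image you've already pulled
    docker image inspect --format '{{.Os}}/{{.Architecture}}' nginx:latest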
I just checked myself, and you get notified with a nice WARNING upon a `docker run`:
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
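And you can make the choice explicit with the --platform flag, e.g. (alpine is just an example image):

    # explicitly request the emulated amd64 variant (no warning, runs under qemu)
    docker run --rm --platform linux/amd64 alpine uname -m   # x86_64

    # or the native arm64 variant
    docker run --rm --platform linux/arm64 alpine uname -m   # aarch64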
This bit about Multi-platform development is most interesting to me.
> Many developers are going to experience multi-platform development for the first time with the M1 Macs. This is one of the key areas where Docker shines. Docker has had support for multi-platform images for a long time, meaning that you can build and run both x86 and ARM images on Desktop today.
If multi platform images work, a lot of the concerns people have about x86 versus ARM should go away.
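For anyone who hasn't built one, a rough sketch of producing such an image with buildx (registry and tag are placeholders):

    # one-time: create a builder that can target multiple platforms
    docker buildx create --use

    # build both variants and push them under a single multi-arch tag
    docker buildx build --platform linux/amd64,linux/arm64 \
      -t registry.example.com/myteam/myapp:latest --push .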
Of course, once you can do everything on ARM as well as you can on x86, moving more of your infrastructure to ARM might make more sense.
Yeah, until the developers decide to release only for Arm targets and when you search issues for x86 build, the maintainer says "just compile it yourself"...
And you look into the Dockerfile and see that it wants to pull in the whole history of computing... Then you figure maybe you don't need that shiny utility in the first place and move on...
It's "64 vs 32-bit" or "Ubuntu vs Arch vs Fedora binary" all over again...
> Yeah, until the developers decide to release only for Arm targets and when you search issues for x86 build, the maintainer says "just compile it yourself"...
Until we start seeing reasonably priced, performant ARM desktops and laptops, there is little worry of that. Right now outside Apple, performance on ARM isn't good enough for any kind of great developer experience (or any kind of pro experience). Unless that changes significantly, x86 is going to be dominant on non-Macs for some time.
On desktops, yes. But ARM rules on mobile devices, and plenty of us code for them; on Android, unless you are doing CRUD-like apps, most likely there is some C or C++ code as well.
Apple is 15% of the personal computer market, if I'm not mistaken. Their intention to abandon x86 isn't a clarion call for the obsolescence of x86.
Personal computers generally go where Windows goes. Heck, that's arguably part of why Apple left PowerPC for Intel in the first place. Apple has tremendous impact on design, form factor, and other visionary steps that the market takes. I do not deny that. Still, in terms of hardware and the development ecosystem, Microsoft simply has a lot more market inertia than does Apple.
This is nothing like 64bit vs 32bit. That was a difficult (MacOS didn't drop 32bit app support until Catalina) but obvious (unless you hate RAM) leap forward. In contrast, Apple Silicon is a leap sideways whose opportunity arose because Intel's flagship process node has had a rough few years. There may be inherent advantages to Apple Silicon as a technology, but nobody should make the unproven assumption that Apple Silicon is universally superior to any and every possible TSMC 5nm x86 chip.
x86 support is a big one for me. I deal with a lot of customer projects, typically with some kind of dockerized stuff as part of their toolchain. So being able to run those things as-is is important for me. And I don't see those build systems being updated any time soon to accommodate Apple-only hardware.
Yes, it's definitely "performant" if by "performant" you mean "exactly matching the well-established performance of qemu emulation" which means: slower than Rosetta2 with higher CPU/memory/power usage but still functionally correct for the most part.
If by "performant" you mean "as fast as it used to run on x86 Mac hardware" then the answer is: No, it's emulation, and it's slower than Rosetta2.
> If by "performant" you mean "as fast as it used to run on x86 Mac hardware" then the answer is: No, it's emulation, and it's slower than Rosetta2.
I wonder if this would motivate someone to build an x64-to-ARM translation layer for Linux which is closer to Rosetta2 than qemu-user-static in performance?
One of the secrets of Rosetta2's performance is an Apple Silicon processor extension that lets ARM code run under the x86 (TSO) memory model. There is no reason in principle why some x64-to-ARM translator running on Linux on Apple Silicon could not exploit the same processor extension.
Another secret is that it predominantly does AOT translation, and only uses interpretation/JIT for x64 code generated dynamically (such as by an x64 JIT). There is no reason in principle why something on Linux couldn't do the same thing.
Perfect world scenario, Apple would open source Rosetta2 and it would be ported to Linux. Apple probably won't do that because they've invested a lot of money into it and it gives them a competitive advantage over other ARM-based platforms (such as Microsoft Surface Pro X). (I do hope I'm wrong about this though.)
I’d say the reason qemu is slow isn’t AOT vs JIT, but that qemu is very generic, allowing translation between all sorts of different architectures. It uses an intermediate representation where you lose those possible 1-to-1 mappings between x86 and ARM.
I was fiddling for a while on x86 emulation for ARM; it’s definitely possible to be faster than qemu, but it’s a very big project and it's unclear whether somebody will just do it in their free time.
So it means plain old binary translation without hardware assist. I wonder whether QEMU's binary translation just isn't well optimized nowadays because no one uses it in production (we use it with VT-x/EPT or the AMD equivalent).
> If multi platform images work, a lot of the concerns people have about x86 versus ARM should go away.
Not really. Just because you have ARM and AMD64 Debian doesn't mean you'd have the same behaviour in both. You wouldn't have the same list of packages that can be installed.
You'll have extremely close to the same behaviour in both since Debian ships exactly the same versions. It's always possible that there's a compiler bug which only affects one platform, but that's relatively uncommon. At one point I was supporting Debian on x86, x86_64, MIPS, and PowerPC. The major portability concern was drivers, which is irrelevant for Docker, and sometimes you'd find performance issues with e.g. OpenSSL having a hand-tuned x86 ASM implementation but relying on the C compiler for another platform, but that was rather uncommon.
I am skeptical that anyone will notice a problem unless their workflow is to build a container on their laptop and push it straight to production without testing.
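And spot-checking both variants locally is cheap; a minimal sketch (the image tag is just an example):

    # run the same image under both platforms and confirm they report the same Debian release
    for p in linux/amd64 linux/arm64; do
      docker run --rm --platform "$p" debian:buster sh -c 'uname -m; cat /etc/debian_version'
    done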
That is fundamentally what I mean when I say "If they work". If the two platforms behave differently, then multi-platform images are pointless.
If 1 in 20 multi-platform images has some kind of oddball cross platform behavior, developers won't be able to rely on them at all. I think we can get far better than that, but we'll see.
Intel's fat margins are largely based on their duopoly on x86. With ARM, it's far more competitive and margins are far lower. With Nvidia owning ARM, it seems likely we'll see more ARM CPUs from them. Samsung, Qualcomm, and MediaTek also have ARM CPUs all the way down to a few dollars per CPU. Intel isn't going to be able to come in and charge $50-500+ per unit the way they do with their x86 chips.
Unless Intel can reboot Moore's Law in their own fabs, they are going to get hit hard over the next few years. Even if Intel makes the move to ARM, their profitability will take a huge hit from the ARM migration.
I don't disagree. In fact, I'm assuming that's a major reason they've not done this sooner.
Intel tried to set up a different, more efficient architecture with IA64, but they failed there.
That being said, with major players like Amazon and Google tinkering with ARM in the cloud and Apple getting ARM into the hands of a lot of developers, we could be on the eve of seeing ARM make a big entrance into the server realm.
IMO, a big reason ARM hasn't taken off there in the first place is because most devs aren't using ARM machines for development.
> Intel tried to set up a different, more efficient architecture with IA64, but they failed there.
A big part of the IA64 strategy was to thoroughly mine the area with patents. They were held in a company jointly owned with HP. In theory it would be impossible for anyone else to reimplement IA64 and "impossible" for IA64 to be licensed to another manufacturer.
I have been coding since the mid-80s; not having the same local UNIX machine for development as the one the server was running was quite common in the world of commercial UNIXes.
Then I moved into managed languages, where the actual CPU and even the underlying OS only matter for low-level coding; again, not using the same local OS/CPU combo as the server.
Finally, cross-compilation for ARM has existed for years.
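(For example, Go can target ARM from any host with nothing more than environment variables; the package path here is hypothetical:)

    # cross-compile a linux/arm64 binary on an x86 (or any other) machine
    GOOS=linux GOARCH=arm64 go build -o myapp-arm64 ./cmd/myapp
    file myapp-arm64   # ELF 64-bit LSB executable, ARM aarch64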
ARM hasn't taken off because, most of the time, it doesn't matter enough to displace the existing stacks.
> I have been coding since the mid-80s; not having the same local UNIX machine for development as the one the server was running was quite common in the world of commercial UNIXes.
Things change. Why is x86 the primary server platform? It isn't a great micro architecture. The fact that ARM and others (Like power pc) have been eating their lunch in terms of price, performance, and power consumption is proof enough of that.
So how do you explain the rise of x86 and the fall of pretty much every other platform on servers?
To me, it's simple. x86 got fast enough on consumer hardware to be able to run the same software that would run on servers. Developers like to test their software locally. Emulators for other architectures on x86 have been terribly slow.
That's why since about the late 90's pretty much everyone has been running x86 servers.
That you can cross compile isn't really the issue. Even running managed languages isn't the issue. The issue is that there are always differences that are hard to compare when switching platforms if the one you are developing on isn't the same as the one you are targeting.
That's like 90% of the reason why most consoles have switched over to x86.
Mobile devices would have gone to x86 were it not for the fact that licensing costs were too high and the performance/watt ratio too low.
I know your question is rhetorical, but worth addressing because it's relevant.
> Why is x86 the primary server platform?
Cheap PC prices plus Linux meant you could deploy Linux for a fraction of what deploying any other Unix would cost (save the *BSD family, which shares similar advantages).
Cheap ARM CPUs and reasonably priced, performant ARM development machines pretty much harpoons that big advantage x86 had.
> The issue is that there are always differences that are hard to compare when switching platforms if the one you are developing on isn't the same as the one you are targeting.
Yes and no. We've been developing on MacOS and deploying to Linux for some time. The issues we end up fighting are minimal and we're going from one OS to another. I suspect most of the ARM-x86 issues that haven't already cropped up and been dealt with will be soon.
Agreed that lack of desktop Arm wasn't the biggest issue. Arm didn't take off because:
- Intel had process leadership and x86 was 'good enough'
- Process leadership was in turn supported by the volumes / margins that Intel had on their consumer PC business.
But now Intel has lost process leadership, partly due to smartphone volumes supporting huge investment at TSMC / Samsung, and the hyperscalers have an incentive to differentiate their offerings (e.g. Amazon Graviton).
> IMO, a big reason ARM hasn't taken off there in the first place is because most devs aren't using ARM machines for development.
Also suspect this is largely true. Most developers have local environments, and until Apple, having a local environment with ARM has been awkward. Docker multi-platform images are supposed to mitigate this, but they are still fairly new and I suspect many developers don't trust them yet.
It would be interesting to see Apple launch an M-series server. After looking at the M1 Mac mini logic board, it seems like a blade server with stacks of Mac mini boards would be fairly easy to engineer. Apple would just need to build the chassis.
Perhaps a dumb question but I'm curious why people don't use a VPS or a cloud linux machine more for Docker/K8s development instead of running Docker locally on a Mac.
In my experience Docker Desktop has been such a resource hog, and Apple's hypervisor implementation pretty poor. I much prefer to have all that heavy lifting isolated away from my development machine to keep it responsive and cool.
If you have multiple developers working on the same product, you want each developer to have their own environment. In my experience, for cost, convenience, and productivity it's made the most sense to do this with local Docker. It also eliminates a lot of connectivity issues with running on a remote machine.
> In my experience Docker Desktop has been such a resource hog
Not sure if this is related to the specific project or what, but I haven't had tons of trouble with docker being a big hog.
That said, they are significantly changing the way Docker on M-Series CPUs works, working directly with Apple's Virtualization Framework which I believe improves performance.
Some root causes have been addressed. If they are significantly changing the architecture with the Apple Silicon rollout, I'm very much looking forward to see if that improves things!
Docker for Mac is a really bad resource hog for devs at my company. It's a common refrain that comes up again and again.
We build a SaaS web app with a MERN stack deployed to AWS. Local development usually involves running about 12 containers.
Personally, I got fed up and installed Ubuntu 20.04 in Parallels and have been running the Docker containers there for several months. What a night and day difference... the entire VM sits at about 13% CPU when idling, whereas Docker for Mac would have my fans whining at around 150% CPU continuously when idling, and spiking up to 300+% when clicking around the app.
I wasn't trying to argue the point about Docker being a resource hog! I've always run Docker on computers with lots of RAM or with fairly simple images (or both sometimes) so it's likely I've just dodged that bullet.
Back in the old days, each of those developers would just use telnet and X Window sessions on the same UNIX box; the cloud is nothing but the same stack in new clothing.
The main resource issue is because Docker runs in a VM, it can't share memory with the OS. So it's using the configured amount of memory (2GB by default) whether you're running busybox or an entire Kubernetes cluster.
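You can see that fixed allocation directly; a quick check (busybox is just a tiny example image):

    # total memory handed to the Docker VM, in bytes, regardless of what's running
    docker info --format '{{.MemTotal}}'

    # what a container sees is the VM's memory, not the Mac's
    docker run --rm busybox free -m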
I think the Virtualization Framework mitigates that to some extent. The way Docker was required to run on x86, it needed a Linux VM which I believe the Docker images ran atop. Under the new model, I think they run on bare metal and share memory.
Regardless, for me the cost of adding enough RAM to support Docker is less than the headache and expense of spinning up Docker images in the cloud.
M1 Docker still needs a Linux VM, Virtualization Framework is just a different way of doing that. I'm not aware of any VM that can share memory with the host, but maybe Apple has some black magic up their sleeve.
Yeah, fundamentally Docker is Linux on Linux. If you are running Linux on MacOS, it's not Docker, it's a VM. So Docker installs are Linux Docker Image -> Linux VM -> MacOS Virtualization Framework.
For that reason alone, I suspect Linux will always be the best host system for Docker images.
Seems like Apple's Virtualization Framework supports some sort of memory ballooning mechanism (https://developer.apple.com/documentation/virtualization/mem...), which could hopefully enable dynamic memory support in the near future, like Docker Desktop on WSL 2 is able to do (if it's not already the case, I haven't tried the M1 preview).
If I'm writing code on a shitty laptop keyboard instead of a proper mechanical one, it's because I am, or intend to often be, outside an office, quite possibly travelling. This also usually means that I will not have a reliable internet connection.
It can get surprisingly expensive when left on 24/7. If someone can come up with a remote (dare I say, serverless?) development environment that could seamlessly suspend/resume or share resources, then it would be a lot more compelling to me.
Network volatility and debugging were the reasons I moved back to a mostly local dev setup.
I also run linux though so resources aren't much of an issue for me. I'm actively considering setting up remote environments for one of my teams that is all mac users working in a containerful dev environment.
Without Kubernetes, Docker Desktop is fine. Been running it for years on my current and previous MacBook Pro (the 2012 model). Docker performance has never really been an issue for me. The bundled Kubernetes seems to burn a lot of CPU though, so I keep that disabled.
I work with other people in teams and I don't get to tune every project I deal with to be just right for my tastes and hardware. So being able to run their stuff as is with emulation is preferable to me having to customize everything before I get to run it.
Clarifying a bit - I can't start up the Docker Desktop Kubernetes - If I take a look at the image downloaded (docker/desktop-kubernetes) it only shows tags for amd64.
When I try to spin up kubernetes, the UI just shows 'starting...' while the logs show the image won't work with this architecture.
Probably because it’s not very convenient or easy to setup. I have a 2013 i7 MacBook Pro and I don’t have that much trouble running Docker with 16GB of RAM.
I’m building a startup that aims to help with this - but as others have mentioned - who wants to pay for two computers?
One solution is re-purposing an old PC into a home server. Once that's set up, tools like Skaffold start to shine: local network speed, a cool laptop, containerized development, and a platform for home-hosting!
Anyways I’m very excited about this future - much more than paying AWS for another machine!
> I don't want to be _too_ pedantic but the M1 macbooks are significantly cheaper than their Intel counterparts.
The M1 MBP is about 15% cheaper, which I guess might be significant to some, but isn’t a huge selling point to me like you’re making it out to be. GP was talking about a 15/16” MBP at $3k, not the 13”.
Developers for the most part have or can look forward enough to enough income to make a tool like a mac relatively inexpensive. My biggest problem with buying them has been how absurdly bad the hardware has been for years other than the trackpad. The screensize/weight ratio, the screensize/body size ratio, the keyboards, and the performance were all well behind the competition. MacOS only running natively on bad hardware was a great reason not to use it, the fact they charged a $500+ premium on their software was really just another nail in the coffin. You had to need or fucking love MacOS to justify buying a mac.
How things change in a year or two. Pretty much only the weight/size relative to the screen size remains substandard and I expect them to improve on those fronts.
Dell XPS 15: via Dell.com
- overall dimensions 14.06 x 9.27 x 0.66in or about 86in^3
Apple 16” MBP: via Apple.com
- 16” screen
- 4.3lbs
- overall dimensions 14.09in x 9.68in x 0.64in or about 87.3in^3
—-
So weight/screensize ratio:
Dell = 0.27lb/in vs Apple = 0.27lb/in
Screensize/bodysize ratio (higher is better):
Dell = 92.7% coverage vs Apple = 93.6% coverage
So just using your first two arbitrary criteria, seems Apple’s 16” laptop is ahead of or tied with, but not “well behind”, this competitor.
I’m sure you can pull out some edge case laptop that beats it, but as the XPS is arguably the most popular non-Apple laptop among devs, I feel it’s fair to say the Apple laptop is definitively not “well behind the competition”, at least using your criteria.
For a laptop that’s <3oz heavier and only 1.3in^3 larger dimensions, you get a screen almost a half inch bigger, almost 20% larger battery, the worlds best trackpad, 8 core i9 vs a 6 core i7 processor, option to get 64gb ram rather than being limited to only 32gb, etc etc.
The 16" slipped my mind; that variant, to me, was the first "good" laptop Apple released in years, but they were pushing a dated 15" model not so long ago. This laptop had quite a few overheating issues, to the point you can't even drive all the monitors it says it supports on the spec sheet off the iGPU anywhere close to comfortably.
The 13" models which I'm more interested in are more dated and more behind in terms of weight although the performance makes up for that. What truly stunned me a few years back was comparing a 13" MBA to a T480s and seeing Macs just get tied or crushed in pretty much any area that wasn't related to speaker quality or the trackpad. This laptop had an extra inch of screen space and an extra 2 cores and better input overall and higher memory support and way more ports and about the same battery life and the same weight and all this for substantially less money especially if you bought memory aftermarket. It was nothing short of a humiliation - and this wasn't even Lenovo's top product - Apple wasn't even competing with second rate products. I seriously wondered how Apple had fallen so far from the heyday of the MBPr and if it was just going to let the MacBooks decay into irrelevance.
I also would compare Apple to the X1 line, not the cheaper XPS line with a 32gb memory limit just because it's popular. I could compare Apple to Inspiron which is even more popular and Apple would be even further ahead. Hopefully Dell can make some good laptops one day.
> This laptop had quite a few overheating issues, to the point you can't even drive all the monitors it says it supports on the spec sheet off the iGPU anywhere close to comfortably.
Also, which laptop is this claim in reference to?
Personal experiences:
13” M1 MBP - one LG 5K display, zero heat or fan speed issues.
16” MBP - I daily drive two LG 5K monitors with no increased heat or fan speed. Prior to picking up the additional LG 5K, I used one LG 5K and two 27” Apple Thunderbolt displays.
15” MBP, with 4 TB3 ports: LG 5K and two 27” Apple TB displays all working without higher heat than w/o them connected
13” MBP, with 4 TB3 ports: same triple monitor setup, same no heat / fan speed issues.
15” rMBP (2015): used two 27” Apple Thunderbolt displays + two 34” ultrawide LG displays, no heat issues or fan issues
The only time I’ve ever seen my laptop stressed by connected monitors is when I did an experiment and connected NINE external displays (ten total screens, as laptop screen was still on) as a test to a 16” MBP to see if it could do it. If I’d had more TB monitors to test with, I think I could’ve done higher count.
Setup in that test:
TB3 port back left: LG 5K display
TB3 port front left: two 27” Apple Thunderbolt monitors daisy chained and using TB3->TB2 adapter
TB3 port back right: eGPU w/ AMD Radeon 580 card connected to two 34” LG ultrawides and two 27” Apple Cinema Display monitors (the older non-TB models)
TB3 port front right: same setup as left, two 27” Apple TB displays
Even then, it definitely wasn’t “overheating”, but processors did idle much higher and fans stayed on during idle (but not at high speed).
I was looking for a laptop capable of running 3x4k displays, and there is currently a 186-page thread on MacRumors about the 16" being too hot with external displays.
I can't remember where I saw the benchmarks but I saw a HUGE difference in performance for the CPU depending on if you were running an eGPU or not simply because you're not stressing the internal GPU. Closing the lid reportedly also makes a huge difference. We may be using different definitions of "overheating" but generally what I saw was a laptop which advertised it could handle FOUR 4k displays but it could only do that while being loud and slow. Looking at updated reviews it seems like it's only certain configurations that lead to this problem. Nevertheless I saw reviews, they looked bad, and I held onto my money, and bought a solution with better cooling.
I have strong hopes the m1x will just support a ton of displays out of the box and just work in a compact package. If they released a 14" that could support more than 2 displays I'd freaking faint and wake up weeping with joy that my dream computer had arrived.
So definitely not way ahead either, actually behind the first two on one of your criteria.
Look, it’s obvious you have a bias against Apple, likely even a justified one. I have reasons I hate them too. But can we please not post baseless claims like you seem to be doing? I personally feel folks doing that are contributing to the huge backwards slide of content quality on the internet. If you want to say you prefer non-Apple laptops, that’s entirely fine, just don’t try to justify your decision with arbitrary ratios without actually running the numbers and seeing if Apple really is “far behind the competition” on those arbitrary criteria.
Bezels were rather bad on the T series back then, but weight was fantastic; the XPS 13 was slightly better in terms of bezels and weight. The T-series having the nipple also helped you use it in tighter spaces closer to your body without sticking your elbows out, which helps in tight seated spaces (which is the big reason I care about body size vs screen size), but those tall bezels were dreadful. Still, I was just stunned by Apple's seeming lack of competitiveness in the 13-14" space, and their 15" laptops weren't good until the 16" came out, which was a fine laptop compared to the competition, but the thermals really disappointed me and it felt like some of the hardware they put in it went to waste. The 16" was the first laptop since 2015 I would honestly say was arguably the best on the market of that type; before that, the laptops were just crap.
>Look, it’s obvious you have a bias against Apple
I have a bias against bad products like the 2016-2019 era macbooks which were dreadful. When Apple was making good laptops before then, I was a fan. After they started making good laptops again, I was a fan. When Apple was in the dark ages, I went to the store and gave their laptops an honest shot but I just couldn't bring myself to buy such garbage even though I was a fan of MacOS. Do I have to believe Apple has made the best laptops from a hardware perspective every single year to not be biased against them, with no regard to what their competition is doing? There is not a single product in their lineup I do not consider arguably the best on the market these days, and I've been in rooms where I was one of two Mac users in a room of 30. My god, I'm SO biased against Apple because I hate heavy slow computers with defective keyboards and dubious pricing.
> I have a bias against bad products like the 2016-2019 era macbooks which were dreadful.
Your biased opinion. My biased opinion? I can’t stand the pre-2016 MBPs anymore, and the keyboard is a major reason. Yes, I love the 2016 and later keyboards (newest ones being the best, but I never had issues with the 2016-2018 ones that others reported).
> Do I have to believe Apple has made the best laptops from a hardware perspective every single year to not be biased against them, with no regard to what their competition is doing?
Nobody has said or implied that but you, so?
> There is not a single product in their lineup I do not consider arguably the best on the market these days
I've got a stacked 64Gb Dell XPS 5550 and an ass end M1 MacBook Air. I'd rather use the Air any day.
But I'd rather just use my desktop M1 Mini than both. It has a 27" screen and a proper keyboard (Durgod K320) attached to it. All my shit is in AWS now.
> I've got a stacked 64Gb Dell XPS 5550 and an ass end M1 MacBook Air. I'd rather use the Air any day.
I’ve got a 12 core 3.3ghz Xeon Mac Pro w/ 256gb RAM and a max spec 13” M1 MacBook Pro. Like you, I’d rather use the M1 unless the workload really needs the extra CPU cores/memory/Higher end GPU. Single thread, the M1 is faster. Even up to 8 cores, the M1 keeps pace with the much higher TDP CPUs.
Also, re: 64GB being an option on the X5550, it doesn’t surprise me that a search for the spec sheet for a 15” Dell XPS didn’t turn up that there was an X5500 and a higher-spec X5550. Why make it easily discoverable for customers like Apple does?
We didn't know Dell did them either until we couldn't get a PO signed off for a more expensive one and ended up talking to a sales guy at Dell. So yep completely agree on the discoverability thing.
The M1 MacBook Pros are cheaper, because they replaced the cheapest Intel MacBook Pros in the lineup. Before M1, $1299 was the price of a MBP with 1.4GHz Quad-Core Processor with Turbo Boost up to 3.9GHz / 256GB Storage / Touch Bar and Touch ID: http://web.archive.org/web/20201105002246/https://www.apple....
If your workflow is: make change -> compile -> build image -> deploy to k8s,
then running docker/k8s remotely would be great, but I need great upload bandwidth to be pushing fresh images every time I make a change. In theory docker layers/caching should fix this, but the reality is that you don't add individual files to an image: you add the compiled artifact/bundle/whatever, and if any file changes it's going to upload it again, made worse by a change that affects many components/images/k8s pods. If you can solve the upload bandwidth problem (local servers? giant pipe?) then it becomes more possible.
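To make it concrete, the inner loop being described is roughly this (registry, image, and deployment names are placeholders):

    # every iteration re-builds the artifact layer and re-uploads it in full
    docker build -t registry.example.com/team/app:dev .
    docker push registry.example.com/team/app:dev
    kubectl set image deployment/app app=registry.example.com/team/app:dev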
After reading the article, I still don't quite understand how this works. I've used multi-platform Docker containers before, and that makes sense to me.
What I don't understand is how they are running x86 containers on an ARM64 VM? Docker Desktop still works by building a Linux VM and running Docker there. But the VM would be an ARM VM. So, are they running qemu in the ARM VM to emulate an x86 processor in a nested VM?
I had imagined they'd try to do something like run x86 qemu through Rosetta, but it seems like this is not that. And the Apple Hypervisor Framework documentation leaves a bit to be desired, so I'm not sure if you can switch vCPU architecture, but I highly doubt it. (And I thought it was established that Rosetta wasn't emulating VM instructions).
QEMU can (slowly) emulate any architecture on any other architecture. In this case, they're using QEMU to emulate x86-64 on ARM64. No nesting or Rosetta is needed.
Why wouldn’t they use Rosetta though? I’d wager the performance of Rosetta would be better than QEMU emulation, but perhaps it’s more optimised for desktop apps
Rosetta is limited to running Darwin/x86-64 user processes on Darwin/ARM64 but Docker needs to run Linux/x86-64 containers/processes on a Linux/x86-64 kernel on a Darwin/ARM64 host.
I think the parent comment meant -- why not use Rosetta to run an x86 qemu process? Then the architecture emulation (translation?) would be done by Rosetta (potentially faster), as opposed to software emulation by qemu.
Now, this might not work, as I'm not sure Rosetta covers all of the x86 instructions/settings that qemu would need, so you might be stuck with ARM64 qemu emulating x86 anyway.
Qemu is only able to achieve native performance when running in conjunction with a hypervisor like KVM. Hypervisors don't do binary translation, so the guest architecture needs to match the host architecture. Running x86_64 qemu under rosetta would likely be much slower than running aarch64 qemu, because it would be running an emulator inside of an emulator.
From the point of view of rosetta, qemu's JIT would be completely opaque, and so would end up suffering severe performance penalties due to it having to translate code from what would appear to be an aggressively self modifying JIT.
That said, assuming Qemu runs entirely in user space I would expect it to be able to run under rosetta, and am genuinely curious if it does, and what the perf is - as I said, I would expect it to be much slower than arm64 qemu emulating x86_64, but I'm curious as to how much.
Efficient use of qemu on x86 requires the Hypervisor framework, which isn't available under Rosetta.
It's possible to run qemu without Hypervisor.framework, but that means it's doing its own second layer of translation. This would be horribly inefficient under Rosetta.
In addition, the CPU settings to have the high-performance cores do TSO memory management are not supported under the hypervisor/virtualization layers.
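Roughly, the two paths mentioned above look like this (a sketch, assuming a QEMU build with hvf support; kernel/initrd paths are placeholders):

    # same-architecture guest: qemu hands execution to Hypervisor.framework
    qemu-system-aarch64 -machine virt -accel hvf -cpu host -m 2048 \
      -kernel vmlinuz-arm64 -initrd initrd-arm64.img -nographic

    # cross-architecture guest: no hardware assist, qemu's own TCG binary translation
    qemu-system-x86_64 -machine q35 -accel tcg -m 2048 \
      -kernel vmlinuz-amd64 -initrd initrd-amd64.img -nographic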
But if they are doing that, then why do they need to use the Mac Hypervisor Framework to set up the VM? That wouldn't be required if you were using qemu, would it?
(What you mention would be the simplest possible thing that would work.)
Of course, but my question is how is it created... MacOS has a Hypervisor framework for creating VMs, which Docker is using. But I don’t know enough about those internals to understand how they are getting an x86 VM on an ARM host. I know it can be done with qemu emulation, but does that still need the MacOS hypervisor framework or does it run as a normal user process?
These are the questions I’m trying to figure out...
(5) Docker Image (amd64)
^
|
(4) QEMU Binfmt (arm64 <-> amd64 binary emulation layer)
^
|
(3) Linux VM (arm64)
^
|
(2) Hypervisor.framework (arm64, macOS native virtualization framework)
^
|
(1) Docker for Mac
The Linux kernel has a feature that allows using a wrapper to execute a userspace program based on its file header (binfmt[1]). In this case, the Linux VM in (3) has QEMU user-mode emulation registered as a binfmt handler, so any amd64 binaries are automatically wrapped into `qemu-x86_64-static /path/to/bin` and run. The Docker image itself doesn't run a Linux kernel but uses the one from the VM host, so this scenario is possible.
This is also how multiarch[2] works (for amd64 to arm64/ppc64le/etc.), which might even be what Docker is using. In the case of multiarch, the qemu-*-static binary is provided as a container running in privileged mode.
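From inside the Linux VM, the registration and its effect are visible under binfmt_misc; a sketch of how the multiarch route is usually set up and inspected:

    # register qemu user-mode emulators for foreign architectures (privileged, one common way)
    docker run --rm --privileged multiarch/qemu-user-static --reset -p yes

    # inspect what got registered
    ls /proc/sys/fs/binfmt_misc/
    cat /proc/sys/fs/binfmt_misc/qemu-x86_64   # shows the interpreter path and the ELF magic it matches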
I would strongly advise anyone considering running this closed-source, proprietary software on their machine to examine the large amounts of private data it uploads to Docker Inc when it crashes before they install or run it.
I was surprised. You might be surprised, too.
I was so surprised that I decided to only ever run the open source command line docker client on my machine, and to avoid any proprietary software that comes out of Docker Inc in the future. It is absolutely not reasonable for them to upload some of the data that they do, and I no longer trust their judgement about what happens on my machine.
If they did that sort of stuff in the open source cli app, it would be patched out in minutes.
It seems that Apple's contribution to OSS is weak even though they use it. I heard that Apple is a Java shop, but they don't contribute to shipping a JVM for Apple Silicon.
The battery impact of having Docker running with a few containers seems greatly decreased on Apple Silicon. I used to be reluctant on my Intel MBP to use Docker if I was out and about - working in a cafe between meetings for example - because my battery life would shrink immensely. That doesn't seem to apply in the same way on m1 (running ARM64 containers) which is a huge win.
It won't be more than a tech preview until after Go is stable on the M1, which is likely to be January/February. Once that happens, it should be pretty quick.
This certainly works for Homebrew apps. I think Docker is a bit of a different cookie though. Talking to a VM over the Virtualization Framework likely requires being a little more native.
Adobe and Microsoft took months/years after Apple's PowerPC and Intel transitions... complex software is complex, it's not like they get the new machines, flip a few toggles, and hit compile.
Source? The developer machines Apple shipped out didn't have virtualization support so they couldn't start on that until after the M1 and I haven't seen anything that says Docker got M1's early.
That being said, I'll also hold off a little while getting an Apple Silicon device, until various kinks are worked out. And until I can get a 16" MacBook Pro.
Does this mean I can run containers natively on Mac? And I don't need a VirtualBox VM running on my Mac to launch containers? This would be huge for me, and is always a big reason in my mind why I would consider switching back to Linux.
Edit: Docker on Mac has never felt as snappy as on Linux, because of the VM, though I have no hard numbers. Networking is a PITA, but it's not hard to figure out. The other main thing I hate is I have to give up a bunch of RAM to the VM that my containers may or may not use, instead of sharing with the host like on Linux.
> Docker on Mac has never felt as snappy as on Linux
It's extremely slow compared to Linux and I'm pointing my fingers at the virtualization layer without any hard evidence because it's the most likely suspect.
With all this focus on sandboxing apps of late, I'm wondering how far the OSX kernel is from having a feature set that resembles cgroups and network namespaces.
I used docker inside a vagrant+virtualbox VM running ubuntu, on macOS, for a few years. It's more reliable, and more debuggable, than docker for mac. It's some easy-auto-transparent storage and networking layers that make docker-for-mac so flaky.
> We aren't even close to a point where someone can just pretend a container is a magical box that just works.
I challenge this assertion. While it is of course true that there are situations that require a deeper knowledge of Docker, this is also not universally true.
Many projects these days have a Getting Started doc that has some options like:
- Build from source
- Install this package
- Use Docker
I often choose the Docker option because I know I'll (most likely) get a working version of the project in 5 minutes with minimal effort. I might not even fully understand the project yet or its architecture (much less that of Docker), but I can get it up and running with `docker pull` and `docker run`.
In many cases, I'll never need to know anything more.
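For instance, a typical "Use Docker" quick start boils down to something like this (Grafana is just an example project):

    docker pull grafana/grafana
    docker run -d --name grafana -p 3000:3000 grafana/grafana
    # open http://localhost:3000 and you're poking at the project minutes later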
I've personally spent more time using docker to build my own stuff, so I've had to learn more. But for many folks, it absolutely is a magical box that just works, and that's perfectly ok.
I agree that all abstractions tend to eventually leak. But depending on why you're using Docker, you may never have a reason to encounter that leakage.
Just because CorporationX says “don’t worry about it we got you bruh” doesn’t mean technologists - people who actively work with technology and write software - should be excused for just throwing up their hands and saying “it’s just ducking magic I don’t know how it works”.
I’m not talking about understanding it to the level of being able to contribute a patch to the project.
I’m talking about understanding that containers are inherently tied to the kernel, and thus are limited to running software written for the same kernel as the host running the container.
It isn’t rocket science. I literally explained it in one sentence, and I’ve never used docker in my life.
This is along the same level of knowledge as “no, you can’t just take an iPhone app and run it on Android” or “no, you can’t just take a SQLServer query and run it on anything that vaguely knows SQL”.
> Just because CorporationX says “don’t worry about it we got you bruh” doesn’t mean technologists - people who actively work with technology and write software - should be excused for just throwing up their hands and saying “it’s just ducking magic I don’t know how it works”.
Entire industries are built on the premise that "don't worry about it, we got you". I'm not saying that it's appropriate to be completely blind/unaware of what you're using, but there's a line somewhere that's surprisingly difficult to draw in 2020.
I don't think anyone would argue that learning more is a bad thing. But the more salient point is that for many, it's just not necessary.
If you're doing work that requires a deeper knowledge of the thing, then of course you should learn it. If you're not doing work that requires this knowledge, it'd be a waste of time, the most precious commodity available to us.
Others have made the comparison to learning assembly. Useful? sure. Necessary in 2020? Usually not.
> It isn’t rocket science. I literally explained it in one sentence, and I’ve never used docker in my life.
This is what you said:
> ... if docker ran “natively” it’d mean using kernel hooks provided by xnu, which means you’d be able to run another instance of macOS in a container.
Not only does this tell the reader nothing practical about how Docker actually works, it doesn't even address the parent comment in a useful/informative way.
You followed this up with a statement that is a borderline personal attack on the parent comment.
I mean this as constructively as possible, but you need to work on your delivery, and ask yourself what you're trying to accomplish with these comments. So far, they've been unhelpful and borderline abusive.
The “one sentence” I was referring to is right above the bit you quoted:
> I’m talking about understanding that containers are inherently tied to the kernel, and thus are limited to running software written for the same kernel as the host running the container.
You've lost me. This sentence does not explain how Docker, or containers work.
It explains one aspect/limitation related to the execution of code in a container, but does not foster a deeper understanding of container architecture for the uninformed reader, and is actually somewhat misleading considering Docker's use of a VM behind the scenes in some situations (in which case the "host running the container" is technically the VM, not the user's PC).
I sense that you have useful knowledge to share. I'm afraid you've missed the mark. Instead of spreading vitriol and asking why everyone around you is so dumb, focus that energy on sharing that knowledge!
> considering Docker's use of a VM behind the scenes in some situations (in which case the "host running the container" is technically the VM, not the user's PC).
That is exactly the point. If you want to run Linux binaries you need a Linux container. On windows or macOS that means a Linux vm.
If you want to run windows binaries you need a windows container.
Conversely if you had a macOS container you’d only be able to run macOS binaries.
This is my point. It’s not a hard concept to understand. I’m not asking people to learn about cgroups or Chroots or network namespaces or any of that.
One, it might be totally fine to run macOS binaries! If your code is portable to macOS and Windows, you might still want to use Docker for dependency management, network isolation, orchestration of multiple processes, etc., but you might not care what the actual host OS is. (Just like how people are interested in running ARM binaries, even though Docker started out as x86-64.) At my day job, all the stuff we put in Docker is either Python (generally portable), Java (generally portable), or Go (built-in cross-compilation support). It's absolutely sensible to do local dev on a Mac and then deploy on Linux in prod - it's perfectly sensible to do so without Docker in the picture, and plenty of people do just that.
So, maybe all the people you're yelling at understand the concept you think they don't, and they're okay with it.
Two, it's not at all true that to run Linux binaries on non-Linux, you need a Linux VM. WSL1 is an existence proof against this on Windows, as is the Linuxulator on FreeBSD, as are LX-branded zones on SmartOS. Linux itself has a "personality" mechanism for running code from non-Linux UNIXes. You could do the same thing on macOS, and teach the kernel to handle a good-enough subset of the Linux system call interface - it would be far less work than adding containerization (namespacing and resource isolation) in the first place, so I'm not sure why you're so hung up about this.
> Two, it's not at all true that to run Linux binaries on non-Linux, you need a Linux VM.
So (a), this entire thread, the entire post, is about docker. (b) WSL1 worked so well, Microsoft not only abandoned that approach for WSL2, they also never used that approach for containers on Windows. Hence, Windows native containers are, drum roll... Windows.
Because we use abstraction as a way of lowering the barrier to entry, reducing what people need to know to be proficient in the career, and bringing overall costs down?
> which means you’d be able to run another instance of macOS in a container
This is not true, for multiple reasons. Strictly speaking it only means you'd be able to run another instance of Darwin in a container. And, as you surely know because your tone of voice implies you bear immense knowledge, a Docker-style container is not a full OS: it doesn't run an init or normal system daemons, so it wouldn't even be a full instance of Darwin, so it wouldn't have to support functionality only needed by launchd or system daemons (e.g. WindowServer). It would just need to let you run a standalone program in a chroot + separate network, PID, and IPC namespace + apply resource controls.
Furthermore, since most people are using Docker for developing software that's going to run on Linux, there would be no real need to virtualize the parts of XNU that aren't also provided on Linux - notably all the Mach stuff. You'd just need to provide a BSD-style syscall API to programs in a container.
There’s no technical limitation stopping you from running init or system daemons inside a container, it’s just an anti-pattern and missing the point of a container in most cases.
I mean, they do provide the hooks via Hypervisor.framework. I'm not 100% familiar, but if it's anything like KVM then running a linux VM like that shouldn't have that much overhead.
I deploy on linux, but dev on macOS without a VM or Docker or anything. If you're not doing anything OS-dependent, which most web apps aren't, you can run everything natively.
Me and just about everyone I know that has a Mac develops _for_ Linux. What is nice is that I can push, pull, and run Linux images on my Mac.
If the containers were native macOS Docker images, it would be about as useful as native Docker on Windows. Which I'm sure is great for the few ppl that need it, but pretty useless for most ppl.
But I sure wouldn't mind if it was a bit snappier. But it is plenty fast enough for my needs atm.
You wanted Apple to add container support to the OSX kernel? Hah. I wonder if the virtualization API that Apple is pushing performs better than Hyperkit.
People griping at you that you have no idea what you're talking about: smh, don't listen to them. All I have to say to them is: look at the Wine project running Windows software natively on Linux. Don't underestimate nerds who have a vision in mind. And look at Kubernetes deprecating Docker. At the highest level of application development, all these details don't matter. I'm using all GNU command line tools compiled for my Mac; I'm sure we could figure something out to increase containerization efficiency on Mac. ¯\_(ツ)_/¯
Docker Desktop already ran on Macs. This is specifically for the new Apple Silicon support (M1). It's not native, technically, but it feels native the way Docker Desktop works. Basically they manage the VM for you, so you don't have to.
Can you run a container of Windows on an x86 machine? The answer is no, and for the same reason it won’t work on ARM. A “container” is not a virtual machine, you can only run the same Linux executables you would on a normal Linux system.
That said, as another person commented, you can run Windows for ARM in a VM on an Apple M1.
There's no nested virtualization currently, so no virtualization support is provided to VMs; that means on Windows on an M1, only WSL1 works. Docker Linux containers on Windows require WSL2 instead.
Docker Windows containers aren't available on arm64 Windows yet, but stay tuned...
It actually did at one point; before Docker Desktop there was “Docker Toolbox”, which required separate virtualization software. The installer came with VirtualBox by default, but there were options to use Parallels and VMWare as well. This is probably what GP is thinking of.
I'm actually using a version of this setup today in order to run Docker on OS X 10.9.
Now come on Windows! Give me a solution for that (although I rarely use it) and I’m sold
or maybe I moved the goal post to requiring more than 16gb RAM again
but come on everyone you can do it!
edit: guys, I'm talking about running Windows in a VM on an M1 MacBook. Just going down the checklist of virtualization options. Docker is one checkbox. Now I want Windows and VMware/VirtualBox/Parallels.
We had painful experiences with Windows containers, from being unable to run CUDA through them to solutions that are not battle-tested and an ecosystem that is not as mature as the Linux one.
I run WSL 2 here with Docker Desktop and it's really good. It has been since WSL 2 was available.
As for 16GB of memory, funny enough I just put out a video today around the topic of "is 16GB of RAM enough for web development?" over at: https://www.youtube.com/watch?v=SQS7XCgUPmc
The video demos running everything I run in my day to day plus more just to see how far 16GB of RAM really goes. It's all running on a Windows 10 workstation I put together in 2014 which has a weak CPU and video card from today's standards, yet everything runs really smoothly.
I never broke 13GB of memory used even with running VMs separate from Docker and heavy duty video editing tools, all while recording a 1080p video at the same time.
For folks who want an M1 and are capped at 16gb of memory, I would imagine the memory usage experience would be similar to that Windows based video. Good enough for most web development, even with a moderate amount of media creation and ops work.
Do you mean the arm version of Windows which runs on Surface Pro X? That can be virtualized.
If you mean the regular x86_64 version of Windows, forget it.
Docker Desktop uses qemu, which is the only way you can run an x86_64 OS on an arm64 machine. It can be usable for Linux kernel and command-line applications.
But for Windows, it is very likely that it will be too slow to be usable. People have tried running Windows XP on Raspberry PIs successfully, but it is very slow as expected. https://youtu.be/QQOP29yLOxQ
Absolute disaster, this platform. The opacity and lack of early availability for development made for a really ugly adoption experience. I feel that people should not encourage this and vote with their money in the DevOps / Dev sector to discourage other companies from pulling the same stunt.
It's pretty good for a hardware platform. But I guess if you're a frontend developer, 2 months is enough for a new JavaScript framework to come out, become ubiquitous, get superseded, and get abandoned as legacy code.
Other than being unavailable outside the US, I'm pretty sure everyone who ordered one received it. Apple also donated at least a few of them to OSS projects.
For the average, normal human, they won't notice which platform they're on. Only someone with a niche need will have any issues, and those issues are being fixed rapidly.
While there are definitely gotchas and considerations, I've seen people say there will never be audio software on M1 in the next 2 years. There are already DAWs with M1 support and bridging for VSTs being released.
So, it's hard to buy into the idea this platform transition has been so horrible you'd abandon using macs.
[1] - https://www.docker.com/community/get-involved/developer-prev...