Docker fails to launch on Apple Silicon (github.com/docker)
655 points by bartkappenburg on Nov 12, 2020 | 272 comments



This link is a little confusing because the top comment and much of the conversation are talking about the DTK, which was based on the much older A12Z CPU, not the M1.

As far as I can tell, the M1 does have virtualization support, Docker just isn't ported yet.

Update: Also, from Apple docs it seems like you won't be able to run emulation and virtualization in the same process. So you can run x86 Mac apps, but it's likely x86 Docker images will be out-of-reach.


Right, lots of confusion in this thread.

A12Z/DTK: HW does not support virtualization at all.

Apple M1 / New Apple Products: HW does support virtualization for ARM64 guests (both windows and linux demonstrated).

What about x86 software in the guest OS? Not with Rosetta. Instead, the guest OS will have to provide its own translation (such as Windows-on-ARM's current x86->arm64 or upcoming x86_64->arm64 feature). I'm not aware of any usable high-performance x86_64->arm64 translation for Linux.

Docker w/ arm images: needs some work to be able to work on mac/arm virtualization, but it's coming.

Docker w/ arm linux kernel + x86 userland images: Any translation solution would be found within the Linux guest OS, not macOS. I don't know if any candidates exist. Maybe qemu? (See the sketch below.)

Docker w/ full x86 image (incl. kernel): I don't think this is possible?
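On the qemu point above: if that route pans out, a hedged sketch of how it's usually done on ARM Linux hosts today is below. The image name and flags are the commonly documented qemu-user-static ones, nothing Apple- or M1-specific, and this is untested on Apple Silicon.

    # Inside the ARM Linux guest: register qemu's user-mode emulators as
    # binfmt_misc handlers so foreign-arch ELF binaries run under qemu.
    docker run --rm --privileged multiarch/qemu-user-static --reset -p yes

    # An x86_64 userland image can then run (slowly) on the ARM kernel:
    docker run --rm --platform linux/amd64 alpine uname -m   # reports x86_64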


This was a huge reason why I decided to go with a 16" MBP with an i9 vs the M1 today. I assume this will get worked out eventually, but as much gravity as Apple has in its ecosystem to pull apps along, it will have a much harder time against the massive library of x86 Docker images.


Apropos of yesterday's thread(s?) around running a k3s cluster on Raspberry Pis - when I tried to do something similar, I very quickly ran out of steam trying to find manifests and charts that didn't have some random x86_64 image hardcoded somewhere inside them.

I spent more time trying to figure out ways to get the right images than I did learning k8s, so I kinda put the project down.

I do think this is exactly the sort of thing though that will start to flush that out, long term.


I guess the flip side is that, hopefully, the Docker ecosystem will start getting a lot better at multi-arch, at least for x86-64 and ARM. I just ordered an M1 MBP and I'll be quite happy to start thwacking bugs in images and upstreaming the fixes.


Just a heads up; I haven't tried using my DTK in a while but... development was still VERY rough last time I checked. I was pretty much forced to write code on an x86 machine and schlep it over via ssh to then run it.

Also, be real careful you don't accidentally build for x86 when installing libraries, or it all goes to shit. I got bitten by this using iTerm (building it from source was fine, though) and not realizing it was the x86 variant. That is to say, make sure your terminal is the system terminal or built for ARM; otherwise, when you install things using package managers, they'll be the x86 variant.

Like, I'm super stoked for the M1, but also just totally fucking irritated about the toolchain changes and MacOS in general... They've not been the best stewards of their software and developer communities as of late, and they're pushing a totally proprietary architecture onto them expecting them to foot the bill (hours spent fixing/debugging) to make their platform usable.

Truly, I get paid to write software that runs on Linux, not on Macs. In fact, I've never gotten paid to write software for Macs, because no production systems use Macs. Now it is harder for me to do primary development on a Mac, because I must fix MacOS or live with MacOS-only bugs... bugs which only exist on a platform I do not intend to deploy my code to... and these aren't just some config bugs either; they're going to be a fucking mess of irritating, show-stopping, moving targets with a negative ROI.

I still can't believe they didn't incorporate containers into the OS before switching platforms to keep developers around but that's a totally different rant.


We operate a k8s cluster with our own software and lots of opensource stuff.

If I can't run our own images on a future Mac, I will no longer buy one.

I can't force my team to maintain two versions if there is no good business reason.

I'm quite curious how this will play out.


I naively assumed, until today, that the whole point of using linux in docker was that you get the multi-arch goodness of linux for free!


Not at all. Docker on a raspberry pi is a massive pain in the ass. Half the images are not compiled for ARM.


Not a big deal. Building ARM images from Dockerfiles is trivial. And it has the benefit of not having to trust upstream image content. If you want to go the extra mile, contribute to the project to provide multi-arch image support.
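For what it's worth, building on the Pi itself is the simplest case, since a plain build targets the host architecture; roughly like this (the image name is just a placeholder):

    # Run on the Raspberry Pi: the build produces an image for the host
    # architecture (linux/arm/v7 or linux/arm64), no cross tooling needed.
    docker build -t myapp:arm .
    docker run --rm myapp:arm uname -m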


What multi-arch goodness of Linux? Linux can't magically run code from any architecture even without Docker in the mix, and unless someone is maintaining packages for your specific architecture, you're building them yourself.


As a non-processor geek, isn't M1 an arm chip? How long will it take arbitrary software to target it?


Yes, but AFAIK most ARM software targets armv7, not aarch64.


You did the right thing. This is not the right time for software developers to experiment with Apple Silicon, unless they’re willing to help develop software to work with Apple Silicon.


Largely depends on what kind of software developer you are. Not all developers rely on x86.

Our app is React/Typescript and we don't use Docker/x86 virtualization at all. I do need MySQL and a few other tools, but I don't expect they will be long coming (or maybe they're already running in Homebrew... is there Homebrew ARM?)

If you make your living writing Mac or iOS software, this is a gift. Might be worth waiting for the next generation with more RAM and even beefier CPUs, but otherwise this is ideal.


> is there Homebrew ARM?

There is, but it's still highly experimental, with 50% of formulae not working yet.

Better to run Homebrew with Rosetta for now.


> Better to run Homebrew with Rosetta for now.

For some reason I'd completely forgotten this was an option. Looking at the discussions on the Homebrew boards, it looks like this will be the way to run Homebrew for some time.


In my experience it's always python and/or ruby causing the failures. As soon as they are fixed, most formulae will compile just fine.


One of the formulae not working in Homebrew core is the gcc compiler, which they expect might be ready by mid-2021. Another is Go, which IIRC they're expecting will begin working around January. Rust and Erlang aren't ready, MySQL doesn't even get past the build phase, and Python is only partially working.

You can track progress here. Be advised that packages listed as ‘check again when XYZ is fixed’ may themselves have issues that can’t yet be discovered.

https://github.com/Homebrew/brew/issues/7857


I would definitely not take for granted that you’ll be able to develop such things on the M1 platform without encountering M1 platform obstacles in unexpected corners of the tool chains for the next thirty days. It might be fine, but if it’s not, you’re out of luck until it is.


Sure, M1 Apples are bleeding edge for at least six months, maybe a year or more.


I switched off Mac years ago.

My servers run Linux. Not bsd.

Having to deal with this sort of stuff is just a waste of time.

Macs don't have a monopoly on quality, and haven't for a long time.

And since it's a usual response: anyone claiming Linux is too fiddly hasn't used it in a long time. While it can be fiddly if you want to roll your own DE, that's completely optional. Using something like Fedora, you can do almost everything non-developer-related with point and click.


I moved to Linux last month. I currently run Windows, Linux, and MacOS VMs on my machine.

Having been a Windows guy all these years, I've found myself increasingly using Linux over this span of time. The only thing preventing me from a complete switch is MS Office. I barely used MacOS, except to diagnose issues on MacOS and run Xcode.


I moved to Linux in early 2019 and was surprised that I barely needed Windows anymore. I was already using LibreOffice, so MS Office wasn't a sticking point for me. I assume you tried LibreOffice and it's probably not compatible enough for you.

MS Office is very compatible with Wine, even the newer versions. The only problems seem to be with apps that have direct replacements, like Skype, or that are irrelevant, like OneDrive.

For setting up Wine and managing my apps I use Lutris. It is meant for games but is brilliant for setting up Windows apps as well. I wish they would acknowledge that fact in their UI, but it doesn't bother me too much that everything is called a game.


I didn't enjoy LibreOffice, largely because the functionality fell short. I work with Excel spreadsheets with lots of macros and formulae, which don't port well to LibreOffice.


I can understand this, I make do with Google sheets but it's seriously limited with my bigger spreadsheets, especially if there is lots of custom code / API response parsing involved.


May I ask what linux you are using and how you got a Mac VM running? I tried myself and was able to get Catalina running on Ubuntu 20.04 using Openboot. But then Catalina updated itself and crashed after that. I wasn't able to reinstall MacOS successfully anymore, even starting from scratch.

If you could help me out by pointing out which products/technologies you used and maybe a website describing step by step, that would be great.


Ubuntu system. Firstly, I disabled updates, not because of Mac in particular, but I've had issues with Windows updates crashing before, so I don't eagerly update.

Apart from that, standard QEMU process. This was the guide I used: https://www.funkyspacemonkey.com/how-to-install-macos-catali...


Thanks. I'll give it another try.


Have you tried the web-based Office? I haven't, and I'm not sure if it's a good solution, but friends have said good things.


I use Excel pretty intensively, learnt all the shortcuts from my past finance days. I don't think the web version allows for those back-of-the-hand shortcuts.


Fedora is the new Ubuntu


I haven't touched Fedora in a couple of years. What's going on there?


In my case: they ship an almost up-to-date kernel, they don't use snaps, it's vanilla GNOME, and the distro is very stable. The Linux community I follow also likes where the distro is going and the overall "Fedora philosophy".


That sounds exactly like what I’m looking for. Does it have an equivalent to “umake” for installing latest-version dev tools?


I find it quite ironic that a solution that's supposed to make things portable is now standing in the way of portability.

Yes, I understand why, but still...


Apple has only ever used portable to refer to their own platforms. Which is a bit of a stretch of the term considering what it means to the rest of the technical world, but there you have it. Per their own definition they have improved portability among their own platforms.


Same reason why I chose the 2020 13” with the highest end Intel I could put in it. In another 3-4 years all this will be worked out and I’ll be happy to move to Apple Silicon


> Docker w/ arm linux kernel + x86 userland images: I don't know what existing projects might be candidates, but any translation solution would be found within the linux guestOS, not macOS.

That could be something doing dynarec like box86, which is, mind-bogglingly, the only way to run (x86) Zoom on armhf Linux atop a Raspberry Pi 4 (I tried it on a Pi 3B: it works, but it's way too slow).

https://github.com/ptitSeb/box86


> Docker w/ arm linux kernel + x86 userland images

as mentioned in the issue comments: Docker x86 version already supports running ARM images via qemu out of the box on Windows and Mac, so x86 images via qemu on ARM host should not be a problem...
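For reference, a hedged sketch of what that looks like on an x86 Docker Desktop today (the --platform flag may require a reasonably recent Docker release):

    # Docker Desktop ships qemu binfmt handlers, so a foreign-arch image
    # simply runs under user-mode emulation:
    docker run --rm --platform linux/arm64 alpine uname -m   # prints aarch64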


> Docker x86 version already supports running ARM images via qemu out of the box on Windows and Mac

it "works", but on my overpowered i7, running a raspberry pi docker image that way with qemu-user-static is... only marginally faster than running the code on an actual rpi 3, which is pretty much potato-levels of power.


Yep, so if you're a developer and buy a Macbook with the M1, all the potential performance gains would be lost when depending on common Docker images. That's quite sad.

(It would at least be a huge problem for me. I mostly develop server software that runs on Kubernetes on AMD64 machines.)


Why not just run the ARM images instead of the x64? Granted not everyone makes ARM images, but in a year or so I fully expect that all serious Docker images will have ARM support (not because of Mac, but rather because of ARM’s progress in the server space). And in the meanwhile, building the image yourself isn’t that hard.


Why would your employer pay you to build and test ARM images when the server they're running on is AMD64? Okay, let's be realistic: they'll probably never find out. But still this is wasted effort when you could just as well use a regular x86 machine for development. Cross compilation will lead to many small problems that add up over time...


More likely than not, those servers are at AWS. And if that's the case, once all the components have ARM images the server instances can be switched over to the M6g instance type for significant cost savings.


Even if not AWS, every cloud provider will have cheaper ARM offerings in a few years. The writing is on the wall at this point. Not to mention that some significant portion of the PC market will certainly follow suit, and now whether you’re developing for desktop, mobile, or server, there’s a good chance you’ll be targeting ARM.


They would do it so their developers can use Macbooks.


All of this already works on Windows in the other direction:

    PS C:\Users\hokaa> docker run -it --platform aarch64 ubuntu
    root@aaf35d3fd9de:/# uname -a
    Linux aaf35d3fd9de 4.19.128-microsoft-standard #1 SMP Tue Jun 23 12:58:10 UTC 2020 aarch64 aarch64 aarch64 GNU/Linux

And yes, AFAIK it uses qemu's binary translation.


x86 software running in a VM-hosted ARM Windows/Linux environment, but using Apple's translation layer, is not something I would expect to work anyway - seems like a weird thing for them to try to do.


You should be able to run x86 VMs through qemu, just like running ARM VMs on x86 today.


Also, it seems Apple demoed Docker on Apple Silicon at WWDC:

https://twitter.com/0xtim/status/1326980313160032256?s=21


I know they demoed Linux virtualization at WWDC, but I don't recall Docker. I could easily be mistaken though.


They specifically mentioned it by name in the keynote, and even have a screenshot of it running on Big Sur. https://www.youtube.com/watch?v=GEZhD3J89ZE&t=6006

(1:40:06) if the timestamp doesn't work


Yes, but it was ARM up and down, no x86 at all.


Holden Karau has public videos showing her attempts to get ML tools, e.g. Dask/Kubernetes, running on ARM, including heavy use of Docker.

For the last year she has been working at Apple so I've always suspected that she was developing on M1 hardware but with Linux OS.

https://www.youtube.com/c/HoldenKarau/videos


Kubernetes itself has been installable / running on ARM for years now (I remember getting it working on Pis back in 2016/2017 and have been running http://www.pidramble.com on K8s since then).

Many, many images are now multi-arch so I encounter failures to deploy things to the Pi much less frequently nowadays.


> run emulation and virtualization in the same process

To me this says that Rosetta doesn’t convert VT-X commands (et al) to their ARM counterparts.

I don’t see an issue here... sure you’d be left without a dedicated VM framework from Apple, but you should be able to run a qemu process that is emulating a Linux VM. That qemu binary could be either x86 or Arm. But instead of using the Mac OS Hypervisor.Framework, you’d be back to using stock qemu that we used to use. It just isn’t packaged up as nicely at the moment. (And who knows how fast it will run).


I'm not exactly sure what the point of VT-X would be in an emulated environment anyway, and it would be a waste to make Rosetta 2 that much more complicated for it.


Also, Rosetta 2 is user mode only, and VT-X commands are kernel/hypervisor mode only.


I think we made a couple of expedient design decisions early on in our build process at work that are going to make producing multi-platform Docker images a bit of a chore. I'm not looking forward to what happens on the next machine upgrade cycle. Probably we will end up with most people on the final generation of Intel Macbooks so we have a while to figure it out.


> Probably we will end up with most people on the final generation of Intel Macbooks so we have a while to figure it out.

If the early story around the performance of the M1 bears out, I suspect this won't be the case. If their CPU for the base MacBook Air is trouncing higher-end MacBook Pros, what kind of a beast are their higher-end CPUs going to be?

People who really need x86 compatibility will stick with x86 based Macs. Others? I'm not sure. For my current job where everything runs on node.js, the performance and battery life on the new CPU is pretty appealing.


You realize most engineers write software that runs in production on x86 Linux, right?

All that performance is going to be used to run cloud based Linux x86 development environments. Repl.it and Github codespaces are the real winners with the move to Apple Silicon


"Most"?

I'm not entirely sure that's true. Or at least not entirely relevant/ interesting.

First, you have thousands of developers writing code for iOS/ Android & Chromebooks. That right there is a pretty big chunk of developers. And at least for the iOS developers, running on ARM instead of x86 is a significant advantage.

Second, a lot of us are writing software for the web which means primarily writing software that runs in the browser. So long as we have a good running version of node.js and a browser to test with, we are golden. Assuming for a moment here that Google Chrome and Firefox are both going to be ported to ARM, I don't really see why I care about x86 linux except...

Of course you need to serve up your site and that piece is usually running on x86. Only... many of us are already crossing platforms. Our web server is Node on Linux and my dev system is MacOS (no Docker or Linux VM) so I'm already cross platform.

When I was primarily running a Python/ Django shop, it was a similar deal—so long as I was able to get Python running I was good. There are definitely a few places where you notice the difference, but there are definitely places where I can use an extra 50% battery life and a faster CPU as well.

I don't know how many other developers really need x86 and how many don't. I have had jobs where running Docker/ Kubernetes was important, but I've had a fair number where it wasn't as well.


100% of my coworkers deploy to x86-64. I wasn't trying to speak for everyone. I thought I was pretty clear about that.


You were very clear you were only speaking for "Most Engineers".


different responder


This will become less of a problem over time as more M1’s make it into the wild.

What gives me pause more so than this is the absence of a touchscreen and Face ID, which are very obviously coming in the next refresh (iPad app support without touch?).

As impressive as the new MacBooks are, they’re very clearly a stopgap solution so as not to replace everything all at once.


> As impressive as the new MacBooks are, they’re very clearly a stopgap solution so as not to replace everything all at once.

I agree, but at the same time, they pushed it pretty hard by not just doing an Air with the chip like I thought they would, but putting it in a Pro as well (with, according to the early benchmarks I've seen, barely any performance difference. I mean they could at least have put two of them in there)


Re: the Pro and Air difference... expect the performance for anything long-running to be rather significantly different, I'd guess...


Yep.

The Pro adds

- Brighter screen
- Better cooling for extended CPU loads
- Touch Bar (which many don't care for)
- Bigger battery

The MacBook Air boosted to 16GB is likely the biggest bang for the buck unless you are going to be hitting that CPU hard all the time. The iPad has pretty damned good performance without fans, so I expect the Air will do quite fine.


In this case the whole Docker universe needs to be ported, including a gazillion packages? I'm glad if most of the data science packages even compile successfully on x86. Much fun, it seems.


Is the inability to run emulation & virtualization a limitation of the silicon itself? I.e., will it ever be possible on these chips?


It's a hardware thing, so the answer is "never" on A12Z and "it's possible" on A14 and "Apple supports it" on M1.


The limit on “emulation and virtualization” just means Apple’s emulator software (Rosetta) doesn’t support x86 virtualization. Someone could write a different emulator that does support it.


Looks like Valgrind 32-bit porting story to Mac 64-bit


Slightly misleading headline. The DTK w/ the A12Z chip didn't support virtualization, but it appears the new M1 will. Since Docker devs haven't gotten their hands on M1 hardware yet, it remains to be seen how much work is needed to make it functional. It may just work out of the box?


The APIs are somewhat different, so it won't "just work", but it shouldn't be too hard to make it happen.


I will not be buying one of these first generation M1 devices because of issues like this. But in the long term I actually think it’s going to be a really good thing for the industry if we can get more developers using non-x86 devices and deploying code to non-x86 servers.


Having people working on fast ARM machines will be good, although I do worry that apple will be apple and you'll never be able to run (say) Linux on said ARM chips.

I used to be fairly laissez-faire about it, but now I really think big tech companies selling hardware should be legally required to support (technologically, rather than as in customer service) at least some level of freedom for end users.


Wait, you can't install a non MacOS on these things? Are they basically an iPad in a notebook form factor?


Presumably they are locked down. When asked about alternate OSes, Craig Federighi said that it will be virtualization rather than booting another OS on the metal, i.e. Linux (and possibly Windows) VMs rather than Boot Camp. That's one of the big things stopping me from switching from the Intel version.

https://www.youtube.com/watch?v=Hg9F1Qjv3iU&t=3772


Secure Boot can be disabled on ARM Macs. My guess is that Craig Federighi says this because Apple won't take care of drivers for other OSes.

See this WWDC session: https://developer.apple.com/videos/play/wwdc2020/10686

and this manpage: https://pastebin.ubuntu.com/p/RwcT8stYMY/


While that may be true, due to custom ARM instructions that Apple uses, it might be difficult to port various other OS onto it. We'll see though.


I would think the bigger problems are drivers & secure boot rather than just instructions. M1 is a regular ARM processor so existing backends at a minimum would work fine even if they're not fully optimized (+ I would be a bit surprised if the M1 support for LLVM isn't upstreamed).


Is that really true that M1 is just an ARM processor? I've heard the difficulty in emulating it is due to Apple specific things, whether they be drivers or new instructions.


It's an asymmetrical situation:

If you have code that makes use of custom instructions, even if only sprinkled in a few places, the emulator must support them.

If you have a cpu with those extra instructions that is otherwise backwards compatible, you can run code that doesn't make use of such instructions just fine (of course, you won't benefit from the functionality/performance offered by those new instructions)


I understood this thread to be about running Linux bare metal natively on the M1, so there's no emulation there & new instructions don't matter. Drivers & secure boot are always the biggest problem. It sounds like you can sign your own kernel & add it, so in theory secure boot may not be a problem. Then the question is which drivers are missing & need implementation & how closely that lines up with existing ARM device trees.


An experimental Linux kernel that runs on Apple A10 devices is at: https://github.com/corellium/linux-sandcastle

See https://arstechnica.com/gadgets/2020/03/project-sandcastle-b... on details about it. This work, especially as there's no exploiting security bugs required here, will benefit Linux on those Macs quite a lot.


> This work, especially as there's no exploiting security bugs required here, will benefit Linux on those Macs quite a lot.

Right, it sounds like Apple was very careful to provide an out here for enthusiasts by making it possible to sign your own kernel, something you can't do for iPhone/iPad/Watch. Seems like an astute appreciation of differences in their target market.


Really curious to know how you found out about bputil. It doesn’t seem to appear in the WWDC session you linked.


It has more knobs than csrutil. I chose the bputil manpage as it contains more info on the underlying hardware and security model, and it allows you to selectively disable only some of the security bits.


You can't install an operating system that's not signed by Apple. Also, the system partition is read-only (just like Catalina was).

However, as they showed at WWDC [1], you can run Linux and other operating systems using Apple's hypervisor. And it will run faster on an M1 Mac than it does natively on comparable Intel hardware.

[1]: https://developer.apple.com/wwdc20/10686



What exactly is comparable Intel hardware, anyway?

That is to say, am I supposed to compare based on price, power consumption, process node equivalent, or some other factor?

I ask because it's hard for me to understand the comparison otherwise.


What's a computer? iPad.

People aren't laughing now, are they?


The entire thread is about the Developer Transition Kit hardware, which didn't have hardware support for virtualization.

Obviously they don't have access to the shipping M1 devices yet, which are supposed to have the hardware support.


> I will not be buying one of these first generation M1 devices because of issues like this.

It is always a bit of a gamble to be on the bleeding edge with your production machine!


I mean, I wouldn't buy first-gen of anything from anybody. When Apple went Intel, I bought year2 or 3 hardware.

What they're showing now is pretty damn impressive. I look forward to the M2 or whatever variants that would be a more direct replacement for the 2019 rMBP I have now (6 cores, 32GB RAM).

But I'm happy to wait. If you don't have virtualization needs, and your use case is pretty straightforward, these are gonna be amazing machines. They're just not the power user machines -- I mean, except for those who insist on bleeding-edge living.


> But in the long term I actually think it’s going to be a really good thing for the industry if we can get more developers using non-x86 devices and deploying code to non-x86 servers.

Why?

If it's about the dominance of x86, I think in about 15-20 years there will be no other architectures in widespread use except for ARM and maybe RISC-V.


It took ARM 35 years to be an overnight success and x86 has a very large install base. It's not going anywhere. Remember this thread from not too long ago?

https://news.ycombinator.com/item?id=22837753

x86 has lock in like that in spades.

What I think is much more interesting is what the software industry will do with all these random coprocessors we find in chips these days. They seem much less stable than an instruction set but amazing speed gains can be had there, so it's enticing. If software libraries can bring the dream of the HAL to reality, that would be pretty cool.

But from threads like these, it's safe to conclude we're not there just yet.

https://news.ycombinator.com/item?id=25057985


But x86 and Intel are losing badly in the highest growth part of the market: mobile/smart phones.

Now $699 M1-based Mac mini is way faster than any Intel box in that price range and many that cost much more.

The benchmarks that are starting to come out are just nuts, favoring the M1.

Of course not all of the pieces are in place; that takes a while. But those developers that are on top of their game released universal versions of their apps that take full advantage of the M1 SoC and all of its benefits: unified memory, 8 CPU and GPU cores, Neural Engine, etc.

It's day 1 of Big Sur being available and customers haven't gotten their M1 Macs yet, though they'll have them in a few days.

In a couple of months, once the dust has settled and developers have gotten their hands on shipping hardware, people's understanding of what performance is possible on consumer-level hardware will be changed for good.

There are Hollywood studios already planning to replace their high-end, pro Macs with M1 Mac minis because they'll be faster than what they have: https://appleinsider.com/articles/20/11/12/hollywood-thinks-...


> But x86 and Intel are losing badly in the highest growth part of the market: mobile/smart phones.

x86 and Intel are making all their money running your k8s on AWS, GCP and other server platforms so you can consume on your locked down ARM device. That's where the big money and margins are.

Apple never really could get a foothold in that market.


> x86 and Intel are making all their money running your k8s on AWS, GCP and other server platforms…

ARM is coming for the datacenter too; this will not end well for Intel [1]:

    Amazon Web Services launched its Graviton2 processors,
    which promise up to 40% better performance from
    comparable x86-based instances for 20% less.
    Graviton2, based on the Arm architecture, may have a
    big impact on cloud workloads, AWS' cost structure,
    and Arm in the data center.
[1]: https://www.zdnet.com/article/aws-graviton2-what-it-means-fo...


Mobile phones have had a decade of phenomenal growth and development.

Now people are at home again, and not quite so mobile. Commensurately, we are seeing a breath of life into a laptop/desktop market segment that, with few exceptions, has been marching to a steady drumbeat for at least the past six years.


> Commensurately, we are seeing a breath of life into a laptop/desktop market segment

True. The Mac had its best quarter ever, which should be a good setup for the transition to M1.


It also marks the end of using the same artifact (container) to debug issues, which is a huge selling point of Docker.


Yup, it almost feels a little short-sighted now, the way we have embraced taking applications written in architecture-independent languages like Java, Python, etc. and packaging them up in architecture-dependent artifacts.


It is going to be interesting to see how new Apple machines running ARM-based processors play out. Writing code and building Docker images locally on ARM, then building and running the same code on x86 in production, could potentially cause some issues in some circumstances. If not, it's a bit of a pain to potentially build and publish x86 and ARM images so that people can run them anywhere (some projects are already doing this). Running production on ARM could work if you wanted to, but right now the instance types are very restricted in terms of different combinations of cpu/disk/io.


> building docker images locally on ARM and running it on X86 in production could potentially, in some circumstances cause some issues.

Won't it always cause issues? If you build locally, you are packaging ARM binaries into the image's filesystem layers, which won't run on x86... unless you somehow signal to Docker that you want a different arch when building.


"Somehow". That's exactly what you have to do. Build multi-arch containers with docker buildx.


I assume (but don't know) that most workflows build the deployed docker image in some kind of CI service. Is that not true?


Sorry I made an edit to clarify what I meant as it was confusing.


If you are developing on Apple Silicon, the path of least resistance is to deploy to AWS Graviton2 instances, even if the instance cpu/memory/io combination isn't a perfect fit for your use case.


I agree. However for a lot of people, their workloads have been running on x86 forever, maybe their production instances are not reproducible, and with combinations of cpu/memory/io being limited, at scale this could cost $$$. I’m just curious to see how this will pan out.


I wonder if this will drive a move to deploy on ARM-based cloud instances, like the ARM-based VMs AWS has for EC2.


It seems like it very well could. It probably won't be power users, but perhaps it is the start of more ARM-based deployments.


I'm actually tempted to get one of the new Macbook Airs. There's something intriguing and fun about navigating the brokenness of a new ecosystem. Does anybody know how Steam handles the new macs?


>>> Does anybody know how Steam handles the new macs?

Games are mostly C++; based on my experience with C++ codebases I'd say there's about zero chance that a game could be (easily) recompiled and made to work on ARM.

And games won't be ported anyway because it's too much work for too little sales, even if a developer really wanted to make it work they'd need a $2000 new mac to be able to compile/test in the first place.


As someone who cross compiles C++ to arm (and others) as part of my $dayjob, I sincerely don't understand this comment. I would like to, as maybe there's something I misunderstand.

Ninja edit: I guess you're suggesting that some parts are in hand rolled x86 assembly, not just pure C++ that would be output as the cross compiler target architecture?


There's usually a million assumptions that are technically undefined behaviour but work, and immediately break if you try to cross-compile by just changing the target system in CMake or w/e.


I could see that if it were being compiled against another libc as well. It's true my experience is with a codebase that has always been cross-compiled; I guess I just haven't seen altering the output architecture be an issue. Aarch64 has never given me an "immediately break" scenario. If it compiles, it works. YMMV though. Computers are complex.


Most games that survived the 32 bit -> 64 bit change and run in Catalina should run under Rosetta2 with no changes.

Performance? Who knows.


$2,000 is a stretch. You can get the high-end M1 Mini for about $1,200. A high-end M1 MBP for under $1,800.


The Macbook Pro that is $1200 in the US is £1300 in the UK and 1449€ in France. Apple products are way more expensive outside of the US.

Plus another 100 or 200 in adapters. It's beyond me why they would make a laptop with zero USB and zero HDMI ports.

Total is $2000 when converted back in USD.


A business doesn't pay the VAT, so the $700 basic M1 Mini is £583, which is $772.

The 10% might still be to account for the instability of the GBP.


The new MacBooks have 2 USB ports.


One of them is used for charging, so you really only have one usable port. Still, a TB3 dock should be able to provide enough ports for most people (at the expense of portability).


Not to defend Apple because your point is valid, but you can buy a $25 Anker hub and run power through that and gain a ton of ports.


Not really. You could try to get a $25 hub from Amazon and hope it works; half the reviews seem to disagree on that.

However, it still won't give you any HDMI or audio or Ethernet output. If you try to get one more adapter, say HDMI to connect a projector, you won't be able to plug it in because there was only one free Thunderbolt port in the first place.

£75 for basic connectivity https://www.apple.com/uk/shop/product/MUF82ZM/A/usb-c-digita...

£120 for dual display with dual USB. https://www.apple.com/uk/shop/product/HMX02ZM/A/caldigit-thu...

£230 for a dock https://www.apple.com/uk/shop/product/HMX12Z/A/caldigit-ts3-...


I dunno what you're on about, here are a couple of Anker adapters I use just fine, all day every day.

That said, it sounds like you're really not interested in this market and I recommend you look elsewhere for a solution.

https://www.amazon.com/Anker-Upgraded-Delivery-Pixelbook-A83...

https://www.amazon.com/Anker-PowerExpand-Adapter-Delivery-Et...


Prices in France are TTC (Toutes Taxes Comprises, all taxes included), the VAT in France is 20% so if you take 20% off 1400€ you end up with 1159€ which is $1370. So in the end the MBP is a bit cheaper in France than in the US.


Apple claims there is not a significant translation slowdown when using Rosetta 2 to run graphics heavy workloads.


Maybe you're talking about indie developers. I don't think studios worry about spending $2000/machine for porting. And a lot of studios also outsource porting


Or a $700 mini.


This is exactly the reason I picked up an iPad Pro earlier this year. I have ADHD that flares up pretty bad when I’m wading through the boring parts of a project and flipping through open windows, the Dock flashing at me, update notifications popping in randomly, and all the complexity that comes with a desktop OS just wore me down. Add in all the fun stuff 2020 has to offer and I just couldn’t take it anymore.

Not that the transition has been seamless, but I’ve only had to fall back to my Macbook once in the past three months. There’s something about shaking up the routine, plus the forced simplicity and single-tasking has majorly helped my focus. I just wish Cloud9 or Codespaces worked better on an iPad...


This sounds like you would actually _like_ the incompleteness of most linux window managers. On i3 you don't get update notifications, because by default notifications don't work!


I’ve been down that road and combining ADHD with an infinite amount of buttons to press and dials to turn is a recipe for disaster. I just can’t help but push all the buttons and turn all the dials until something breaks, and then I have to spend all day trying to fix it.


That’s why I got the Pro (also my laptop is from 2013 and I’m desperate). I love and use Docker every day, and know it’s a mess right now, but I’m still excited to jump in head first.

Edit: This is my personal laptop fwiw. My work one is still Intel.


What is the difference between the M1 Air and the M1 Pro 13"? From what I can tell, they are basically the same spec except for small differences like a slightly bigger battery, active cooling, and the Touch Bar in the Pro.

Is that worth $300, for a larger form factor?


Form factor is similar. Brightness is better as stated. I believe you get better speakers in the Pro, the charger is higher wattage and the GPU in the high end air and Pro is 8 cores while the GPU on the base Air is 7 cores.


The Air is passively cooled, so the theory is it won't sustain higher workloads for as long as the Pro should be able to.

From what we know right now - same processor, different thermal envelope.


400 nits versus 500 nits brightness is another minor difference.


Higher sustained speeds, larger battery, brighter screen, better mic/speakers, Touch Bar.


> intriguing and fun about navigating the brokenness of a new ecosystem

Funny how far we've gotten from "It just works"


If you can burn the cash then go for it, but unless these laptops can support something other than MacOS I'm (with a sad expression) staying away from them.

Obviously if you just make websites it doesn't matter but if I'm buying an ARM device I want to play with the bare metal (you can't even access performance counters in MacOS IIRC)


May I ask what you would enjoy about a Macbook not running MacOS?

I don't like Apple devices myself but I understand that some people like the ecosystem as a whole.

However I don't understand the point of a Mac without MacOS. The hardware is pretty much garbage compared to any laptop of the same price range, let alone high-end laptops like Thinkpads.

(this is an honest question and I am not trying to troll or make fun of anyone, in case anyone doubts the tone of my post)


Apple hardware is pretty nice. Spec out a top-of-the-line Thinkpad X1 Carbon Gen8 against a top-of-the-line MBP13, and you definitely pay more for the MBP13, but it gives you more and superior RAM, more and superior SSD...the configurable ceiling and hardware quality are higher. The X1C-8 gives you more screen resolution, 4G LTE (not that you would need it if you have mobile hotspot with 5G), more ports, WiFi 6....but the central nervous system of the MBP is arguably better.

To answer your question, I don't see much reason to buy a MBP and run it without MacOS, because -- although the build quality is sensational and the MBP travels well -- I'd rather just buy a Lenovo with Windows 10 Pro from the factory, get their top-tier service and repair plan, and still come out cheaper than a new MBP. However, I could see myself converting an old MBP for use as a Windows machine.


You are comparing the specs sheets while I am looking at practical situations.

For the same use cases Linux feels (and is) way more performant and responsive even on a less powerful hardware, and I'm not even talking about Docker.

The MBP keyboard is probably the worst on the market, and any lower-end laptop keyboard is more comfortable to use. And nowadays it is not even a complete keyboard, since it lacks many standard physical keys.

The number of ports is extremely limited and you cannot connect anything without an adapter.

The computer gets hot very easily, and when it does, the keyboard gets extremely hot as well. It is also very noisy compared to any other laptop.

The glossy screen may look cool in a shop but it is very unpractical and uncomfortable.

I do not have any of those problems with my X1, I even have the best keyboard I could hope for a laptop, along with actual features like the Trackpoint, physical camera shutter or the PrivacyGuard.


Install Linux on them when MacOS is no longer supported on your machine, perhaps? If your laptop still works, you should still be able to use it even if it is no longer supported.

Also, many people like to work with dual boot devices. And no, virtualization is not the same.

The hardware is also not garbage. It may not be competitive, but it is well made.


1. I'm not that much of a linux-guy but almost everything I derive pleasure from in programming is fairly close to the metal, e.g. microarchitectural benchmarking and tracing are probably the two things I've been doing the most recently and you can't even get close to them on MacOS. If I spend that much on a new ARM machine that's what I want to be doing

1.5) I like the GPL, apple doesn't

2. I like apple's hardware in a vacuum. As much as the company leaves a bad taste in my mouth (there is a deeply pretentious streak to Apple's approach to their users), they make nice cases and such.


> Obviously if you just make websites it doesn't matter

Making websites in 2020 is not that simple. At my workplace, we use Docker extensively in our build pipelines.


It's probably Apple's #1 tooling priority right now, though.


Are the old x86 Macbook Airs already discontinued or are they still available via some less obvious page on Apple's site? I picked one up last month since I liked the form factor and needed x86. Seems like I may have got it just in time?


They're available refurbished.

https://www.apple.com/shop/refurbished/mac/macbook-air

I've bought a lot of Apple refurbs, they are like new in every respect.


I don't know about Steam specifically, but in the Apple keynote they said that existing games can run under Rosetta 2 and that some of them even have higher frame rates due to M1 speed despite emulation


> intriguing and fun

ఠ_ఠ


I've been holding off on upgrading my MBP13 for a few years now because Apple was painfully slow to refresh it. I was ready to order the model that came out in May (with 32GB RAM) but then the rumour about ARM hit the internets so I kept waiting.

Now (I guess always?), it's clear that Apple Silicon is not going to be a comfortable dev environment, at least not for some and not for a while. The JDK macos/aarch64 port is still in development and so is VS Code. Docker support is probably months away (looking at their roadmap, no dev work has even started) and when it arrives it's almost certain to be limited to ARM Linux images.

Still, hanging on to x86 on a Mac seems like a lost cause and I wonder if I should just change my approach. Rather than getting a beefy MBP, get the cheaper Air with M1 and a powerful mini PC (NUC or similar) with native Linux. VS Code has a Remote Development (over SSH) feature, has anyone used it? Can it be combined with Docker (on Linux) in a seamless setup where the Air runs VS Code and all development happens on the mini PC through remote coding? Is this setup going to work when VS Code macos/aarch64 is out, or is there something else one needs to wait for?


Up until this year, I had been using a BYOD MBP for both work and personal development. I'm on a late-2016 MBP and it's been a great setup that lets me interact with corporate's Exchange server, edit PowerPoint slides, screen share over WebEx, and still develop for Linux servers.

End of last year, I switched jobs and have been forced onto a Windows laptop. I had been using VSCode, so after fighting with WSL1 and Docker on the work system for months, I installed a dedicated console-only Linux VM (under VMware Workstation) within Windows and I use VSCode's SSH remote option. It's pretty much seamless. The VM runs Docker "natively" within it, and the core part of VSCode runs within the VM itself.

There used to be a bit of confusion: if you opened a new VSCode window, you then had to use that to open a second SSH-connected window. But last month they changed that so you can connect the current window. When you open a folder or file, it's all like browsing the "remote" VM's filesystem. VSCode also now auto-detects when you're starting a program on a localhost port and creates an SSH tunnel from your desktop to the remote system (you can also set these up manually if it fails to detect it).

Sidebar: Now that WSL2 is available, I could see migrating from my Linux VM under VMWare to a WSL2 Linux VM under Hyper-V, but that's another level of effort for about the same end result.

Just prior to the pandemic, I had set myself up a linux desktop and made that my primary system (relegating my MBP to secondary/couch use). My work laptop was on a stand to the right and my MBP was on a stand to the left. I would use vscode to remote into either the VM on my work laptop or develop personal stuff locally. I also setup VSCode on the MBP to remote into the desktop. During the pandemic with kids home, I had to migrate from my detached garage/home office to inside the house. So I rarely touch my desktop directly and do my (personal) dev work remotely on it from the MBP.

In the future, should it come time to replace my MBP, I don't think I'll use another Apple. Since I can't BYOD it for my current job, and I don't need Outlook and PowerPoint for personal use, getting a hefty Linux laptop seems just fine for personal. If they would let BYOD MBP on the corp VPN, I'd consider it.


I haven't used it with docker, but the remote development feature of VS Code is surprisingly good. You don't feel like anything is missing or slow compared to running on your own machine.


> VS Code has a Remote Development (over SSH) feature, has anyone used it?

I actually took this route, building a $300 personal server out of second-hand parts (except for the RAM and SSD) so I wouldn't waste too much money if it didn't work, and I try to exclusively use VS Code's remote development feature. It's working perfectly. My development workflow hasn't changed at all. I was expecting some friction, but there is almost none. I can even work outside my home network seamlessly using ZeroTier. Building a Ryzen system would be totally great for this setup.


I've been using WSL2 for my personal projects on Win10 and it feels really good if you use VSCode. The Windows version of "spaces" is better IMO, which is a huge plus. It was very easy for me to go from my Mac workflow to Windows, which I found surprising. Docker, Windows Terminal, VSCode, and WSL2 all seem to work well for me. Windows 10 even has a Spectacle-equivalent-ish feature where you can drag and snap windows to create grids for your apps.

This allows me to go from development on personal projects to gaming and vice-versa which is very satisfying.


All this trouble so you can stay with apple?

Unless you are an ios dev, why??


MacOS, is usually the answer.


That begs an even bigger why, given it's such an awful OS in many ways.


LOL. Because people value different things?

I stay on MacOS because it works very, very, very well for what I want to do.

There are a large number of affirmative reasons I prefer Macs. Hardware build quality has traditionally been stellar, on par with the golden age of Thinkpads (which is one reason the keyboard thing was so jarring). The OS is immensely, profoundly stable. The built-in tools for things like mail, contacts, and calendars work very very well. The overall polish of the experience is unmatched.

AND I have access to a bash prompt, and a whole host of FOSS offerings for things I might want. (As I've moved away from actually writing code, this has mostly boiled down to emacs and a few other bits, but still; it's comforting.)

I also use an iPhone, an iPad, and have an Apple Watch. The seamless integration is really, really great. I'd be hard pressed to give that up.

Add to this the fact that there are material reasons I want to AVOID the other two players:

- Windows is a chaotic disaster in terms of consistency, stability, overall design, and general behavior over time. The degree of weird enmeshment of binaries required to install software more or less guarantees system bloat that just doesn't happen on Macs.

- Linux can be many things to many people, and if Apple hadn't moved to a unixy base at the turn of the century I'm sure I'd have ended up there. But as it is, I'm just not willing to tinker around to achieve a workable environment for me, or deal with the inevitable interoperability challenges that would come from living without the COTS tools I rely on, like Office.


So much this. OSX remains a great tool for devs on x86. As much as I would love to move to Linux full time, I also don't want to think about or deal with OS issues when I'm trying to write my code. OSX is far more polished than any other Unix-based system out there, and there's a ton of support for any issues you might have.

I also love Homebrew; it strikes a good balance between something like apt and Windows-style installers. Toss all that together with the ecosystem that Apple offers with its other products (being able to send texts from my laptop via iMessage was a revelation) and you have one hell of a value proposition. There's a lot of valid criticism against Apple, but I can pretty much guarantee that 99% of the actual devs who rail against them use an MBP for work.

People have to understand that these are not your bargain-bin windows laptops, these are premium products. You wouldn't compare a Ferrari to a Toyota Camry, nor would you treat them the same or expect the same level of "performance" across all use cases.


Why is there this illusion that OSX just works and Linux requires much maintenance??

Only today we had 2-3 _major_ OSX issues on the HN front page. For example:

https://twitter.com/lapcatsoftware/status/132699029641299148...


It's not maintenance. It's the hassle of getting it to work the way you want it to work (which, for me, would mean "on par with the seamless experience I have on MacOS/iOS).

Maintenance is dead easy on Linux. Setting THAT up, though, isn't.


>Hardware build quality has traditionally been stellar, on par with the golden age of Thinkpads (which is one reason the keyboard thing was so jarring).

Really? You think the same Macs that maintain silence by overheating have "stellar" build quality? Seriously, go watch Louis Rossmann's channel. You'll soon see how terrible the build quality in Macbooks is. Just because it's encased in aluminium doesn't mean it's well made.


Yes, in my personal experience (20+ years of personal and organizational Mac hardware), their build quality has been well in excess of everyone but golden-age Thinkpads.

- They work a long time

- They tolerate travel and other "heavy use" scenarios much better than most other laptops

- I've only rarely had to resort to warranty support or repair, which is 100% not the case with even the higher-end Dell (e.g.) machines I've bought for organizations.


I used Linux exclusively for a number of years and was pulled into the Mac world kicking and screaming for bureaucratic reasons.

I actually like it. I get a sane shell with all of the basic *nix tools I need and I get the wide consumer-level support of the rest of the OS.

One basic thing I really enjoy is having extra monitors just work without wrestling with display settings. A small thing, sure, but it was a surprisingly persistent pain.


people have different opinions about things. What you think is awful, another may enjoy.


On Twitter, one of the Docker engineers posted that they are currently working on Docker support for Apple Silicon (sorry, I did not bookmark it so I can't provide a link).


I've been using Remote SSH from Windows and it works pretty well, I must say.


Screenshot of Docker for Mac control panel during the WWDC demo. It even shows some containers running.

https://i.imgur.com/bQb1grM.png

Pretty misleading to include it in the demo if Docker for Mac still doesn't launch next week right?


> Pretty misleading to include it in the demo if Docker for Mac still doesn't launch next week right?

The M1 hardware hasn't shipped yet. Presumably this criticism is only valid if the hardware has shipped, which it has not. If Apple did indeed have a lot of involvement porting Docker, my assumption is that they'll release that with the machines next week (November 17, 2020), as opposed to before the hardware is generally available.


While Docker support will surely come, this thread does give me the feeling that Docker on Apple Silicon Macs will take at least several more months still to be released.

I would have expected Apple to be far more proactive in working with Docker, Inc., especially considering they namedropped Docker specifically at the WWDC Apple Silicon announcement. Apple has supplied virtualization-capable hardware to certain developers - Parallels certainly - so it's strange to see that Docker seems to have had to sit on their hands until now? I certainly would have expected a Docker release a lot closer to hardware availability.

The fact that a stable golang for Apple Silicon likely won't be out until February is not a great look either. That blocks Docker but a lot of other things also.

I mean, you should expect a lot of startup issues when moving to a new ISA, but it seems Apple could have done a lot more here.


A big part of this is that Docker Desktop is not open source/free software: it's closed and proprietary, so you're at the mercy of Apple/Docker to fix it.

(Additionally, it's terrible spyware, and sends ridiculous amounts of data back to Docker about your system, including network pcap logs(!).)


I really don't understand what this fuss is about.

TBH docker-on-mac is not that good. That is why I have a very minimal config and just use it to get some 'linux' from time to time.

What I use is the `docker context` command or the DOCKER_HOST=ssh://user@host pattern.

The env var is more convenient for 'ephemeral' hosts (it uses your own SSH key to connect to the host), and for static hosts (like staging machines) I use a context.
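Roughly, that looks like this (host names are just placeholders):

    # Ephemeral host: point the CLI at a remote daemon over SSH for one command
    DOCKER_HOST=ssh://me@build-box.example.com docker ps

    # Static host: create a named context once, then switch to it
    docker context create staging --docker "host=ssh://me@staging.example.com"
    docker context use staging
    docker ps   # now talks to the staging host's daemon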

Plus it uses the native SSH client, so it supports ProxyJump, jump servers / bastion hosts, forwarding, session reuse, YubiKeys & whatnot.

So I'd rather have just the client on my laptop, with 1 or 2 medium-sized hosts on DigitalOcean, AWS, wherever. (Some providers even give you Docker hosts with SSH access, like civo.com - disclaimer: I just started using them last weekend...)

PS: You cannot 'mount' your local folder to the remote host. But you can 'build' your local folder on the remote host (thanks to BuildKit / DOCKER_BUILDKIT=1).

Edit: I remembered that I installed (compiled, actually) the docker CLI on my Android phone using Termux. You can't run the daemon at all, and the client gives some warnings at startup saying some things aren't right, but it works with those methods & variables...

Hint:

    $ go get -u -v github.com/docker/cli/cmd/docker
    # profit...


Docker for Mac is slow, that's true, but seamless filesystem and network forwarding means that exactly the same instructions work for Linux and Mac teammates.


Docker is part of the workflow of a lot of teams. It's simply not optional for a lot of backend teams to have it around.

The mac version is not perfect but I've used it for years without major headaches. I regularly run all sorts of middleware, build tooling, etc. using it.


One of the most forgotten things is that the filesystem in macOS is case-insensitive by default, while on Linux it's case-sensitive.

This creates lots of headaches with mounted volumes (local vs. staging, prod, etc.).
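
Roughly what the difference looks like (a sketch, output trimmed):

    # macOS host, default case-insensitive APFS: the second touch hits the same file
    $ touch readme.md README.md && ls
    readme.md

    # plain Linux container, case-sensitive overlayfs: two distinct files
    $ docker run --rm alpine sh -c 'touch readme.md README.md && ls'
    README.md  readme.md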


If you want a solution for syncing local dev files to a remote host, I would encourage you to try mutagen. It works very well, even inside containers.
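
Something like this, if I remember the CLI right (names are placeholders, flags may differ by mutagen version):

    # continuously sync a local source tree to a remote host
    $ mutagen sync create --name=webapp-code ./src user@dev-box:/srv/webapp/src

    # check what the session is doing
    $ mutagen sync list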


The point is not 'syncing' at all; it's seamlessly deploying from the machine to the container & running it in a 'native' environment.


The first AS Macs are targeted at people who don't need things like Docker. Eventually all the support needed will be available. It's a two-year transition; you have to start somewhere.


Still worth sharing so that people are aware. I was in the market for these machines and was holding off until their release, but I started going through my tools to see what was supported or not, and no Docker is a no-go for me.


We don't know yet whether M1 will be able to support a hypervisor, so it's just too soon to say. I know it's not the HN culture, but I encourage everyone to take a deep breath and be patient for the actual hardware to arrive.


The Apple documentation for Hypervisor.framework now explicitly includes references to Apple Silicon, and even ignoring that, they showed Debian running in Parallels back at WWDC during the announcement. They literally had a giant interstitial during the talk that was titled "Virtualization," complete with a cute little logo/icon and a demo, in classic Apple style.


That's great data! Thanks!


It absolutely will - this post references a July post, which means it's about the DTK.

Apple explicitly said that the DTK does not support hypervisors, and that final consumer hardware would. People posting garbage reports like this to HN annoys the hell out of me.


It will; it's based on the A14 chip which adds EL2 support.


[flagged]


I did. But it doesn't matter: not a single consumer outside Apple (and maybe some reviewers under embargo) has a production M1 in their hands today. Nor is there a public datasheet for the processor yet that I could find.

Kindly check your attitude at the door.


Unsubscribe


That's what I'd assumed too, which is why an M1 MacBook Pro was a bit of a surprise.


If the benchmarks from the other front-page post are true, then not releasing an MBP version would have left them in a position where the AS MBA outperforms their entire MBP line.


Yep, but due to the new high-speed on-package RAM architecture they couldn’t support more than 16GB of RAM, so they couldn’t retire even the 13” Intel MBPs, because some people need the RAM. It all makes sense.


I was also surprised, but consider that "Pro" to Apple means photographers, video editors, musicians, etc. just as much as (or perhaps more than) developers.


It's the lower tier MBP though - it replaces the "two port" one. The "four port" one is still Intel.


Not very clearly targeted. The MacBook Air was probably the most popular laptop during my CS degree, and I bought a low-ish end 13” MBP toward the end of it and still use Docker on it constantly. And that’s probably even more common now that they’ve pushed the price of the higher-end MacBook Pro range so high. And they’ve replaced Intel Macs with M1 in the store; it’s not additive, I can’t buy an Intel MacBook Pro for less than £1799 from Apple any more.


If ARM servers take over, then running ARM docker natively on M1 could be helpful ...

... but for the current state where ARM-compatible images make up roughly 1% of Docker Hub, I don't even see the point of wanting to use Docker on M1.
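
For what it's worth, you can check whether a given image even publishes an arm64 variant before counting on it (a sketch; nginx is just an example):

    # list the platforms an image manifest advertises
    $ docker buildx imagetools inspect nginx

    # or, with the experimental manifest command
    $ docker manifest inspect nginx | grep architecture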


But that's because it was such a niche until just recently - basically only playing with docker/k8s on a Raspberry Pi and the like. With AWS Graviton instances and these new Macs, that will quickly change.


ARM servers will take over; you just probably won't know it.

AWS is in the process of moving all of their managed services, e.g. RDS and ElastiCache, over to their Graviton platform.


Graviton2 is looking really compelling in our tests. At this point we plan to migrate all of our instances over the next year and we’re going to want arm machines locally to develop with. It’s starting to come together.
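
In the meantime buildx can cross-build arm64 images from an x86 laptop, something like this (a sketch; the registry and tag are made up):

    # one-time: register qemu emulators so foreign-arch build steps can run
    $ docker run --privileged --rm tonistiigi/binfmt --install arm64

    # build and push a multi-arch image
    $ docker buildx create --use
    $ docker buildx build --platform linux/amd64,linux/arm64 \
        -t registry.example.com/myapp:latest --push .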


I think you can still use x86 Docker images in virtualization mode.


No, I've seen no evidence of this. Docker Desktop does not work [on Apple Silicon] in any form today.


On ARM in general or with Apple Silicon? The problem with Apple is that the processor doesn't support virtualization yet, AFAIK.


Docker Desktop does not work on Apple Silicon. The M1 processor does support virtualization; just wait for Docker to finish their port.


Or start a regular Linux VM and run Docker inside instead of using the wrapper.


Docker doesn't let you run x86 binaries on arm without emulation.
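
With emulation, the usual setup on an ARM Linux host today is qemu user-mode via binfmt_misc, roughly like this (a sketch; needs a recent Docker, and it is slow):

    # register the x86_64 emulator for foreign binaries
    $ docker run --privileged --rm tonistiigi/binfmt --install amd64

    # an amd64 image then runs under emulation
    $ docker run --rm --platform linux/amd64 alpine uname -m
    x86_64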


well, evidence would be something like this: https://ish.app


I'm sure plenty of people are saying this, but I'm really glad I switched back to Ubuntu about a year ago. The state of non-Apple-centric developer tools and environments on macOS has been in crazy flux and disrepair since Catalina hit. Feeling real bad for the people who just bought new MacBooks only to find Docker won't work and so they can't deploy their web app.


Just my 2 cents regarding this: I've already stopped using Docker directly on my MBP15 (2017, 2.8GHz i7).

It reserves a huge chunk of disk and RAM. Running anything non-trivial (e.g. k3d and a few pods) led to a very noticeable slowdown. My fan was noisy all day.

Instead I purchased (for a very, very modest amount of money) a decommissioned rack server with dual Xeons (2*8=16C/32T) and 64GB of DDR3 RAM. It's on my local gigabit network and I docker-machine into it. The hardware is obviously older, but build times are in the same ballpark.

I'm holding out for the 16" ARM MacBook Pro but I'll probably pull the trigger when it's released.


It's nice, but old servers use a _lot_ of electricity and the single-core performance might be a killer.


I'm a bit lost. I was under the impression that Docker (and containerisation in general) works based on kernel namespaces, which, IIUC, are a way of instantiating distinct subsystems in the kernel, on demand. What does hardware support for virtualization have to do with it? I mean, for VMware or VirtualBox, yes, I can imagine; but does that matter for Docker? Can someone explain?


Docker on Mac and Windows both use virtual machines to run a Linux OS, which then runs Docker normally.


It works the way you understand on Linux. On Windows and macOS, Docker instead uses virtualisation to do the containerisation on a Linux kernel.
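
You can see the split on a Mac today (roughly):

    $ uname -s                          # the macOS host
    Darwin

    $ docker run --rm alpine uname -s   # inside the hidden Linux VM
    Linux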


The issue was posted in August, not with the currently released hardware. Please can we tag this?


If you keep reading down the thread they say even with the release hardware it won't work for "months".


right?


The thread goes from people being confused that the DTK isn't the same as the release hw, to someone complaining about Docker Desktop spying on you, to arguing about whether or not you should test server software on your Mac. I feel bad for people subscribed to that thread who care about the actual issue (wanting Docker Desktop to run on M1).


It will be interesting to see how they do it. As far as I understand, the current Docker Desktop for macOS runs a Linux VM; the Docker daemon and binaries run in that, while the client runs natively on macOS.

Running it in a VM is bad enough. If they also had to run the VM in the Intel emulator (Rosetta 2), it could be unusable, performance-wise.

If they developed a macOS subsystem for Linux (like the WSL on Windows) and ran that on Rosetta 2, I can imagine it being faster.

Slightly off-topic: I just realized that they're actually calling it Apple Silicon. I find that name very cringey for some reason.


For other apps compatibility with Apple Silicon, see:

Does it ARM? https://news.ycombinator.com/item?id=25075458


Why is this even a discussion? There are thousands of applications that don't (yet) work on Apple Silicon. When the hardware actually releases, the software will slowly be ported over to it.


Because Apple would like us to believe that Rosetta is a panacea that makes everything work.


They've detailed explicitly what won't work with Rosetta.


https://www.apple.com/mac/m1/ doesn't mention anything about what doesn't work with it. It just says "With the introduction of Rosetta 2, M1 and macOS Big Sur seamlessly run apps that haven’t yet transitioned to Universal versions." The details about what won't work are in the developer documentation, which most people who buy Macs don't read.


Most people who buy Macs won't use Docker either, or even know what it is.


Most people who buy Macs won't run in to the edge cases.


Actually, most things tend to work.


It seems to be a trend these days that if something doesn't work right, people take to the public internet in the hopes that it will be fixed faster. I've seen this tactic on internet forums with products, after which companies often contact the reviewer of said products to make things right.


"At its 2020 Worldwide Developers Conference, Apple announced a non-commercial prototype computer called "Developer Transition Kit" (DTK).[1] It is intended to assist software developers during the transition of the Macintosh platform to the ARM architecture. Described informally as "an iPad in a Mac mini’s body,"[2] the DTK carries a model number of A2330 and identifies itself as "Apple Development Platform."[3][4] It consists of an A12Z processor, 16 GB RAM, 512 GB SSD, and a variety of common I/O ports (USB-C, USB-A, HDMI 2.0, and Gigabit Ethernet) in a Mac mini case"

"..Even that DTK hardware, which is running on an existing iPad chip that we don’t intend to put in a Mac in the future – it’s just there for the transition – the Mac runs awfully nice on that system. It’s not a basis on which to judge future Macs ... but it gives you a sense of what our silicon team can do when they’re not even trying – and they’re going to be trying." - Apple SVP

https://en.wikipedia.org/wiki/Developer_Transition_Kit_(2020...

"The Apple A12Z Bionic is a 64-bit ARM-based system on a chip (SoC) designed by Apple Inc.The chip was unveiled on March 18, 2020, as part of a press release for the iPad Pro (2020), the first device to use it.[1] Apple officials touted the chip as faster than most Windows laptops of the time.

https://en.wikipedia.org/wiki/Apple_A12Z

HTH


The DTK notes stated explicitly that virtualization was not supported, and equally explicitly that release hardware would support it.

Likewise, the DTK said explicitly that 4K pages were not supported under Rosetta but would be by release - that's what broke Chrome and apps that are just glorified copies of Chrome.


Has there been any word on third-party OS support for the new AS macs? Not just so geeks can run Gentoo on them, but also for corporate customers to virtualize MacOS. I know some fellows who run ESXi/Vcenter on clusters of Minis to run multiple versions of MacOS for QA.


OT, but I'm curious if one could design a system that had multiple kinds of processors, like for instance an x86 as well as an ARM. I'm imagining the OS would use one or the other, but it could use the right processor based on the kind of application being executed, and also choose to use one that is underutilized through emulation if necessary. I suppose we do something like this when it comes to CPUs and GPUs already.


It doesn't really make sense. Different memory models mean that you'd have to be running two separate OSes, not some kind of unified OS. At most they could try to cooperate over some shared channel to transfer resources back and forth. Certainly not applications. But now what you're talking about is two computers that happen to be in the same box communicating over what could just as well be IP. Not terribly useful.


thanks for the insight, makes sense.


The comments on the GitHub issue are a dumpster fire.


Lots of bellyaching about nothing. Just give the hardware time to get into hands of people and eventually things will shake out.


No idea why you got downvoted, because you're right.


I think Docker for Mac should just use x86 emulation for the time being -- apparently the new Apple chips don't support virtualization?

x86 emulation gives access to most available Docker images anyway -- and the performance of the new CPUs would seem to indicate that emulation may not be so terrible, at least for the short term.


The M1 does support virtualization. Parallels is already working. I don't know how long it will take to get HyperKit working on Apple Silicon.

QEMU would allow running x86 images on Apple Silicon but I don't know if the performance would be acceptable. You can't use Rosetta 2 with Linux no matter how much you might like to.


It's actually really bad, because QEMU doesn't get to use Rosetta's kernel support for enabling strict memory ordering.


The M1 supports virtualization. The comment in question was talking about the older DTK CPU.

A full VM with x86 emulation is not supported though, and from the sounds of it will never be.


Never seems like a long time. Why would it be impossible to implement x86 emulation with virtualization?


Apple's x86 emulator only implements userspace.

Apple would need to implement all the supervisor mode parts of x86, which massively increases complexity. It's a notably different product.

And it wouldn't even be virtualization, since it's not exposing the real host hardware. No real x86 CPU exists to be exposed. It would just be an emulator pretending to be virtualization.


Because that's not virtualization, it's emulation. And running code your processor isn't designed for is slow.


> Never seems like a long time.

Fair point. On an infinite time scale...


EDIT: I don't know about the hardware, but their license change, as saurian mentions below, didn't add support for this as I had thought and written in my original comment.


I thought MacStadium just released a blog post--which was on Hacker News--going into detail on the exact opposite: that now Apple has made it very clear that not only can you not provide shared hosting (as you must lease the device to the user in its entirety) but you can't even lease it in less than 24 hour increments (which to me is a kind of ridiculous overreach on what it means to own something; imagine if Toyota sold a car and said "the infotainment unit runs a bunch of software and the license on it prevents ride sharing or per-hour rentals").


> a blog post--which was on Hacker News--

https://news.ycombinator.com/item?id=25064841 This post with 8 votes? Did it come up anywhere that got actual discussion?


I read that post much more carefully and indeed, I had misunderstood this. I'll edit my comment so yours survives.


Perhaps it's time to stop virtualizing to run containerized software!

Did someone say WebAssembly? :P


Docker for Mac is a Linux VM with some tools to glue your Mac to that VM. It gives you the illusion that your Mac is running containers, but it's not -- Linux containers are a Linux thing. Some virtualization is going to be necessary unless your host OS is Linux. One could argue that maybe MacOS should somehow run the Linux kernel. Windows tried that as WSLv1 and it didn't work that well; WSLv2 just uses a VM.

The net impact of this is not particularly high. I always write my apps so that they can run on the local machine, and just have my CI system put them in Docker images for deployment. But, that does require some effort, and some language ecosystems or applications make this nearly impossible (relying on details like case-insensitive filenames, OS quirks, etc.)... so people just develop inside a container to work around that. (I would never tolerate it, but it can work OK for things like PHP where your edits take effect in the running container without having to rebuild anything.)
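
A rough sketch of the CI half of that workflow (the registry name and tag variable are placeholders):

    # CI builds the image from the repo's Dockerfile and pushes it, tagged by commit
    $ docker build -t registry.example.com/myapp:"$GIT_COMMIT" .
    $ docker push registry.example.com/myapp:"$GIT_COMMIT"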


Windows runs containers without a Linux kernel. They made a runC API out of existing Windows APIs.

runV and VMware vSphere 7 can run each container in its own VM.

WSL v1 did not run a Linux kernel at all, but v2 does.


Windows runs _Windows_ containers without a Linux kernel.

And even with WSL1, they didn't implement the cgroups APIs, and couldn't support Linux containers.


Here we go... Boot your container host in the browser! https://copy.sh/v86/?profile=archlinux


Wow, that's doing every filesystem operation as a separate fetch to the server? With no caching? Simple to implement, but about as slow as you could possibly get!


Yeah, but it boots up really fast (from a snapshot), and that's the important thing for a demo!


Apple Silicon is all the hype now. And I'm on the train.

Does anyone have a good blog post explaining what the new architecture is, how it works, what its status is right now, and so on?


This is like déjà vu from Microsoft's ARM laptops/tablets. Hopefully Apple got it right.

And maybe we can get away from the x86 platform, or Intel and AMD will step up their games.


As always, it's prudent to wait a bit for new Apple products to mature before jumping in when there's a chip transition going on. Also in general.


sigh Apple is always behind; this is like déjà vu from Microsoft's ARM Surface tablets/laptops...

Hopefully this time Apple got it right.


Apple can keep their silicon - and their shitty OCSP security checks. As the Apple laptops / iMacs die or become obsolete in my house, they'll be replaced by PCs. Let's not pretend that this is some pro-consumer move. I would actively avoid implementing Docker on Apple Silicon; at some point Apple will close the macOS ecosystem and you won't be able to use Docker anyhow.


If Apple Silicon flops, I wonder how long they will drag it out before converting back.


It looks possible that Apple has internally written patches to Docker to get it running on the M1-based machines. I could imagine NDAs might apply, which is why the Docker devs don't appear worried about it.

I guess they're planning on upstreaming those patches when the hardware is released.


uh oh


Get over it, it’s another architecture which is incompatible with x86/x64.


Well yeah, no shit.

It's bleeding-edge new tech that, other than the A12X, hasn't been available for more than a few months... And I'd argue Apple's ARM silicon is going to be a tectonic industry shift.


"Not yet. The OS has the virtualisation functions, but the Apple chips don't have virtualisation support yet."

This is strangely a really big selling point for me as a habit breaker. I spent a good part of the release evening whining about the pitiful memory and the cost of the devices, but I'm regretting that on reflection. Why? Well, I just spent two days fighting problems with Microsoft's fucked-up shit show of mixed-platform Windows containers and Docker, which ultimately ended in a completely hosed OS install. Every time I use some heavily marketed stack, it turns into this mess eventually, if not right away.

My conclusion is that doing this on my main computer, i.e. my portal to the universe, is a functional risk, so I'd rather do it somewhere completely different, i.e. on a desktop PC hidden away somewhere or on Amazon, where I can nuke the whole universe and start again without losing access to essential services etc.

So perhaps I don't need a virtualization-equipped CPU in my computer after all? Forcing my hand in that direction may be a net gain.


FWIW, that comment was referencing the A12Z in the DTK, not the M1 CPU. It's less clear where the M1 sits on this issue and whether Docker can be ported.


Yes, and someone commented later that A14 does have virtualization in hardware so this issue might actually be fixable in software?


Ah thanks for the correction! Either way it made me think hard which is always positive.



