FWIW, the easiest way to do this (and probably break everything you love) is just:
```
export ARCHFLAGS='-arch arm64'
brew install -s --HEAD pkg_name_here
```
Nothing else needed. Honestly, the `--HEAD` part may not even be needed anymore.
Realize the problem with Homebrew right now on the M1 isn't brew itself. I have absolutely zero doubt (go read the issues) the maintainers want to support it properly and be done with this kind of noise. The reason they can't, or don't, is that a lot of shit is still broken. They can't safely let you go all willy-nilly installing ARM packages because they're broken.
Installing stuff from source yourself isn't hard, if it's actually going to work... 99% of it boils down to the following (concrete sketch after the list):
1. git clone whatever_it_is
2. ./configure --prefix=/usr/local
3. make
4. make install
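Concretely, something like this (the repo URL is a placeholder; substitute whatever you're actually building):

```
# hypothetical autotools-style project; swap in the real repo
git clone https://example.com/some-project.git
cd some-project
./configure --prefix=/usr/local
make
sudo make install   # sudo only needed if you can't write to the prefix
```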
I think the title of this post should be "Workarounds for ARM-based Apple Silicon Macs", as on the page itself; it reflects the contents better. Homebrew is only mentioned in one section. It would be better to help the maintainers, who are working hard to provide an official solution out of the box, by visiting the GitHub issue.
I too have just been running ARM Homebrew the past few days.
For the most part, no real complaints. It'll depend heavily on what you install, and obviously you have to be OK with compiling things, but yeah no real complaints so far here either.
Other than the Python scientific ecosystem, Rust is really the main thing in my end-user stack I'm waiting on since fzy, bat, exa, etc. still don't compile, but other than that I'm fairly OK.
Ha, apologies, fzy was a typo for `fd`. Too many installs on the brain.
Installs of rust itself aren't succeeding yet for me via Homebrew, with what look like simple errors, but not ones I've investigated yet (though yeah, I saw the Rust tracking ticket you linked).
> Installs of rust itself aren't succeeding yet for me via Homebrew
Ah, yes. Homebrew builds Rust itself and building Rust uses a previous beta release to build the current development code.
Until my recent [pr] bumping the beta bootstrap version, building Rust on aarch64-apple-darwin required specifying a nightly version of Rust because we only had nightly artifacts. I've mentioned that to a homebrew developer, so it should flow through soon. I'd expect that you'd only be able to install the nightly release at first.
> if you have recommendations
I'm a Rust fanboy, so I'd recommend installing Rust via rustup ;-)
Not OP, but I'm a bit against using cargo (or any other language-specific dependency manager) as a general-purpose package manager. It forces me to periodically run updates on many different managers, it doesn't always interface as well with other system / distribution idiosyncrasies as the native package manager, and I don't want to remember or have to care about what tool is written in what language. (Of course, they have their place for specific use cases, just imo not for system-wide installs of random binaries.)
So I can see why it's a problem if it works with cargo but not with brew.
Sure and that's fine, but the original comment wasn't clear that they were waiting for homebrew to ship an arm64 version of Rust. The phrasing made it sound like Rust itself was the aspect blocking those tools from working. That's the point I was addressing.
> not system-wide installs
rustup installs Rust into your home directory by default, and then `cargo install` follows suit, so it's not system-wide.
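For reference, a minimal sketch of that flow (the installer line is the one from rustup.rs; `bat` is just an example crate from this thread):

```
# official rustup installer; everything lands in ~/.rustup and ~/.cargo
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
# cargo install drops binaries into ~/.cargo/bin, not a system directory
cargo install bat
```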
Hopefully the pains we feel with this shift to arm will inspire compiler developers to take cross compiling seriously. I don't see a reason why a compiler can't output binaries for any (or even ALL supported) platforms with a single CLI flag like golang does.
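For anyone who hasn't seen it, Go's whole cross-compiling interface is two environment variables (darwin/arm64 itself is only just landing in the toolchain as of this thread):

```
# pick the target OS/arch per invocation; no separate toolchain install needed
GOOS=linux GOARCH=arm64 go build -o app-linux-arm64 .
GOOS=darwin GOARCH=arm64 go build -o app-darwin-arm64 .
```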
Golang can do this for the single reason of having no runtime dependencies whatsoever. Try it with literally any other language and you quickly find out that it’s no use crosscompiling to armv7 ‘in one click’ when your target doesn’t have the version of libc you’re targeting. The problem is not the compiler, it’s the build tooling.
Zig and Nim both have excellent cross compiling support out of the box.
Zig extends that excellent support to its embedded LLVM C compiler - it’s probably the easiest way to cross compile C across operating system boundaries these days.
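For example (targets taken from the zig cc announcement; hello.c is whatever C file you have lying around):

```
# Zig's bundled Clang plus bundled libcs make this work from any host OS
zig cc -target x86_64-linux-gnu hello.c -o hello-linux
zig cc -target x86_64-windows-gnu hello.c -o hello.exe
```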
musl is a Linux libc, so you can't use it on any other OS (because it wouldn't know how to send syscalls to the kernel).
macOS handles libc a little differently from Linux; whereas the Linux kernel publishes its syscall API, and allows programs to make syscalls however they want (e.g. with glibc or musl, or with go, which does syscalls itself), macOS's syscall API is subject to change at any time, and the only (supported) way to make syscalls is to dynamically link to the system libc.
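You can see that dynamic link on any normal macOS binary with otool (output trimmed here):

```
# every ordinary macOS executable pulls in libSystem dynamically
otool -L /bin/ls
# ...
#   /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, ...)
```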
Nim always relies on an underlying platform compiler.
Getting an .o out from a cross compiler is easy, in my experience - the big problem is getting libraries, includes and linkage right. And in my (admittedly small) experience, Nim makes that part much simpler as soon as you have a compiler set up (which in my case was an apt-get away)
But do note the apt-get doesn’t get you far. You now have a compiler that can produce .o, but headers, libraries, borked make files that are unaware of cross compiles etc are still a problem.
Can it do this anymore on macOS? My understanding was that they were going to start linking to libSystem, as they should, because otherwise golang apps can and will break between major macOS releases.
(Not to mention the golang folks' need to reinvent the wheel has caused a few high-profile bugs that have needlessly wasted people's time.)
Regardless, this quickly falls apart whenever you have an app that does any kind of FFI to system libraries, or even third-party libraries that the author didn't want to reimplement in golang.
FFI can work - P/Invoke with C# for example doesn't care if you're cross compiling or not, but it comes at the cost of needing a definition of the interface in your code.
Windows makes cross compiling work by shipping the libs for each of the architectures in the SDK. I think that's quite reasonable for macOS since Apple controls the SDK, but it's always seemed to be a mess on Linux.
Disclaimer: I work at Microsoft on Windows but I have tried cross compiling code on both Windows and Linux in the past and I've always found it painful on Linux.
You can, I think, link to libSystem dynamically without actually having libSystem to refer to. At least, you can output a macho file that calls a function in a dynamically linked library when all you know is the name of the function.
If anything the M1 Mac reduces the case for cross-compilation since they're actually fast enough to build code on. Compare that to e.g. compiling hefty C++ projects on the Raspberry Pi where cross compilation on a fast x86 box is ideal (but still far from trivial).
For non-trivial projects you want CI anyway, so it's a better ROI on your time to just add an additional VM, Docker, or native runner.
And let's not forget you'll also want to run your automated tests in the native environment.
I wonder if Rosetta applies to apps that are purchased from the app store or at least are compiled with Xcode, but maybe not apps compiled with gcc. Or maybe if you compile something with gcc there's a way to specify to the OS that it should run with Rosetta.
I’m sure Rosetta 2 is designed to work with gcc-compiled apps, but given that LLVM/Clang is largely sponsored by Apple, I’m sure the preference is for that, versus whatever version of gcc with whatever specific settings emacs is using.
I'm pretty sure the problem is that the "portable dumper" that replaces the dumpster fire that was unexec is not in fact portable across architectures.
“Rosetta can translate most Intel-based apps, including apps that contain just-in-time (JIT) compilers. However, Rosetta doesn’t translate the following executables:
- Kernel extensions
- Virtual Machine apps that virtualize x86_64 computer platforms
Rosetta translates all x86_64 instructions, but it doesn’t support the execution of some newer instruction sets and processor features, such as AVX, AVX2, and AVX512 vector instructions. If you include these newer instructions in your code, execute them only after verifying that they are available. For example, to determine if AVX512 vector instructions are available, use the sysctlbyname function to check the hw.optional.avx512f attribute.”
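The same attribute Apple's docs mention is queryable from the shell too, if you just want to check a machine by hand:

```
# prints 1 if AVX-512F is available; the key may not exist at all on older systems
sysctl -n hw.optional.avx512f 2>/dev/null || echo 0
```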
It would be unreasonable to the point of impossibility to expect it to implement every instruction that an x86_64 CPU could possibly have. Insofar as it pretends to be an x86_64 CPU, rather than every x86_64 CPU, it works on all [program] binaries.
I'm thinking it's more that the AVX instructions are still covered by patents. Some older comments [1] suggest that plain AVX will only lose its patent protection in 2031.
Another factor is that Mac software was never able to rely on the availability of AVX instructions anyway. macOS Catalina was officially supported on some systems with processors as old as Ivy Bridge, which lacked AVX support.
I thought it was up to the software (for example, a video encoder like Handbrake) to detect whether a processor supported a set of instructions, and then try to use them?
That's how it's supposed to work, yes. And any software that does that should probably work fine on Apple Silicon, by falling back to the non-AVX path under emulation.
I've seen some poorly behaved Mac software which used AVX instructions without testing for availability. It crashes if run on some older Macs, and it won't work at all on Apple Silicon.
General note: MacPorts does not build everything from source anymore, and it hasn't for a while. MacPorts will try to find a precompiled binary on packages.macports.org, then fall back to building from source if a binary is not available.
That said, while I don't know whether ARM actually works yet, MacPorts still supports PowerPC wherever possible, so they have a long history of managing multiple architectures. I expect they'll have a somewhat easier time with ARM as a result.
No, it has no more reasonable chance than Homebrew. Homebrew has always supported building from source. Bottles did not originally exist. All M1 Homebrew packages are presently being built from source, because there are no Apple Silicon bottles. It is the source packages that need to be updated and patched to work on Apple Silicon, and that is true whether you use MacPorts, Homebrew, or build them yourself from scratch.
Exactly. I keep having to explain this to colleagues. For some stuff, it’s just going to take time.
And I can also envision that certain packages might not get updated, necessitating a fork to patch them for M1.
I’m interested in how Homebrew and others will handle that (so official package X chooses not to patch, or to accept a patch, for whatever reason, leading to a fork of package X with the patch applied). Presumably, to avoid namespace collisions and giving the user something they didn't ask for, there could be an error message stating that an ARM64 version doesn't officially exist, but linking to the fork and giving the option to install that instead. And then there could be a flag to always allow such linked replacements when an official patch doesn't exist.
Homebrew's policy is to not apply patches that upstream doesn't accept, though I do notice that they are sometimes applying patches to the build systems themselves to patch in some of the paths to include files living deep inside the macOS SDK.
Given that policy I would assume that such a package might die until somebody forks it and takes on responsibility for maintenance.
Homebrew is about building and installing upstream packages, not about installing and maintaining custom forks of packages.
It's worth noting, btw, that this is another major difference between Homebrew and MacPorts. MacPorts maintains tons of their own patches, whether to make software work at all or just to add support for older or newer OS's.
... which is a blessing and a curse: When they are doing a good job, that's perfect because it means that some software which wouldn't run correctly now runs correctly.
When they are doing a bad job, they might anger maintainers ("I didn't add this bug - this was added by macports - complain to them!"), or they might introduce additional security issues not present in the upstream package (see the Debian openssl bug from 2008)
It might also mean that you're not getting the latest versions of upstream packages because adding those patches and rebasing them on top of upstream changes takes time.
Being close to upstream was a selling-point of homebrew back in the days when it was just a collection of scripts to make it easier to build original source distributions of common Unix software.
Sorry if you have that impression. If you’re interested in a constructive discussion, feel free to give an example and tell me how you think we should handle it better in the future.
You proved yourselves to be untrustworthy by opting everyone into analytics using Google and hiding the notice in lots of terminal output. You guys doubled down on that by refusing to reconsider, saying unless you were a contributor to the project, your opinion was irrelevant. Mike McQuaid's responses as a representative of the project were user hostile, all while he tried to paint himself as the victim of abuse over the matter.
If you want to regain my trust, remove the analytics function, swear off such things forever, and expel Mike McQuaid from the project. But I doubt that will happen, so MacPorts it is. And I'll encourage everyone I know to use it rather than Homebrew.
Again, sorry you feel that way about Homebrew’s analytics. I think we’ve learned a lot from our mistakes here. If the way the analytics notice is implemented now is still a no-go for you, I can’t blame you for moving on. MacPorts is a fine package manager, too.
One thing I still want to point out though: whatever we do, we do it in good faith and with the best of intentions for both our users and ourselves.
The only gripe I have ever had with Homebrew is the /usr/local "do yourself a favor" prefix, due to its collision with so many 3rd-party installers. But that's now been fixed.
The package selection is fairly limited. I tried it recently and was disappointed that neither neovim nor ripgrep are packaged. It also has little mindshare. I found out by accident that it was available for macOS.
Using MacPorts wouldn't make it such that "more things will work". It's really about the applications now needing to be compiled for the ARM architecture.
If you look at the code changes for the items above, they detect whether the compile target is ARM and make the necessary changes so it compiles.
I loved MacPorts, greatly preferred it over Brew, and used it all the time on Mac OS back then on my PowerBook and my dying Mac mini. But doesn't Apple's notarization requirement make it impossible to compile from source and create runnable binaries?
No. Even with Gatekeeper enabled, running binaries that you compile from source on your own machine doesn't require that they be notarized. As of Big Sur, they must be _signed_, but can be self-signed by a certificate you create locally. No need for Apple's approval in any way. I don't know what the status of supporting this new requirement in MacPorts (or Homebrew) is, but it's certainly something that can be dealt with.
Apple's linker will automatically ad-hoc sign binaries on Apple Silicon systems, so it shouldn't require any work for most people.
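And if you ever do need to sign something by hand, an ad-hoc signature (the "-" identity) is one codesign invocation (`./mybinary` here is a placeholder for whatever you just built):

```
codesign --sign - --force ./mybinary    # ad-hoc sign, replacing any existing signature
codesign --verify --verbose ./mybinary  # sanity-check the result
```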
Anything run from the Xcode UI, or from Terminal if you use "spctl developer-mode enable-terminal" to show the Developer Tools group under Security & Privacy in System Preferences and then enable Terminal there, is exempt from Gatekeeper notarization checks. You can also put other terminal clients in the same list and they get the same benefit (child processes exempt from Gatekeeper).
On a similar note, "DevToolsSecurity -enable" allows any admin or member of the _developer group to use the debugger or performance tools without authing first. (Normally you must auth the first time, and the authorization can expire if you don't unlock your system after a certain amount of time.)
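Both of those are one-liners, for anyone following along (they need admin rights):

```
# show the Developer Tools group in Security & Privacy and exempt Terminal
sudo spctl developer-mode enable-terminal
# let admins and _developer members use debugger/perf tools without authing
sudo DevToolsSecurity -enable
```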
> On a similar note, "DevToolsSecurity -enable" allows any admin or member of the _developer group to use the debugger or performance tools without authing first.
Oh nice! That was a big annoyance on older systems; glad to see they've fixed it.
Since Gatekeeper was originally announced I've seen people claiming that Apple were going to lock down macOS so Homebrew wouldn't work any more. I've never seen evidence that this will actually happen (and the people I speak to at Apple point to the opposite).
Yes they do. But the question is, why would they make a decision that will instantly make the machine completely unusable for a substantial portion of their clientele? Also, the portion that arguably gives MacBooks and, especially, iOS devices their value.
I think I'm going to manually build things from source until Homebrew officially supports M1 and Big Sur. I don't want to deal with any sort of migration / funky re-install, personally.
Homebrew distributes prebuilt binary packages. It’s possible that the infrastructure for this is improperly set up and doesn’t understand that multiple CPU architectures exist: unlike MacPorts, Homebrew was born after the x86 transition.
That’s why one user above suggests `-s`: it forces a compilation from source.
The problems with homebrew packages at present are mostly down to projects like Go and Rust not being updated for M1 ARM yet and hence sources not compiling properly rather than a problem with homebrew itself. So users attempting to compile from original upstream sources would run into similar problems.
That approach has always ended badly in my experience, because your manual installs will require more care to remove than the Homebrew ones. My usual recommendation is to install anything which supports it using --HEAD, which will keep things tracked and make it trivial to either completely uninstall or simply "brew reinstall" packages after upstream stabilizes.
> I don't want to deal with any sort of migration / funky re-install, personally.
Isn't the benefit of Homebrew that it all goes into /usr/local and you can just blow everything away if necessary? You could run `brew leaves` to see what packages you have, uninstall everything, and reinstall. Easier than keeping track of what you've manually installed where.
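Something like this, roughly (review the lists before actually nuking anything):

```
brew leaves > ~/brew-formulae.txt          # top-level formulae, minus dependencies
brew uninstall --force $(brew list)        # remove everything currently installed
xargs brew install < ~/brew-formulae.txt   # reinstall from the saved list
```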
In the short run yeah. You have a point. I intend to uninstall & reinstall everything once Homebrew is officially supported though, which isn't a big deal for me (and I actually kind of enjoy building from source just for the experience).
But these replies are making me think that a large part of my decision to do this is motivated by me just not understanding Homebrew well enough (e.g. how easy it is to nuke everything). Oh well, we'll see how it goes.
I just bought the base model Air. No issues with the RAM thus far, but I know the disk space is going to be an annoying limitation. Will swap it for the 512GB model in the next week or so.
Absolutely stunning computer though - first time back to Apple for over 8 years for me.
Same, especially if you're planning to use it for normal consumer uses as well. Photos and videos take up a lot of space, even more so if you have an iPhone and use iCloud Photo Library and the Mac Photos app.
I bought the base model Pro to try, but 512GB looks like a must for me going forward.
I use iCloud photos a ton and my old 128GB Mac's disk is constantly full just from the non-removable thumbnails for iCloud photos, without any actual photos on my local machine.
I've been fighting disk space issues with my Mac for the past five years because I was short-sighted and only got the 128GB disk. This time I went with 1TB, not making that mistake again.
I went for both because my existing 2017 MBP only has 8GB/512GB and I bump into them fairly regularly. But if it's one or the other, I'd go for memory - you can offload your disk space to a NAS/Cloud/whatever much more easily.
What stack are you using that you find the 512GB limit to be a problem? I've picked up one of the 16/512 MBPs and am wondering if I should swap it for a 16/1TB MBA, or maybe MBP but I have a little while to decide.
This is personal laptop - I've got >250GB in Photos which, even on "store originals in iCloud" mode takes up a huge chunk of the 512GB. I've got several x00GB folders in Dropbox that I can't currently have on the laptop (fonts, assets, etc.) 50GB+ in ~/git. I hover around 5-10GB free space these days.
Lack of CI infrastructure. Apple and MacStadium have given us access to some machines so hopefully this should be a temporary situation to be resolved soon.
I want this. I dislike depending on Rosetta. I want native, and I am saddened that Apple didn't reach out to Homebrew or MacPorts to pre-arrange something. It speaks badly of the future that they're so indifferent to the developer experience.
I downvoted your comment. I think this article is spot on for HN, and I think you showed a lack of critical self-awareness questioning its relevance.
What Apple claims to have done and what Apple actually does are two very different things. Apple has reached out to some projects with various levels of support, but it's not like they just dropped by all of those projects with patches on the day of WWDC.
>but it's not like they just dropped by all of those projects with patches on the day of WWDC.
Apple had their own stuff to develop, including a new architecture, a new OS version for 2 architectures, ports of all their apps, and plenty of other things besides, including the UNIX userland they ship.
The idea that a company should be responsible for all third party FOSS stuff on its platform, used by a small minority of users, is a little strange...
That said, a MacPorts guy below says that "Apple engineers had patches for basic support ready fairly quickly".
I'm not claiming they should have done anything. I was actually pleasantly surprised when they said they would. I'm just saying that they didn't show up with all the fixes as some may have believed from what Apple said during WWDC.
How on earth is distributing common Unix tools for MacOS that aren't included in the base OS not "using the underlying UNIX"? This is exactly what Unix was designed to enable. Apple understands and values this, which is why they have submitted patches to these projects.
I could quibble with the "Most of" part; most of it is stuff you don't get in MacOS at all. But that aside, so what?
This isn't really about alternative ecosystems, it's about complementary ecosystems. There are a lot of people that use MacOS desktops alongside Linux or other Unix machines. For these people having a common set of tools that work the same, so you can use the same command lines and scripts across multiple platforms, is incredibly useful.
What do you think people use Homebrew for? `brew list` gives me mailhog, mysql, postgresql, newer python, newer ruby, macvim, node, redis, ... these aren't IN the "UNIX provided in the macOS box".
As if some self-imposed UNIX/POSIX austerity, making do with the basic (and old) UNIX userland that comes with macOS (or other platforms), is something to be lauded?
(As opposed to just an example of someone making do with the little they need, where others' mileage may vary?)
Or is needing some of the tons of programs that don't come with "Apple platforms and their UNIX" (e.g. some random stuff I use: gnuplot, ripgrep, redis, postgres, jq, graphviz, and tons of different things others might want) somehow problematic?
Not even sure where Apple platforms and UNIX come into play as something to be contrasted to "replacements for GNU/Linux".
One of the benefits of macOS is precisely that as a UNIX it can run all kinds of UNIX tools, not just the basic POSIX utils, but close to everything available in a Linux/FreeBSD/etc package manager...
On the contrary, I use macOS for what it is and the value of its development stack, not as a pretty replacement for GNU/Linux, for that I already have my Asus netbook.
>On the contrary, I use macOS for what it is and the value of its development stack
OK, I get what you mean.
But "what it is" includes being a very usable Unix core that can run all kinds of stuff one might want.
So, like you, I don't expect macOS to be a GNU/Linux, or cater to tinkering and Linux/FOSS preferences. And I do my Linux-based development in Docker, remote VPS and servers, and so on.
But, on the other hand, I wouldn't carry two laptops, a "Linux" one for running postgres and redis and gnuplot, and a Mac one for running Xcode and Instruments and Photoshop, out of some principle that Mac is Mac and Linux is Linux and "never the twain (use cases) shall meet".
Homebrew and Macports maintainers could have easily reached out to Apple and gotten free DTKs to port with, just like thousands of other developers.
But the truth is that Xcode-built software is far more important to Mac users, and that's where Apple's focus was. Homebrew users are less than 1% of their installed base.
The other truth is the M1 hadn’t been important to Homebrew maintainers. At least not yet.
On the MacPorts side, I know that at least Saagar Jha (who posts on HN often) did indeed have a DTK. Perhaps consequently (?), MacPorts does support ARM right now.
I have very little to do with MacPorts's ARM support, I'm not even a maintainer ;) Most of that infrastructure was already there from the PowerPC→Intel transition, and Apple engineers had patches for basic support ready fairly quickly. I worked a little bit on early support for some heavily-depended-on packages, but I wasn't really directly involved in the effort.
Whooops! I only mentioned you because I remembered you were listed on MacPorts's website until recently, as a contact for "Apple DTK issues" or something like that.
Not really sure what you mean. Node at least runs fine in emulation mode and it takes like 5 minutes to build node from scratch to run natively on this thing.