Run Homebrew Natively on Apple Silicon Arm M1 (github.com/mikelxc)
311 points by soheil on Nov 23, 2020 | 151 comments



FWIW -- the easiest way to do this (and probably break everything you love) is just using:

export ARCHFLAGS='-arch arm64'

brew install -s --HEAD pkg_name_here

Nothing else needed. Honestly, the `--HEAD` part may not even be needed anymore.

Realize the problem with Homebrew isn't brew itself right now on the M1. I have absolutely zero doubt (go read the issues) the maintainers want to support it properly and be done with this kind of noise. The reason they can't or don't is that a lot of shit is still broken. They can't safely let you go all willy-nilly installing ARM packages because they're broken.

Installing stuff from source, yourself, isn't hard if it's going to actually work... 99% of it boils down to:

1. git clone whatever_it_is

2. ./configure --prefix=/usr/local

3. make

4. make install
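
Concretely, that looks something like this (the repo URL and package name here are just placeholders):

    # hypothetical package; adjust the URL/name to whatever you're building
    git clone https://github.com/example/whatever_it_is.git
    cd whatever_it_is
    ./configure --prefix=/usr/local
    make -j"$(sysctl -n hw.ncpu)"   # parallel build across all cores
    sudo make install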


For anyone reading this, `-s` == `--build-from-source`.


I was curious and found the package install scripts nicely organized by name here--

https://github.com/Homebrew/homebrew-core/tree/master/Formul...


You could also try using Guix or running Gentoo in a prefix. Both have been running on ARM since forever.


I think the title of this post should be “Workarounds for ARM-based Apple-Silicon Mac”, as on the page itself; it reflects the contents better. Homebrew is only mentioned in one section. It would be better to help the maintainers, who are working hard to provide an official solution out of the box, by visiting the GitHub issue.


I too have just been running ARM Homebrew the past few days.

For the most part, no real complaints. It'll depend heavily on what you install, and obviously you have to be OK with compiling things, but yeah no real complaints so far here either.

Other than the Python scientific ecosystem, Rust is really the main thing in my end-user stack I'm waiting on since fzy, bat, exa, etc. still don't compile, but other than that I'm fairly OK.


> Rust is really the main thing in my end-user stack I'm waiting on since fzy, bat, exa,

Can you expand further?

    % cargo install bat
        Finished release [optimized] target(s) in 2m 21s
      Installing ~/.cargo/bin/bat
       Installed package `bat v0.16.0` (executable `bat`)
    
    % file $(which bat)
    ~/.cargo/bin/bat: Mach-O 64-bit executable arm64
    
    
    % cargo install exa
        Finished release [optimized] target(s) in 1m 21s
      Installing ~/.cargo/bin/exa
       Installed package `exa v0.9.0` (executable `exa`)
       
    % file $(which exa)
    ~/.cargo/bin/exa: Mach-O 64-bit executable arm64
I don't know what you mean by [fzy], as it appears to be a C program, not Rust.

See also:

• The tracking issue for tier 1 support [tracking]

• My intermediate README with instructions [readme]

[fzy]: https://github.com/jhawthorn/fzy

[tracking]: https://github.com/rust-lang/rust/issues/73908

[readme]: https://github.com/shepmaster/rust/blob/silicon/silicon/READ...


Ha, apologies, fzy was a typo for `fd`. Too many installs on the brain.

Installs of rust itself aren't succeeding yet for me via Homebrew, with what look like likely simple errors but not ones I investigated yet (though yeah I saw the Rust tracking ticket you linked).

But e.g. retrying `brew install rust` this morning produces 404 errors trying to download https://static.rust-lang.org/dist/2020-08-27/rust-std-1.46.0...

It looks like https://github.com/Homebrew/homebrew-core/pull/65286 may fix things. Haven't spent a ton of time investigating though. Obviously if you have recommendations would love them.

Will have a look at your README, thanks for writing it.


> Installs of rust itself aren't succeeding yet for me via Homebrew

Ah, yes. Homebrew builds Rust itself and building Rust uses a previous beta release to build the current development code.

Until my recent [pr] bumping the beta bootstrap version, building Rust on aarch64-apple-darwin required specifying a nightly version of Rust because we only had nightly artifacts. I've mentioned that to a homebrew developer, so it should flow through soon. I'd expect that you'd only be able to install the nightly release at first.

> if you have recommendations

I'm a Rust fanboy, so I'd recommend installing Rust via rustup ;-)

[pr]: https://github.com/rust-lang/rust/pull/79219
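
For anyone following along, the rustup route is roughly the sketch below; at the time of writing the aarch64-apple-darwin host toolchain is nightly-only, so treat the toolchain choice as an assumption:

    # official rustup installer
    curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

    # assumption: native host builds are only on nightly for now
    rustup toolchain install nightly
    rustup default nightly

    # cargo installs then build native arm64 binaries
    cargo install bat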


Not OP, but I'm a bit against using cargo (or any other language-specific dependency manager) as a general-purpose package manager. It forces me to periodically run updates on many different managers, it doesn't always interface as well with other system / distribution idiosyncrasies as the native package manager, and I don't want to remember or have to care about what tool is written in what language. (Of course, they have their place for specific use cases, just imo not system-wide installs of random binaries.)

So I can see why it's a problem if it works with cargo but not with brew.


> I'm a bit against using cargo

Sure and that's fine, but the original comment wasn't clear that they were waiting for homebrew to ship an arm64 version of Rust. The phrasing made it sound like Rust itself was the aspect blocking those tools from working. That's the point I was addressing.

> not system-wide installs

rustup installs Rust into your home directory by default, and then `cargo install` follows suit, so it's not system-wide.
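
Roughly, everything lands under your home directory (paths per the default rustup setup):

    # toolchains and cargo-installed binaries live under $HOME
    ls ~/.rustup/toolchains
    ls ~/.cargo/bin

    # rustup drops this snippet so cargo-installed tools end up on PATH
    source "$HOME/.cargo/env"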


Does Anaconda not work on ARM? Anyone tried this yet:

https://github.com/conda-forge/miniforge/


>It'll depend heavily on what you install, and obviously you have to be OK with compiling things

I guess we are closer and closer to The Year of Linux on Desktop after all!


Hopefully the pains we feel with this shift to ARM will inspire compiler developers to take cross-compiling seriously. I don't see a reason why a compiler can't output binaries for any (or even ALL supported) platforms with a single CLI flag like golang does.
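
(Strictly speaking Go does it with environment variables rather than a flag, but the effect is the same; a minimal sketch:)

    # pure-Go code cross-compiles by just setting the target OS/arch
    GOOS=darwin GOARCH=arm64 go build -o myapp-darwin-arm64 .
    GOOS=linux  GOARCH=amd64 go build -o myapp-linux-amd64 .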


Golang can do this for the single reason of having no runtime dependencies whatsoever. Try it with literally any other language and you quickly find out that it’s no use cross-compiling to armv7 ‘in one click’ when your target doesn’t have the version of libc you’re targeting. The problem is not the compiler, it’s the build tooling.


Zig and Nim both have excellent cross compiling support out of the box.

Zig extends that excellent support to its embedded LLVM C compiler - it’s probably the easiest way to cross compile C across operating system boundaries these days.
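
A minimal sketch of what that looks like in practice (target triples per the Zig docs):

    # cross-compile a C file from macOS to Linux, against glibc or musl
    zig cc -target x86_64-linux-gnu  hello.c -o hello-glibc
    zig cc -target x86_64-linux-musl hello.c -o hello-musl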


https://nim-lang.org/docs/nimc.html#cross-compilation

Nim doesn't seem to have excellent cross-compiling support out of the box. It simply relies on cross-compilation toolchains that you have to set up yourself.

Good luck setting up a cross-compiling GCC.

Zig uses musl AFAIK? That can often be problematic.


Zig supports musl, glibc, macOS libc, and mingw I think:

https://ziglang.org/download/0.7.0/release-notes.html

But yeah, I think musl is the main target, and it ships with the source for it


But it only uses musl for cross compilation, no?

I don't think it uses glibc to compile Linux binaries on macOS or macOS libc to compile macOS binaries on Linux.


It can target glibc in a cross-compile setting.

https://andrewkelley.me/post/zig-cc-powerful-drop-in-replace...


musl is a Linux libc, so you can't use it on any other OS (because it wouldn't know how to send syscalls to the kernel).

macOS handles libc a little differently from Linux; whereas the Linux kernel publishes its syscall API, and allows programs to make syscalls however they want (e.g. with glibc or musl, or with go, which does syscalls itself), macOS's syscall API is subject to change at any time, and the only (supported) way to make syscalls is to dynamically link to the system libc.


Nim always relies on an underlying platform compiler.

Getting an .o out from a cross compiler is easy, in my experience - the big problem is getting libraries, includes and linkage right. And in my (admittedly small) experience, Nim makes that part much simpler as soon as you have a compiler set up (which in my case was an apt-get away)


Yeah it's nice when it's an apt install away.

Not so nice on Windows or macOS.

While Go supports it all out of the box.


But do note the apt-get doesn’t get you far. You now have a compiler that can produce .o files, but headers, libraries, borked makefiles that are unaware of cross-compiles, etc. are still a problem.

Not so with Nim.


FreePascal works well, too, once it is installed.

To compile to Linux it needs to have a linker installed. For Windows it has an internal linker, so it probably does not even need that.


Can it do this anymore on macOS? My understanding was that they were going to start linking to libSystem, as they should, because otherwise golang apps can and will break between major macOS releases.

(Not to mention the golang folks' need to reinvent the wheel has caused a few high-profile bugs that have needlessly wasted people's time.)

Regardless, this quickly falls apart whenever you have an app that does any kind of FFI to system libraries, or even third-party libraries that the author didn't want to reimplement in golang.


FFI can work - P/Invoke with C# for example doesn't care if you're cross compiling or not, but it comes at the cost of needing a definition of the interface in your code.

Windows makes cross compiling work by shipping the libs for each of the architectures in the SDK. I think that's quite reasonable for macOS since Apple controls the SDK, but it's always seemed to be a mess on Linux.

Disclaimer: I work at Microsoft on Windows but I have tried cross compiling code on both Windows and Linux in the past and I've always found it painful on Linux.


You can, I think, link to libSystem dynamically without actually having libSystem to refer to. At least, you can output a macho file that calls a function in a dynamically linked library when all you know is the name of the function.


Even Java ? ;)


Good thread on this from 2018: https://news.ycombinator.com/item?id=18732832


If anything the M1 Mac reduces the case for cross-compilation since they're actually fast enough to build code on. Compare that to e.g. compiling hefty C++ projects on the Raspberry Pi where cross compilation on a fast x86 box is ideal (but still far from trivial).

For non-trivial projects you want CI anyway, so it's a better ROI of your time to just add an additional VM, Docker or native runner.

And let's not forget you'll also want to run your automated tests in the native environment.


Try to cross compile a UWP application from Linux into Windows.

Cross compilation only works for toy examples, or when one has 100% of the target libraries available.


Or for languages with no such non-language-native dependencies, like Go, where I've found it to be a piece of cake (and I've never ever tried it for C).


That only works for pure Go code (no use of cgo) and for Go libraries that don't call into OS APIs.


You can absolutely call into OS APIs, as that's what the Go implementation does to interact with the OS in the first place.

It also uses cgo internally for that task. https://github.com/golang/go/blob/2a029b3f26169be7c89cb2cdcc...


Right, now try to actually use it, especially the entries missing from that list.


What do you mean? You use this when you use the builtin os package of Go.

https://golang.org/pkg/os/


Now try to use that, to call these from a Linux compiled executable,

https://docs.microsoft.com/en-us/windows/win32/api/


>That only works for pure Go code (no use of cgo)

Yes. I never cared for cgo, and I'd say most don't.

>and for Go libraries that don't call into OS APIs.

Or that have those APIs wrapped for different conventions. So? This still leaves billions of possible apps, and thousands of existing ones...


I do this for win32 all the time with mingw. I've never really messed with uwp from C but I'm sure both mono and dotnet can do the same.


Mono and .NET can do it as long as native code isn't used.

Mingw can do it provided the desired APIs are known to the toolchain.


Cross compiling from Linux to Win32 works perfectly with FreePascal


Provided the APIs that one wants to call are known to the cross-compiler infrastructure.


There is no difference to native compilation

FreePascal always knows the same Windows APIs, no matter on what it is compiling


Looks like emacs didn't run perfectly on the emulation/translation layer known as Rosetta. Maybe it's not so perfect after all.


On the other hand, emacs is self-modifying code. Or executable data. It's lambdas all the way down. ;)


Virtualization of operating systems is not supported in Rosetta.


I cackled.


Thanks...I needed a laugh


I haven't had any issues using the "Mac port" of Emacs here: https://github.com/railwaycat/homebrew-emacsmacport

I installed it with the x86_64 architecture.

EDIT: Was more explicit in architecture nomenclature.


> I installed it with the x86 architecture.

Do the 32-bit binaries really run on M1? I thought only x86_64 would run there, emulated.


Updated my comment with the additional clarity of "_64".


I wonder if Rosetta applies to apps that are purchased from the app store or at least are compiled with Xcode, but maybe not apps compiled with gcc. Or maybe if you compile something with gcc there's a way to specify to the OS that it should run with Rosetta.
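
FWIW, `arch` looks like the relevant knob here, though I haven't verified it against gcc-built binaries specifically:

    # force a command (and its children) to run as x86_64 under Rosetta
    arch -x86_64 ./my_gcc_built_binary

    # check which architectures a binary actually contains
    file ./my_gcc_built_binary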


I’m sure Rosetta 2 is designed to work with gcc compiled apps but given that LLVM/Clang is largely sponsored by Apple, I’m sure the preference is on that, versus whatever version of gcc with whatever specific settings emacs is using.

Just a guess though.


I'm pretty sure the problem is that the "portable dumper" that replaces the dumpster fire that was unexec is not in fact portable across architectures.


Rosetta is designed to work on all x86_64 binaries.


Most, not all. https://developer.apple.com/documentation/apple_silicon/abou...:

“What Can't Be Translated?

Rosetta can translate most Intel-based apps, including apps that contain just-in-time (JIT) compilers. However, Rosetta doesn’t translate the following executables:

- Kernel extensions

- Virtual Machine apps that virtualize x86_64 computer platforms

Rosetta translates all x86_64 instructions, but it doesn’t support the execution of some newer instruction sets and processor features, such as AVX, AVX2, and AVX512 vector instructions. If you include these newer instructions in your code, execute them only after verifying that they are available. For example, to determine if AVX512 vector instructions are available, use the sysctlbyname function to check the hw.optional.avx512f attribute.“

So, if you compiled specifically for some CPU,
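
Incidentally, the shell equivalent of that sysctlbyname check is just the following (the OID may not exist at all on machines or translators without the feature):

    # prints 1 if AVX-512F is available, 0 or an error otherwise
    sysctl -n hw.optional.avx512f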


> So, if you compiled specifically for some CPU,

It would be unreasonable to the point of impossibility to expect it to implement every instruction that an x86_64 CPU could possibly have. So as far as it pretends to be an x86_64 CPU, rather than every x86_64 CPU, it works on all [program] binaries.


I'm thinking it's more that the AVX instructions are still covered by patents. Some older comments [1] suggest that plain AVX will only lose its patent protection in 2031.

[1] https://news.ycombinator.com/item?id=14528591


Another factor is that Mac software was never able to rely on the availability of AVX instructions anyway. macOS Catalina was officially supported on some systems with processors as old as Ivy Bridge, which lacked AVX support.


I thought it was up to the software (for example, a video encoder like Handbrake) to detect whether a processor supported a set of instructions, and then try to use them?


That's how it's supposed to work, yes. And any software that does that should probably work fine on Apple Silicon, by falling back to the non-AVX path under emulation.

I've seen some poorly behaved Mac software which used AVX instructions without testing for availability. It crashes if run on some older Macs, and it won't work at all on Apple Silicon.


I guess you could compile a binary that is exclusively x86_64h, but Apple Silicon won't load it anyway so…


Has anyone tried MacPorts? It builds everything from source anyway, so there's a reasonable chance more things will work.


General note, MacPorts does not build everything from source anymore, and it hasn't for a while. MacPorts will try to find a precompiled binary on packages.macports.org, then fall back to building from source if a binary is not available.

That said—while I don't know whether ARM actually works yet, MacPorts still supports PowerPC wherever possible, so they have a long history of managing multiple architectures. I expect they'll have a somewhat easier time with ARM as a result.

Edit: MacPorts does indeed support ARM as of the latest release! https://lists.macports.org/pipermail/macports-announce/2020-...


No, it has no more reasonable chance than Homebrew. Homebrew has always supported building from source. Bottles did not originally exist. All M1 Homebrew packages are presently being built from source, because there are no Apple Silicon bottles. It is the source packages that need to be updated and patched to work on Apple Silicon, and that is true whether you use MacPorts, Homebrew, or build them yourself from scratch.


Exactly. I keep having to explain this to colleagues. For some stuff, it’s just going to take time.

And I can also envision that certain packages might not get updated, necessitating a fork to patch them for M1.

I’m interested in how Homebrew and others will handle that (say official package x chooses not to patch or accept a patch for whatever reason, leading to a fork of package x with the patch applied). Presumably, to avoid namespace collisions and not giving the user what they want, there could be an error message stating that an ARM64 version doesn’t officially exist, linking to the fork, and giving the option to install that instead. And then there could be a flag to always allow linked replacements if an official patch doesn’t exist.


Homebrew's policy is to not apply patches that upstream doesn't accept, though I do notice that they are sometimes applying patches to the build systems themselves to patch in some of the paths to include files living deep inside the macOS SDK.

Given that policy I would assume that such a package might die until somebody forks it and takes on responsibility for maintenance.

Homebrew is about building and installing upstream packages, not about installing and maintaining custom forks of packages.


It's worth noting, btw, that this is another major difference between Homebrew and MacPorts. MacPorts maintains tons of their own patches, whether to make software work at all or just to add support for older or newer OS's.


... which is a blessing and a curse: When they are doing a good job, that's perfect because it means that some software which wouldn't run correctly now runs correctly.

When they are doing a bad job, they might anger maintainers ("I didn't add this bug - this was added by macports - complain to them!"), or they might introduce additional security issues not present in the upstream package (see the Debian openssl bug from 2008)

It might also mean that you're not getting the latest versions of upstream packages because adding those patches and rebasing them on top of upstream changes takes time.

Being close to upstream was a selling-point of homebrew back in the days when it was just a collection of scripts to make it easier to build original source distributions of common Unix software.


Since this is Homebrew we are talking about, the solution will be whatever is the most user-hostile.


Sorry if you have that impression. If you’re interested in a constructive discussion, feel free to give an example and tell me how you think we should handle it better in the future.


You proved yourselves to be untrustworthy by opting everyone into analytics using Google and hiding the notice in lots of terminal output. You guys doubled down on that by refusing to reconsider, saying unless you were a contributor to the project, your opinion was irrelevant. Mike McQuaid's responses as a representative of the project were user hostile, all while he tried to paint himself as the victim of abuse over the matter.

If you want to regain my trust, remove the analytics function, swear off such things forever, and expel Mike McQuaid from the project. But I doubt that will happen, so MacPorts it is. And I'll encourage everyone I know to use it rather than Homebrew.


Again, sorry you feel that way about Homebrew’s analytics. I think we’ve learned a lot from our mistakes here. If the way the analytics notice is implemented now is still a no-go for you, I can’t blame you for moving on. MacPorts is a fine package manager, too.

One thing I still want to point out though: whatever we do, we do it in good faith and with the best of intentions for both our users and ourselves.


The only gripe I have ever had with Homebrew is the /usr/local "do yourself a favor" prefix, due to its collision with so many 3rd-party installers. But that's now been fixed.


MacPorts handles multiple architectures a bit better. Of course the usual "this software doesn't compile on ARM" issues are still there.


Another option might be pkgsrc, I'm sure someone's already looking at it.

https://www.pkgsrc.org/


It's weird that in any homebrew/macports discussion I don't see much at all about pkgsrc. Is it not any good? Has issues? No "market share"?


The package selection is fairly limited. I tried it recently and was disappointed that neither neovim nor ripgrep are packaged. It also has little mindshare. I found out by accident that it was available for macOS.


That's too bad. It looks pretty good command line wise.


Using MacPorts wouldn't make it such that "more things will work"; it's really about the applications now needing to be compiled for the ARM architecture.

Here are some examples:

aom: https://github.com/Homebrew/homebrew-core/pull/57976

boost: https://github.com/Homebrew/homebrew-core/pull/59257

...

If you look at the code changes for the items above, they detect whether the compile target is ARM and make the necessary changes so it will compile.


I loved MacPorts, greatly preferred it over Brew, and used it all the time on Mac OS back then on my PowerBook and my dying Mac Mini, but doesn't Apple's notarization requirement make it impossible to compile from source to create runnable binaries?


No. Even with Gatekeeper enabled, running binaries that you compile from source on your own machine doesn't require that they be notarized. As of Big Sur, they must be _signed_, but can be self-signed by a certificate you create locally. No need for Apple's approval in any way. I don't know what the status of supporting this new requirement in MacPorts (or Homebrew) is, but it's certainly something that can be dealt with.
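
For a locally built binary, the signing step is usually just an ad-hoc signature, something like:

    # ad-hoc sign (the "-" identity); no Apple certificate involved
    codesign --sign - --force ./mybinary

    # sanity-check the signature
    codesign --verify --verbose ./mybinary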


Apple's linker will automatically ad-hoc sign binaries on Apple Silicon systems, so it shouldn't require any work for most people.

Anything run from the Xcode UI (or from Terminal, if you use "spctl developer-mode enable-terminal" to show the Developer Tools group under Security > Privacy in System Preferences and then enable Terminal there) is exempt from Gatekeeper notarization checks. You can also put other terminal clients in the same list and they get the same benefit (child processes exempt from Gatekeeper).

In a similar note "DevToolsSecurity -enable" allows any admin or member of the _developer group to use the debugger or performance tools without authing first. (Normally you must auth the first time and the authorization can expire if you don't unlock your system after a certain amount of time).


> In a similar note "DevToolsSecurity -enable" allows any admin or member of the _developer group to use the debugger or performance tools without authing first.

Oh nice! That was a big annoyance on older systems; glad to see they've fixed it.


Then let's hope Apple doesn't alter the deal further.


Since Gatekeeper was originally announced I've seen people claiming that Apple were going to lock down macOS so Homebrew wouldn't work any more. I've never seen evidence that this will actually happen (and the people I speak to at Apple point to the opposite).


Why would they?


They slowly tighten the screws with each release; by now it should be noticeable for most people.


Yes they do. But the question is, why would they make a decision that will instantly make the machine completely unusable for a substantial portion of their clientele? Also, the portion that arguably gives MacBooks and, especially, iOS devices their value.


I think I'm going to manually build things from source until Homebrew officially supports M1 and Big Sur. I don't want to deal with any sort of migration / funky re-install, personally.


Homebrew is just a way of compiling upstream packages. If homebrew is broken building from source will be broken too.


Homebrew distributes prebuilt binary packages. It’s possible that the architecture for this is improperly set up and does not understand that different architectures exist: unlike macports, homebrew was born after the x86 transition.

That’s why one user above suggests `-s`: it forces a compilation from source.


The problems with homebrew packages at present are mostly down to projects like Go and Rust not being updated for M1 ARM yet and hence sources not compiling properly rather than a problem with homebrew itself. So users attempting to compile from original upstream sources would run into similar problems.

They have an overview issue here:

https://github.com/Homebrew/brew/issues/7857


Rust does have tier 2 support, but it hasn’t made its way to stable yet. This means if you did try it upstream today, it would work.


That approach has always ended badly in my experience because your manual installs will require more care to remove than the Homebrew ones. My usual recommendation is to install anything which supports it using --HEAD, which will keep things tracked and make it trivial to either completely uninstall or simply "brew reinstall" packages after the upstream stabilizes.
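
i.e., roughly this flow (the formula name is just an example):

    # build the development head from source; brew still tracks it
    brew install --HEAD neovim

    # later, once upstream/bottles stabilize
    brew reinstall neovim
    # or remove it cleanly
    brew uninstall neovim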


> I don't want to deal with any sort of migration / funky re-install, personally.

Isn't the benefit of Homebrew that it all goes into /usr/local and you can just blow everything away if necessary? You could run `brew leaves` to see what packages you have, uninstall everything, and reinstall. Easier than keeping track of what you've manually installed where.
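
Something like this, for instance (a rough sketch; check the exact flags against your brew version):

    # record explicitly installed formulae
    brew leaves > my-formulae.txt

    # nuke everything brew installed
    brew uninstall --force $(brew list --formula)

    # reinstall from the saved list later
    xargs brew install < my-formulae.txt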


> I don't want to deal with any sort of migration / funky re-install, personally.

Wouldn't this just make things even messier? Some packages from Homebrew and some compiled from source that you have to remember to update manually?


In the short run yeah. You have a point. I intend to uninstall & reinstall everything once Homebrew is officially supported though, which isn't a big deal for me (and I actually kind of enjoy building from source just for the experience).

But these replies are making me think that a large part of my decision to do this is motivated by me just not understanding Homebrew well enough (e.g. how easy it is to nuke everything). Oh well, we'll see how it goes.


So what's more worth the $200 upgrade: double the RAM, or double the disk space?


I just bought the base model Air. No issues with the RAM thus far, but I know the disk space is going to be an annoying limitation. Will swap it for the 512GB model in the next week or so.

Absolutely stunning computer though - first time back to Apple for over 8 years for me.


Same, especially if you're planning to use it for normal consumer uses as well. Photos and videos take up a lot of space — even moreso if you have an iPhone and use iCloud Photo Library and the Mac Photos app.

I bought the base model Pro to try, but 512GB looks like a must for me going forward.


I use iCloud photos a ton and my old 128GB Mac's disk is constantly full just from the non-removable thumbnails for iCloud photos, without any actual photos on my local machine.

I've been fighting disk space issues with my Mac for the past five years because I was short-sighted and only got the 128GB disk. This time I went with 1TB, not making that mistake again.


You can move your photo library to an external SSD.


I went for both because my existing 2017 MBP only has 8GB/512GB and I bump into them fairly regularly. But if it's one or the other, I'd go for memory - you can offload your disk space to a NAS/Cloud/whatever much more easily.


What stack are you using that you find the 512GB limit to be a problem? I've picked up one of the 16/512 MBPs and am wondering if I should swap it for a 16/1TB MBA, or maybe MBP but I have a little while to decide.


This is personal laptop - I've got >250GB in Photos which, even on "store originals in iCloud" mode takes up a huge chunk of the 512GB. I've got several x00GB folders in Dropbox that I can't currently have on the laptop (fonts, assets, etc.) 50GB+ in ~/git. I hover around 5-10GB free space these days.


Gotcha. I store photos on a NAS so I only ever have one or two SD cards worth locally (~100GB). My git directory is pretty similar.


Perhaps silly question: what's stopping Homebrew from publishing bottles for those formulae that already compile?


Lack of CI infrastructure. Apple and MacStadium have given us access to some machines so hopefully this should be a temporary situation to be resolved soon.


I guess lack of CI infrastructure


Given that apple just released M1 macs, yeah that makes sense


To just publish those that compile, there's no need for any M1 Macs; it's as easy to compile on Intel Macs.


[flagged]


I want this. I dislike depending on Rosetta. I want native, and I am saddened that Apple didn't reach out to Homebrew or MacPorts to pre-arrange something. It speaks badly of the future that they're so indifferent to the developer experience.

I downvoted your comment. I think this article is spot on for HN, and I think you showed a lack of critical self-awareness questioning its relevance.



What Apple claims to have done and what Apple actually does are two very different things. Apple has reached out to some projects with various levels of support, but it's not like they just dropped by all of those projects with patches on the day of WWDC.


>but it's not like they just dropped by all of those projects with patches on the day of WWDC.

Apple had their own stuff to develop, including a new architecture, a new OS version for 2 architectures, ports of all their apps, and plenty of other stuff besides, including the UNIX userland they ship.

The idea that a company should be responsible for all third party FOSS stuff on its platform, used by a small minority of users, is a little strange...

That said, a MacPorts guy below says that "Apple engineers had patches for basic support ready fairly quickly".

Not to mention:

https://twitter.com/wongmjane/status/1275177255681982464?s=2...


> a MacPorts guy below says that "Apple engineers had patches for basic support ready fairly quickly"

That would be me, who is not really a MacPorts guy…

> https://twitter.com/wongmjane/status/1275177255681982464

…and this is the link I replied to ;)

I'm not claiming they should have done anything. I was actually pleasantly surprised when they said they would. I'm just saying that they didn't show up with all the fixes as some may have believed from what Apple said during WWDC.


>That would be me, who is not really a MacPorts guy…

Lol, missed that. Rarely pay attention to who I'm responding to, just what they wrote :-)


A rather unfortunate side effect of deemphasizing the author of the content ;)


If you want native, then just use the underlying UNIX provided by macOS, no need to add extra stuff.

I never bothered with these alternative eco-systems.


How on earth is distributing common Unix tools for macOS that aren't included in the base OS not "using the underlying UNIX"? This is exactly what Unix was designed to enable. Apple understands and values this, which is why they have submitted patches for these projects.


Most of the stuff happens to be GNU/Linux replacements for what macOS already provides.


I could quibble with the "Most of" part (most of it is stuff you don't get in macOS at all), but that aside, so what?

This isn't really about alternative ecosystems, it's about complementary ecosystems. There are a lot of people that use MacOS desktops alongside Linux or other Unix machines. For these people having a common set of tools that work the same, so you can use the same command lines and scripts across multiple platforms, is incredibly useful.


That means you had time to spare to manually compile all kinds of stuff.

Others don't.


Nope, I just use the UNIX provided in the macOS box.


What do you think people use Homebrew for? brew list gives me mailhog, mysql, postgresql, newer python, newer ruby, macvim, node, redis, ... these aren't IN the "UNIX provided in the macOS box".


I think he means that he uses Mac for Mac-y stuff (XCode, Apple and proprietary apps, etc) and Linux for the rest that you've mentioned.

But, to paraphrase Big Lebowski, "that's like, his preference, man".


Yeah, I figured out he wasn't using Mac for general development ... too late.


Well, that means you just use basic userland tools. Nice if that's all you need, others need more.


Nope that means I care about Apple platforms and UNIX, not pretty replacements for GNU/Linux.


Not sure what you mean.

As if some self-imposed UNIX/POSIX austerity, making do with the basic (and old) UNIX userland that comes with macOS (or other platforms), is something to be lauded?

(As opposed to just an example of someone making do with the little they need, where others' mileage may vary?)

Or is needing some of the tons of programs that don't come with "Apple platforms and their UNIX" (e.g. some random stuff I use: gnuplot, ripgrep, redis, postgres, jq, graphviz, and tons of different things others might want) somehow problematic?

Not even sure where Apple platforms and UNIX come into play as something to be contrasted to "replacements for GNU/Linux".

One of the benefits of macOS is precisely that as a UNIX it can run all kinds of UNIX tools, not just the basic POSIX utils, but close to everything available in a Linux/FreeBSD/etc package manager...


On the contrary, I use macOS for what it is and the value of its development stack, not as a pretty replacement for GNU/Linux, for that I already have my Asus netbook.


>On the contrary, I use macOS for what it is and the value of its development stack

OK, I get what you mean.

But "what it is" includes being a very usable Unix core that can run all kinds of stuff one might want.

So, like you, I don't expect macOS to be a GNU/Linux, or cater to tinkering and Linux/FOSS preferences. And I do my Linux-based development in Docker, remote VPS and servers, and so on.

But, on the other hand, I wouldn't carry two laptops, a "Linux" one for running postgres and redis and gnuplot, and a Mac one for running XCode and Instruments and Photoshop, out of some principle that Mac is Mac and Linux is Linux and "never the twain (use cases) shall meet".


We got this far down before we find out you don't actually do anything on your Mac that would require Homebrew? Well played.


Homebrew and Macports maintainers could have easily reached out to Apple and gotten free DTKs to port with, just like thousands of other developers.

But the truth is that Xcode-built software is far more important to Mac users and that’s where Apple’s focus was. Homebrew users are less than 1% of their installed base.

The other truth is the M1 hadn’t been important to Homebrew maintainers. At least not yet.


On the MacPorts side, I know that at least Saagar Jha (who posts on HN often) did indeed have a DTK. Perhaps consequently (?), MacPorts does support ARM right now.


I have very little to do with MacPorts's ARM support, I'm not even a maintainer ;) Most of that infrastructure was already there from the PowerPC→Intel transition, and Apple engineers had patches for basic support ready fairly quickly. I worked a little bit on early support for some heavily-depended-on packages, but I wasn't really directly involved in the effort.


Whooops! I only mentioned you because I remembered you were listed on MacPorts's website until recently, as a contact for "Apple DTK issues" or something like that.


Ah, I remember adding myself to that page because I was trying to see if there was anyone else with one to help. The actual team is here: https://trac.macports.org/wiki/MacPortsDevelopers


DTKs were $500 USD, not free.


I think he's suggesting that Apple would have donated them to Homebrew and MacPorts if they had asked. Seems extremely doubtful to me...


Apple offered free remote access to DTKs for open source developers, in cooperation with MacStadium.

Cite: https://github.com/NixOS/nix/issues/3853#issuecomment-678249...


Apple did donate them to Homebrew without us asking. When we asked for more: they donated more.


What's "doubtful"? Apple has done similar things in the past...


Nice, well done.


Most of the gnu tools are working fine. Javascript stack is still a tire fire. JVM is ok. Ruby/Python are fine.


Not really sure what you mean. Node at least runs fine in emulation mode and it takes like 5 minutes to build node from scratch to run natively on this thing.
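
(The from-source build is basically the standard configure-and-make dance, per the Node build docs:)

    git clone https://github.com/nodejs/node.git
    cd node
    ./configure
    make -j"$(sysctl -n hw.ncpu)"
    sudo make install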


Chrome is running great, natively, on M1, so Javascript is running just fine on the V8 engine.


What's wrong with JavaScriptCore?


I don't think JavascriptCore is anybody's idea of the "Javascript stack". Parent means Node/NPM/Electron.

Though, I'm pretty sure he's mistaken on its status.



