The fact that he even got that far with emulating what is essentially completely undocumented hardware is a very good sign; adding the rest of the hardware to QEMU might not be as hard as initially thought.
This approach will help with none of these.
Don't underestimate the community -- a lot of Hackintosh (and emulation) stuff is done for the "just because I can" reasons, and getting a fully emulated ARM Mac working enough for any sort of actual use would be a huge win even if it's slower than their hardware and not 100% complete (just like Hackintoshes usually are.)
I remember running (I think) Jaguar on PearPC; mind you, Leopard was probably out by that point, I just had old disks handed down to me and no actual Macintosh. It was a lot of fun digging into the OS like that, especially at a young age, but definitely slow considering it’s emulating another instruction set.
Not long after, I got a relatively successful Hackintosh going (read: accelerated graphics and working audio), which, alongside Linux, helped grow my respect for Unix, BSD, and macOS's unique internals.
Altogether it led me to purchase roughly 5 Macs over the years. I recently got back into it for fun, and the scene is thriving. Will be interesting to see what happens post-AS.
Doing things for the heck of it is, for some reason, a great motivator and educator.
Not surprised given their attempt to run AIX on QEMU as well. That's slightly more documented but still a pain to do, as it requires specific versions of AIX (7.2 TL3 SP1 / 7.1 TL5 SP5).
Even the PowerPC version of OS/400 uses custom instructions pretty heavily. It's a tagged memory architecture with another hidden 65th bit. I don't think QEMU has support for that, and it'd be extremely slow to emulate.
I don't believe QEMU emulates tagged memory on POWER, but it does emulate tagged memory on ARM (MTE), so there is no reason why it couldn't be added.
PowerPC AS only requires 1 tag bit for every 16 bytes. (1 tag bit for every 128 data bits, 1 tag byte for every 128 data bytes, etc.) That's 8MB of tags for every 1GB of memory. So the overhead isn't massive [1].
Do recent IBM i versions still rely on hardware enforcement of tagged memory?
IBM i tends to be picky about disks, as well. At least as of a couple years ago, the only way to use local disks with standard 512-byte or 4096-byte sector sizes was through VIOS or similar* storage virtualization mechanisms, so unless QEMU supports 520- or (4096+mumble)-byte virtual SAS disks, you also need to emulate enough of the IBM POWER hypervisor architecture to support running VIOS (IBM's AIX-derived virtual I/O appliance) concurrently in a separate logical partition within the same QEMU VM.
Even if QEMU does support odd-sized virtual sectors, given the integral support for logical partitioning included within modern IBM i releases, various system management and RAS features, etc., I'd be entirely unsurprised to learn that significant firmware support isn't also required to boot the OS at all, regardless of I/O device support.
In which case, you'd either need to reimplement a significant bit of IBM firmware functionality from scratch, or else you'd need to emulate POWER system hardware convincingly enough to fool (possibly hacked) authentic IBM firmware.
Finally, given that AFAIK IBM has never sold unbundled licenses for IBM i (or OS/400), the only potentially legal way to do any of this would be to run QEMU on a POWER system already licensed to run IBM i (no technical problem, as all recent POWER systems have excellent Linux support).
*Where, by "similar", I'm thinking of virtual disks used by virtual instances running within an existing IBM i instance, which wouldn't be helpful here.
I know you said this sarcastically. The custom GPU is going to be a huge roadblock to get anything graphical to render. Probably never going to happen.
Add: actually never say never. Consoles with custom GPUs have been very successfully emulated. So maybe someday.
You don't have to emulate the hardware directly. Since Apple's kernel is open source, there's a good chance one could write a kernel driver for another GPU offering a sufficiently compatible interface. Nearly all GPU code will have software rendering fallbacks too, so the driver might just be enough to trigger those fallbacks.
Has anyone ever successfully written a custom GPU driver for macOS? Perhaps there would be more incentive to do so now, but... seems like a pretty massive undertaking.
You could of course use the software renderer—I'd guess that probably still exists on Apple Silicon Macs for safe mode and such—but it wouldn't be pleasant...
Edit: Well, anyone besides nVidia, I suppose. But they're a big company, not a Hackintosh side project.
At least with previous versions of macOS you didn't have to do anything special; the software renderer would just work if you didn't have a supported GPU.
> The fact that he even got that far with emulating what is essentially completely undocumented hardware is a very good sign, adding the rest of the hardware to QEMU might not be as hard as initially thought.
I'm less convinced—it sounds like he was largely reusing the setup that can semi-boot iOS (which is of course also totally undocumented, but has had many years to be studied.)
It's still a cool accomplishment, I just don't think it necessarily portends great things.
> Besides, Hackintoshes are often built when Apple’s own hardware isn’t fast enough; in this case, Apple’s ARM processors are already some of the fastest in the industry.
They are also used when one wants more cores than are possible on Apple hardware. If you want a build machine for a medium-to-large compiled-language project, Apple has no options that make economic sense, since a Ryzen Threadripper will beat everything else hands down. The same is true of every other embarrassingly parallel, linearly scaling compute problem.
In such cases, the "speed" of Apple's own silicon doesn't help at all.
Hackintoshes, in my experience, are usually built as a low-cost hobbyist alternative. Most people earning a living from a Mac will sacrifice speed to have stability and support.
Plenty of people who want macOS but cannot afford an official Mac will use it instead.
I need to build my software for macOS. On my Ryzen Threadripper running Linux, I can run a faster, more powerful KVM/QEMU version of Mojave than I can buy from Apple, while still having cores and RAM left for Linux.
I could afford to buy hardware from Apple, but why would I when the cost/performance ratio for an embarrassingly parallel compute task like compiling is so much worse?
Like the previous person said, mainly stability and support. There's no doubt that macOS can run much faster on non-Apple hardware given certain parameters, but if you want support and solid stability you would probably go with Mac hardware.
I remember seeing a build on the tonymacx86 forums where someone built a Hackintosh with 64 cores (dual Xeons, I believe) and 128 GB of RAM - back in 2010. What you'd do with such a beast, I don't know, but that's well above and beyond any hardware Apple has ever sold.
This speed advantage won’t apply when emulating Apple silicon on an x86-64 CPU, as discussed here.
Emulating ARM on x86-64 is doable, but it has dramatically more overhead. It’s doubtful that a high core count would be enough to overcome this relative to just using Apple silicon.
My point was that the best use of hackintosh/VM tech is unrelated to "speed advantage". Apple has nothing that can touch a Threadripper for linearly scaled parallel workflows. The gap is so big that it might even overcome the arm/x86-64 emulation cost, though that wasn't what I was suggesting.
> The gap is so big that it might even overcome the arm/x86-64 emulation cost
I've spent a lot of time on this. With QEMU, the performance gap is huge right now.
I have several ARM/QEMU virtual machines running on my AMD virtualization server, but the emulation overhead makes everything painfully slow, even when assigning 20+ cores to the VM. I picked up an 8GB Raspberry Pi because it's often faster to run a workload on the Raspberry Pi than on even the many-core emulated AMD system.
The emulation overhead across architectures is huge with QEMU.
Is it not the case that x86 Chromebooks emulate the ARM ISA in order to run Android apps? There's certainly a slowdown but it's not horrible. How do they do it?
Then again maybe they just have a good JVM and that solves most of the problem for Android apps?
I feel really stupid, because as the (otherwise happy) owner of a well-specced x86 Chromebook, I've always wondered why most people seemed happy with android apps on chromeOS while I find them laggy and buggy messes. I've just realized it's probably an emulation issue.
Let’s wait for the Apple Silicon Mac Pro. I think they are all in on this fastest-chip race, with deep pockets, high-paying customers, and scale behind them.
I noticed he was using -d unimp on the QEMU command line, so QEMU should print unimplemented features when it encounters them. (Of course, that only prints them; you'll still need to research / reverse engineer to discover what they are.)
I think it was Craig Federighi from Apple who said that they don't plan to support booting of any other OS. They want you to use their virtual machine manager instead.
This is some very cool hacking, but I’m more interested in knowing how Apple Silicon will run x86 Windows and Linux stuff. Can virtualization software get help from Rosetta 2? Or is QEMU and similar the best we can hope for?
_If you really need to_, Huawei provides the ExaGear Server translator for Linux at https://www.huaweicloud.com/kunpeng/software/exagear.html , which allows running x86 and x86_64 apps on an arm64 Linux system, including Docker containers, for their customers. That translator works pretty well in most cases.
Note however that you need to create a Huawei account to download this.
For Windows guests:
Run arm64 Windows; the JITs to run x86(_64) apps are included with the OS.
Why use ExaGear, why not just use qemu-user-static?
Why would I want to use a (presumably) closed-source solution from Huawei whose documentation is only in Chinese, when I could instead use a well-known open source project with plentiful documentation in English?
ExaGear is originally from the Russian company Eltechs and was targeted at running x86 apps on SBCs like the Raspberry Pi; e.g., they provided Ubuntu images with Wine to run Windows apps on a Pi.
They discontinued the product at the beginning of 2019 and were, presumably, bought by Huawei.
And since then the product has changed a lot. They target 64-bit Arm machines only nowadays and support both x86 32-bit _and_ 64-bit application compatibility.
Apple TV 4K uses the A10X, which is ARMv8.0. That's so old that it even pre-dates the true atomics instructions (which were introduced in 8.1-A). So it'll have to be a pretty heavyweight JIT to adapt ARMv8.3 code to that baseline.
For the Apple A11, though, going from 8.3 to 8.2 is much easier, and doable through a patch-on-fault method.
Incredible. Emulating Apple Silicon before they even announced it.
Does anybody know where one can learn about how these people approach and learn about the inner workings of what is essentially a black box from the outside?