Hacker News
macOS Internals (gist.github.com)
773 points by JoelMcCracken on May 7, 2023 | 141 comments



It’s funny how I used to think of operating systems, databases, and cloud systems as these kinds of arcane things that were somehow apart from the rest of software.

I could not begin to grasp how they could deal with running programs, access control, or any of these things that once seemed so foreign and mysterious to me.

And realising that it’s actually just software, that there’s no black magic to it, just lower-level APIs, less pre-built batteries-included stuff to rely on, and probably a whole lot more complexity, that was… mind blowing, exciting, and kind of a let down all at once.

And it has enabled me to have more realistic expectations from these tools.

It however doesn’t take away any of the brilliance and effort that went into these things, or lessen the credit due to those behind them. I can only begin to imagine the complexity involved in a full-fledged modern OS.


If you want to learn, I really recommend Operating Systems: Three Easy Pieces (OSTEP). I thought it was excellent and pretty easy to follow. https://pages.cs.wisc.edu/~remzi/OSTEP/


There are many things in OSTEP that I found really eye opening. One thing that stuck with me was that at some point, when discussing virtualization, it referred to an OS as a virtual machine, and that really changed the way I look at operating systems now.

> That is, the OS takes a physical resource (such as the processor, or memory, or a disk) and transforms it into a more general, powerful, and easy-to-use virtual form of itself. Thus, we sometimes refer to the operating system as a virtual machine.
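That framing can be made concrete with a toy sketch (mine, not OSTEP's): each generator below is written as if it owns the CPU, while a round-robin loop time-slices between them, in Python.

```python
from collections import deque

def process(name, steps):
    # A "program" that just runs for a few steps, yielding control at each tick.
    for i in range(steps):
        yield f"{name}: step {i}"

def schedule(procs):
    # Round-robin "scheduler": each process sees a CPU that appears to be all its own.
    ready = deque(procs)
    trace = []
    while ready:
        proc = ready.popleft()
        try:
            trace.append(next(proc))   # run one time slice
            ready.append(proc)         # preempt and requeue
        except StopIteration:
            pass                       # process exited
    return trace

trace = schedule([process("A", 2), process("B", 2)])
print(trace)  # ['A: step 0', 'B: step 0', 'A: step 1', 'B: step 1']
```

Neither process had to know it was sharing anything: that's the virtualization the quote describes.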


Oh that’s a great quote. Really good framing.


Ken Thompson, when asked what he thought of virtual machines, responded: "They are such a good idea I wrote one. You may have heard of it: Unix."


Thanks!


It's turtles all the way down. Layers of abstraction stacked on layers of abstraction.


Until you get to the physics of transistors and logic gates :)

Though I suppose in a sense, even subatomic particles are abstractions over quarks. It’s just that those are no longer abstractions we control.


I realized the brilliance of the name Quartz for the graphics framework. It’s what happens to “rendered” silicon.


Quartz glass is strong and very clear, it implies sharp, clear images.


Yup. APIs and constraints all make engineering sense. Turtles, all the way down... except when you get to quantum mechanics, that stuff is whack.


>QM is whack

Very much so. But I also think it always had to be that way. A universe of infinite classical regression (atoms made of atoms made of atoms...) would be more insane than whack.


If I had to guess, I'd say infinite regression is how the real world works, and quantum mechanics is how our reality's simulator lazily computes interactions.


Would that make things like wormholes similar to a heartbleed attack?


Why I like Arduino.

Why I kind of like the old IBM PC environment. x86 + a few BIOS calls; you don't even need DOS after you load game.exe.


Yeah, NetWare used to boot DOS and then load its kernel over it. I feel like computers used to be cooler.


Well, that is how it worked on Windows as well (up until Windows ME): load DOS first and then Windows.

You could still do it that way if you want. Windows 11 needs a boot loader; no reason you cannot write a DOS-based one.

The comment about game.exe above probably refers to the days of “DOS Extenders” that games were based on. You would launch a game from DOS and, instead of just running a 16 bit DOS executable, it would switch into 32 bit mode and use a “DOS extender” to run the game.

“DOS extenders” are really just operating systems that use DOS as a boot loader, though. Popular options for DOS games and programs were DOS/4GW, Phar Lap, and Quarterdeck (from memory). There were many others.

Windows 95 was really a “DOS extender” too.


game.exe was kind of referring to how a lot of games would just completely ignore DOS even before the days of protected mode and 32 bits; I'm going back to the 8088 days.

But yeah, same idea.

Actually, now that I remember it, my first Linux was a copy of ... I think Dragon Linux? It used DOS as a bootloader, it ran off the existing DOS FAT filesystem so you didn't have to mess with partitions, and it dealt with the filename-too-short issue by keeping a dot file in each directory with a list of the 'real' Linux long names and how they matched the DOS fake short names.

But even Windows and Linux have layers and layers of abstractions. Trying to do anything through X11 or Win32 API calls, compared to the before-times when you just wrote directly into RAM at A0000 to draw pixels... there is something about that that is very interesting.


Interesting. An odd rabbit hole. I've never heard of this kind of thing existing before Ubuntu Wubi (though I don't know why that surprises me; it's not like there was no need for such a thing before that):

> DragonLinux was a distribution of Linux that had the ability to be installed on a loopback file on an existing FAT16 or FAT32 partition.

https://sourceforge.net/apps/wordpress/dragonlinux/

In the readme linked there, it says something different:

> DragonLinux is a Linux distribution which runs on top of windows with no partitioning needed as long as the Windows OS is sitting on a FAT32 partition. DragonLinux is fully supported on Windows ME and below. Work for support with Windows 2000 and Windows XP is in the works.
>
> DragonLinux is a UMSDOS distribution, compared to a Loopback filesystem of vr2r1... That change was made to eliminate the 2GB disk space boundary with the previous version, and to simplify installation and expansion (ie the file system grows as needed).
>
> The Full Version is a fully loaded version with GNome and several other smaller window managers, and the full line of tools for GNome. The lite version is a full console version with everything from the full version excluding the X environment, therefore being able to be run on older machines.

Looking this up more brings me to the umsdos project, a filesystem that apparently runs on top of another filesystem.

https://tldp.org/HOWTO/UMSDOS-HOWTO.html and https://en.wikipedia.org/wiki/FAT_filesystem_and_Linux

This brings me to loadlin:

> loadlin is a Linux boot loader that runs under 16-bit real-mode DOS (including the MS-DOS mode of Windows 95, Windows 98 and Windows Me startup disk). It allows the Linux system to load and replace the running DOS without altering existing DOS system files.

https://en.wikipedia.org/wiki/Loadlin

And then there's also grub4dos

https://github.com/chenall/grub4dos


> I've never heard of this kind of thing existing before Ubuntu wubi

WUBI wasn't the same at all.

WUBI makes a single big file and formats it with a Linux filesystem.

`umsdos` kept Linux files as DOS files directly in the FAT16 filesystem, with a hidden file in each directory containing additional Linux metadata. No Linux filesystem anywhere.

You could even run a DOS defragger and your Linux would survive. :-)

I used a distro called Pygmy Linux that worked this way.


It still kind of blows my mind that basically all this reduces to assembly and then machine language.

I’m of the age where my first computing experiences were on green screens, PET computers, networked VAX machines at the colleges my parents taught at and so on.

I was aware of what assembly was back then, and the idea that the game or basic business program I was using was just a sequence of MOV, ADD, and so on. I just couldn’t quite get how you could be smart enough to use that to build something useful.

And everything we see today is basically just a bunch of moves and adds and pokes a layer or three below the surface.
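You can still peek a layer down today from any Python prompt: the standard `dis` module shows the stack-machine opcodes a one-liner compiles to (CPython bytecode rather than x86 MOVs and ADDs, but the same flavor of thing).

```python
import dis

def total(price, tax):
    # One line of arithmetic at the surface...
    return price + price * tax

# ...is a handful of LOAD/BINARY ops underneath.
# (Exact opcode names vary between CPython versions.)
dis.dis(total)

ops = [ins.opname for ins in dis.get_instructions(total)]
print(ops)
```

Every layer above this one eventually bottoms out in a listing like that.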


Then it hits you in the face: how do you eat an elephant? A bite at a time. One API, one abstraction at a time.


The thing that made me realize this was compilers!

"Wait, code is just text?"

"Always has been"

I still have to remind myself of this sometimes when I think "woah, how does this work?" and then I try to step through how it might be built.


I don’t know that code was always text, but I’m thankful to have been born after the days when programming assembly on punch cards was the only way.


In the early days, programming was also text, in the form of pencil on paper; the program was then transferred to the machine in various manual ways, such as flipping toggle switches, followed later by punch cards. But the actual process of writing programs was always text. Even before electronic computers, people wrote down algorithms to be executed by themselves on paper, or by rooms full of people whose job title was ‘computer’.


My first experience of programming was a 1970s 8085 kit with a hex keypad and a few digits of 7 segment LED display. The only book was Intel's technical reference manual for the 8085, and my first big project (a full year university project) involved conceiving the right program, writing it on paper in assembly language, hand assembling it, entering it as hex (a few hundred bytes). It worked first time because it pretty much had to work first time so I was very very careful.


Yeah, it's all a bunch of programs doing their thing. One draws a GUI, another talks to the network adapter, and so forth. And when you click a launcher icon, a loader program is started that (more or less) copies a program into RAM and, when all is set up, tells the OS (and the hardware) to run that, too.


Yes, a modern OS is a tangled mess of microservices.


They just don't communicate using JSON. I can see it now: Kernel JSON (aka kjson) support in the Linux kernel to attract Node developers.


Seriously, I wish /proc were available as a tree of JSON documents as opposed to all these different ad hoc plain text formats. It would make using /proc a lot easier and more reliable.


Could be unnecessarily costly unless the files are written ahead-of-time. I propose some kind of GraphQL-like interface to only read what you need.


/proc files are generated on-demand as you read them.

Generating JSON couldn’t be significantly more costly than the current ad hoc plain text formats are.
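To make the comparison concrete, here's a rough Python sketch of what consuming the two formats looks like. The meminfo-style text below is a made-up sample, and the JSON variant is hypothetical: no such file exists in /proc today.

```python
import json

# A snippet in the style of /proc/meminfo: "Key:   value kB" lines.
meminfo_text = """\
MemTotal:       16384256 kB
MemFree:         1234567 kB
SwapTotal:             0 kB
"""

def parse_meminfo(text):
    # Hand-rolled parser for the colon-separated, unit-suffixed format.
    result = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        parts = rest.split()
        result[key] = int(parts[0])  # value in kB; unit suffix ignored
    return result

print(parse_meminfo(meminfo_text)["MemTotal"])   # 16384256

# The same data as JSON needs no bespoke parser at all:
meminfo_json = '{"MemTotal": 16384256, "MemFree": 1234567, "SwapTotal": 0}'
print(json.loads(meminfo_json)["MemTotal"])      # 16384256
```

And every /proc file has its own variant of that first parser, which is the reliability complaint above.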


Just take the Unix philosophy and replace the word "file" with "JSON file".


I think that’s one of the advantages of coming into computing in the early 80s. Computers like the Apple ][ were completely understandable by a single person (and it’s still my mental model for how a computer works 40 years later).


For me this was the moment when computers were no longer “magical”, and after that I found comments to that effect (even from people who knew more than I did and were more senior) odd and kinda culturally cultish.


History is such a great way to learn anything.

It’s interesting - like a mystery revealed, but also provides the why - why things are the way they are.

I was looking into accounting the other day, and you can go back to the first document that mentioned credit and debit and introduced modern accounting. There are so many topics like this that seem confusing and arbitrary but it all began extremely simply and pragmatically. Music theory is another example of a fun topic to study historically. And programming languages and operating systems.

It’s important to use two learning paths though: historical and modern scientific. Historical usually explains terminology and legacy aspects of such topics, but then topics can also be explained by using modern technology to experiment and observe.

Think about Unix: it was primarily designed for very low-power, low-memory environments. And databases too. You can learn a lot poking at an existing system, but when you watch the early content on it, you see the why.


> The present is the past rolled up for action and the past is the present unrolled for understanding. — A&WJD


> A&WJD

Who's that?


Ariel and William James Durant, authors of "The Lessons of History", where the quote appears on page 12, itself a quote of their earlier volume 6 from "The Story of Civilization", specifically "The Reformation", page viii.



Agreed. This is why reading a Wikipedia page is often the worst way to learn something. The best way is to get a teacher to tell you the relevant history leading up to the thing, and then it makes much more sense.


CS teacher here. This is my preferred way to teach stuff. Way easier to remember and understand knowledge if you know its origin story.


Do students respond well?

When I was in school, a lot of teachers did a little tidbit of history at the start of topics, but I never latched onto it and found it boring. It was only after my studies that I started to really love going back to the origin.


Well, you can make almost any subject interesting if you do it right. In my experience it's mostly the teacher's fault if students are bored.

That said, some students cannot be engaged whatever you do. We're all people.


Though many Wikipedia pages have a History section doing exactly that. (More should have one.)


Well, they have a history section, but I don't often find it does exactly that.


This is so true.

Most of math starts to make sense once you know the history behind the concepts.


I was able to do calculus when I first learned it, but it didn't really click until I studied Newton's Principia, and was like, oooooooooh I see what he did.

Similar with Euclid's Elements vs the modern "Cartesian" view of geometry.


What was that first accounting document?


For double-entry accounting, Luca Pacioli's Summa. The history of accounting is a bit more complex and unknown.

https://en.wikipedia.org/wiki/Summa_de_arithmetica


    The window server is "lightweight" in that it does no screen drawing itself. It is, to use Apple's term, "drawing model agnostic."
WindowServer is such a behemoth nowadays, it took me ages to reverse engineer how to make rcmd (https://lowtechguys.com/rcmd) control Stage Manager.

It’s interesting to see it described as lightweight. I guess every lightweight solution becomes heavy and complex with enough usage.


Rcmd is the best app for window management productivity I've found in years. Thank you so much!!


Thank you! Happy to hear it has become so useful ^_^


I love your website's privacy page. Also lightweight.

["rcmd does not collect any personal information."]

That's the complete page. Great policy!


Thank you! Yes, I believe privacy policies should be written for end users, not for lawyers.

I tried to do the same with my other more complex app as well: https://lunar.fyi/privacy


That's a very refreshing attitude. I had heard of Lunar but was reluctant to try it. Lately my policy has been "the less software the better" precisely because of how predatory most software is.

But not your software, it seems!

Your privacy policy has made me think it would be a good idea to try and probably buy your app if it works for me.

Thanks for being one of the good ones!


Yep, paying rcmd user now. Glad to support 'one of the good ones' (and benefit personally from using it, no altruism here)


Love rcmd.

Once I started using it, I wondered how I had ever worked without it.


How does reversing WindowServer allow you to make rcmd control Stage Manager?

Figuring out which CG calls WindowServer is making?

Wouldn't these calls fail unless you have special entitlements?


This is about a work in progress for a version of rcmd that I intend to distribute outside of App Store. As you guessed, calling private functions is not allowed by the App Store review guidelines.

I want to add hotkeys for creating Stages with the needed windows, because as it is right now, mousing around to move windows is too slow for me.

I'm looking inside SkyLight.framework and WindowManagement.framework to find the right sequence of SLS* or CGS* functions that does what I need.


If this kind of thing interests you: late-2000's/early-2010's Apple had some fantastic documentation, but it's all hidden in the “documentation archive” (https://developer.apple.com/library/archive/navigation/).


Only available to us old timers really, as newer devs on the ecosystem don't even know what to search for.

I don't get how they messed this up.


My hunch is SEO combined with placing less responsibility on the programmer.

The new docs have a page per code symbol, and much more focused single-page topic articles, rather than 20-page programming guides. Docs seem to be structured to answer a question right when you have a problem, but then they don't offer a good way to study in depth before you begin.

The other day a younger teammate asked a group about some API documentation that said the results are undefined if you call this wrong, and had we ever heard of such a thing. And yes, of course, using framework code in unintended ways is always a garbage in, garbage out affair. This made me realize we have come a very long way in eradicating boilerplate and ceremony in code.

There used to be so many things you just had to use the way the docs explained, or else. Doing the boilerplate correctly was a primary focus of code review. Today we have ARC, we don't call alloc separately from init, we never handle KVO without the compiler's help, and we have no IBOutlets to forget to bind. To begin writing for iPhone 15 years ago, I had to study for a week or so with the programming guides, one for the language and another for the platform. Today you can do a one hour tutorial.

So, if you can stumble your way through building an app by typing the period key and seeing what methods are available, and the whole platform SDK exhibits Clarity at the Point of Use, would a newcomer even use the programming guides if they weren't down in the archive?


> would a newcomer even use the programming guides if they weren't down in the archive?

Yes, of course they would.

If you don't learn multiple different paradigms for acquiring knowledge, you'll be automated away by ChatGPT.

Natural-language query searches, and category design and grouping via visual hierarchies, are merely two forms of knowledge acquisition.


My best guess is that as macOS aged and became harder to keep backwards-compatible, their dev communication changed from "you must not rely on implementation details" to "you must not know about implementation details".


It's impressive how so many years after he left, the legendary articles by John Siracusa are still having an impact.


For newer macOS changes, Howard Oakley's blog is really awesome:

https://eclecticlight.co/category/macs/


I've loved reading blogs and microblogs of Asahi Linux developers, like https://social.treehouse.systems/@marcan, https://asahilinux.org/blog/, and https://rosenzweig.io/

They all have intimate insights into macOS internals from reverse engineering it.


The insights are amazing.

The presentation is horrible and I feel like I am wading through syrup trying to glean the odd morsel of info.


I think striking a balance between the two is quite difficult. Personally, I'd say I prefer the information being there in poor presentation over a shiny thing that doesn't work, but....


With a somewhat lower bar for accuracy, unfortunately.


If you don't already, I'd recommend listening to ATP, which he co-hosts. Still a joy to listen to.


Neutral and then ATP is what got me into listening to podcasts. Still a great one to this day!

Their interview of LLVM co-founder/Swift architect Chris Lattner is def a good listen for historical background: https://atp.fm/205-chris-lattner-interview-transcript


That's a great one. There's another one with Chris Lattner as well (https://atp.fm/371).


Oh right I forgot there were two!


Used to listen to it. Stopped because I cannot stand Marco Arment.


His tendency to give moralizing lectures does weirdly conflict with his tendency to talk about his several different rich-people hobbies, but I think he eventually managed to suppress it.


I am mostly fine with Marco but he introduces so much fluff and nonsense that I subconsciously avoid listening to ATP.


That seems to be in demand. They're sort of associated with relay.fm podcasts which I looked into and found that, while they're supposedly tech podcasts, they're actually just guys talking about everything that happened to them today.

(Oddest case being one with CGPGrey, who I think is a famous youtuber? But you wouldn't learn a single thing about this from listening to it. Instead they talked about some kind of planning notebook they invented for hours.)

I guess it's good if you want a parasocial relationship with a dad.


Again, I agree. But strangely not a lot of folks see it the same way.


As does his old podcast, Hypercritical. It finished almost a decade ago, yet even the early episodes continue to have interesting insights.

https://hypercritical.fireside.fm/

If you’re interested but don’t know where to start, “The Bridges of Siracusa County” is a classic.

https://hypercritical.fireside.fm/15


Man, I miss those 20+ page long Ars Technica reviews. They don't seem to publish that kind of stuff anymore.


The macOS Ventura review doesn't count?


Not really. It lacks the insight and the attention to small changes and under-the-hood improvements of Siracusa's reviews, in my opinion.


Strongly agreed.

Siracusa's pieces were all meat, no filler. The modern ones are trivial glances at the cosmetics: all filler, no meat.


If you want to go even deeper there's the book series by Jonathan Levin


http://newosxbook.com/home.html for the lazy :)

They are an updated and much expanded version of "Mac OS X and iOS Internals: To the Apple's Core" by the same author.

Which in turn was an updated version of "MAC OS X Internals: A Systems Approach" by Amit Singh.


Does anybody know why they are so hard to buy from outside the US?


International shipping of individual books has become very expensive. Within the US, media mail provides an inexpensive option for authors.


The loss of the bookdepository is a big one.

For some modern books you can get a digital copy.

I almost want to start a website dedicated to people bringing books on the plane with them when flying, but I'm sure that would light every single alarm bell the TSA has ever dreamed of having.


Oh, interesting. I still have Amit's book here on my shelf.

Though I see this author has also given up work on this project. And they are extremely expensive in Europe. At least Amit Singh's I could just grab on local Amazon. Levin's aren't available here.


There are also some amusing writings not included here about modern macOS changing things in a way Siracusa dislikes. For example, mandatory file extensions (written in 2001, when the Internet was a thing and file extensions were required for all file exchanges) [0] or the Spatial Finder saga [1] about missing an antiquated way to make a mess on your screen and pretend to be managing folders.

[0] https://arstechnica.com/gadgets/2001/10/macosx-10-1/12/

[1] https://arstechnica.com/gadgets/2003/04/finder/#prev-article...


He’s right about the spatial Finder. The Mac OS 9 Finder is still a better user experience. The OS X Finder only became marginally usable when they added the sidebar, so that you had some starting reference points; it was much improved with Spotlight, because you could then find things without spatial reference points.

But I still maintain that adding the NeXT browser view to the OS 9 Finder would be the perfect experience. The OS X Finder went down this awful .DS_Store-strewn half-assed path and it’s still just the weirdest set of choices imaginable.


I think modern "kids" (who are now in their early 40s, perhaps!) don't quite get how useful the spatial Finder was to non-technically aligned people. "I left it right here" is a huge piece of memory that our brains have developed over millennia, and the spatial Finder played right into that.

The "it's in the directory listing in terminal" or "spotlight will find it" or "what is a file" don't really compare, even if they can be made usable.


Funnily enough, learning how to computer from Chromebooks and tablets has made modern kids incapable of understanding the spatial metaphor at all. Their navigation is entirely search-centric.


These are great :) I'd like to do without file extensions again, but internet URL norms have now made that decision for all operating systems. But I'm still kicking Finder windows back into spatial mode three times a day.


macOS probably doesn't get enough love from an OS design perspective. Over the years I've had reasons to work pretty closely with the guts of all of Linux, macOS and Windows and macOS is probably my favourite, design wise, although for some tasks you can't beat the flexibility and feature set of Linux. A few highlights that are lesser known:

- XPC/Mach is a pretty reasonable IPC system that avoids the huge complexity of DCOM, whilst being more widely adopted throughout the platform than DBUS.

- Bundles are frustratingly ad-hoc in terms of detection and layout but the basic concept does work, and is better than the (rough, not really) equivalents on Windows and Linux.

- Launch Services does a pretty good job these days of letting apps be purely declarative whilst still keeping track of all the integration points (in the past it's been a bit rough).

- APFS is a very modern FS and the migration of the Apple ecosystem to it was so well executed it was barely even noticed. I can't think of any other platform that has managed to do a migration from one FS to another so smoothly.

- Ditto for CPU architecture migrations. Apple's skill at these has become rather legendary now. I remember a time when the computer industry assumed such transitions were simply impossible; Apple did it twice!

- The code signing infrastructure is rather complex but well thought out and flexible, even though Apple don't use all that flexibility in practice. They've chipped away at it over the years and by now it delivers concrete and real benefits to end users. In particular it enables apps to be kept somewhat separated even when they aren't sandboxed e.g. on Ventura apps can't tamper with each other unless you specifically give them that permission, and this works without any notion of admin/root escalation or installation, and that's true even if apps aren't opted in to the app sandbox.

- The way Apple live the UNIX spirit by having lots of little CLI tools along with man pages for those tools etc is pretty nice.

- The sheer quantity of APIs and frameworks that solve modern high level problems is well ahead of any other desktop platform by miles (e.g. the ML frameworks, data syncing).

- The way Apple have managed to slowly migrate code out of the kernel is well executed IMO. Likewise for SIP and their whole security/immutable OS posture, something the Linux world is still just experimenting with and Windows isn't even trying.

- Their integration API for cloud storage is solidly designed, albeit it's a pity they don't also offer a lower level FUSE-like API for experimental use cases.

There are problems too, of course. It's very weak that the OS has no way to keep apps up to date except via the store, which then introduces tons of other problems. Sparkle does a great job of patching up this gap, but it should really be a core OS service in this day and age. And the lack of any high level GCd/managed language for their platform continues to be a major weakness. You can have a smooth UI at 60fps whilst still offering GC, as Android has now proven, and of course there are large classes of apps where developer productivity matters more anyway. Microsoft always recognized that, and even the Linux community understood it, hence all the bindings into Python.


I agree with most of what you've said here, except:

- Code signing. It's incredibly frustrating as a developer to try to codesign an application to pass Gatekeeper, requiring multiple steps with very little information as to what's _actually_ going on or going wrong in the middle of it, including uploading your entire bundle to Apple and waiting an arbitrary amount of time for them to notarize it.

> It's very weak that the OS has no way to keep apps up to date except via the store, which then introduces tons of other problems. Sparkle does a great job of patching up this gap, but it should really be a core OS service in this day and age.

App Store _is_ the solution and a core OS service. You may not like the limitations that implies, but it is the OS level solution for keeping apps up to date.

> And the lack of any high level GCd/managed language for their platform continues to be a major weakness.

I think Swift fits this gap nicely, no? It's not technically GC'ed, but it is refcounted, which for the end-user case achieves the goal of "ignore memory management for small tasks". I like this blog post [0], which shows a desktop app in 11 lines of code.

[0] https://www.amimetic.co.uk/blog/swiftui-for-small-internal-d...


Automatic reference counting is very well streamlined into Swift. Even in more substantial projects, in most code I'm not putting much active thought into memory management at all. The main area where some extra attention is required is in closures, and there, in most cases, the main thing is to not strongly reference self, which is easy to avoid.
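The retain-cycle hazard behind that closure caveat isn't unique to Swift. CPython is reference-counted too, so the same shape can be sketched in Python, with `weakref` standing in for Swift's `[weak self]` (toy classes, purely illustrative):

```python
import gc
import weakref

class Controller:
    def __init__(self):
        # Strong cycle: self -> on_tap -> self (a closure capturing self strongly).
        self.on_tap = lambda: print(self)

class SafeController:
    def __init__(self):
        # Break the cycle by capturing a weak reference (cf. Swift's [weak self]).
        weak_self = weakref.ref(self)
        self.on_tap = lambda: print(weak_self())

gc.disable()  # rely on pure refcounting, as Swift's ARC does
leaked = weakref.ref(Controller())
freed = weakref.ref(SafeController())
print(leaked() is not None)  # True: the cycle kept the object alive
print(freed() is None)       # True: its refcount dropped to zero immediately
gc.enable()
```

CPython has a cycle collector to mop up the first case eventually; ARC does not, which is why the `[weak self]` habit matters in Swift.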

As for the compile times mentioned by the sibling comment, they're pretty tiny for small utility things, especially incremental builds. On more recent hardware (M1, Ryzen 5000, etc.) I find they're reasonable even for moderately sized projects… on an M1 Pro, a complex iOS app with somewhere in the ballpark of 30 different screens can be clean-built in under a minute, with incremental builds usually under 3s, which seems plenty reasonable to me.


A core OS service in my books is an API that can be used by any app, can be adopted in a backwards compatible way and which is used by the in-house apps too. The App Store isn't an API or service, it can't be used by every app and the built-in macOS apps aren't necessarily store apps. Yes, it's Apple's strategy but I maintain that from a tech perspective it's a very weak one.

Cryptography DX is always terrible on every platform; I don't know why. It seems to be an unwritten rule that crypto tools must have as many sharp edges as possible. But as far as these things go, Apple's infrastructure actually is effective and flexible. Notarization got a lot faster lately and is certainly way better for both devs and end users than client-side virus scanners.

Swift is a good upgrade, but not exactly a high level managed language. Most of those languages have very fast or non-existent compile times as well as GC, they tend to be simple-ish, perhaps they have optional types. The Swift DX is quite different to a C# or a Java for example.


> Yes, it's Apple's strategy but I maintain that from a tech perspective it's a very weak one.

Gotcha. I disagree, but I think that's ok.

> Cryptography DX is always terrible on every platform; I don't know why.

Let's Encrypt is a pretty good example of how simple it can be. The experience of codesigning + notarization as a developer is poor. If you want to use the GUI and are working from Xcode templates, it's fine, but apart from that, you're into gluing together forum posts and running binaries and tools with no logs, which report success after arbitrary delays, and which work on your local device but not on other devices.

> Swift is a good upgrade, but not exactly a high level managed language. Most of those languages have very fast or non-existent compile times as well as GC, they tend to be simple-ish, perhaps they have optional types. The Swift DX is quite different to a C# or a Java for example.

Swift has ARC, fast compile times, and a _very_ high level interface to the underlying API's. I think it's very comparable to C# on windows for many, many things.


Personally, my biggest gripe with the Swift DX, or anything DX related on macOS, actually is with Xcode.


AppCode or bust.


Sadly it doesn’t enable you to totally bypass Xcode. And JetBrains is sunsetting it: https://blog.jetbrains.com/appcode/2022/12/appcode-2022-3-re...


Code signing is coming for Win32 as well, as announced at BlueHat IL 2023.


I searched and found the slide:

https://msrndcdn360.blob.core.windows.net/bluehat/bluehatil/... (Pages 20-22).

Oh dear.


> Apple did it twice!

Thrice, in fact! 68000 -> PowerPC, PowerPC -> Intel, Intel -> ARM/Apple Silicon


XPC is great.

I don't really understand WinDev in regards to COM.

In the abstract, it is a very good OOP ABI and cross-language interop story.

In practice, one would expect that, given how much they have doubled down on COM for the last 25 years, they would have come up with something that isn't the worst developer experience of all the IPC systems invented so far.

.NET requires actually knowing COM at the C and C++ level, and as of .NET Core you also have to write IDL files manually, as they removed type library support from the .NET toolchain.

On the C++ side, there have been endless reboots, none of them really productive, with the exception of C++/CX, which was killed due to politics.

WinDev really loves their bubble.

Regarding GC at the OS level, many others have tried as well. The problem is having a company willing to throw enough money at the problem and steer developers to adopt the platform no matter what.

Which is actually quite surprising for Android, given Google's track record in killing products before they can establish themselves.

All in all, there is still a lot of NeXTSTEP in macOS.


> It's very weak that the OS has no way to keep apps up to date except via the store, which then introduces tons of other problems.

/usr/bin/softwareupdate [1]

[1] https://ss64.com/osx/softwareupdate.html
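For what it's worth, basic usage looks like this (flags per the man page; the installer version number is just an example):

```shell
# List updates available from Apple's update servers
softwareupdate --list

# Install everything that's pending (may require a restart)
sudo softwareupdate --install --all

# Newer macOS versions can also fetch a full OS installer by version
softwareupdate --fetch-full-installer --full-installer-version 13.4
```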


This updates the OS and related content.


> and related content

That is a very interesting if not a prodigiously strained hand-wave of a solution to GGP's complaint, "that the OS has no way to keep apps up to date," as Safari, Mail, Contacts, Calendar, Maps, Music, Photos, Messages, FaceTime, Notes, Preview, TextEdit, KeyNote, Pages, Numbers, GarageBand, Quicktime Player, Activity Monitor, Terminal, Console, Disk Utility, and hundreds of other bundled applications, and including the tens of thousands of BSD userland utilities, as well as the AppStore itself, are packages managed by the package manager called Software Update, a Preference Panel in System Preferences, which is the GUI front end for /usr/bin/softwareupdate.

There is no package manager that exists on any platform that manages every application there is, but between softwareupdate, AppStore, and the only professional and stable third party package manager for macOS that exists, MacPorts —these three package managers can be employed to keep well over 2 million distinct installed packages up to date.


You're calling my comment strained, but somehow missed the fact that all of the things you mentioned are shipped by Apple and updated basically together? The comment mentions Sparkle as "patching up this gap" so it's very clear what they meant: third party applications. Nobody is asking for updates for "every application there is", but for an API to update third-party apps. MacPorts, despite getting significant investment from Apple in the past, is now definitely a third-party solution and not a system API.


> You're calling my comment strained, but somehow missed the fact that all of the things you mentioned are shipped by Apple and updated basically together?

Again, you're handwaving and intentionally failing to acknowledge that Safari and GarageBand and all the other applications are applications, like any other that a team may develop and that a user may build from source, download and run via an installer, or drag and drop as a bundle; technically speaking, far more complex than most available applications, and very big programs to the non-technical. Yet in regards to the operating system, they are not in the remotest way "related content."

Safari, Music, and Photos are not related to the OS. They have nothing to do with the kernel, nothing to do with core services, nothing to do with directory services or AppKit; they're just applications. Apple bundles applications, and this is entirely irrelevant to the operating system "and related content." The bundled applications have nothing to do with the OS, which will keep chugging along whether they're there or not.

The salient detail here that has been lost on you since my initial comment is that softwareupdate manages a metric sh!tton of packages and keeps them updated with very little effort from the operator, like any good package management system should, and not merely "updates the OS and related content," an astounding simplification that misses the fact that it is a package manager, just like FreeBSD ports, just like pkgsrc, just like yum, just like RPM, aptitude, Ubuntu Software Center, Windows Package Manager, and just like MacPorts, AppStore, and Cydia, and hundreds if not thousands of other available package management systems.[1]

softwareupdate isn't just some cute Apple product, and Apple certainly didn't invent or pioneer package management; softwareupdate manages thousands if not tens of thousands of packages that have absolutely nothing to do with a computer operating system. It isn't called "macOS Update," it is called softwareupdate for patently obvious reasons.

> MacPorts, ... is now definitely a third-party solution

I could have sworn I stipulated it was third party. Didn't I?

> and not a system API.

What are you driving at with this straw man?

[1] https://en.wikipedia.org/wiki/Package_manager


Your commenting style reminded me a lot of a discussion I had earlier, which went on far longer than I had expected it to for stupid semantic quibbles that didn't even end up panning out to be correct. Of course I decided to go look it up, and it's none other than you: https://news.ycombinator.com/item?id=34589999

Here is some unsolicited feedback: you are unpleasant to interact with. You reply to people in an incredibly smug and self-satisfied way, except you often say things that are wrong, or at the very least significantly misunderstand context. When people mention this to you, you just dig deeper by doubling down and accusing people of intentional bad faith.

To be honest, your responses make me wonder if you are involved in the distribution of macOS software, because your views seem so distanced from the request at the top of this thread, which is so familiar to every Mac developer that they would recognize it immediately. And yet your initial response reads as if you went online, searched "macOS update software", and picked the first command line tool you found, without checking whether it matched the need being expressed. When I told you this, you fixated on the term "package manager" and decided to argue that softwareupdate is one, rather than understanding that I literally do not care what you call it.

I understand and accept that macOS's built-in software updater can update some of the things you've mentioned, although I disagree on the details of how you've presented it. But you haven't for a moment stopped to think "why did Sparkle come up in the conversation at all? Perhaps I am missing something by not addressing it?" Instead you ended up focusing on some definition by which OS updates and security patches constitute a "package manager", and decided to be condescending while you did so. Seriously, you don't get to call my point about a "system API" a straw man when it is responding to a need for a "core OS service".

I enjoy talking to people who know their stuff, and sometimes I will put up with jerks if I learn something new, even though I would rather that they weren't abrasive. I haven't learned anything from this conversation, and I am unsure you know what you're talking about let alone what we are talking about. The fact that you keep doing this is quite disappointing and honestly makes me not want to engage with you. I don't check usernames very often and might still do it by accident but if you keep this up and I suddenly stop responding to you it's probably because I realized who I was talking to and didn't want to respond anymore.


> Ditto for CPU architecture migrations. Apple's skill at these has become rather legendary now. I remember a time when the computer industry assumed such transitions were simply impossible, Apple did it twice!

Three times actually. Motorola -> PowerPC. PowerPC -> Intel. Intel -> Apple Silicon.

Edit: Oops this was already mentioned elsewhere in this long thread sorry.


It’s a bit confusing to say “read these in chronological order” since it’s not immediately apparent that the document is ordered chronologically. It would be clearer just to say “read these in order”.


Thanks for the note; I may tweak that. Does the Highlights section's intro, "These chronologically-ordered highlights…" help?

—gist author


Yeah I’d find that clearer.


I mean, it already does say what I quoted. Would you change it?


That's great!

Now, bonus time: Let's do something like that for SwiftUI.

<gets coat/>


I love the Swift Talk video series https://talk.objc.io. They've done a few reimplementations of SwiftUI components that helped me feel more like I understood those things.


I subscribe to them.

Well worth it.



Damn, I was hoping for a MacOS version of SysInternals.


Try Red Canary Mac Monitor; it does a lot (it's at least a decent equivalent of ProcMon).


Me too. I think the closest is Xcode Additional Tools.


I still have Amit Singh's huge book on my shelf here. Completely outdated now, of course. But it was always quite interesting.

It's nice to see someone else has taken on the task of bringing that kind of insight into the modern age. Because Apple certainly won't.


Stopping the reviews at Yosemite was a smart choice; every later version had noticeably more bugs than the previous one.


Does anybody know how to force sandboxing for a 3rd party app?


You can write your own sandbox profile if you're so inclined... but not easily, and you can't force it into a container in the Library folder, for example.
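As a sketch of what writing your own profile looks like: `sandbox-exec` is deprecated but still present, and profiles use the Scheme-like SBPL syntax. Everything below is illustrative (the target binary is a placeholder, and real profiles usually need many more allowances, e.g. for sysctls and Mach lookups, before a nontrivial app will launch):

```shell
# my.sb -- deny-by-default profile that still lets a binary run and read files
cat > my.sb <<'EOF'
(version 1)
(deny default)
(allow process-exec)
(allow process-fork)
(allow file-read*)
(deny network*)
EOF

# Run a third-party binary under the profile
sandbox-exec -f my.sb /path/to/SomeApp.app/Contents/MacOS/SomeApp
```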


sandbox-exec?


That’s super interesting.


Fascinating stuff, I've never really thought about what problem e.g. frameworks are solving. It's kind of too bad that in my day job my head's in the clouds and usually somewhat removed from most of these system level details.

inb4 it's just someone else's computer -- trust me I know


They’re just shared libraries.


Thanks for the blinding insight.


What they solve is that they're not just shared libraries: they include any other resources the library needs, like translations, images, and other data, plus, for developers, the header files.

But it works if you just treat them as shared libraries: just use -F on the compiler and linker rather than -l, and the headers don't need to be in /usr/include etc.

They are also versioned, so it's easy for two apps to have different versions of the framework.
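Concretely, the compiler/linker side of that looks something like this (paths and the MyLib name are illustrative):

```shell
# System framework: -framework resolves both the headers and the binary,
# with no -I/-L/-l flags or /usr/include involvement needed
clang main.m -framework Foundation -o main

# Framework in a non-standard location: -F adds a search path,
# analogous to -L for plain shared libraries
clang main.m -F "$HOME/Library/Frameworks" -framework MyLib -o main
```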


Minor clarification: two apps can embed different, incompatible versions of the same framework without versioning — the versioning is a NeXTism that allows the system vendor to ship a newer, binary incompatible version of the framework.

NeXT used it with AppKit for a few releases, but when they came to Apple they realized it would be impossible to support something like Aqua with the desired UX without having to update all the versions of the framework every release, which would defeat some of the purpose, not to mention exploding the QA matrix.
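For context, the versioning being discussed is baked into the bundle layout itself. A typical on-disk structure (names illustrative) looks like:

```shell
# NeXT-style versioned framework layout; the top-level entries are
# symlinks into Versions/Current:
#
#   MyLib.framework/
#     MyLib      -> Versions/Current/MyLib
#     Headers    -> Versions/Current/Headers
#     Resources  -> Versions/Current/Resources
#     Versions/
#       A/
#         MyLib          (the actual Mach-O dylib)
#         Headers/
#         Resources/
#       Current  -> A
#
# A binary-incompatible update ships as Versions/B and repoints Current,
# while old binaries keep linking against the Versions/A path.
```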


True for Apple/NeXT frameworks, or others used by a lot of apps.

I think versioning could work if your framework is not used by many apps; the link structure and version number are still there, but I have not tried it for 20 years.

The correct current practice is, as you say, to embed the framework in the app at build time, so you don't link to outside non-Apple frameworks.


You’re welcome! Most things are pretty simple :)


So the Window server is a memory shitshow for the past 20 years?

Today I learned.


Pixels use a lot of memory. Especially since they're effectively not compressible.
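To put numbers on it: a single uncompressed 32-bit window surface at 4K resolution is already tens of megabytes, and the WindowServer keeps backing stores for many windows at once.

```shell
# One 3840x2160 window surface at 4 bytes per pixel (BGRA)
bytes=$(( 3840 * 2160 * 4 ))
echo "$bytes bytes"                    # 33177600 bytes
echo "$(( bytes / 1024 / 1024 )) MiB"  # 31 MiB
```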



