Apple's use of Swift and SwiftUI in iOS 17 (timac.org)
346 points by ingve on Oct 19, 2023 | 221 comments



I loaded these all into a SQLite database so I could explore them with Datasette Lite (Datasette running in WebAssembly).

Here's the result:

https://lite.datasette.io/?sql=https://gist.github.com/simon...

You can use this to answer questions like which binaries are new in iOS 17 compared to iOS 16:

https://lite.datasette.io/?sql=https://gist.github.com/simon...

Or here's just the binaries in the /System/Library/VideoDecoders folder - it's interesting to compare iOS 1 (just H264H1 and MP4VH1) to iOS 17 (AppleProResHWDecoder, AppleProResSWDecoder, AV1SW, AVD, H264H8, JPEGH1, MP4VH8, VCH263, VCPMP4V) https://lite.datasette.io/?sql=https://gist.github.com/simon...

Details on how I built this here: https://gist.github.com/simonw/0b8a2ddaeab76fe407b08a2b20412...


Really nice, thanks for sharing!


What is interesting to me is the decline in C, from 52% in iPhone OS 1 to 5% in iOS 17.

I am a bit surprised to see Objective-C still dominating. It goes to show that transitioning languages is extremely difficult, even when you are applying the impetus to yourself. Consider the Python 3 transition in this light. Python 3 came out in 2008 and I don't think it could be said the transition was completed prior to 2020, so 12 years conservatively. Swift picks up in iOS 10 at 1%. So, perhaps one could estimate by iOS 22 some large majority of Swift in Apple codebases? And then maybe by iOS 30 Objective-C is down to 5% or less?


One thing to note about the data is that the way the author counts a "Swift" binary and an "Objective-C" binary is a bit misleading, as it dual-counts binaries that feature both, and most new binaries feature both languages. That's why the slopes of the two languages look essentially the same: because they're the same binaries.

By my rough count [0], the number of strictly Objective-C binaries increased by exactly 4 between iOS 16 and 17. The number of strictly Swift binaries increased by 9 [1], and the vast majority of binaries added use both languages. It's not clear from the data if there are substantial amounts of ObjC code in these binaries or if there are just one or two functions in each.

EDIT: The percentage tallies are calculated weirdly too—because they double- and triple-counted binaries that have multiple languages, they calculated all the percentages out of a total of 8723 binaries (for iOS 17) even though there are only 6030 in their data set. The correct percentages should be:

SwiftUI: 6%

Swift: 25%

C: 7%

C++: 17%

Objective-C: 89%

This is less satisfying because it doesn't add up to 100%, but the metric they used doesn't have any useful meaning.

[0] Using the following regex on the raw data files: / \|Objective-C\n/

[1] Using the following regex on the raw data files: / \|Swift(\|SwiftUI)?\n/


SwiftUI shouldn't be counted as its own language. It's a UI framework written in Swift.


Consider using a regex like “objc” or similar; very few places in the code will include “Objective-C” spelled out


I'm just relying on the raw data provided in the article, I have neither the time nor the interest in re-doing the analysis myself.


A lot of the low level code is working code in ObjC.

There is no reason to replace perfectly working code that is battle tested if the requirements don't change.

The fact that the new binaries on it are written in Swift is showing their 100% commitment to Swift, which is great to see. And why wouldn't you, Swift is a joy to work with, especially if you're from Apple :)


I really do enjoy working in Swift too, but the anemic cross-platform story is a colossal hurdle to selection as the primary development language in many commercial cases.

It seems to me that Apple's strongly insular instincts serve them well for integrated product design, but for the Swift language, alas, they form a severe hindrance to wider adoption.


If cross-platform is your thing 'fraid Objective-C isn't a great sell either.


We have other choices and end up using Apple-specific languages for bridge code.


At least two open-source compilers, several runtimes, several Foundations.

What more do you want?


Both lagging behind what Objective-C on Xcode actually means, and without the libraries that make Objective-C actually relevant.


I find that not to be the case in practice.


The vision of Swift as a truly cross platform, modern language Apple laid out in the initial announcement was genuinely exciting.

Unfortunately it seems like they didn't really mean it.


Hardly any different from going with .NET (a lot of stuff is still Framework-only), or C++ with OS specific SDKs like MFC/WinUI/...


They’re working on making Foundation cross-platform. When that is done, things should look a lot better for non-GUI applications.


Sure, but "working code" written in C is declining. So some choices are being made.

It fits in line with how I've seen Apple developers at conferences highlighting the memory safety aspects of Swift. They are maybe hoping to position it as a competitor to Rust on that front. I wonder how they feel Objective-C compares to Swift from a safety perspective. I wonder if that might influence decisions on what kind of stuff gets re-written in Swift.

I hope we can see further graphs like this to track what happens. I don't feel MS ever committed to C# in this way, but that is due to ignorance of the numbers. I would love to see all of that data for Windows and even macOS.


> Sure, but "working code" written in C is declining

Perhaps it’s just not increasing much since the value is in percentage and overall code size has increased dramatically.

I doubt they will rewrite Mach/XNU, for example, in the foreseeable future.


Can Swift even be used for kernel development?


In the recent talk about the new Swift to C++ bidirectional bridge [1] the presenter from Apple says that they intend to eventually use Swift for basically all their first-party code from embedded code on microcontrollers, to the kernel, libraries and apps.

There will be language subsets for embedded and kernel development, not all features would be able to be used at the lower levels.

1. https://www.youtube.com/watch?v=lgivCGdmFrw


It was Lattner’s vision, a language that would scale from scripts to kernel. Oh yeah, and one that’s easy for a beginner to learn.

I think that’s impossible; Swift should try to be a great language for building Mac/iOS apps and hopefully frameworks.

But what do I know. I wouldn’t think writing a new industry grade C++ compiler in a modular architecture would be feasible. Just use GCC. And yet here we are with clang and LLVM.


I think he's trying again with Mojo – just from the top (Python) down.


With a somewhat different definition of “kernel” this time.


GPGPU code has similar constraints to actual kernel code, so in a way it kind of overlaps.


He’s the only reason we are paying attention to Mojo. I have more faith in Mojo being a fast Python than in Swift being an all encompassing multi-platform language.


IMO such a language must be designed from the kernel and up to scripts, not the other way around. And it must be nested, like C and C++.


I believe you’d need a subset of Swift, there’s a thread about this on the forum: https://forums.swift.org/t/introduce-embedded-development-us...


Additionally, until very recently, there was no (practical) way to reach the performance potential of manual memory management in pure Swift: all heap management was done via Automatic Reference Counting (ARC). A series of "ownership" features have been getting integrated into the most recent versions of the language that let you avoid that runtime cost while retaining memory safety (at the expense of more complex code — the vast majority of code should still use ARC). That makes Swift much more suitable for lots of "systems programming" than it was just a few months ago. https://www.swift.org/blog/swift-5.9-released/#ownership
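
For a rough idea of what those ownership features look like in practice, here's a minimal sketch (assumes Swift 5.9+; UniqueBuffer is a made-up type, not anything from the standard library):

    // A noncopyable type: the compiler enforces unique ownership,
    // so no reference counting is needed to manage it.
    struct UniqueBuffer: ~Copyable {
        private let pointer: UnsafeMutableRawPointer

        init(byteCount: Int) {
            pointer = UnsafeMutableRawPointer.allocate(byteCount: byteCount, alignment: 8)
        }

        deinit {
            pointer.deallocate()
        }
    }

    // `borrowing` reads the value without taking ownership;
    // `consuming` moves ownership in, so the caller can't touch it afterwards.
    func inspect(_ buffer: borrowing UniqueBuffer) { /* read-only access */ }
    func finish(_ buffer: consuming UniqueBuffer) { /* buffer is destroyed here */ }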


Thanks, I had never read something about this.


Its reference counting is far too slow for kernel code.


> There is no reason to replace perfectly working code that is battle tested if the requirements don't change.

Are you familiar with Apple's MO? Take a perfectly good, working app and replace it with a total rewrite with half the features.


Genuinely curious to know which apps you are referring to.


Off the top of my head here are some notorious examples you can look up from the Jobs era:

On the consumer side, iMovie was probably the worst, not even being able to read projects made in the prior version. Not just the interface but the whole paradigm was different, only supporting a few features of the original program. I gave up on it.

iPhoto was another example: it had languished, but what came out had less functionality (no more external streams, for example) and simply threw info away (tags, face recognition, added metadata like location were all gone). Links to external tools were no longer available, though added later.

This happened to Pages and Numbers (& Keynote?) as well but I don't know if they count as consumer or pro apps. Consumer I guess.

On the Pro Side, replacing Final Cut Pro with Final Cut X (~2010 or 2011) pretty much drove video production off the Apple platform onto Windows. Like the iMovie case (I guess they didn't learn) it was a completely new interface without most of the features of the old version. I hear a lot of the functionality made it back in over the following decade, but you wonder why.

I think the loss of the pro production side in video (and replacement by Adobe) led to the abandonment of Aperture too. Apple announced that the consumer product Apple Photos was supposed to be the replacement -- what a joke!


I feel like this was a trend. Windows Movie Maker from XP was tremendously better than all of its replacements. I remember using that heavily when I was in middle school, making stop motions and stupid videos with my friends.


I swear. It was really really good. Wonder if we can still download it somehow


From Apple's perspective, they used to need that pro "creative" image to build their own brand. Their advertising, their products, were all about getting that brand.

But now, the Apple brand has its own status, independently of any particular group. Their products have gone the same way.


There can be no clearer example than iTunes's downgrade into the awful Music and somehow even worse Podcasts. On the plus side, it has invigorated the 3rd party space.


I was never a fan of iTunes but at least I could usually wrangle it to do what I wanted. Music, on the other hand, is simply horrendous. I have never cursed the Apple ecosystem as much as when I try to put music on my iPhone.


I think it's more that they are not focusing on their previous niches in the laptop/desktop world.

There are lots of those apps where they enhanced the generic user's ease of use, while maybe not maintaining features that weren't used.

I wouldn't be surprised if their apps are instrumented to work out what features are needed or used.

Their apps are now effectively cross platform across all of the devices you have, so they have focused on making that more seamless, while, of course, trying to hook you into Apple One to get the shared storage you need in iCloud.


This isn't unique to Apple I'm afraid, so many software projects are like that.


Working code can hold unknown bugs, including those Swift could help prevent.

There are reasons to rewrite it, though I’m sure it will take a long time and they’re prioritizing risk vs reward.


"can" being the key word; rewriting code in Swift because the old code "can" contain bugs that swift "could" help prevent is wishful thinking. Working code is proven code, tested, reviewed, optimized, fixed, analyzed, and has run trillions of times. Rewriting it to another language because it "can" hold unknown bugs that may not have been discovered yet is a weird flavor of hubris, similar to people seeing Rust as the one language that will fix all software bugs, amortizing the cost of rewriting and dismissing the existing code.


Rewriting code can equally introduce bugs. Maybe it’s worth it for things with significant attack surface - but certainly not everything qualifies.


Prioritizing expense and risk vs nebulous reward


It’s not nebulous.

There are kinds of bugs that Swift can provably catch that Objective-C can’t.

There are bits of safety that Swift can ensure without needing to do additional boundary checks or something else, leading to faster/smaller code.

Perhaps biggest of all Swift doesn’t need a runtime like Obj-C. Apple has done an amazing job tuning it, but a direct function call is always going to be faster than message passing.


Nebulous is exactly what it is, because despite what you think:

1. The empirical evidence does not support the notion of fewer bugs, even for static typing, never mind Swift

And no, this is not from lack of trying. People have tried to show this time and time again. And time and time again they end up not showing it, with effect sizes tiny and pointing in both directions.

And as an anecdotal example: which way has Apple software quality gone since they started adopting Swift and SwiftUI? System Settings anyone?

2. Swift is slow, not fast

I did fairly extensive research on this for chapter 9 of my macOS/iOS Performance book. Even in those cases where I fully expected Swift to be faster, which were not numerous, it managed to be slower. And there are really egregiously bad examples, like Swift Coding, which despite the overwhelming evidence to the contrary some people still believe to be fast.

Why do they believe it to be fast? Because so much work can be done in the compiler. So it must be fast. But it just is not. In fact, it's ludicrously slow. This is probably lesson #1 from performance optimisation: things are not fast simply because you did something that you believe to be fast. You need to measure to figure out whether they are fast and in order to figure out how to make them fast.

(And those mistaken beliefs actually lead to slow code, because you already did the thing that makes it fast, so there's no need to check or optimise. I am pretty confident that a similar effect happens with static typing and safety.)

Or check out the computer languages shootout. Swift tends to be somewhat on par with Java. Which is not particularly fast.

3. Swift needs loads of runtime

For example, due to the somewhat bizarre idea of having protocols be polymorphic over (pass-by-value) structs and (pass-by-reference) objects, Swift has to dynamically dispatch the copying of individual arguments to a function when protocols are involved. If you're both lucky and not using modular code, the compiler may be able to figure out how to elide that dispatch. But it won't tell you if it managed to figure it out or not.
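
A minimal sketch of the situation being described, with made-up types:

    protocol Shape {
        var area: Double { get }
    }

    struct Square: Shape {           // pass-by-value conformance
        var side: Double
        var area: Double { side * side }
    }

    final class Circle: Shape {      // pass-by-reference conformance
        var radius: Double
        init(radius: Double) { self.radius = radius }
        var area: Double { .pi * radius * radius }
    }

    // With an existential parameter, the compiler can't know statically whether
    // copying `shape` means copying struct contents or retaining a class
    // reference, so the copy goes through the type's value witness table at
    // runtime, unless specialization manages to elide it.
    func describe(_ shape: any Shape) -> String {
        "area: \(shape.area)"
    }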

Oh, and speaking from 30+ years of experience: message passing simply is not the bottleneck in 99%+ of the cases, and if it is it is fairly trivial to remove.


Do you have a link to your perf analysis? The only mentions I could find in your profile links were over three years old, and used @objc Codable objects.


The most detailed analysis is in my book:

https://www.amazon.com/gp/product/0321842847/ref=as_li_tl?ie...

The Codable stuff was this:

https://blog.metaobject.com/2020/04/somewhat-less-lethargic-...

As shown and discussed in the series, @objc is not an issue. Do you have reason to believe that Swift performance in general or Codable in particular have improved since?

I mean, you can run these sorts of tests yourself, and you probably should. It's not that hard.


No, I’m just curious to learn more. And it’s relevant to my work.

Is your book available on Apple Books or some other electronic format besides kindle?


What is nebulous is the actual return on the investment of time and money, just the same as it would be in any rewrite of any code ever, when the same resources could be allocated differently. The possibility that some hypothetical bugs might be avoided isn't a compelling argument for spending dollars and time on doing the work, if it's measurably doing its job now.


While this is true, your arguments are mainly in favor of writing new code in Swift, which I agree with. What you're missing is selling and convincing people of rewriting existing, working code to a different language, which is a nontrivial investment and a decision that shouldn't be taken lightly.


> Perhaps biggest of all Swift doesn’t need a runtime like Obj-C

Swift most certainly has a runtime: https://github.com/apple/swift/tree/main/stdlib/public/runti... And most or all of it is written in C++, not Swift last I checked. Whenever you see a `_swift_fooBarBaz` symbol in a stack trace, that's the runtime.


Swift doesn't need a runtime, but if it runs alongside Objective-C code, doesn't it? On iOS, wouldn't much of what Swift is doing be calling down into Objective-C? In other words, I think that unless you get rid of all of the Objective-C in iOS, Swift will still be sitting on top of a runtime.


Swift does need a runtime. The language design is kind of accepting of implementation complexity in a way that interpreted languages often are but compiled languages usually aren't, so there's a lot of implicit heap allocations.


> Working code can hold unknown bugs

Working code has also squashed a huge number of past bugs though.


> the new binaries on it are written in Swift

It appears to be 50:50 with Obj-C for now. See how the Obj-C line in the "Evolution of the programming languages" chart is still growing steadily. I can see Swift taking the majority of new code for the next year though.


Most of the new code written in Objective-C tends to be related to Metal, or new features being added to existing Objective-C frameworks like UIKit.


The reason is that those battle tests showed time and again that the code is vulnerable, so it's not working perfectly.


I think that is a case where the percentage chart is deceptive.

My take is that C has probably remained steady — all the new binaries in iOS are made up from the other three.


There is a chart for that as well and it is as you say. In iOS 17 the number of binaries using C even grew a bit.

For C++ the number has been growing steadily with every release, but it went from 14% in iPhone OS 1 to 12% in iOS 17.


"What is interesting to me is the decline in C, from 52% in iPhone OS 1 to 5% in iOS 17."

Take this all with a grain of salt, as a lot of C++ code is really just C with Classes. Also, if I remember correctly, you can wrap Objective-C in C++ or vice versa.

"Consider the Python 3 transition in this light. Python 3 came in 2008 and I don't think it could be said the transition completed prior to 2020"

To be fair, yes, Python 3 came out, but there were still a lot of Python 2 libraries that had to be migrated to 3, so for most people it didn't make sense to use 3 when it came out.


> What is interesting to me is the decline in C, from 52% in iPhone OS 1 to 5% in iOS 17.

Note that most of this change is between iOS 1-9, where Objective-C rose from 34% to 70% (+36) and C declined from 52% to 14% (-38).


Yes, but how much of that is a simple rename from .c to .m? That changes the language with no need to change the code.


Herb Sutter has a talk where he makes a pretty convincing argument that making a backwards incompatible change in your language costs you about 10 years in adoption


Something not noted in the article is that any binary which links Foundation, even if otherwise written entirely in Obj-C, now uses Swift. This is due to the Foundation Swift rewrite effort (https://github.com/apple/swift-foundation) that is rewriting Foundation in Swift while maintaining Obj-C ABI compatibility.


I don't believe the new Foundation is shipping with any released versions yet, is it?


Yes, it has shipped this year, even in the partial state. https://developer.apple.com/wwdc23/10164?time=1187


Oh, very cool. I’d missed that somehow. Thanks for the link!


What's the downstream effect of that?


Bit better performance, fewer bugs, updated implementations.


For some things, like JSON decoding and anything to do with dates and times, performance should be quite a lot better.
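
For context, that's the standard Codable/JSONDecoder path (the Person type is just an illustrative example):

    import Foundation

    struct Person: Codable {
        let name: String
        let age: Int
    }

    let json = Data(#"{"name": "Ada", "age": 36}"#.utf8)
    // Per the parent comments, this call path benefits from the Foundation Swift rewrite.
    let person = try! JSONDecoder().decode(Person.self, from: json)
    print(person.name)  // "Ada"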


Sorry, I should have clarified I meant specifically having Swift in ObjC binaries.


I think it's more interesting than anything else. As far as I know Swift won't pull in a heavy runtime or anything like that


I think Swift does pull in a runtime, but the runtime is included in iOS itself so apps and users don't really pay a cost. It can just link to whatever’s on disk.

Prior to some Swift version — Swift 4? don't remember — iOS apps that used Swift did have to bring their own runtime, and iirc this was because Swift was evolving so fast that Apple wasn't willing or able to commit to including a bunch with iOS.


> ... iirc this was because Swift was evolving so fast that Apple wasn't willing or able to commit to including a bunch with iOS.

Not just that - they wanted to have language features for stability and migration which they hadn't fully fleshed out yet and implemented.

Now, Swift is part of the OS itself - which has good and bad sides, because policies to support older versions of iOS also mean you can't fully leverage all the new language features.


Shame they didn’t look at the “Fitness” app. It’s first party but requires a separate download. Probably because it’s some of the worst quality software I’ve ever seen.

Its UI becomes desynchronized and self-inconsistent in ways I wouldn’t have believed are possible. The very fact that it exists and is capable of presenting as incoherent a UI as it does is damning evidence against whatever UI framework it uses.


SwiftUI exists to solve exactly this problem - the ui is declarative and state driven.
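
A minimal sketch of the declarative, state-driven style being described (illustrative, not from any Apple app):

    import SwiftUI

    struct CounterView: View {
        @State private var count = 0                // single source of truth

        var body: some View {
            VStack {
                Text("Count: \(count)")             // the UI is derived from state
                Button("Increment") { count += 1 }  // changing state re-renders the view
            }
        }
    }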

However, its predecessor UIKit is mostly imperative and it takes a lot of manual code to keep the UI reflective of the underlying data model. For this reason I find many iOS apps, and especially Apple’s own, to be always mildly broken.

Programming in UIKit is like programming in JQuery, or maybe Backbone at best.

SwiftUI brings us to the modern age of react (but with less capabilities and more bugs)

iOS developers, though, tend to be pretty resistant to such “modern” paradigms. If you read the rest of the discussion on this post, you will find that even using Swift, let alone SwiftUI, is still a very debated issue.

Maybe the grass is greener, but I don’t find the same resistance for new ideas in the web (typescript) or Android (kotlin) communities.

I do wonder if this resistance is coming from an objective evaluation of the new tech, or the lack of desire/time to learn something new.


As a mobile dev, so far the biggest issue keeping me from adopting SwiftUI stems from its greenness and UIKit being so well fleshed out.

The newness is an issue because new SwiftUI revisions ship with new iOS releases, which means that big chunks of it are gated by the oldest iOS version you support. Jetpack Compose on Android gets this more right since it’s independent of the OS, but suffers from other tradeoffs (Java ecosystem and the rest of Android dev gives me a headache sometimes).
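
To illustrate the version gating (these are real SwiftUI APIs; which ones you have to gate depends on your minimum deployment target):

    import SwiftUI

    struct RootView: View {
        var body: some View {
            if #available(iOS 16.0, *) {
                NavigationStack {            // newer navigation API, iOS 16+
                    Text("Hello")
                }
            } else {
                NavigationView {             // older API kept for earlier iOS versions
                    Text("Hello")
                }
            }
        }
    }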

SwiftUI is also just missing various things that are present in UIKit, and so if you’re using those things it’s easier to write the whole app in UIKit instead of bridging those controls to SwiftUI.

I absolutely foresee going SwiftUI exclusive but realistically that’s still a few years down the road.


It's sort of dismaying that SwiftUI is regarded as a superior native choice as opposed to second-class cross-platform frameworks, and then to see examples like the macOS System Settings app. Though perhaps that is more of a case of poorly-executed Catalyst iOS to macOS porting, and poor design, than a lack of "nativeness" with SwiftUI.

The framework lacking consideration for navigation until recently shows that its rollout has been half-baked, though.


The problems with the macOS settings app have just as much or more to do with its design as they do with SwiftUI. If they had instead built the redesign with AppKit it’d feel a bit smoother in some areas but would be just as flawed. Just making it more traditionally Mac-like would go a long way.


I've heard a lot of disgruntlement from macOS developers about SwiftUI. I wonder if the framework has an inherent bias for iOS and maybe iPadOS? Again, it's crazy that after years of disparagement of kludgy cross-platform frameworks (albeit more from the fans than the company itself), Apple basically went ahead and built their own, with fewer supported frameworks than the third-party alternatives.


SwiftUI definitely has an iOS/mobile bent to it, no doubt because iOS is its flagship platform. macOS no longer has the corporate gravity that it enjoyed prior to the iPhone, when AppKit received most of its post-NeXT development.

This problem isn’t exclusive to SwiftUI though, WinUI/Windows App SDK is also mobile-flavored likely due to its UWP heritage, lacking basic desktop widgets like a tableview/datagrid.


> This problem isn’t exclusive to SwiftUI though, WinUI/Windows App SDK is also mobile-flavored likely due to its UWP heritage, lacking basic desktop widgets like a tableview/datagrid.

Hah, it's far worse than that - Microsoft has let their desktop DX story stagnate for 15 years now (WPF was launched in 2006), since then Microsoft hasn't launched any new desktop-first UI framework for Windows, nor offered more than token improvements to User32, CommonControls, and WinForms since then.

It's no lie that everything MS has done in the UI-framework space since Windows 7 in 2009 has been a waste of time and money. It started-off with the "Metro" Windows Phone reboot, then the shoehorning of that into Windows 8, and the various XAML-derived frameworks since then - and none of them have attempted to tackle the very fundamental flaws (declarative data-binding doesn't scale, INotifyPropertyChanged breaks causality tracking and cannot be unit-tested, mutable ViewModels were carved by Lucifer himself!) - and as you said, no care or attention is paid to applications needing high information-density display.

----

...so while all this is going on, the Office org came up with its own in-house GPU-accelerated UI introduced in Office 2013 which we can all agree is slow, bloated, glitchy (y'ever used Excel with a 500Hz mouse?), but also proprietary and undocumented, so the wider Windows ecosystem can't benefit from the Office org's framework which would have otherwise (almost) neatly filled the gaps left-behind by the Windows org.

I just want Satya to hire me for the job-title of "VP of Consistent User-Experience" and I'd make it a top-priority that Windows itself comes with a reusable spreadsheet+datagrid component that all applications can use - and it wouldn't cost the company more than a year and a few million dollars - but save billions by avoiding lost developer confidence - and would serve to remind everyone that native desktop UX can always be better than browser-based UX.

---

Sorry, am ranting.


MS UI-framework story is essentially over. From now on they are all about converting all UI to ReactJS or some other flavor of it. This will include the whole MS Office suite.

It is all about financialization of software so art and craft wouldn't matter. End goal is simply cloud desktop with integrated AI/ChatGPT user interface which can be charged per user / per month basis.


> ...so while all this is going on, the Office org came up with its own in-house GPU-accelerated UI introduced in Office 2013 which we can all agree is slow, bloated, glitchy (y'ever used Excel with a 500Hz mouse?), but also proprietary and undocumented, so the wider Windows ecosystem can't benefit from the Office org's framework which would have otherwise (almost) neatly filled the gaps left-behind by the Windows org.

That's funny, just a few hours ago I was looking at some pics of Office 95 running on Windows 95 and reminiscing about how Office's menu bars and toolbars looked/worked differently than the system standard ones despite both being presumably worked on in parallel. Some things never changed.


Well, yeah - Ever since Office 95, the Office UI has always been different to the base OS in some way or another (I think Office 97 was the closest it has ever been to the "stock" Windows UI), but not outrageously so until Office 2007 came out. Office 2010 is my personal favourite edition because it was the last edition that had a coherent visual design with a fast and snappy UI that didn't require gigabytes of RAM and a GPU to run with reasonable performance, even if Office 2010 cemented the fact that the high degree of end-user software customization we used to enjoy was now on the way out: now all we can do in Office is choose either a Light or Dark scheme - and pick from a limited selection of tacky-looking ribbon background images as though we're picking out a new Trapper-Keeper for the new school year.


I don't think Office 95 had a different UI than the rest of the system. The only thing I remember being different about it is the title bar where they introduced the gradient that was later picked by Windows 98 and 2000. The menus and toolbars looked and worked the same as other Windows 95 applications.

You may be thinking of Office 97, which introduced the "flat" look for the toolbars and the animated sliding menus.


I believe you’re right, thank you!


It's funny the most "consistent user experience" I have day-to-day is on GNOME.


The rumors I heard were that SwiftUI was originally meant to launch for watchOS first, to fill in the (sizable) gaps with the UI framework there.

I suspect even though it was always meant to be a cross-platform UI framework, the initial layout system was designed more toward composing and filling a smaller fixed space than for dealing with large resizable windows.


I think flutter is emerging as a compelling alternative for cross platform desktop development. I’m building a desktop app in it now and so far the experience has been quite positive.


Flutter is also mobile-skewed unfortunately, putting it in the same camp as WinUI if you need more classical desktop widgets. Third party widgets can fill this hole in some cases, but for me that defeats much of the point of using a pre-existing UI toolkit…


Except it uses a language no one knows, unfortunately. I considered Flutter, but fat chance finding contractors who know Dart.

We ended up building a cross-platform desktop app in Qt and QML, and it works great.


Dart takes like a month, if at that, to learn. Its mundanity makes it interchangeable.


I believe you. But try selling that to management. And when the language isn't used for any of the other software maintained by the company and nobody is going to be given time to learn it to keep the knowledge in-house... forget it.


I think your overall velocity with Flutter is going to be much higher than with Qt. The dev experience is so much nicer.

You are going to have to be willing to stick your neck out a bit though, that’s true.


I wish they'd used Kotlin. The endless proliferation of languages from Google devalues each one.


Might also be because Cocoa on the desktop had many problems solved already that are being solved by SwiftUI on mobile now. For example bindings to keep views in sync with a model.


I’ve been writing code for Apple platforms for over 15 years at this point and I still occasionally find myself surprised with how far one can take a Mac app with little more than Interface Builder and Cocoa bindings.


Settings is not a Catalyst app; it’s a poorly designed full rewrite.


Well that’s just inexcusable.


I agree that navigation in a pure SwiftUI app is annoying, but I don't think that was short-sighted. It's just iterative design.

At the start SwiftUI was something you could add to an existing UI/AppKit project, so individual parts could be rewritten. They've been building on top of that since, refining the API and slowly letting you write more and more with just SwiftUI.


The implementation seems to encourage that "leaf" views have to be aware of parent views, or what context they're in. I used to work with a guy who'd always say, "maybe I'm holding it wrong," that's how I feel about the navigation in SwiftUI. Maybe I need to see a clean example to get it.


I tried building a fairly basic app with SwiftUI and at this point I suppose I’d say I gave up. I got maybe 90% of the way fairly easily, but the final 10% seemed like it would be misery given the documentation and the issues I was encountering.

There’s a ton of potential there and I’m looking forward to having sufficient APIs and documentation to work efficiently with it. At this point it’s kind of painful for a hobbyist like me.


I'm really surprised at how little progress SwiftUI has made in 4 years; I hope it'll get better. My biggest problem with it is that often as soon as you need to do something slightly complicated it becomes a huge nightmare. It's almost always just easier to use UIKit.


> iOS developers, though, tend to be pretty resistant to such “modern” paradigms. If you read the rest of the discussion on this post, you will find that even using Swift, let alone SwiftUI, is still a very debated issue.

Resistance to SwiftUI has more to do with the incompleteness and bugginess of the framework than a refusal to embrace modern paradigms.

A common impression of SwiftUI is that it makes hard things easy and easy things hard.


>brings us to the modern age of react

Thanks, I vomited a bit into my mouth.


I’m a web developer and I hate React and TypeScript. I love jQuery.


It's hard to make client-side webapps, and using a procedural programming paradigm just makes it harder than it needs to be, but using jQuery for anything else is exactly why it got popular.


What’s hard is making up languages that compile down to JS/HTML/CSS, debugging through source maps, having 50k dependencies just to create a view to choose between 3 shirt sizes and 2 colors.

What’s not that hard is managing a bit of state and writing sensible CSS to make it reusable.


As someone who worked in a decently sized company working on a software product written with TS/HTML/SCSS (at least we had those) and jQuery: no, absolutely not.

As the apps get big, state becomes a mess, fully reusing code is very hard, and making anything even slightly reactive becomes not only a lot of code, but a jumbled mess of mutability, usually copy-pasted from somewhere else. I will take any other type of app over that - I don't care if it's angular, vue, react, next, whatever.

So I contend that not only is "managing a bit of state and writing sensible CSS to make it reusable" very hard, I haven't even seen it done ever, at least in my personal experience of the code I touched.


Yes, exactly. The requirements of most functional production applications on the web already have so many complicated bits to get right, people shouldn't be burning their energy on what end up being inane details like synchronizing UI with state.

At a much much smaller scale however, jQuery and CSS or in many cases just HTML and CSS will do just fine. Knowing which approach to take is the mark of someone with some level of experience above junior.


> people shouldn’t be burning their energy on what end up being inane details like synchronizing UI with state

Instead they burn an insane amount of energy (and unfathomable bandwidth) inventing, learning and debugging crazy abstractions and build systems (that change every year or so) on top of the native platform that is the web.

It really feels like a collective bad trip that I hope the industry wakes up from, eventually. But after more than a decade of this insanity, I’m not holding my breath.


Those are also hard, but supportive tooling that makes the process run more smoothly is part of the decision-making around which tools to choose for the requirements of your contract.

Sometimes you choose hypothetically difficult debugging if it eliminates the need to deal with the mechanics of the lowest level of interacting with a system. If you haven't needed to do that, then you'd be inclined to think it's an unlikely situation to be in.


They hated him because he spoke the truth


What inconsistencies are you referring to? I don't think I've had any desyncs or inconsistencies in the Fitness app.


The “Sharing” tab is particularly atrocious, the circles in each list widget frequently do not match the rest of the data in the widget (or indeed seemingly any data at all). This is resolved by scrolling the particular broken widgets out of view then back in. It would seem somewhere in the pipeline the widgets are templates/virtualized and the logic to fill a given widget with the right data goes completely bonkers at times and needs to be reset by hiding it then showing it again.

Besides that, many of the “Awards” views frequently show incorrect/ancient/impossible data. For instance I recently saw “You will earn this when you reach your move goal 100 times. You’ve reached your move goal 102 times so far.” with the badge not unlocked. The number sounds only slightly off, but keep in mind that’s 3 days of stale data. I saw similar bugs with the “Perfect Week” award and “Longest Move Streak” one. (At one point I had a move streak of 21 days, the badge said it was 14 days, and the perfect week badge was still not unlocked. You’ll note any 14 day period must contain a full Monday-Sunday period, not to mention in reality my streak was 3 weeks).

The daily summary view is admittedly fine. Though the refresh interval is something like 15 minutes which is absurd for a pedometer. Not sure if the app or the API is to blame there.

Slight aside, but not really: the settings/Account modal is impossible to discover, and when you do find it, 5/6 menu options have a reasonable flow where you can click to go into the detail view, then hit back to return to the main modal (or exit entirely). The exception is the “Change Move Goal” detail, which can only be exited entirely; it is impossible to return to the Account modal directly.

I’ve never used “Fitness+”, their paid offering, but I assume the quality there would be about as bad as everywhere else.


Health app is not much better unfortunately.

My weekly and monthly sleep averages look about right. The 6 month average is off by about 1.5 hours.

Funnily enough, it’s been off by that amount consistently for at least half a year now.


Apple needs to start polishing their UI again all the way from the start.

Preferences is such a mess. Many apps are simple backports of touch-only apps.

I don't think Apple requires all developers to have read the HIG.


Agreed. So many settings now are hidden away 5 levels deep and not easily searchable. Why is updating apps hidden away two thirds of the way down in some profile section in the App Store?


It's not just that. Search for "shortcuts" for example. Or something with networking. You can't click half of the items, some items are inaccessible from normal navigation. The whole thing is a mess and has no structure, it's basically a tag cloud of random items.

vpn -> The menu makes NO difference in all options in the search results.

performance -> on an M1, it takes about a second to switch each pane.

Looking for a way to change the background image on your desktop? Better remember to type wallpaper.


That’s because the new preferences app on macOS was rewritten in SwiftUI which had known performance issues


actually that one makes sense since apps automatically update during the night


Really look forward to this every year, Timac is a legend. The rate of SwiftUI adoption and decrease in UIKit-only binaries feels significant. Thanks!


On the same subject, some talks from Apple folks,

"Introducing a Memory-Safe Successor Language in Large C++ Code Bases"

https://www.youtube.com/watch?v=lgivCGdmFrw

"Swift as C++ Successor in FoundationDB"

https://www.youtube.com/watch?v=ZQc9-seU-5k


I really enjoy working with Swift. (Although I'm old school and enjoyed ObjC just as much.) I wish I had more time to work with it. React Native is stealing all the fun!


I learnt Swift several years ago for an app project. Haven't really used it since.

My impression is that it is useful only in the Apple ecosystem. Is that correct? Is it worth learning for things other than iOS and macOS applications?


One takeaway from these charts is that the great majority of binaries in iOS are still written in Objective-C.


Objectively, a full rewrite has rarely led to anything good.


Yeah, but the number of Objective-C binaries is still increasing.


That could be because many apps that are mostly written in Swift are still using Obj-C frameworks.


Or core code is written in another language that’s cross-platform and the UI is done in AppKit.


We're talking about iOS binaries supplied by Apple here, so cross platform support isn't hugely important.


In the context of Apple and Microsoft, cross-platform means across their own platforms, macOS, iOS, iPadOS, watchOS, and Windows x86, Windows ARM, Windows x64, XBox, Azure managed apps.


I (objectively) see what you did there


Having written ObjC since 2009 or so, honestly it's a fine language, and although I wrote a fair bit in Swift, I don't really see it as a significant improvement over ObjC, which is Good Enough™ to keep using. Something has to cause serious friction to be replaced with something significantly better, and ObjC/Swift just don't fit that pattern.


Yup, Objective-C's object system was really flexible and powerful to use. Dynamic mixins, swizzling, etc. were all useful tools for me. Having messages as first-class citizens was probably the most important part to me. It helps make the object system expressive enough that I don't remember writing many design patterns ;)

But seriously though, CLOS and Smalltalk-style OOP is probably the only flavor of OOP I really enjoy to use, and Objective-C gets you way closer to that than C++ and Java do. (e.g. the way KVO is implemented relies on "isa-swizzling", or dynamically changing classes at runtime)


Java is more like Objective-C and Smalltalk than C++. It only took the syntax from the latter; the semantics and dynamism are from the former and reflect the authors' experience with Objective-C frameworks at Sun.

Even JavaEE was initially born as an Objective-C framework, Distributed Objects Everywhere.


It's a false dichotomy to dislike OOP or prefer it. It's like saying I prefer hammers over screwdrivers. Just learn how the tools you have should be used and use them well.

The only app I'm currently maintaining and proud of[1] makes tons of use of "traditional" OOP. It uses lambdas and FP when necessary. I think it makes absolutely no use of JavaScript's dynamic features. I'm fairly sure this code would port easily to ObjC.

After 15-20 years, you just get bored of doing things in novel or "pure" ways, and do the bare minimum needed to get the job done that's in front of you.

[1] https://github.com/sdegutis/immaculatalibrary.com


I am not sure if you understood my post. I am in no way saying "OOP is bad in general" or even "OOP is good in general". What I am saying is "I strongly prefer Objective-C's object system over that of other languages." Then I provided examples of other object systems I liked, and how Objective-C feels close enough to them that I don't miss them when writing Objective-C.

Maybe saying "flavor of OOP" was too vague, but I am talking about implementations of object systems, not the (ill-defined) notion of OOP.

Using your analogy with hammers and screwdrivers, my post is less "I prefer screwdrivers over hammers" and more "I prefer screwdrivers with bit holders over screwdrivers without bit holders"


There’s still many people who regard OO in Objective-C as “purer” OO than, say, Java (or something like “the correct way”, whatever that means). I think that’s what they were referring to.


Oof, hard disagree. I absolutely hated writing Objective-C for years– I felt like I had to write unnecessary 'glue' with header files, handling of 'nil' was always jarring, and square brackets at the start and end of every call felt horrendous, to me at least.

I relished the day Swift was announced, and have been using it ever since.


Agree -- my experience with Swift is that it's far more readable and closer to my personal aesthetics than Obj-C, but I actually struggled a lot figuring out how to write things the way that felt intuitive to me (ex. maintaining an observable global state + config that you can access from anywhere, easy declaration of and access to arbitrary/deeply-nested associative array keys, JSON handling, declare-once-use-anywhere icons and colors, that kind of stuff).

Once I had all the convenience guts in place, writing actual functionality has been a delight though (outside of the overly-verbose let/guard and type casting)

That said, I'm pretty sure I'm also probably just hard headed and doing it wrong, and could've learned the accepted patterns/methodologies lol


I always thought the square brackets were clever. Like wrapping a letter in an envelope – which is a great metaphor for the message sending the syntax denotes.


> I felt like I had to write unnecessary 'glue' with header files

Headers are a feature, not a bug. They're the API. They help document the API and also keep it separate from the implementation.


Objective-C programmers say this, but I note that I've never once heard a Swift developer complain that it's too hard to discover API interface or keep things non-`public`.


> never once heard a Swift developer complain that it's too hard to discover API interface

Xcode presents the equivalent of a “header” when you follow a symbol to a framework you don’t have the source for… it’s a Swift file full of definitions only and no implementations. The compiler emits this for you automatically as a .swiftinterface file

> or keep things non-`public`

I definitely am a swift developer that would complain about this. It’s way too easy to be cavalier about using the “public” keyword and making things part of the public API when they probably shouldn’t be. It’s like engineers have muscle memory from Java and just type “public class” without really questioning why first.


> The compiler emits this for you automatically as a .swiftinterface file

It's so incredibly slow, though, which is frustrating. It would be ok if only this were as instantaneous as checking a header file.

Ironically, almost everything about "Swift" is slower than Objective-C.


In my current and previous job, we talked about (and partially implemented) low level “contract” modules in order to avoid linking (and building) the entire module just to share behaviour.

That problem was already solved with header files; it's trivial to split interface from implementation, they're just two different files. But sometime around the 90s, probably with Java, this was deemed inconvenient. Now we're trying to reinvent that same pattern.


Everyone's brain is probably different, but when I first started writing swift I definitely missed header files. When I switch back from a c project I miss them again.


The only access control I wish Swift had is typeprivate so I could hide private things but make them available for subclasses (or perhaps protocol conformers). Unfortunately Apple has only added a package level so far, which seems fairly useless (you're either too big for it to be useful or too small to need it). Obj-C didn't really have ACLs at all, you just hid stuff in interfaces. Once found, those interfaces were no protection at all.


IDEs have improved so integration of Swift and searching is easy. Objective-C now could do without headers but I used it 25 years ago and having headers made life easier.


Swift allows you to program using “headers” - just have a protocol and an implementation, even in separate files if you want.

This is a good pattern for some cases, like the public members of a package. However, I love that I don’t need to do this for every class I write.

And if you do use this approach, at least swift will emit a compile error if your protocol and implementation signatures don’t match.

ObjC will happily compile if your header is missing an implementation and crash at runtime.
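
A minimal sketch of that protocol-plus-implementation pattern (names are made up):

    import Foundation

    // The "header": a public contract, in its own file if you like.
    protocol ImageCache {
        func image(forKey key: String) -> Data?
        func store(_ data: Data, forKey key: String)
    }

    // The implementation: conformance is checked at compile time, so a missing
    // method is a compile error rather than a runtime crash.
    final class InMemoryImageCache: ImageCache {
        private var storage: [String: Data] = [:]

        func image(forKey key: String) -> Data? {
            storage[key]
        }

        func store(_ data: Data, forKey key: String) {
            storage[key] = data
        }
    }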


You don’t need “the API” in another file.

Headers are such an idiotic design, over-abstraction harms locality of reasoning.


They made a TON of sense when memory was very limited.

Why parse out a whole C file when you can get the only bits that matter for compiling your file from a 30 line header?


Except languages with modules already existed, in even more memory-constrained systems predating C's invention.


TypeScript is no headache for me and has no header files.


Any time a language forces you to create boilerplate that could be autogenerated by your toolchain, it's not a feature.


I love the square brackets. I find them aesthetically pleasing for some reason.


With Xcode or any modern IDE the autocomplete "snap" of square brackets closing is a very satisfying code feel.


I find modern Swift to be increasingly ugly and difficult to read. Reading classic Objective-C code is refreshing.


In the whole I don’t mind Objective-C, but when I have to write it these days I definitely get annoyed by having to navigate and maintain header files. It’s more extra overhead than one might realize.

My other complaint with it compared to Swift is how one needs to pull in a bunch of utility libraries to do many things that come stock with Swift.


I really like Obj-C, but I much prefer Swift.

It’s less verbose (even if I’m not a square bracket hater). It has some really nice new abilities like async (way easier/cleaner than callbacks in many situations) and now actors.

But honestly 90% of it is true type safety. The type system is so much more powerful and expressive compared to Obj-C.

There is only one downside, and it’s real. Compiling Obj-C was instantaneous. Swift is MUCH slower, which also slows down error messages and hints. And the fancy type stuff can even timeout the compiler.

Combined with some Xcode issues (stale info, anybody?), it can be a pain.

But I’m happy we have Swift.


Message passing goodness and flexibility (almost!) of Smalltalk, coupled with C for all things low level and perf related. It's a great language! I've switched to Swift for all things Apple these days, but I still miss coding in ObjC.


I agree that ObjC is nice, and proven ObjC codebases probably don't benefit enormously from being re-written in Swift, but that has little to do with how much better Swift is (and it is much better, IMO).


Swift is an improvement over C for the problem domain. Something akin to Swift as a modern C replacement, while keeping the 'Objective' bits of Objective-C layered on top of it, would have made for the ultimate language, though.


I don’t know how doable that would be.

Objective-C does just about everything it can to make sure you can mess with it at runtime and confuse the ever living hell out of any type checker that wants to be strict.

And the additional strictness is one of my favorite parts of Swift.


In theory, you are only reaching for the 'Objective' parts of Objective-C when your code actually benefits from being object oriented (in the Kay sense). Otherwise you can stick to pure C.

Of course, C has a lot of ugly traps which makes it less than ideal for this domain. This hypothetical subset language addresses those issues. While, again, you would only reach for the 'Objective' parts when your code benefits from being object oriented.

It is true that the inherent dynamism of message passing makes it impossible for static analysis to cover all cases, but as with all things in life there are tradeoffs. You lose the nice aspects of object oriented systems if you do not allow for that, and OO is particularly well suited to UI code.

Of course, Swift abandoned the object oriented model completely. Which is fine. But Objective-C showed that you can have your cake and eat it too, offering OO where appropriate, and a non-OO language for everything else.

Objective-C's downfall was really just in that C didn't age well – which, among other things, I am sure contributed to seeing the use of the 'Objective' bits where they weren't really appropriate.


Swift also does OOP, it did not abandon anything.


Technically true if you enable it with the @objc flag. However, the documentation suggests that you should only use that when needing to interface with Objective-C code, so it is not how one would use Swift in a pure Swift environment. Swift's primary object model design is much more like C++.


Playing with words doesn't change the fact that Swift fully supports OOP, even without any presence of @objc annotations.

Classes, interfaces, interface and class inheritance, polymorphism, variance, type extensions, compile-time and dynamic dispatch, overloading, associated types.
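
A minimal sketch of that in pure Swift, with no @objc anywhere (types are illustrative):

    class Animal {
        func speak() -> String { "..." }
    }

    class Dog: Animal {
        override func speak() -> String { "Woof" }   // dynamically dispatched override
    }

    let animals: [Animal] = [Animal(), Dog()]
    let sounds = animals.map { $0.speak() }          // ["...", "Woof"]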


The original comment clearly states that, for the purposes of the comment, Kay's definition for OOP is in force. Your personal definition is cool and all, but has no relevance to the context established in this thread.

Is there some value in this logically flawed correspondence that I have overlooked? What is it trying to add?


I’m in a similar boat. Header files typically led to faster compile times. OCMock worked like magic.

Where both languages are poor is how large the binaries they produce are.


Sure if one loves typing []@ all over the place, and I know Objective-C since NeXTSTEP days.


I've had a similar experience, and generally agree with what you're saying. But I am glad Swift was created. All the plebs gravitate towards that language, so Objective-C remains unpolluted. I shudder to think how Objective-C would have deteriorated without Steve around.


ObjC is often no longer an option for building on new platform functionality


Metal is a counterexample of that.


Metal is not a new framework


Metal was released in the same year as Swift and could have been a way to show off Swift; instead it used Objective-C with Swift bindings.


That’s because the Swift team almost certainly worked independently of the Metal team.


Hardly given that all Objective-C features since Objective-C 2.0 were aimed at improving the Swift interoperability story, as Chris Lattner has mentioned in a couple of interviews.


How is that related to what I said?


They were aware of Swift, and decided to make the upcoming OpenGL replacement framework in Objective-C instead of Swift, and only provide Swift bindings instead of doing it the other way around, implemented in Swift with Objective-C bindings for compatibility with "legacy" code.


Who is "they"? Chris Lattner works on compilers under Developer Tools. The Swift and Objective-C teams share an office and are often the same people. Of course Objective-C is going to get new features to help import it into Swift, because the whole point of Swift was to make a new language that worked well with the old one. Basically nobody outside that group had any need to know of the language at that point, especially since it wasn't ready for system use anyways. I would not be surprised if the first time most of the Metal team even knew Swift existed was when Craig introduced it on stage at WWDC.


Can always use a shim


Yeah but that’s a losing battle. As we go forward you’re going to have to shim out more and more of the new APIs that you need to use.

Possibly in ways that are very inconvenient/hard to use in Obj-C.


It's been 10 years and I haven't had any issues. Maybe in another 10 years this will be enough of a problem to switch, but by that point I'll be retired


And seems to be growing? I'm not close to this space, but it feels like Objective-C should be decreasing. I'd be curious to know why it's increasing.


Apple is all in on Swift. Everything is Swift first, with some new things Swift only.

You can still call new Swift only APIs from Obj-C, but you’ll have to write your own glue layer.


From the article:

> Again please note that a single binary can be counted multiple times, so the sum of the binaries in this graph is greater than the total number of binaries

So even if a binary only uses a small shim of Objective-C, it'll still count towards the Objective-C number.


Someone once told me "SwiftUI is Flutter but for Apple devices only", can't get that out of my head.


Just like Android Compose is Flutter but for Android devices only?

Now you've got two things to get out of your mind.


Oh my god...


You’re both wrong. Flutter is just Unity for apps.


I'd say you all are right ;-)



> In iOS 16, only 4 apps used the SwiftUI-based app lifecycle. In iOS 17, this figure has grown to 14 apps

What does this mean (what is app lifecycle)?


Presumably it means the root of the app is a SwiftUI App-conforming type rather than a UIApplication. Of course SwiftUI can bridge into UIApplication so it's entirely possible for apps to be using both (though you can't really bridge the other way).
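
Concretely, the SwiftUI-based lifecycle looks roughly like this (minimal sketch):

    import SwiftUI

    @main
    struct MyApp: App {                  // App-conforming root instead of a UIApplicationDelegate
        var body: some Scene {
            WindowGroup {
                Text("Hello, world!")    // root view of the app
            }
        }
    }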


Does anyone happen to know what the `HomeEnergyUI` mentioned in the article does? As far as I know the Home app doesn't have any energy UI at this point.


My Home.app has an energy button in the top right that details my current household consumption and local grid forecast


Having cert failures, and can't actually view this page.


Loads fine for me.


These charts are almost meaningless. They're double- and triple-counting binaries: the slope of the Objective-C line and the slope of the Swift line are identical because nearly every added binary uses both languages. That doesn't provide any meaningful information except that Apple is still using both.

It gets worse when they get to the percentages: they list 61% as Objective-C's number for iOS 17, but 61% of what? According to the raw data, Objective-C is used in 88% of all binaries on iOS 17, with Swift used in 25% (instead of the 17% they list). Their calculated numbers are derived from a denominator of 8723 binaries, even though they only have 6030 in their data set, because they wanted the numbers to add up to 100%. That makes a nice chart, but it has no meaningful interpretation.


You're essentially looking at a series of pie charts, and it does tell a useful story: that there is more code being written in Swift and SwiftUI year over year.


The problem is that the pie chart is lying—it makes it look like the percentage of binaries containing Objective-C is going down over time, but it's actually going up just as rapidly as Swift is, because almost all added files contain both Objective-C and Swift. The only languages that are going down as a percentage of binaries are C and C++, because fewer new modules are being added using those.

The reason why it looks like Objective-C is losing ground is that every Objective-C+Swift+SwiftUI binary increments the denominator by 3 instead of by 1, and there are more of those added every single year. Since each of those binaries only adds 1 to the numerator, Objective-C's share of the binary pie seems to be decreasing even though it's actually keeping up with Swift.

It's entirely possible and even likely that Swift is increasing faster than Objective-C in terms of lines of code, but this data can't be used to show that.


I would guess that Swift isn't increasing in lines of code as fast as ObjC, because Swift gets more done per line of code. But "per feature" I'd guess Swift is slowly inching out ObjC.


The pictures in this article don't load without Javascript enabled even though there appears to be no interactivity.


I don’t think they are images. Looks like graphs rendered with some js library. I’m on mobile.


See the previous blog post on creating Charts in Markdown using Apache ECharts: https://blog.timac.org/2023/0627-charts-in-markdown-using-ap...


Thanks for the great posts. Will you also do a post about macOS this year?


Judging from SwiftUI use in macOS: various things will become noticeably slower on older devices for no apparent reason. This is bad for the planet and consumers but great for Apple.

Edit: I see much interaction with this comment, but no response.


What makes you think SwiftUI is slower than UIKit?


One such example:

In macOS 13, the system settings was rewritten to use SwiftUI instead of UIKit. I had a 2019 Intel MBP. Before the rewrite, it was super swift (sorry). After the rewrite it took almost a second to go from section to section.

Now I'm on an M2 machine so it's fast again. Congrats Apple.


That’s not because they’re using SwiftUI. That’s because they don’t care.


On the other hand, the continued use of Swift by Apple will mean they feel the pain too and will improve the performance, which will make all apps faster as most new apps use Swift.


> On the other hand, the continued use of Swift by Apple will mean they feel the pain too and will improve the performance

LOL have you used Xcode lately? It's painfully slow, and getting worse.

Apple engineers use all of the same crappy Apple software that we all do. The problem is that Apple executives have decided they don't care and are unwilling to invest the time and resources in performance optimization or good design. The relentless yearly update cycle will continue until morale improves. Steve Jobs is gone not only physically but also in spirit. It's Tim Apple now.


I think you don't remember what it was like under Steve Jobs. XCodes enshittification was well on its way.

Honestly though, the problem with XCode for me is 90% that the autocomplete is just enormously stupid (they need to just ask JetBrains how to do it), and that Swift compilation is slow (which they are working on).


> I think you don't remember what it was like under Steve Jobs.

I do remember.

> XCodes enshittification was well on its way.

It's Xcode, not XCode.

Anyway, Jobs took a 6 month leave of absence starting January 2009, during which he got a liver transplant, and another leave of absence starting January 2011, after which he finally resigned.

Xcode 4 was released in March 2011, although it had been in development for some time before that. I wouldn't really say that its "enshittification" was "well on its way."


correlation, causation, etc


> iOS 17 contains 6030 binaries, up from 5403 in iOS 16. That’s 627 new binaries.

That's... insane. Even worse, the graph appears to show a superlinear trend. For comparison, iOS 2 had only 278 binaries.

With such out-of-control software bloat, secure computing is never going to happen. There isn't enough brainpower on the entire planet to secure a system that grows to the tune of hundreds of components per year.



