Apple removes first-party firewall exemption in macOS 11.2 beta 2 (twitter.com/patrickwardle)
804 points by mortenjorck on Jan 14, 2021 | hide | past | favorite | 347 comments



In Case You Didn't Know:

Big Sur on M1 (and possibly on Intel) maintains a persistent, hardware-serial-number linked TLS connection to Apple (for APNS, just like on iOS) at all times when you are logged in, even if you don't use iCloud, App Store, iMessage, or FaceTime, and have all analytics turned off.

There's no UI to disable this.

This means that Apple has the coarse location track log (due to GeoIP of the client IP) for every M1 serial number. When you open the App Store app, that serial number is also sent, and associated with your Apple ID (email/phone) if you log in.

Apple knows when you leave home, or arrive at the office, or travel to a different city, all with no Apple ID, no iCloud, and no location services.

This has always been the case on all devices using iOS, too.

This change is essential for blocking such traffic, and I'm glad for it, but there is a long way to go when it comes to pressuring the pro-privacy forces inside of Apple to do more.


At least in the EU, it sounds like this should be in violation of the ePrivacy directive (aka the cookie law).

There’s an open complaint [0] about the IDFA on the same basis...

[0] https://noyb.eu/en/noyb-files-complaints-against-apples-trac...


That seems to be stretching it. The user is informed about the IDFA on setup (one can argue that that is a violation), so it’s not like “Apple’s operating system creates the IDFA without user’s knowledge or consent” because they do inform you.

Regardless, you can always reset it if you want. And at WWDC 2020 (half a year before this complaint), Apple made cross-app tracking opt-in.[0]

I applaud the EU for leading the way in consumer protection, but every time I hear about it in regards to technology, it always feels heavy handed with the arguments being a stretch sometimes.

[0]: https://www.adexchanger.com/privacy/apple-wwdc-2020-a-versio...


It's not a stretch. The ePrivacy directive requires that the user _is offered the right to refuse such processing by the data controller_, so it also applies to Apple itself.


How do you know that apple is logging GeoIPs and performing this association with appleIDs? Or are you just saying it’s possible to do so?


He’s just saying it’s possible to do so.

This claim, that Apple is tracking your location because they use TCP/IP to receive connections, has been made many times now.

Nobody has so far presented evidence that Apple does in fact geolocate people or even that they persistently store IP address information related to user accounts.

I don’t know for sure that they do not, but I do know that they are aware that keeping IP addresses is a potential privacy leak, and so at least some of their services are definitely designed to scrub ip addresses from records at the point of ingestion and replace them with anonymized keys before they are passed on to services within the company.

So they know that keeping IP address logs is a potential privacy issue and are working to alleviate that.

I would be surprised if they do this for everything yet, but as far as I can see Sneak is making only a theoretical accusation, and not one which he has more than speculation about.

As far as I can see, statements like these...

“Apple knows when you leave home, or arrive at the office, or travel to a different city, all with no Apple ID, no iCloud, and no location services. This has always been the case on all devices using iOS, too.”..

...are complete bullshit as written, even though we can’t rule out the possibility.

“ Apple doesn’t retain a history of what you’ve searched for or where you’ve been.” is on Apple’s privacy page here: https://www.apple.com/privacy/features/

So if someone can find evidence for such accusations, perhaps there is some legal liability for Apple.


There are ways to communicate over the internet that don't disclose the source IP of the client doing the connecting. Tor also uses TCP/IP, so your oversimplification of my post is... not accurate.

> Nobody has so far presented evidence that Apple does in fact geolocate people or even that they persistently store IP address information related to user accounts.

We're talking about IP address logs related to hardware serials, which cannot be changed. User accounts can.


If you are saying that by using Tor, origin information can be hidden, I’d say that’s true.

So for everything not using such a method, what I said is true, which is almost everything.

If you said “Apple should use Tor for everything so that eavesdroppers cannot deduce people’s locations via GeoIP” that would be a fair statement.

Saying “Apple keeps records of your location history” is speculation which you have never substantiated, despite repeated challenges.

However what I said still holds despite this small exception.

IP address logs certainly can be changed before storage.


Perhaps Apple doesn't log your hardware UUID + IP. You'll have to take their word for it.

But there's even less guarantee that the government doesn't log that information.

After all, Apple dropped plans of implementing E2E encryption of iCloud backups after the FBI asked them [1]. So "Apple doesn't retain that info" might boil down to semantics since it might be allowing someone else to do it.

[1] https://www.cnbc.com/2020/01/21/apple-dropped-plan-for-encry...


Well the iCloud backups not being encrypted yet is a serious problem.

Weirdly, this isn’t news - anonymous sources have said before that it was due to FBI pressure.

But this doesn’t have anything to do with Apple logging locations.

If sneak’s claim was correct, there would be nothing we could do about it.

If we’re talking about iCloud backups, at the very least you can turn those off and do them locally.

I’m pretty sure that even if e2e backups do come, they won’t be on by default because of the problem of users managing their own keys.


Apple had at least a partial implementation of e2e backup that was resilient to users losing their passwords, via something like friends-and-family secret sharing to perform data recovery.

The implementation was scrapped.

There are ways of solving these problems, throwing up hands and saying "it can't be done anyway" is silly. Apple has done a lot of things that couldn't be done: a computer without a floppy or serial ports, a phone without a keyboard, a headset without cables between your ears.

Building the iPhone was difficult. Building APNS and iCloud was difficult. Building the App Store was difficult. Building the Apple Watch was fucking difficult. Building the Ax line of mobile chips was difficult. Building the M1 was difficult. Don't forget about airpods, homepods, and all the other mindbendingly hard shit Apple does all the time now.

Apple does insane technical achievements on a regular basis. Secret sharing for e2e backups is well within their capabilities. Google even managed to e2e encrypt Android backups.

The problem is that Apple serves at the pleasure of the US military intelligence apparatus, and they know it.

It doesn't take a weatherman to know which way the wind blows.


“it can't be done anyway”

Nobody is saying this.

Also, I agree with you - e2e secret sharing is possible, although hard for users with just one device, i.e. a lot of people.

And yes, Apple has solved a lot of fucking hard problems, slowly, and incrementally.

Just because they haven’t done it yet doesn’t mean they won’t.

You have no evidence at all that any of this is because “Apple serves at the pleasure of US military intelligence”.

You keep making claims that are pure speculation as if they are true.

I actually agree with keeping pressure up on Apple to implement E2E backups.

I guess you don’t think that’s ever going to be possible though.


"There's a camera in your bathroom, but how do you know it's recording?"


Apple has to log client IPs on these systems to prevent abuse, to stop people doing things like scraping every public key for every iMessage user and then publishing the diffs.

IP to ISP/Location mapping is just a lookup table, and can be done at any time now or in the future.


With a datetime and an IP, you can GeoIP-resolve it at any time in the future. It's a single ETL operation. So you should treat the data as if they do it, one way or another.
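A toy sketch of why this is a single ETL operation. Everything here is made up for illustration (the prefixes are documentation ranges, the locations are arbitrary); real GeoIP databases like MaxMind's GeoLite2 work the same way, just with millions of prefixes:

```python
import ipaddress

# Hypothetical GeoIP table: network prefix -> coarse location.
GEOIP_TABLE = {
    ipaddress.ip_network("203.0.113.0/24"): "Berlin, DE",
    ipaddress.ip_network("198.51.100.0/24"): "San Francisco, US",
}

def geolocate(ip: str) -> str:
    """Resolve an IP to a coarse location via longest-prefix-style lookup."""
    addr = ipaddress.ip_address(ip)
    for net, location in GEOIP_TABLE.items():
        if addr in net:
            return location
    return "unknown"

# A retained connection log: (timestamp, client IP). If this is stored,
# the location join can be run at any later date, against any GeoIP snapshot.
log = [
    ("2021-01-14T08:02", "203.0.113.7"),
    ("2021-01-14T18:40", "198.51.100.9"),
]
history = [(ts, geolocate(ip)) for ts, ip in log]
```

The point is that nothing about the location needs to be computed at ingestion time: as long as the raw (timestamp, IP) pairs survive, a coarse movement history is one batch join away.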


This assumes they don’t scrub it before storing it, which we know they do for some services, and we have no information about others.

We can’t in fact treat it like they do. We can only treat it like they might be able to.


Anyone monitoring the traffic outside of Apple can do it, as well. In TLS 1.2 and earlier, client certificate information is sent unencrypted during the handshake (TLS 1.3 encrypts it). This would allow anyone monitoring Apple's upstream, passively, to perform this same location logging that Apple does.

Apple knows this, so shipping systems that leak information in this way is tacit acceptance of the military spying going on on the networks to which Apple's servers are connected.


“Anyone monitoring the traffic outside of Apple can do it, as well”

Well this is true of every single connection made by every single app on every single device, for every upstream.

That means everyone is tacitly accepting the military spying going on on the networks to which their servers are connected.

That’s not actually an unreasonable position as far as I’m concerned, and your previous comments about this being true unless Tor is embedded have some validity. I say some because I’m unconvinced that Tor is quite ready to handle all traffic yet.

What is unreasonable is your focus on Apple.

By excluding the fact that your complaints are a general problem with TCP/IP and apply to essentially any service, you don’t seem to be doing a great job of informing people about the reality of the problem.

It’s also worth noting that if your position has now moved to how the tracking could be done by someone monitoring Apple’s network rather than Apple themselves, you are tacitly acknowledging that your claim that Apple is keeping records of your location is just speculation.


Well anyone can do the same and you’d be none the wiser.

Unless of course you go on and install Little Snitch or the Windows equivalent, which is something I’m not sure even all of HN bothers to do anymore.

And then you’re left with trusting Microsoft or Intel or AMD with their unaudited management engines running on-CPU with DMA access, oh, and whatever is running in the EFI firmware...

It’s (dis)trust all the way down.


While I'm glad they fixed it (upgrading past Catalina was not going to happen), it's going to take me a LONG time to trust them again.

The fact that they could rationalize their own thought process to think this was a good idea, or even anywhere close to acceptable, at best leads me to doubt their judgement. At worst... well, it's best to assume incompetence over malice, but you can see what doors this would open in the future. Now we know they CAN do this without it being obvious to most users, which is a new dimension of risk regardless of intent.


This firewall issue isn't the only privacy feature stripped from the Big Sur release. Unfortunately, no big media outlet cares about the other huge problems Apple introduced.

My only hope is that they will also fix full disk encryption in this update, since Big Sur broke installation of macOS on passphrase-encrypted disk partitions.

I bought into the M1 hype, and it turns out you are no longer able to have a separate password for disk encryption.


On the M1, that's by design. If you install macOS on an external volume it doesn't have that behaviour.

On the internal disk, it's there because they carried over the iOS infrastructure, where your login password is the FDE one.

macOS also now boots before asking for your password, like iOS.

(the OS volume itself isn't encrypted and is read-only, the data volume is encrypted with your password)


How do you know it's "by design"? By default macOS always used this encryption scheme, but there was always the possibility of optional FDE with its own passphrase. Now this is broken, and I can't even manage to get macOS installed when any encrypted partition is present, since that also causes the installer to fail.

I obviously find it an absolutely terrible "design" decision, since there is no way on earth anyone can consider a disk encryption key that is unlockable by a user password or Face ID to be secure.

PS: If someone has any idea how having a separate boot password can be hacked around, I'd really appreciate the advice.


Apple Silicon Macs use per-file encryption tied to the credentials: https://support.apple.com/en-gb/guide/security/secf6276da8a/...

Was carried over from iOS.

A way to bypass it _should_ be possible, but will entail the System volume of the volume group having different properties than the Data part.

Otherwise the OS will fail to load. (on Apple Silicon Macs, macOS is fully booted already when you input the password, so if you encrypt macOS...)

On older Macs, a Preboot UEFI application prompts you for the password prior to booting.

What you can do as a workaround:

Create a second account which you'll only use to unlock the drive, then run:

  sudo fdesetup add -usertoadd unlockUser

  sudo fdesetup remove -user PrimaryUser

That'll give the rights to unlock the drive only to that unlock user.

You can also run

  sudo fdesetup removerecovery -personal

to destroy the ability of the recovery key to unlock the drive.


Does this mean that every user account has their own data volume or that every user account has their home folder encrypted on a per-file basis? Or neither?

What are the privacy implications of two users (both with administrator accounts) sharing an Apple Silicon Mac?


One data volume per OS install.

Both users have access to all the data in that case. It got carried over from iOS which didn't have multi-user support.

(and this is by-design, protection granularity is the volume)


Thank you very much. I'll try to set it up using an additional user as you explained.

Is it possible to make sure that the encryption key is only available via this "unlock" user's passphrase?


You can use

  sudo fdesetup list -verbose

which tells you which users have their password attached as an unlock token for a given volume.


Why is that so important? Your disk encryption key is certainly stored in memory for the duration of your session (which on Macs might as well be forever since they don’t need to shutdown), so anyone with your user password can gain access either way.


It is important because the M1 is iOS-derived hardware and is unlikely to keep disk encryption keys in memory where you or anyone else can freely dump them. And hardware attacks against the TPM are both costly and hard to perform.

Also, in case of travel or an emergency, it's much easier to just power it off. At the same time, there are tons of ways someone can steal your day-to-day lock screen password.


> (the OS volume itself isn't encrypted and is read-only, the data volume is encrypted with your password)

Wait, WHAT? Can someone with an M1 encrypted volume Mac check this directory and see if you see thumbnails?

> $TMPDIR/../C/com.apple.QuickLook.thumbnailcache/

A full writeup of this is at the link [1]. This has been a well-known thing in computer forensics for many years, which is why *full* disk encryption is so important.

[1]: https://objective-see.com/blog/blog_0x30.html


Everything in the Data volume is encrypted. The System volume is signed by Apple and is the same on every Big Sur Mac (SSV feature).

This can be disabled through csrutil though.


The OS volume is a read-only image now - just system files only, and is signed etc


No such file or directory on my M1 Air.


Most users don’t want a separate password for disk encryption though, so I’m not sure it’s a huge problem?


IMO that's a big problem. They are completely different risk categories. My FDE password is absurdly long and complicated, since I never want someone who gains physical access to get all my data, but my Linux user account password isn't as long, since its main purpose is to stop someone from getting past my lock screen if I were to leave my system unattended.


Both are 'physical access'.

If one does not power down your system, your FDE is unlocked. So they only need your Linux user account password to get access to the data on your disk.

FDE only protects your data when it's locked. Normally this is when your system is shut down.


The main difference is in the attack surface. Attacking the FDE can happen offline, with infinite attempts and a large-scale operation dedicating lots of compute power. Breaking my user password has to happen on site at that exact moment, and my lock screen can detect N failed logins and shut down.
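The asymmetry can be made concrete with some back-of-the-envelope arithmetic. The guess rates below are illustrative assumptions, not measured figures for any real KDF or lock screen:

```python
def brute_force_years(entropy_bits: float, guesses_per_second: float) -> float:
    """Expected time to brute-force a secret with the given entropy,
    assuming the attacker finds it halfway through the keyspace."""
    expected_guesses = 2 ** (entropy_bits - 1)
    return expected_guesses / guesses_per_second / (365 * 24 * 3600)

# Offline attack on a copied disk image: attacker-chosen hardware, parallel
# GPUs. 1e10 guesses/sec is a made-up but plausible order of magnitude
# against a weak key-derivation function.
offline_years = brute_force_years(40, 1e10)  # a 40-bit password falls in under a minute

# Online attack on a lock screen: throttled to roughly 1 guess/sec, and the
# machine can lock out or shut down after N failures, so this overstates the risk.
online_years = brute_force_years(40, 1)      # the same password holds for millennia
```

Same password, roughly ten orders of magnitude of difference in attack time, which is why the FDE passphrase needs far more entropy than the screen-lock one.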


True, but it's a slightly different category, as he wouldn't leave his system on but locked in a potentially high-risk scenario.


You mean he would shut down a MacBook every day after use? That's blasphemy.


At least with 10.15 and earlier, you can configure macOS to hibernate after a certain amount of time, after which it will ask for the FDE password on wake and load everything from disk.


Separation is still recommended.

Storage encryption can be attacked differently than a running system.


Most users probably also didn't care about the first-party firewall exemptions. They could have asked people if they wanted a separate password for disk encryption (e.g. a small checkbox).


It is highly non-trivial to extract a private key from Apple's encryption chips; last I heard, the price was at least 100K USD, and probably much higher now. So unless one values one's own secrets that highly, a short password could be OK.


It's about choice and control over your data - an educated user knows that with hardware encryption, it is very difficult to retrieve data if the hardware fails. There's also the trust factor where you would prefer to have the keys, rather than trust some device.

(E.g. Some Western Digital drives have problems with their hardware encryption and made the data on it irretrievable for many - https://github.com/andlabs/reallymine/issues/53 . More here - https://carltonbale.com/western-digital-mybook-drive-lock-en...).

What is being questioned and criticized is the removal of this choice. Especially when a product claims a commitment to privacy.


Is it no longer possible to set a separate password by reformatting the disk in Recovery to one of the "Encrypted" options before (re-)installing to that volume?

That's how I set up multiple passwords for FDE on a recent hackintosh build, but I don't have an M1 and this wasn't Big Sur, so maybe I'm missing something obvious that has changed lately and I'm way off-base.

It had to manually be done in that order though, otherwise it defaulted to the user account password for FDE.


Hm, isn’t the trick to have a separate account just to boot into, with a strong password, and remove the primary account from the set of accounts allowed to unlock the disk? I have used that on older macOS to avoid bothering with reinstallation of the OS to enable a password-encrypted partition.


I am glad that the public backlash forced them to fix a deliberate BACKDOOR that they had introduced (by design) in the Network Extension Framework that macOS Big Sur now forces all firewalls to use. (At least, they claim to have removed it.) But it is hard to trust them again, and I would prefer a firewall that uses its own kernel extension to manage the network rather than using Apple's API again. (Obviously that's going to be really hard with the changes they have made to the OS.)

I know many Apple fans see this as a positive move.

But let's not ignore the pattern of privacy violations and user data collection due to deliberate design, and the "apology" and "changes" that follow once CAUGHT. A few of these that immediately come to mind are:

- Apple selling user data to US government: https://www.theguardian.com/world/2013/jun/06/us-tech-giants...

- Apple iPhone 11 tracks user location even when location services are explicitly turned off by user (another BACKDOOR): https://www.silicon.co.uk/mobility/smartphones/apple-iphone-...

- Apple macOS tracks every app that you use: https://sneak.berlin/20201112/your-computer-isnt-yours/

- Apple introduces BACKDOOR in its API to allow Apple apps to bypass application firewalls: https://www.patreon.com/posts/hooray-no-more-46179028

(For those who want to diss me for the above, realise that Apple's new found love for privacy doesn't mean shit without such public scrutiny and discussions. And if you want it to last, remain suspicious and VOCAL on any such possible violations.)


"Please don't use uppercase for emphasis. If you want to emphasize a word or phrase, put asterisks around it and it will get italicized."

https://news.ycombinator.com/newsguidelines.html


Apple has no love for privacy, nor has it ever. They are in a market position where their main competitors (primarily Google, then Microsoft and Amazon) are highly dependent on revenue streams extracted by monetizing personal information.

Apple is in a position to cut that stream without affecting its bottom line, so it does it and claims privacy as a core value.

I won't look a gift horse in the mouth, but I have no doubt that the tables could turn at any time.


When Apple launched iOS 6, it was the first operating system to include per-app privacy controls around access to things like microphone, camera, photos, etc. Controls we consider fundamental today. It did not mention it a single time in any of its PR or marketing at all. The first reference you find to it will be from Apple blogs who were surprised to stumble upon it in the iOS 6 beta. It took Android two more years to launch a similar feature.

Yes, Apple is doubling down on its competitive advantage here. But to claim it does not and never cared for privacy is just ignoring the facts and the history. It moved the industry forward.


The first except for BlackBerry. Before iOS or Android even existed, BlackBerry had granular per-app permissions.

I don't disagree with your argument, though: of the modern mobile OSes, Apple moved towards the per-app model before everyone else. I just find it interesting when Apple or Android gets coverage/credit for a feature that has long existed but was forgotten or ignored.


Actually, BB10 had an even better feature: you could provide dummy data to wrapped Android applications. So you could fill the Android contact data with an empty contact list and the app would be none the wiser. This feature was never advertised.

But no one bought the phones, so who cares?


The problem is BB wasn't around by the time smartphones became more than a niche product.

The list of prior work that influenced Apple, or any tech company, is usually far too long to list.


>The list of prior work that influenced Apple, or any tech company, is usually far too long to list.

Agreed, but that doesn't mean we should default to "Apple was the first" simply because we can't wrap our heads around the pre-iPhone days.


I did not know this, thanks for teaching me something new! It goes back at least to BlackBerry OS 4, running on the BlackBerry Pearl, released in 2008 (but likely further back):

> You can set permissions that control how third-party applications on your BlackBerry device interact with the other applications on your device. For example, you can control whether third-party applications can access data or the Internet, make calls, or use Bluetooth® connections.

https://www.t-mobile.com/support/public-files/images/legacy/...


If you’re curious about more of the history (now, almost a decade ago), this was the scandal that likely motivated the above:

https://www.theverge.com/2012/2/7/2782947/path-ios-app-user-...

Congresspeople sent letters to app developers as a result (not even Apple, ha). iOS 6 was then seeded to the public four months after that article.

But! It actually goes earlier than this. Apple started phasing out access to device identifiers used to track users across apps in iOS 5: https://techcrunch.com/2011/08/19/apple-ios-5-phasing-out-ud...

(again, none of this was marketed to anyone)


If it was seeded 4 months after the article, it had absolutely nothing to do with the article.


Yes, that makes sense, development timelines for these features are often far longer than we imagine.


That's not true at all. Even early Nokia (Symbian) and BlackBerry phones had fine-grained permission prompts. As did early Android, actually, but they were deemed too annoying for users.

Apple does deserve credit for leading and influencing Android in this regard, but neither the concept nor the implementation is new.


Do you have a link to the Android thing? I'd be curious to learn more. I always thought early Android had transparency (showing what an app uses) but no control over changing it.


Android had a permission system before iOS implemented one, but it was just a prompt showing the list of permissions at install time, so it wasn't worth much compared to the current one.


This comment has no basis in truth.

https://www.youtube.com/watch?v=39iKLwlUqBo

Jobs @ D8

People are quick to forget because back then everyone (including HN) was praising Google for everything under the sun.


In that video, Steve mentions the fact that iOS had location permission prompts before iOS 6 – is that what you are referring to as incorrect? Because that is a good catch, permission prompts were present at least as early as iOS 4.2:

https://developer.apple.com/documentation/corelocation/clloc...

EDIT: Oh, I see, if you are referring to the PR or marketing portion, I think it is certainly clear that Apple had a pro-privacy stance, but that did not make its way into the company's _consumer_ marketing:

https://web.archive.org/web/20120718122643/http://www.apple....


Having been a programmer through the '90s, and watched Microsoft (and Oracle, et al.) through their most malfeasant years, I'm very cautious about giving companies the benefit of the doubt. However, the fact that Apple is leaving hundreds of billions of dollars on the table by NOT monetizing their aggregated user data does seem to indicate that their will is strong here, and that "they" mean what "they" say about privacy. The kind of money they AREN'T making from this move would try any mortal's soul.


Or it wouldn’t really be that much more. How do you come up with hundreds of billions of dollars? Who are the buyers?


Advertisers. It's always advertisers.


I believe it's based on Google's income derived from advertising, on financial statements.


Isn't "it's in our financial interests right now" about as much "love" as you'll get for anything by a corporation? Saying "Apple has no love for privacy, they're only doing it because it sells" sounds moot to me, every company only does things because they sell.


By choosing which markets you operate in and which products you develop you have a fair bit of influence which things are in your "financial interest".

E.g. creating a company with a business model which benefits from taxing CO2 emissions (Tesla) is morally great. Whereas having a business model which benefits from cheap oil (VW) is less so. Product decisions (electric vs. fuel engines) have a large effect on your long-term financial interests.


If Apple knew it could make more money in, say, arms dealing, wouldn't it be obliged to pivot to serve the shareholders? I guess it's somewhat democratic, as shareholders could vote against it for moral reasons.


No. The common refrain of "but publicly-traded corporations have to maximize shareholder value" is a myth, not a law.

There are bits in the law about management having a fiduciary duty to shareholders, but that mainly means they aren't supposed to be stuffing their own pockets at the expense of investors. There's wide latitude for management to decide what kinds of profit are and are not worth it.


It’s never just about “more money”. It’s about time-discounted, risk-adjusted “more money”. Doing morally repugnant things increases risk substantially. This is not the risk profile the owners signed up for.

This change may even depress the stock price directly as disgusted owners sell (this harms the ability of current shareholders to earn a return, as the loss is now but the future revenues are discounted).

The market is as moral as its shareholders, which is far less than perfection but a lot better than zero.


No. Not all corporate actions have to be aligned to maximal profit extraction.


No.

The case in question is Dodge vs. Ford and rather than believe memes about it just read the darn wikipedia page: https://en.wikipedia.org/wiki/Dodge_v._Ford_Motor_Co.


> The case in question is Dodge vs. Ford and rather than believe memes about it just read the darn wikipedia page: https://en.wikipedia.org/wiki/Dodge_v._Ford_Motor_Co.

I'm all for pointing someone to the sources, but let's not instill the idea that consulting Wikipedia is doing substantially more diligent research than looking at memes.


In common speech, "love" is expected to last eternal, not just to the end of the quarter. I'm not really a romantic, but using the word "love" in a corporate context defiles it.


Companies are made up of and run by people and the decisions made by those people are not necessarily solely profit-driven.

That doesn't mean that making money is not important to these people; of course it is. But it's not the only factor.


Corporations are owned by shareholders, aren't they the ones who ultimately make decisions?


Only nominally. In practice you will have a board and executives and managers who are subject to the conflicts of interest inherent to the principal-agent relationship. This is a major problem of economics.


That's quite an extreme-end of capitalist way of looking at it.

Companies build a vision or image for how they behave and a lot of that is going to be driven by marketability.

For example Microsoft has taken a very pro-developer stance since Satya Nadella took over. Not just because it's directly profitable to be pro-developer, but because it helps their long term image, culture etc. This goes a long way to explaining a lot of their recent actions like helping Github be available in Iran again and open sourcing large parts of C# / .NET.

So the question becomes: are Apple being pro-privacy because it's a long term stance they want to take and make a basis for their company culture because it's something their customers really want. Or are they taking the stance simply because it doesn't impact their own profitability right now, but would drop it if there was an obvious potential income stream.


> Not just because it's directly profitable to be pro-developer, but because it helps their long term image, culture etc.

And thus is indirectly profitable.

> That's quite an extreme-end of capitalist way of looking at it.

It's only extreme if you can show that companies routinely take the moral stance even when it impacts their short- or long-term profitability. Is Microsoft good now despite its best interests, just because it decided to take a moral stance?


Ironically, most of these companies are out of China because they don't want to comply with Chinese laws. Not Apple: https://applecensorship.com/


> ironically, most of these companies are out of China

Of the three companies named:

- Google's user-facing services (search, email, app store, docs, ...) are blocked, but Google Ads (which are censored) and Android (which comes without any content that would require censorship) are still sold.

- Microsoft: I'm not aware of any of their products being unavailable. Windows is the dominant desktop operating system in China, and I'd be surprised if the app store wasn't censored. Bing search results are definitely censored (they tell you so at the bottom of the page).

- Amazon isn't selling much that could run afoul of censorship, except possibly books (remember when Amazon used to be an online bookstore?) but in China their market is mostly targeted at the niche of high-end imported goods. (Note the country-of-origin indicators on https://www.amazon.cn/ )


Worse than this, and to the point of this post - you can’t offer a comms service in China unless it has a way to be snooped by the government. So some comms services like Skype have to offer a separate app just for China, which affects all messages sent and received by users of that version of the app, even if they come from users of the regular app.

I was telling my friends who moved from China and kept their iPhones, to just buy a new one... just in case...


Google doesn't certify devices in the Chinese market which run Android. Android is open source. Devices made for the domestic Chinese market run versions of Android created by the manufacturers and lack Google apps and services.


The majority of Android phones manufactured in China are not intended for the domestic market but are exported, and do come with Google apps and services, which Google licenses to the manufacturers.


No idea why I thought Facebook was on that list too... There is AWS in China, but you need to sign a "special agreement".


"Brain test game was deleted from the Chinese App Store".

These are just changes to the app catalog - what reason do we have to believe that a bunch of brain-training apps being dropped or withdrawn is evidence of censorship? Has any of this been verified with the app publishers?


Apple is leaving a massive amount of money on the table by not monetizing user data or using it to serve ads. They are by far the most privacy focused of the large tech companies, though they're obviously not perfect.

The only way to do better than Apple is a full FOSS stack, and that comes with different challenges and is more of a hassle to maintain.


> Apple is in a position to cut that stream without affecting its bottom line, so it does it and claims privacy as a core value.

Well, the GP has a quite damning list showing that it doesn't. It's only empty marketing.


The PRISM revelations in particular made me realise that we can really only rely on Linux for security, since Apple, MS, Amazon and all the big tech companies are on board with cooperating with the NSA. If you've read the way e.g. the CIA installs snooping software on Macs and PCs, they hide the Mac version in your hidden EFI boot volume, even from the factory.

It's enough to make you never trust them again.


Factory installed CIA snoop software on Macs is news to me, especially bearing in mind most of the factories are in Taiwan. Where can I find out more?

Also if the spyware is installed in firmware at the factory, how is Linux going to help you?


> especially bearing in mind most of the factories are in Taiwan

Zyxel, Asus, and other manufacturers of networking devices (with backdoors of course) are also there.

https://arstechnica.com/information-technology/2021/01/hacke...


OK, so some Taiwanese network device manufacturers have poor default account practices, news at 11:00. I'm not seeing the CIA connection.

Devices like this are used by the government and military contractors as well, and as you can see, such vulnerabilities are trivial to detect, so you can't count on the opposition not finding out about them and using them. This one was picked up days after the firmware release. The smoking gun would be government and military admins secretly being advised by the CIA to close these security loopholes, so the government is protected but everyone else isn't. IMHO that would get Snowdened almost immediately. There's no way they'd keep a lid on that; there would just be too many people involved.

As with a lot of this conspiracy theory stuff, it only makes sense if you don't think about it too much. Once you actually start thinking through the consequences and practicalities, it doesn't hold together.


> poor default account practices

Understatement of the year. A secret account is exactly what people call "a backdoor".


Well, I think it's Mac-specific software.

I learned about it from WikiLeaks. E.g. https://wikileaks.org/ciav7p1/cms/space_2359301.html


That's iOS software, not Mac software, with the exception of info on some tools to install on a Mac to help hack iOS devices.


Yes, that wasn't the precise link, I apologise. They had details on the Mac stuff too, if you look around. https://wikileaks.org/vault7/

https://www.pcworld.com/article/3184435/wikileaks-documents-...


Right, so they have firmware malware and tools for infiltrating it into machines. That’s not a surprise. The extraordinary claim that I challenged was that this is being installed on Apple computers at the factories. So far as I can tell, there is no evidence for it.

This is like someone claiming it will rain next week and when asked how they know, they say they can prove it rained last week. That’s irrelevant. Yes I know they have firmware attacks. Where does the claim they are putting it on machines in the factory come from? How many times do I need to ask the same question?


I might have mistaken it for the evidence that they installed hacking tools on factory-fresh iPhones, not Macs.

https://wikileaks.org/vault7/

>"NightSkies 1.2" a "beacon/loader/implant tool" for the Apple iPhone. Noteworthy is that NightSkies had reached 1.2 by 2008, and is expressly designed to be physically installed onto factory fresh iPhones. i.e the CIA has been infecting the iPhone supply chain of its targets since at least 2008.


Factory fresh just means fresh from the factory, not necessarily in the factory. The attack targets phones in their manufactured state with the OS and vendor firmware installed. In other words it's not an attack that depends on end user software (Apps) being installed, or on user behaviour, or even on features of the mobile network.

By supply chain, when they say mail orders and other shipments, they just mean between the vendor and the customer. In this case the use of "supply chain" could be misunderstood; this is a post-factory attack which would be carried out in transit, probably at a US border.

We have seen that done before to shipments of devices such as computers and network gear that have been intercepted and hacked before delivery to a suspect, or a target organisation or country.

I don't think this can be reasonably construed as evidence for Apple conniving with the CIA. In fact I still don't think that would make any sense from a CIA perspective. The factories aren't even in the US. Apple employees aren't background checked or sworn agents, they're a potential security risk. Why involve them if you don't need to?


Alright, then they probably aren't infected straight from the factory. However, Apple is definitely collaborating with the NSA, as are other major US tech companies.


Security isn't the same as privacy. Linux desktop security is poor but its privacy can be okay.


If it's important for you and Linux security is not enough for you, consider using Qubes OS.


Why is it poor? (Genuinely asking.)


No real push to use sandboxing or to limit access to personal information. Any app you install can do anything it wants with all of your data.


I know, it's amazing, isn't it? Just think of the amazing possibilities this new "general-purpose computing" could unlock!


Claiming that a lack of security is a feature, actually, is not a great strategy.


This definitely is a feature. You can use sandboxing if you want to. See kernel namespaces (used by Docker), Ubuntu's snaps, etc.

But breaking almost all software by default just because some incompetent users keep choosing to install malware is not a great strategy.


It isn't the dichotomy you set it up to be. macOS solved this without "breaking almost all software by default" using per-app, per-directory permissions for the file system, over and above the decades-old POSIX file modes model.

You're making excuses for the lack of security innovation on Linux workstations. They've fallen behind.
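To illustrate, the per-app, per-directory grant idea can be sketched as a toy allow-list. This is purely illustrative - macOS actually stores this consent in its TCC database and enforces it in the kernel, and the app names and paths here are made up:

```python
from pathlib import PurePosixPath

# Hypothetical grants: which directory trees each app may read.
# Real macOS keeps such consent in the TCC database; this is only a toy model.
GRANTS = {
    "PhotoApp": [PurePosixPath("/Users/me/Pictures")],
    "Editor":   [PurePosixPath("/Users/me/Documents"), PurePosixPath("/tmp")],
}

def may_read(app: str, path: str) -> bool:
    """Allow access only inside a directory tree the app was granted."""
    target = PurePosixPath(path)
    return any(
        root == target or root in target.parents
        for root in GRANTS.get(app, [])
    )

print(may_read("PhotoApp", "/Users/me/Pictures/cat.jpg"))   # True
print(may_read("PhotoApp", "/Users/me/Documents/tax.pdf"))  # False
```

The point being: this layer sits on top of the POSIX file modes, so an app running as your user still can't touch directories you never granted it.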


To be frank, Mac is not a model I would want to follow.

I am the sysadmin and owner of my machine, not Apple or some other organization. They have no business telling me what software I can and can't run, or what files that software can access.


They do neither.


They most definitely do, using their developer program. Look up Apple vs Epic for a case where they weaponized this.


You can run any software you want on macOS.

You can't publish any software you want on the App Store.


https://developer.apple.com/documentation/xcode/notarizing_m...

> Beginning in macOS 10.14.5, software signed with a new Developer ID certificate and all new or updated kernel extensions must be notarized to run. Beginning in macOS 10.15, all software built after June 1, 2019, and distributed with Developer ID must be notarized

So, no. You need Apple's approval to be able to create software that can actually be run by end-users, even if you do not distribute through the App Store.


First, that is "software signed with a new Developer ID". You can just not sign software and it does not need to be notarised.

Second, notarisation is not an "approval". It is a malware scan.

The only thing that requires notarisation, which, again, is not "approval", is kernel extensions.


How can you notarize your software when Apple has suspended your developer account? Would you not say that notarization requires you to have a developer account, which requires Apple's approval?


Can't you run apps on behalf of restricted users?


Via CLI you can, but GUI apps connect to your X server session, and then the fun begins - any application you allow to connect can essentially capture your keyboard, mouse, clipboard and a ton of other fun things, as there is no sandboxing applied between them. It's inherent in the design of the X protocol.

There are solutions that are intended to force the sandboxing by opening a new X server for every application, e.g. Firejail [0], but that comes with another set of interoperability problems.

Wayland was supposed to address some of these concerns, but it will only do so for applications that natively speak the Wayland protocol, not the ones that connect through the X protocol via XWayland.

[0] https://firejail.wordpress.com/


I expected Wayland to isolate applications by default, not just when they opt in.

So you might be interested in Qubes OS, which provides a very strong isolation through virtualization.


XWayland is essentially a translation layer consisting of an X server and a Wayland client [0]. Therefore it has all the same problems a normal X server has, which they do acknowledge:

> A Wayland compositor usually spawns only one Xwayland instance. This is because many X11 applications assume they can communicate with other X11 applications through the X server, and this requires a shared X server instance. This also means that Xwayland does not protect nor isolate X11 clients from each other, unless the Wayland compositor specifically chooses to break the X11 client intercommunications by spawning application specific Xwayland instances. X11 clients are naturally isolated from Wayland clients.

I use QubesOS, but it comes with its own set of problems as well.

[0] https://wayland.freedesktop.org/docs/html/ch05.html


That does not sound different from what Windows does. By default all programs running under the same user can access all windows of other applications (except UAC-elevated ones). It's a relic from when OLE and the clipboard in Windows 3 were (very simplified) just a pointer to RAM.


The only reason it is worse with X11 is that it is an inherently networked protocol, so the same statements also apply to any remote connections you might allow to your X server. It also makes it somewhat easier to capture Xkb / XInput events purely through the API, without any need for elevation or excessive polling of the devices ("it just works").

That includes any systems you might have SSHed into with X forwarding enabled, as it automatically extends the trust there. Yes, your SSH client might try to enable the X SECURITY extension (which clamps access to just the current window), but it is disabled by default, or bypassed anyway by users, as that extension is known to crash quite a few programs.

Both are a product of their time when the prevailing approach was to trust the programs you run.
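For reference, OpenSSH exposes exactly this choice: `-X` requests untrusted forwarding (clamped by the X SECURITY extension), while `-Y` requests trusted forwarding. A client config sketch to force the untrusted mode by default (note that some distributions ship `ForwardX11Trusted yes`, which makes `-X` behave like `-Y`):

```
# ~/.ssh/config
Host *
    ForwardX11 yes
    ForwardX11Trusted no   # clamp forwarded clients via the X SECURITY extension
```

As noted above, expect some X11 programs to crash under the SECURITY extension, which is exactly why so many setups turn trusted forwarding back on.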


Not easily, no. Doing so is a big kludge, rather than being part of a system actually designed to offer useful security measures.


It’s inaccurate to say “Apple selling user data to US government”. That’s not what the article claims (the word “sell” doesn’t even appear in the text), and there are in fact many consumer data brokers who really do sell data to law enforcement.


I don't think it's inaccurate; the IC pays the data providers (presumably for implementation/overhead) for receiving the FAA702 (PRISM/FISA) data.

The data is picked by the US government, and no warrant is required. Apple provided data on 30,000+ users to the US government without a warrant in 2019, per their own transparency report.

If they received money for the program, they are indeed "selling user data to [the] US government".


A reimbursement for effort/overhead is not the same as selling for profit. Again, there *really are companies who sell consumer data to law enforcement for profit*, so it's important to use the correct language and make the appropriate distinction. Do I like that Apple does that? No. Do I think the actual policy conversation is best served by accuracy in language? Yes.


You're just playing word games. Apple gave user information to the US government in return for money. That's selling.

Nitpicking about whether they made a profit has no bearing on the statement “Apple selling user data to US government”.


Why do you call it a deliberate backdoor when the Apple developers (see elsewhere in this thread) have said this was a bug?


(That tweet has been deleted by the Apple developer).

Before macOS Big Sur / Catalina, many of these application firewalls - Lulu, Little Snitch, HandsOff, TripMode, RadioSilence etc. - all used their own kernel extensions to effectively monitor and block any processes from connecting to the internet.

Firewalls are system security software, and naturally Apple would prefer to oversee this and have it built into their OS. Apple also wants to discourage kernel extensions on macOS (they have some good reasons - a poorly designed kernel extension can make the OS unstable; but mostly it's about feature control for Apple).

So they informed all such firewall app developers that their individual kernel extensions will no longer be allowed, and Apple had instead created an OS API specifically for their use case. (They described the features it would have and invited them to give their feedback). And so all application firewalls were forced to update their apps and use this OS API.

But this API had an undisclosed, in-built list of Apple approved applications that no firewall was allowed to block. Someone created that list. Someone added that list in the system, and coded the API to specifically give them special privileges to bypass any application firewalls.

Bugs are accidental. Backdoors like these are intentional.

(You can however take exception to the usage of "backdoor" here - perhaps from Apple's perspective it was a good design decision, as many of these services go wacko, and sometimes even freeze your system, when they aren't allowed to do what they are coded to do, like performing some operation over the internet. I've often seen CPU spikes and slowdowns when you block some of these services.)
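The list wasn't even hidden particularly well: it shipped as a plain array in the NetworkExtension framework's Info.plist. A sketch of reading such a key with Python's plistlib - the plist contents and bundle IDs below are illustrative, not the exact shipped list:

```python
import plistlib

# Illustrative plist in the shape of the shipped one; the real file lived
# inside /System/Library/Frameworks/NetworkExtension.framework and its
# exact contents differed.
PLIST = b"""<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
 "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>ContentFilterExclusionList</key>
    <array>
        <string>com.apple.appstored</string>
        <string>com.apple.Maps</string>
    </array>
</dict>
</plist>
"""

info = plistlib.loads(PLIST)
excluded = info["ContentFilterExclusionList"]
print(excluded)  # ['com.apple.appstored', 'com.apple.Maps']
```

Anyone poking at the framework with a few lines like this could enumerate exactly which Apple processes no third-party firewall was allowed to touch.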


I agree, this is 90% likely malicious. The non-malicious usage I can imagine is that for debugging the firewall you don't want to lock your other services out of it in case something goes wrong (or as you said, for reliability issues)


My fear is that Apple will now make the design decision to make these services more unreliable if blocked. Like I experienced, others too have noticed similar behaviour:

> It’s worth noting that Big Sur and its predecessors are built to assume that they can talk to Apple at any time, but when we don’t allow it, a few unwanted side effects pop up. For example, the keyboard sometimes takes longer to wake up from sleep mode. Or, in certain situations, the Mullvad app takes longer to detect that the computer is online.

- https://mullvad.net/en/blog/2020/11/16/big-no-big-sur-mullva...

(Of course, as a developer, I can sympathize with the Apple developers - when you design a product to use the internet, you don't really think hard about all the use cases where internet access is deliberately denied.)


> when you design a product to use the internet, you don't really think hard about all kinds of use cases where internet access is deliberately denied

Why wouldn't you, though? That seems like a pretty big oversight. Lazy at best, negligent at worst. Not everyone in the world has constant internet access, and it seems ridiculous to design an _operating system_ with that assumption. I like to take an eBook reader to the park, for example. Prior to owning that device, I took a laptop when learning a new programming language. If that laptop had been running BigSur, I'd see all these same issues based on Apple's un-thought-out "design decisions".


It's not like these things fail to work without internet access. Obviously, the vast majority of people using the OS are going to have internet access so assuming actions based on internet access and then failing if you can't connect is accounted for, regardless of the reason (weak connection, no connection, etc). I don't see how anyone can call this behavior an oversight when it works exactly as intended.


You don't create a whitelist system literally called ContentFilterExclusionList by accident.


A backdoor has some malicious connotations to me. Having security profiles controlled by a list seems like a normal thing.

Without spending a lot of time on it: sshd has allow lists (AllowGroups) and Match directives.

Sudo also has per-group config.

As a user on any operating system, what you can and can’t do is controlled by a list.

Perhaps the issue is that the list isn’t user-controlled?

Perhaps the issue falls into the “I own and control my device” vs “we sell you a safe, foot-gun-safe, easy-to-use device (in their opinion)” debate.


If they claim "here's the API you can use to control network access" but can then let arbitrary apps work around it, that's the definition of backdoor access to the device.


I’m not supporting their approach. But why can’t this be thought of as a control API with a built-in allow list, where the list is hidden through obscurity?


A "back door" is any kind of mechanism that was added by the vendor to circumvent security mechanisms.

Back doors are typically not disclosed to the user, and can't be turned off. So for example an automatic software update mechanism isn't a back door, as the user is aware of it and can typically turn it off if they are concerned about security.

An undisclosed mechanism that allows Apple apps to circumvent firewalls does very much fit the description of a back door.

Intent doesn't matter with regards to back doors. Most back doors are not made with malicious intent, or at least the vendors usually claim that they only had good intentions for the back door. (Eg. see the recent reports where a router manufacturer had a secret password that they claimed was only used for software updates)

The danger about back doors is that malicious software can use the back doors to circumvent the security measures, just like Patrick Wardle demonstrated that it was possible to use Apple's content filter exclusion to circumvent firewalls.


How can the creation of a 'ContentFilterExclusionList' be a bug? Somebody hit the wrong keys by mistake and nobody noticed during the pull request review?


Entirely possible if an infinite number of monkeys type along on an infinite number of keyboards.

https://en.wikipedia.org/wiki/Infinite_monkey_theorem


Incidentally, probably an apt depiction of Apple's development process :P


> Why do you call it a deliberate backdoor when the Apple developers (see elsewhere in this thread) have said this was a bug?

They're lying.


Why would they lie and not just shut the hell up? It makes no sense.


Sometimes people talk, even when they should shut the hell up. (The Apple developer has since deleted his tweet).


He deleted his tweet because everyone was piling on him with nasty replies.


The same thing every year, actually.


Perhaps they meant it was a bug in management, because the code was very much intentional.


"Hanlon's razor" is a gift to the malicious. I've stopped believing in it entirely.


Agreed. Nearly every kid discovers the 'it was an accident' lie/excuse. And I don't think adults ever forget it.


> have said this was a bug

They can say whatever they like; it's another story that they've got no credibility. It was quite obviously a deliberate action (just look at the naming itself).


It's always a bug. Fool me once....


Your first link was debunked the day it was printed. https://mobile.twitter.com/search?q=Greenwald%20direct%20acc...


I will not assume anything. When they roll out an official update and people do proper testing (Wireshark etc.), maybe I will update. But for me the romantic days of "trust us, we care for users' privacy" are over. Period. I read Apple's actions, not intents. And reading those actions, for me, gives: "macOS is a valuable part of the vertical integration map of Apple services". When I buy something, I own something. If the idea of ownership is problematic for Apple, they must state it openly and rename all the hardware store buttons from "Buy" to "Rent".


> I am glad that the public backlash

Due in no small part to HN. If it weren't for this community, the word would not have spread as far as it did to pressure them to make this change.

Thanks to all of you and everyone at HN who works hard to keep this place as awesome as it is.


Keep in mind, pf still worked as intended - with the caveat that macOS doesn't behave properly if all traffic is dropped: it imposes 30-second timeouts before publishing a default route to the routing table, and throws a hissy fit in other ways. But, at least on macOS, there has always been a way to block all traffic.
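For anyone who wants to try it, a minimal deny-by-default pf sketch (the file path and subnet are just examples - adjust to taste, and load it with `sudo pfctl -e -f /etc/pf.conf`):

```
# /etc/pf.conf - block all outbound traffic except to the local subnet
set skip on lo0                # leave loopback alone
block drop out all             # default deny for outbound traffic
pass out proto { tcp, udp } from any to 192.168.1.0/24 keep state
```

Unlike the application firewalls built on the Network Extension framework, pf filters at the packet level, which is why the exclusion list never applied to it.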


While I agree that one should remain suspicious and be vocal about privacy violations and security issues, I find your attitude of continuing to attack Apple inappropriate.

Apple's competitors Google and Microsoft, which control the great majority of OS installs both for mobile and desktop, don't even pretend to care about privacy. I have collected over the years reports about dozens of underhanded tactics they use to manipulate users into sharing data, when they're not downright forcing them to do it.

Currently Google is showing in the EU a modal pop-up asking users to accept to be tracked or fuck off to a labyrinth of "See more" and "Other options" which are obviously violating the GDPR. They got fined five times already for GDPR violations.

Microsoft have told their non-enterprise customers to bugger off and learn to live with telemetry. They're actively working around people blocking telemetry and they're being investigated for these practices.

How about a thank you that at least someone at Apple listens to their customers?


> I find your attitude of continuing to attack Apple inappropriate.

I do so because I am an Apple user - this is being typed on a Mac mini. I also own other Apple hardware.

I also successfully advocated within my family & friends for switching from Android to Apple (I am the IT guy in my circle). I did so because I would like to believe the commitment to privacy they have publicly stated. (Tim Cook being gay adds to that trust, because he understands that privacy is not just about hiding secrets but about protecting ourselves from political persecution by those who do not like some part of our identity - whether regional, gender, political, cultural, religious, sexual etc.)

It doesn't mean I trust them blindly or completely or will allow them to screw my customer rights (like right to repair, and OWN my device). Would you?


Your CV or Apple credentials are not relevant. If one considers privacy important, as you seem to, then they should engage with companies which at least try to behave in a privacy-friendly way instead of typing backdoor in all caps several times and painting those companies in a bad light while not recognizing any of their contributions to improving the privacy of their customers.

And here are those contributions spelled out for you: Apple is the only company preventing Google from having the private information of all smartphone users on the planet on their servers.


> Your CV or Apple credentials are not relevant.

Maybe not to you. It should be for Apple, if they actually care about their users / customers.

> Apple is the only company preventing Google from having the private information of all smartphone users ...

No, it isn't. There are other worthy contenders to both iOS and Android, like Sailfish OS. (In fact, using a Sailfish OS mobile phone actually protects my data from both Apple and Google - it's a double win for me.)

And unlike you, my idea of privacy isn't trusting one corporation over another, but ensuring that no corporation has access to my personal data in the first place - I absolutely do not want Apple to have access to any of my data. (And whether you like it or not, until Apple does precisely that, I will keep criticising it.)


As a queer person myself, I think your trust in the "gay experience" of rich guys is dangerous.

We just had the Szajer scandal, where a powerful, outspoken homophobe was caught in a gay orgy.

The gay experience (shame, rejection and discrimination) also comes with an increased chance of "co-morbid" personality defects, which may be more pronounced with exceptional wealth and status.

Queer solidarity by gay men is not a given anymore.


The fact that their competitors are as bad or worse in this regard does not make Apple saints - and this has all the hallmarks of an intentional addition to position Apple apps differently from the others, which is a classic Apple move.

Compromising security and privacy clearly was deemed worth it by someone at Apple before the stink was raised.


This kind of black and white thinking is very impractical and self-defeating, except maybe for RMS, to remind us what we should strive for.

For most of us, the real world decision is to either work with a company which is actively working on undermining privacy or with one which is trying to improve things.


Did I say I would stop working with them?

In the real world, Apple still gets to have their business and people still work with them, but enough stink is raised, both externally and privately, to get them to change their decision. Which they have evidently done here, so it's working as intended. Reputation damage is a thing if it involves conversations with other F100 companies.

This particular debacle is one of the reasons why $CurrentCorpo I am occasionally working with decided to skip Big Sur until much later in the lifecycle - not the only one, though.


Very nice - this was one of my main contentions with Big Sur.

Will upgrade once 11.2 is stable :)


My other major contention is that Display Stream Compression has been broken from the 11.0 betas through 11.1.

I have 2 27" 4K HDR 144Hz monitors.

On Catalina I can drive them at full spec on my 2019 Mac Pro, with a W5700X.

On Big Sur, my options are HDR @ 60Hz, or SDR @ 95Hz.

Not to mention the monitors also got a firmware update to unlock 165Hz, which neither Catalina or Big Sur recognize.


Which monitors are you using?


Two Asus 27GN950-B's (https://www.lg.com/us/monitors/lg-27gn950-b-gaming-monitor) connected by USB-C to DP.

I have heard about the issue on a few other forums with other monitors. The only one I haven't heard about in either direction is the ProDisplay XDR.


Err, LG, not Asus...


I'm still on Mojave to use 32-bit apps. Any way to use 32-bit in Catalina or later?


No, and there never will be. It's not the execution of 32bit code that's the problem (you can trivially re-enable that with a kernel flag, if you want), but rather the removal of every 32bit library from the OS. Without those libraries, apps won't run.

You can sometimes get away with copying a handful of individual frameworks from older versions of macOS to newer ones, (I recently got Mountain Lion's QuickTime to work on Mavericks this way), but for so many libraries, many of which operate at a low level, it's just not going to happen!

Mojave is still a perfectly fine OS for a while longer. And you can of course use virtual machines, although personally I'd probably opt for dual booting if it really came to that.
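If you're not sure whether an app you depend on is 32-bit, `file /path/to/binary` will tell you; the same check can be sketched in Python by peeking at the Mach-O magic number. A minimal sketch that only distinguishes thin 32-bit, thin 64-bit, and universal ("fat") binaries, ignoring byte-swapped variants:

```python
import struct

MH_MAGIC    = 0xFEEDFACE  # thin 32-bit Mach-O
MH_MAGIC_64 = 0xFEEDFACF  # thin 64-bit Mach-O
FAT_MAGIC   = 0xCAFEBABE  # universal ("fat") binary, big-endian header

def macho_kind(header: bytes) -> str:
    """Classify a Mach-O file from its first four bytes."""
    if len(header) < 4:
        return "unknown"
    le = struct.unpack("<I", header[:4])[0]  # thin headers are little-endian on x86
    be = struct.unpack(">I", header[:4])[0]  # fat headers are big-endian
    if le == MH_MAGIC:
        return "32-bit"
    if le == MH_MAGIC_64:
        return "64-bit"
    if be == FAT_MAGIC:
        return "universal"
    return "unknown"

# Usage (on a Mac): macho_kind(open("/usr/bin/true", "rb").read(4))
```

A "universal" result means you'd still need to read the fat arch table to see whether an i386 slice is present, but anything reporting "32-bit" will definitely not launch on Catalina or later.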



Yes, that is interesting! I wonder if there was anything extra they did, or what apps they actually ran.

I did try this, when Catalina first came out. But maybe I gave up too quickly (It was a "there is no way this will actually work" type of thing), or maybe I tried the wrong app...


For my 32-bit binaries, the easiest way to use them is to run the Windows version, because that OS has better backwards compatibility.

It's possible to get a Mojave VM up and running but it was nontrivial when I gave it a shot


If I'm not mistaken, WINE is now capable of running 32-bit Windows binaries on 64-bit-only macOS as well, so that may also be an option particularly for more simple apps.


Even 64-bit WINE can't run 32-bit apps on Catalina. Their website specifically says they support up to macOS 10.14. Apparently 64-bit WINE can run 64-bit Windows apps on Catalina, though.


I don't think Wine can, but Crossover definitely can.


The CrossOver fork of Wine can.


I’m using a Mojave VM with VMware Fusion on my laptop (which runs Catalina) for the rare occasions when I need to run 32-bit apps. Install & setup was easy, without any problems. VMware Fusion is a paid app, but VirtualBox should be able to do the same for free.


The latest VMware Fusion has a free license for personal use.


Why do you want to upgrade though? What's Mojave lacking that Catalina has?


The latest Xcode that supports deploying to iOS 14 devices, unfortunately.


Ah ok. Yeah, that makes sense.


Same. Still on Catalina due to this.


An Apple employee tweeted after the news broke with 11.0 that this was a bug, so I'm not surprised, but I'm happy to see it fixed!


Adding an entire feature (ContentFilterExclusionList) without justification is not a bug.

Calling it a bug is misleading.


The feature could be a workaround for a bug. That is, one of their internal services didn't handle being blocked well, and instead of fixing that service they just added that feature as a temporary workaround to make sure the new firewall functionality didn't break any important/core features of MacOS

I'm not saying this is what happened, but without actually knowing what happened we can't assume the intention was to exclude everything in this list forever either.


A bug? Not a feature?

Glad they changed their mind anyway.


Link?


There's no link because the tweet was deleted soon afterward. The tweet seems to have been very ill-advised.

I don't know how anyone can describe this as a "bug", because as the linked article describes, there's an explicit "ContentFilterExclusionList" in the Info.plist file with a list of the specific Apple services excluded. That's not by accident, it's by design.


The bug was in the design (or in the decision leading to the design).


I don't think I'd call it a bug. It's a poor decision. I can easily believe that it wasn't a decision made with malicious intent; the culture at Apple seems (at least from the outside, judging by results) to encourage an "it's okay to give our own software special exceptions" mentality.


You could call it an entitled opinion.


I see what you did there. (+1.)


A bug is unintentional, through error or coincidence of unforeseen circumstances.

Coding a feature and providing a configuration file thereto is not a bug.


https://www.zdnet.com/article/apple-removes-feature-that-all...

"The bugs were related to Apple deprecating network kernel extensions (NKEs) in Big Sur and introducing a new system called Network Extension Framework, and Apple engineers not having enough time to iron out all the bugs before the Big Sur launch last fall."


Does anyone know how this impacts Little Snitch?


It impacts all application firewalls on macOS - Lulu, Little Snitch, HandsOff, TripMode, RadioSilence etc - equally. Meaning, they can all now block the apps that Apple had exempted, in future versions of macOS Big Sur.


It lets Little Snitch 5 block Apple processes without editing a system file.


Which processes should I block? I prefer to share less.


This is really responsible of them!

Before, I was trying to figure out how Macs would ever be used anywhere near something classified or secret for a company.


It would be responsible of them if they had done it in a situation where they weren't pressured into the decision by media outlets.


Apple couldn't give two fucks about media pressure. Their longstanding history of giving in to pressure shows they respond almost exclusively to their users, either:

- pro/dev users who act as their trend setters (see recent backpedals on keyboards and Mac Pro form factor)

- people hacking stuff where it’s popular enough they want more influence on the UX (bootcamp)


Just search, for example, "apple backpedals", and it's pretty clear that Apple is not immune to the effects of publicity (like all companies).


The iPhone Headphone Jack has entered the chat.


Followed by the EarPods


I cited a pretty huge example (bootcamp) where there was no media pressure at all. There was definitely tech pressure, but the media had no dog in the fight.


Finding an example of Apple changing something for a reason other than media coverage is hardly evidence of media coverage never swaying Apple.


Name an instance where Apple has reversed a decision that media outlets haven't reported on.


It's possible that they saw it as a purely positive move that would improve security on macOS, and then after seeing everyone's response they reevaluated its importance.


Or they started getting security bugs from the fuzzer teams. (It's a hard problem to sanitize so many executables, and the people who use VPNs are probably high-value customers, like business people WFH.)


Having to be told by outsiders that this backdoor could be abused by malware is pretty embarrassing. It's hard to imagine Apple's engineers weren't aware of that.


> positive move that would improve security

That they purposefully added in the first place to let only Apple apps bypass third-party firewalls?


The number of people reading about firewalls in the latest OS release surely does not qualify as “media pressure” on any kind of scale.


Don't underestimate the power that a few angry nerds can have when it comes to the overall narrative about a technology.

Every family has that one computer geek who everyone asks for advice, which ultimately influences purchase decisions.


When it comes to media pressure, Apple is like the honey-badger—they don’t care.


They want to make it seem like they don’t care…but they very much do.


Nah, it's not media outlets Apple is scared of, it's the front page of Hacker News...


As much as the community wants to think they are evil and want to purposefully violate trust, most often the simpler explanation works very well: it's an oversight or a resourcing issue.


Tweet[1] by Apple developer Russ Bishop:

"Some system processes bypassing NetworkExtensions in macOS is a bug, in case you were wondering."

Reply[2] by David Dudok de Wit, developer of TripMode:

"Glad to see it's being reconsidered as a bug, because Apple told us it 'behaves as designed' (FB7740671 + FB7665551). And why is there an exclusion list in the first place? I'd love to know more and see this documented."

Reply[3] by Russ:

"Can't get too specific but I promise it's really mundane/boring software development stuff... like two features that interact in an unintended way kind of boring."

Comment[4] on Russ's original tweet by Sérgio Silva:

"Yes. A bug with its own configuration file /System/Library/Frameworks/NetworkExtension.framework/Resources/Info.plist ContentFilterExclusionList"

[1] https://web.archive.org/web/20201118140434/https://twitter.c...

[2] https://twitter.com/david_ddw/status/1329017113709842437

[3] https://twitter.com/xenadu02/status/1329030446269620224

[4] https://twitter.com/sergiojdsilva/status/1328991480657162242


The tweet by the Apple developer has been deleted - I hope he didn't lose his job, and at worst only earned a reprimand. (Nobody with experience would call it a bug when it was clearly a deliberate design decision.)


Twitter indicates that he is still employed.


Inexperience is less concerning than trying to publicly whitewash the misdeeds of a corporation. I'm not sure you can even chalk this up to inexperience; my charitable guess is he probably didn't look at the code or config, assumed the company he likes would only do something like this by accident, went to twitter to say as much, then got a little carried away in the heat of it.


As much as I'd like to believe it was just an oversight, how do you accidentally have your services bypass the firewall? That feels like it would have to be a deliberate choice under the assumption that "our apps are signed by us, and the OS verifies that, so all traffic through these apps should be OK, right?" I don't mean this snarkily; it's a genuine question. I don't know how OSes work.


My guess is that this started small (“we shouldn’t let firewalls block security updates or Find My Mac”), and once that mechanism was there, people kept adding other things to it with support in mind (security filters are notorious for people blocking things without understanding the implications and then filing bug reports), but without thinking of the users who would be upset about not being able to block those services.


Perhaps they wanted a bypass as a system recovery option, or a preference, not on by default.


I don't see why half the shit in that list would be needed during a recovery process, let alone need to bypass a VPN as well. If Apple wants to claim this as their defense, let them. Until then, I see little value in dreaming up excuses they aren't willing to make for themselves.


They already have a "system recovery mode" where no such firewalls run, and only select system tools are available. There's also "safe mode".


That could explain why there's a config file while it's still just a simple bug at the same time.


> Before, I was trying to figure out how mac's would ever be used anywhere near something classified or secret for a company.

Relying on a personal firewall on the device itself seems ill-fated. Maybe it could be considered an additional layer of security, but I've yet to work at a place where a personal firewall is part of the security concept, no matter which OS. It's either firewalls at the gateway, maybe additional ones for certain departments, or mandatory proxy servers if you're stuck in the 90s.


An application firewall on the device serves a different purpose to that running off-device, namely the ability to filter traffic based on the origin (or destination) application.

Clearly if your kernel or userspace are compromised that's not much use, and that's where external controls kick in.

You can't determine (absent some custom network and protocols) which piece of software was responsible for a given packet once you leave the device though, so that's the (current) best place to do that - if you want to impose policies controlling the hosts and protocols an application can use, you will want to implement this on-device, then firewall for the superset of all of those at the network level.

In essence it's about raising the number of independent failures required to result in a compromise. If you imagine the application firewall on the device has its policies managed rather than selected by the user, it starts to make more sense.


To fight apps phoning home, I agree. But even the tweet linked in OP refers to a tweet that shows how to abuse the now removed whitelisting by piggybacking your traffic through one of those whitelisted apps. On a locked down system like Android or iOS this isn't that trivial, but in a classic desktop OS use case it's easy to abuse another app to exfiltrate data.

> In essence it's about raising the number of independent failures required to result in a compromise.

Sure, it doesn't hurt, minus maybe the case that a vulnerability in that firewall itself is used.

> If you imagine the application firewall on the device has its policies managed rather than selected by the user, it starts to make more sense.

That's a requirement I guess. You don't want accountants and HR people handling popups by a firewall app. :-)


I mostly agree with your assessment of their usefulness, but any organisation that handles credit cards likely has to use personal firewalls. Requirement 1.4 of the DSS:

> Install personal firewall software or equivalent functionality on any portable computing devices (including company and/or employee-owned) that connect to the Internet when outside the network (for example, laptops used by employees), and which are also used to access the CDE.

The most common place this would come up would be with SREs/Devs that have access to prod (and thus the "Cardholder Data Environment") from their laptops. It can also apply to business users that have access to certain admin dashboards in some organisations.


Defense in depth suggests using controls to protect each individual resource. This includes individual apps, individual servers and individual user devices. A firewall is a single control, but never the only control. This from experience working with... government things.


The firewall is just the tip of the iceberg.

Microsoft provides the sources and special builds for sensitive environments.

They work with governments worldwide and open their source code to get certified.

As far as I understand it never was Apple's priority.


[flagged]


I wasn't aware that VS Code was "proprietary malware"? All the code for that is right on GitHub.[0] If you don't trust the prebuilt ones, you're free to build it from the source yourself.

Want something else? Their GitHub repo list has almost 4000 repositories totaling 130 pages![1]

Just because the OS itself isn't open source[a] doesn't mean that Microsoft doesn't open source a whole crap ton of stuff. And sure, Windows' telemetry can easily be construed as "malware", but Windows is not the entirety of Microsoft.

[a]: And that's a lie too (sort of). It is open source.[2] (ahem, source available (sorry, FSF)) You just need a valid reason to look at it besides "I want to". Sidenote: I personally would hope that Windows gets open sourced, but I'm not holding my breath.

[0]: https://github.com/Microsoft/vscode

[1]: https://github.com/microsoft

[2]: https://www.microsoft.com/en-us/sharedsource/

P.S. Can we not use the "M$" moniker? It's almost childish. Just like "Crapple", "Microshit", etc, It serves no purpose.


Of course VSCode is proprietary malware - unless, as you suggest, you build it yourself and get rid of that proprietary telemetry malware (https://github.com/VSCodium/vscodium)

But please don't suggest that we should praise Microsoft because they're decent enough to almost give us a somewhat convenient-if-you're-a-dev way to avoid being tracked?


Calling things “proprietary telemetry malware” is why nobody takes this seriously.


One clarification, VS Code does have closed-source components, like the python extension and remoting.


> [a]: And that's a lie too. It is open source.[2] You just need a valid reason to look at it besides "I want to"

Google Search is open source as well, you just need a valid reason for them to hire you for their search team! /s

I agree with your overall comment in that Microsoft is not bad, but calling Windows 'open source' when the only ways to access it involve a long application process and a lot of money is quite a stretch. Nearly all source code is open under these definitions, as you can always pay for access, get hired, hack their servers, or straight-out buy the company. That's not what people usually consider open.


Source available means available to all. It's just proprietary if they can decide you can't see it.


> I wasn't aware that VS Code was "proprietary malware"?

I'm sorry to see you are discovering this. VS Code is under this proprietary license [1]. As for the malware part:

> Data Collection. The software may collect information about you and your use of the software, and send that to Microsoft. Microsoft may use this information to provide services and improve our products and services. You may opt-out of many of these scenarios, but not all, as described in the product documentation located at https://code.visualstudio.com/docs/supporting/faq#_how-to-di.... There may also be some features in the software that may enable you and Microsoft to collect data from users of your applications.

(emphasis mine).

Indeed you can rebuild your copy or get Codium without the telemetry under the MIT license, and the software is really good, but it is a crippled version and does not make VS Code free software.
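For what it's worth, the documented opt-out at the time (early 2021) went through user settings; a sketch, noting that these key names have changed in later VS Code versions:

```json
// settings.json (User settings)
{
    // Disable usage telemetry (later versions replace this with "telemetry.telemetryLevel")
    "telemetry.enableTelemetry": false,
    // Disable crash report uploads
    "telemetry.enableCrashReporter": false
}
```

Per the license text quoted above, this covers "many of these scenarios, but not all".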

(edit: it's not pure / pointless theory! I'm sure there is an agenda behind VS Code not being really free. I would not be surprised if more and more "convenient" or important features were released as proprietary, and they also control the extension center (the marketplace), which is one of the main reasons Theia [5] exists. Beware of locking yourself into this ecosystem too much.)

> Just because the OS itself isn't open source[a] doesn't mean that Microsoft doesn't open source a whole crap ton of stuff

Microsoft is huge. The ratio between their proprietary code and the code that they open source is probably tiny. More importantly, they only release developer-related things, never things that target end users, except their telemetry-riddled Calculator [3].

> It is open source.[2] (ahem source available (sorry, FSF))

Hum. The reference for Open Source is the Open Source Initiative, not the FSF [2]. The FSF defines Free Software. These are almost equivalent things but still have two separate definitions.

Microsoft is not an open source software company. They just happen to be a huge software company, and every huge software company open sources a lot of code when it is strategic. I'm not judging, it's a fact. I'm happy to use some of their quality open source software targeted at the developer community, like TypeScript, but an open source company would release their important code and base their business model around it.

And, no, we can't even really call Windows source-available software. They share its source code with big entities so they can audit it, probably under NDA, and not everybody can access it. Actually you can find leaked code, but it is just that: leaked code. Mapbox-gl-js is source-available software which is not open source (anymore) [4].

[1] https://code.visualstudio.com/License/

[2] https://opensource.org/osd

[3] https://github.com/microsoft/calculator/

[4] https://github.com/mapbox/mapbox-gl-js

[5] https://theia-ide.org/


> Indeed you can rebuild your copy or get Codium without the telemetry under the MIT license, and the software is really good, but it is a crippled version and does not make VS Code free software.

What can't it do that regular VSCode can?


Talk to many language server plugins released by MS that contain checks to make sure they only talk to regular VS Code.


As ianlevesque said [1], the python extension and remoting. Probably other things too.

[1] - https://news.ycombinator.com/item?id=25772409


I'm not in the business of commending or admonishing. The question was whether Apple could/should be used in government classified/sensitive business environments.


I find it amazing that recently, on a presumably ‘hacker’ forum, opinions showing a ‘free software’ perspective get a bullying response in the form of simply downvoting and silencing the person. I urge the admins to stop this practice. I wish to hear such points of view and consider things from such a perspective.

It is very logical to assume that once you have no direct access to the sources of software, that software could do the things that malware does. Yet this obviously logical reminder gets downvoted as if it were irrational or off topic.

It is on topic, it is rational, it is a good reminder, and we see Microsoft and Apple consistently disrespect a person's right to control their own _Personal_ computer (PC). On recent M1 you can’t even have own OS without Apple permission, which makes it a useless brick for me. Do some people still understand what ‘personal’ means?


> On recent M1 you can’t even have own OS without Apple permission

That's not true. https://asahilinux.org/about/ , "Does Apple allow this? Don’t you need a jailbreak?"


It would be good news if it's finished (if it ever is finished at all), but that is not the case so far.

Also, from the same source: “Will this make Apple Silicon Macs a fully open platform? No, Apple still controls the boot process ...”

Without open firmware and a bootloader ... well, is it really your own OS?


There are very few if not zero modern computers with completely open source firmware and boot loader. Including computers with open source as their primary selling point.


Some people find it psychologically easier to sit in the shit when they discover they are not alone. Some even go further and try to justify the shit when there is a lot of it. I am not one of them, so for me shit is still shit and I prefer to see it for what it is.

Besides, can two wrongs make something right ?


I am not suggesting it is perfect, but it may be a little hyperbolic to say any OS you load on a Mac is not its 'own OS' because it relies on closed bootloaders/firmware. If that is the case, then what does every other computer run? Is Linus's own operating system not his own because he loads it on an Intel or AMD processor?


We will know the answer to that question once we thoroughly study all the versions of microcode those processors have/had. How much did the Alto have? 128K? And some big part of it was for display memory? And it was a full OS. Just imagine what you can have in firmware/bootloader nowadays. Is it that hard to imagine for everybody?


But, again, as of today the situation on Intel is no different.

That doesn’t mean it’s okay, but, well, I guess I would be interested to know what computer you’re using. :)


And Apple released a build today which provides the kmutil options for it, too :)


Great, and how does that help to boot your own OS?


You make a boot object and use that to point your Mac at it.


If that is your idea of your own OS, I am happy for you.


You're changing the subject. The discussion is about your implication that one "can’t even have own OS without Apple permission", and that is simply not true.


Am I? Then what is ‘closing the bootloader’ if not their form of giving permission? If I understood correctly, they would not “help” with specs, and we do not know what the bootloader is capable of. Why, if their goal is to keep it open? Let’s look at the tendency.

They have had complete control over iOS devices since the beginning of iOS, including apps that were not allowed at all. Since then, it seems, they have been trying to bring this into the ‘personal computer’ domain.

They started slowly but steadily putting more and more control over Mac apps,

then they started limiting root access,

now the bootloader ...

So where are they going? As I see it, the tendency is to close macOS completely, just like iOS, unless they face strong resistance; then they go for an ‘as much as they can get’ or ‘as much as they can get away with’ strategy, feeding some ‘calming pills’ along the way that some are perhaps happy to swallow. They have changed things from “of course it is not our business what you boot” to “of course, we may allow unsigned kernels ... for now” (if it’s true at all; we still need to see this in reality). Yes, it is not as strong as “we will not allow it at all”, but for sure it’s their permission now. You may say, oh, it’s just as before. No, it’s not: they have taken some of the existing freedom and possibly intend to take more next year. And who knows what their bootloader does? What if it were controlled remotely? What if, to load some “unsigned kernels” (which are just called kernels, by the way), they would still require some online check? Who knows? What of the above is not true or incorrect?


Using Micro$oft at any point in your discussion is a great way to get people to not hear what you have to say.


It's great to see community feedback actually works, I really hope people can talk about having the option to disable .DS_Store now


Nice! I can finally upgrade.


Nice to see that Apple isn't so big that it thinks it doesn't have to listen to reasonable/rational public feedback that it is making poor decisions.

Now, if they could empower lower levels to make these decisions before the issues blow up in the wider world context, all the better.


Having worked at Apple and other big companies it's almost always Engineers and PMs making these decisions. It's not like Tim Cook or Craig Federighi is running around demanding people add Apple apps to a firewall exclusion list. They have much bigger things to worry about.

It's just that as an engineer you are often in a bubble and can't foresee every implication of your decision. That's why Apple has the Developer and Public Beta releases for iOS/OSX so that external users can provide feedback. And on this occasion just like on many other they will take action if necessary.


> That's why Apple has the Developer and Public Beta releases for iOS/OSX so that external users can provide feedback. And on this occasion just like on many other they will take action if necessary.

Except that Apple did not take action. Firewall developers such as Little Snitch did become aware of the issue during the beta releases and gave feedback to Apple, which Apple ignored and shipped it anyway to the public. https://blog.obdev.at/a-hole-in-the-wall/


> Except that Apple did not take action.

Look, I don't mean to criticise. But how do you know that Apple didn't start working on a fix when they were told about it?

Apple doesn't exactly say when they start working on a fix for something, or else we would have known earlier.


It's not up to customers to assume the good intentions of a large organization. Apple has internal decisions and processes that result in them not communicating in a timely manner. Whatever fallout from that is on them.


There was a ContentFilterExclusionList key in the /System/Library/Frameworks/NetworkExtension.framework/Versions/Current/Resources/Info.plist file. macOS 11.2 beta 2 removed the ContentFilterExclusionList. Does that take 6 months?
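For the curious, that key is just an array of strings in the framework's Info.plist. A minimal sketch of how one might inspect such a key with Python's plistlib (the plist body and the "com.apple.example-service" identifier below are made-up stand-ins for illustration, not Apple's actual list):

```python
import plistlib

# Stand-in plist body; on pre-11.2-beta-2 Big Sur the real file was
# /System/Library/Frameworks/NetworkExtension.framework/Versions/Current/Resources/Info.plist
sample = b"""<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
    <key>ContentFilterExclusionList</key>
    <array>
        <string>com.apple.example-service</string>
    </array>
</dict>
</plist>"""

info = plistlib.loads(sample)

# A missing key (as in 11.2 beta 2) yields an empty list.
exclusions = info.get("ContentFilterExclusionList", [])
print(exclusions)  # -> ['com.apple.example-service']
```

On a real pre-11.2-beta-2 system you would read the framework path above instead of the inline sample.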


Noticing the issue, discussing it, setting meetings to agree to revert, and handling all the other higher-priority stuff before an imminent GM release and the most pressing x.1 update release can take more than 6 months.

Not to mention that "removing the ContentFilterExclusionList" is a hacky fix suggestion. That doesn't mean it's the actual holistic fix, or that there weren't other under-the-hood changes for this issue.


> Not to mention that "removing the ContentFilterExclusionList" is a hacky fix suggestion.

You've got it backwards. The ContentFilterExclusionList was itself a hack. It never should have existed.

Some people are handwaving about a mysterious vague problem that calls for a ContentFilterExclusionList, but Little Snitch has existed for many many years on the Mac and has been able to block everything, including Apple services. There was no problem with that until Apple decided to exempt itself from getting blocked.


>You've got it backwards. The ContentFilterExclusionList was itself a hack. It never should have existed.

It might or might not be a hack, but that's orthogonal to the functionality or whether it uses a ContentFilterExclusionList.

The fact that it wasn't there before, or that it is a misguided feature idea, doesn't mean it was done as a quick and dirty implementation or that it's hastily made feature done via cutting corners.

If you accept the need for your own apps to bypass user application filtering (eg. because you consider your traffic/apps integral to the OS operation) then that's the kind of thing you'd implement -- and you could do it with a team of 100, working for months with fine specs, to deliver the same thing.

There's nothing inherently hacky about it.

>There was no problem with that until Apple decided to exempt itself from getting blocked.

That's neither here nor there, though, as to whether it was done as a hack - or, to get back to the point, as to whether they could just rip it out trivially.

People forget this is not just a single feature, but part of a change to how network filtering is done (not through a third party kernel extension anymore), accompanied with new APIs.


> doesn't mean it was done as a quick and dirty implementation or that it's hastily made feature done via cutting corners.

I wasn't implying that. The ContentFilterExclusionList was already present in the first WWDC beta and could have been there internally for many months prior, who knows. I was using "hack" more in the sense of bypassing a security system. I said "It never should have existed", which is not a comment on the quality of the design of the thing that did exist.

> People forget this is not just a single feature, but part of a change to how network filtering is done (not through a third party kernel extension anymore), accompanied with new APIs.

I'm not "people". I'm well aware and haven't forgotten. I've been a professional Mac developer for 15 years. I've used the Network Extension API myself. You may remember me from such news stories as the Mac OCSP appocalypse. https://techcrunch.com/2020/11/15/apple-responds-to-gatekeep... It's incredibly tiresome when HN commenters try to "Macsplain" to me.


>It's incredibly tiresome when HN commenters try to "Macsplain" to me.

Isn't it also tiresome when people on HN assume we know who they are from their handle, or that we are somehow obliged to have followed them outside HN, and remember/know who they are?

I might recognize pg, or patio11, or tptacek, and a few more, but not everybody. And most handles I just glaze over; they are not the important part of the discussion. I'm pretty sure most of us on HN see dozens of handles we don't otherwise know, without keeping tabs on them from one HN thread to another.

I wouldn't try to "Macsplain" if your comment didn't seem to me to imply that this is just some isolated thing with ContentFilterExclusionList that can just be reversed like that, or if it had mentioned that this is part of an extensive change to how network filters / kernel extensions work (or rather, don't work anymore) in Big Sur.


> Isn't it also tiresome when people on HN assume we know who they are from their handle

No, I don't expect people to know who I am. However, I do expect people to avoid assuming that I'm ignorant of the subject at hand. This ought to be the default approach you have toward anyone.

In this same thread, I was referred to as "My sweet summer child", as if I didn't understand software development at all. This shouldn't happen, regardless of whether you know me or not. https://news.ycombinator.com/item?id=25771925


> Does that take 6 months?

If this one change is in a pool with tens of thousands of other possible changes, and it also has to go through one or more QA cycles? Sure, why not 6 months?


In some industries 6 months from initial report to consumer deployment would sound almost irresponsibly fast.


> Sure, why not 6 months?

Because the first WWDC preview version of Big Sur was released to developers on June 22, 2020, and Big Sur was released to the public on November 12, 2020, so Apple needs to be able to fix issues identified during the beta period much quicker than in 6 months.


Apple doesn't have to "fix issues identified during the beta period" much quicker than the release date.

No OS does, including FOSS distros / OSes.

Apple just has to fix "the most important issues" with the most bang for the buck identified during the beta period before release.

Which they do.

The ones they consider less important are put in a backlog.

You can find "issues identified during beta releases" still open and unfixed for all OSes, some even going 10 years back, long after the release was out...


> The ones they consider less important are put in a backlog.

True, but this just proves my point. It still doesn't take 6 months to fix this issue... if they wanted to fix it. Deprioritizing it was a deliberate choice by Apple. The reason the exclusion list shipped to the public in Big Sur wasn't technical, the reason is that Apple's priorities are messed up.

From my perspective, the explanation is simple: ContentFilterExclusionList wasn't a "bug", it was a deliberate "feature". So from Apple's perspective, there was nothing to "fix". At least one developer was told it "behaves as intended": https://twitter.com/david_ddw/status/1329017113709842437

Only the public backlash caused Apple to remove it.


>True, but this just proves my point. It still doesn't take 6 months to fix this issue... if they wanted to fix it.

Just because it was reported as an issue doesn't mean it was thought as a bug (or an issue to fix) by Apple. That's what they wanted to do. People coded it explicitly.

>Deprioritizing it was a deliberate choice by Apple.

Of course. Why wouldn't it be?

>From my perspective, the explanation is simple: ContentFilterExclusionList wasn't a "bug", it was a deliberate "feature".

Again, of course. Some people complained this was a bug, Apple thought it wasn't, and the issue remained on the back burner until at some point it was given more consideration and it was decided to fix it.

What I'm saying is "why this took 6 months" doesn't make much sense as a question. Why wouldn't it? Unless something is a show stopper or high impact bug, it would take time. Even to be accepted as an issue to fix in the first place will take time. Plus all the internal red tape.


> What I'm saying is "why this took 6 months" doesn't make much sense as a question. Why wouldn't it?

For the 2nd or 3rd time in this thread, I have to remind that I was replying to a comment saying this: "That's why Apple has the Developer and Public Beta releases for iOS/OSX so that external users can provide feedback. And on this occasion just like on many other they will take action if necessary." So, maybe you should argue with that comment instead of with me?

My point was that developers filed feedback about this issue during the betas, yet Apple did not address that feedback, and thus "That's why Apple has the Developer and Public Beta releases for iOS/OSX" is not a valid point in this context.


>My point was that developers filed feedback about this issue during the betas, yet Apple did not address that feedback, and thus "That's why Apple has the Developer and Public Beta releases for iOS/OSX" is not a valid point in this context.

Well, to that I agree.


> Does that take 6 months?

You're assuming that changing that list is the only thing they needed to do. Have you thought about why they felt they needed that list to begin with? Maybe because they wanted to quality-control that all their core services could gracefully handle being blocked by a firewall first? That is, the job wasn't changing the list. The job was probably quality control of everything potentially blocked by that list.

Or they just didn't think it was such an important issue. Most MacOS users by far probably don't care.


> You're assuming that changing that list is the only thing they needed to do.

No, you're assuming that I'm making that assumption. Why would you assume that?

> Most MacOS users by far probably don't care.

It's frustrating that people keep ignoring the fact that I was directly replying to this comment: "That's why Apple has the Developer and Public Beta releases for iOS/OSX so that external users can provide feedback."

If feedback during the betas does not make Apple take action, then the comment I was replying to was wrong.

Your response seems a bit ironic, though, because public backlash is exactly what caused Apple to backtrack.


It’s possible they were working on other changes so firewalls didn’t break the whitelisted components.

They shouldn’t have been broken in the first place, but hey.


My sweet summer child, I hope you never have to develop or schedule software releases at a level of complexity where 6 months sounds anything less than luxurious.


I've been a professional Mac software engineer for 15 years.

I've also lived my entire life in the North (which for some reason is called the Midwest).


See other comment re: realizing who I was responding to. Also I spent last winter up there (Saint Paul).


Why do you think Apple should've solved this issue immediately? I'm sure you (and the other people in this thread) care a lot about the firewall exception, but from the perspective of Apple this must've been a non-critical issue at best. I don't understand why you assume Apple should've dropped everything and fixed this as soon as it was reported. And clearly, as opposed to what you say, they did take action - otherwise they'd never have removed it.


> from the perspective of Apple this must've been a non-critical issue at best.

From the perspective of Apple, this wasn't even an issue at all. It "behaves as intended". https://twitter.com/david_ddw/status/1329017113709842437 So that's why Apple didn't fix it. You don't fix something that you don't think is broken.


To be “fair”, it’s a fairly open secret that official feedback channels for all Apple software are basically a trash chute.


It's reasonable not to take action based on the feedback from a developer who sells an app-level firewall - it's neither a broad nor disinterested constituency. Big Sur's been out for less than two months - for a bigass company, it's decent turnaround.


Who else would Apple take feedback from during the betas? Who else would even notice that there was a hole in the firewalls except the firewall developers?


The gigantic majority of users who are not firewall developers as well as the gigantic majority who never touches the betas both seem like sensible and important sources of feedback.


> the gigantic majority who never touches the betas

I was replying to a comment that literally said, "That's why Apple has the Developer and Public Beta releases for iOS/OSX", so maybe you should argue with that comment instead of with me.


> Having worked at Apple and other big companies it's almost always Engineers and PMs making these decisions.

I wonder what the backstory to the original decision was.

Pure speculation: a "bug" was reported in which something was broken, and it turned out to be because a third party firewall was blocking access to some Apple-hosted service, and someone started working on "how to fix the bug", and the "bugfix" was to make sure third party software can't do that.


This sounds like a credible situation an engineer may come up with and implement without thinking through the reality of what they're doing.


> Having worked at Apple and other big companies it's almost always Engineers and PMs making these decisions. It's not like Tim Cook or Craig Federighi is running around demanding people add Apple apps to a firewall exclusion list. They have much bigger things to worry about.

This more or less confirms my hunch. Thanks for inside perspective on it. That’s how literally every other org works but I know Apple got this reputation for having a very executive-micromanaged culture under both Jobs eras, it’s good to have a reminder that at the end of the day it’s not the borg.


> it’s good to have a reminder that at the end of the day it’s not the borg.

The impression I always got was more like the mid-to-late USSR than the Borg (in structure rather than efficacy; the USSR didn't work, Apple does). Everyone in the Borg is (supposed to be) identical, whereas Apple seems like a structure of lots of groups of often brilliant people working in separate-but-equal subtrees, under an (especially under Jobs) ideologically inspired dictator from the top down. The way Apple is fairly reticent to document what it makes partly informs the comparison.


I think their reticence to document is more a function of how long they’ve been able to get away with not doing it. Almost every engineering culture that finds that balance unfortunately embraces it.


IMO Apple move too fast and try to do too much. They're always shipping half-baked features and making bad decisions like this.


This happened so quickly I suspect there was never a decision for or against. Pretty likely a boneheaded engineer did what us boneheaded engineers do and cut corners, likely with all of the fun boneheaded management and deadlines that come along for the ride.

Why, you might ask, would this be a matter of cutting corners? Well, fault tolerance is hard. What happens when the OS can’t reach external services it depends on for security features? Well, Apple found out recently in quite an embarrassing way. Given the almost immutable yearly release cycle, I would be astonished if they didn’t just duct tape some whitelist in because a more resilient solution wasn’t ready.


The imagined timeline of this comment doesn't seem to align with reality. The exclusion list was already present in the first WWDC builds in summer 2020, as the Little Snitch developers noticed: https://blog.obdev.at/a-hole-in-the-wall/

The Mac "OSCP appocalypse" occurred in November on the day that Big Sur was released to the public, with the exclusion list still present, and a number of firewall developers were already aware of the exclusion list and had reported bugs to Apple.


I don't think the comment you're replying to made specific claims about this timeline.


You’re right, I didn’t. Thank you for saying so with much more clarity than I probably would have.


> This happened so quickly

This was your claim. What is the justification for the claim? The comment seemed to imply that it was the OCSP problem. Otherwise, no other explanation was offered by the comment.


It happened quickly on the heels of the public release. The problem I cited was about how embarrassing a half assed solution could be, not about prompting a different response.

Good engineers who boneheadedly cut corners are already tracking their omissions and FIXMEs. The fact that they shipped and quickly turned around a better solution reads to me like engineers doing their dang job.

Edit: I just realized who I responded to and even if we don’t see it the same way I just want to say I appreciate your work and even your particular cynicism.


Given how they have been always marketing on the privacy aspect, and this exception they made for themselves has been found out and shown to be bad for privacy, I'm not surprised.


Curious why you phrased Apple listening to their customers in such a negative way. Maybe from pessimism bias?

I observe a lot of HN commenters don't vibe with how Apple controls & develops their ecosystem. Yet, instead of going elsewhere, they prefer complaining and acting as if they're special enough to know how things should be done, while expecting a major company to just cater to their personal whims.

I'm unsure if it's entitlement I'm witnessing or just people that feel like they have no faith in Apple's competition at making anything better than Apple currently has.


To be honest I think having a pessimism bias towards large corporations is healthy. Sure, they might in the end do a good thing (or a less harmful thing than they could have), but they have a huge amount of power and very little oversight.

I like Apple products, but I try to avoid any illusion that they’re good guys on a corporate scale. They’ve proven they’re not in a lot of ways (see many recent articles about their labor practices). That doesn’t mean every thing they do should be equally scrutinized with a pessimistic predisposition. But I really don’t fault anyone for assuming a large for profit organization will prioritize their own priorities over everything else.

The thing I try to moderate that with is understanding their actual priorities and not just falling into blind cynicism.


Well, I've had a Mac continuously since the original 128k Mac, so I'm pretty well situated as an observer of all things Apple and more knowledgeable about their history and their machines/software than the vast majority of their employees, some of whom are close friends.

I want them to listen, since the decisions they make affect the ecosystem I've adopted for my extended family (for whom I'm the main technical point of contact), and poor ones will affect me and them disproportionately.

So to me, it doesn't feel at all like entitlement, rather, the wishes of a longtime, loyal customer hoping they continue down the path of listening to those of us who have promoted their products and helped make them successful.


>have no faith in Apple's competition at making anything better than Apple currently has.

110% this.

Other laptops are awful hardware. Including the Dell XPS line and the new thinkpads.


There is no call to invent reasons of entitlement or faithlessness in engineering prowess when the desire is simply to control your own device.


I think your second point is next to impossible; businesses make mistakes no matter how hard they try to prevent them, and what makes them who they are is how they respond.

In this case Apple responded well and that's great. We don't know how many of these decisions are already being squashed each day through appropriate governance.


Sadly, they could just as easily add it back in with a later update.

This means that after I update to 11.2, to preserve location privacy from Apple, I must now verify each and every OS update in the future.

We need a better system.


This is only in the beta so far though, right? I’m not sure how durable this stuff is generally in the beta releases, but there hasn’t been any official announcement or guarantee it won’t be reinserted?


Prediction: this will somehow still be spun as Apple wanting to own your Mac even though you bought it and turn it into an iDevice.


No, as someone who does often criticize Apple for taking away control from their users, this is a good move.

Their original action that triggered this was them wanting to own your Mac.


I sincerely doubt that was the motivation. See my kittycorner comment: https://news.ycombinator.com/item?id=25771339


I don't think you are necessarily disagreeing with what they are saying.

First of all, there are surely many people who influenced the creation and release of this feature and I don't think it would be accurate to assign a single motivation to all their work.

Second, I don't think it's wrong to say that "boneheaded management cutting corners" by prioritizing the needs of other internal teams over the needs of the user is in some sense them "wanting to own your Mac". Although that is a dramatic way to describe it.


> I don't think it's wrong to say that "boneheaded management cutting corners" by prioritizing the needs of other internal teams over the needs of the user is in some sense them "wanting to own your Mac". Although that is a dramatic way to describe it.

It’s beyond dramatic. It’s absurd. I’ve worked on so many projects and deadlines that have had to make such decisions, even decisions that eventually fell to me. In numerous cases I could cast aspersions on some of the management motivations, but even so I couldn’t describe it as them trying to own the product they’d sold to customers. In most cases it was a balance of market pressure and resources. In the cases where I’ve had to make those calls, it’s been balancing market pressure specifically so I had more room to satisfy users.

I don’t think Apple engineers are immune to this. They have to ship big things on a tight deadline. When they can’t satisfy everyone they have to make choices. Sometimes they don’t make the best choices.

Hopefully you don’t have these weights in your job. But lots of us do.


Of course nobody is immune. I believe it takes active work from the users of any vendor's software to disincentivize similar user hostile "improvements". That is just the way that business works in a human society.


Agreed. Even though they obviously didn't make this change out of the goodness of their hearts, at least they correctly reacted to the public concerns about the feature.


Imagine being told what software you can and cannot run on your machine.


The linked tweet/TFA is actually about Apple's boneheaded decision to let its own apps bypass firewalls etc., and not about software permissions.

FWIW you can just turn off the Gatekeeper feature which is what I assume you're complaining about.


So does this mean we won’t be seeing the behavior I just documented the other day on here?

https://news.ycombinator.com/item?id=25746007

In sum, they’re intentionally circumventing DNS, /etc/hosts, and even IPv4 blackholing by attempting to send their phone-home packets through IPv6. Then if you block that as well your computer constantly freezes.


You make it sound so sinister, but yes of course Apple will use IPv6 if that’s the only (or best) route. That’s a good thing.

Where does “constantly freezes” come from? You didn’t mention that in your linked post. And if your computer “constantly freezes” with Apple blackholed, why wouldn’t it also be constantly freezing when your network connection doesn’t reach the internet? I’m pretty sure they use an apple.com URL for their reachability test, so your blackhole should be blocking that too.


Why are they ignoring /etc/hosts if this is good behavior?


Are you also writing

  :: domain.example
Because the difference is that gethostbyname2 (the most likely function being used, or I suppose the Apple equivalent) looks up an IPv6 address before it looks up an IPv4 address, which means a "0.0.0.0 domain.example" entry won't override the result of IPv6 lookups.

  $ echo "0.0.0.0 cloudflare.com" | sudo tee -a /etc/hosts
  $ getent hosts cloudflare.com && getent ahosts cloudflare.com
  2606:4700::6810:85e5 cloudflare.com
  0.0.0.0         STREAM cloudflare.com
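If not, here's the shape of the fix, sketched against a temp file purely for illustration (the real file is /etc/hosts and needs root):

```shell
# Illustration: a hosts-style file needs BOTH an IPv4 and an IPv6 entry
# to null-route a name for both lookup paths.
HOSTS="$(mktemp)"
printf '0.0.0.0 domain.example\n' >> "$HOSTS"   # covers A lookups
printf ':: domain.example\n'      >> "$HOSTS"   # covers AAAA lookups
grep -c 'domain.example' "$HOSTS"               # prints 2
```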


I disabled IPv6 altogether, and Apple’s processes are still finding a way to resolve their real IPs after my DNS and /etc/hosts resolved their domains to 0.0.0.0.

DNS should be enough, I shouldn’t have to black hole Apple’s entire /8 to stop macOS from phoning home when I’m not using the computer and no apps are running.
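If it does come to that, a packet filter is harder for system daemons to sidestep than DNS. A rough sketch using macOS's built-in pf (the table name is made up; 17.0.0.0/8 is Apple's well-known allocation, and whether you want to block all of it is your call):

```
# /etc/pf.conf sketch (hypothetical) -- blackhole an address range at the packet level
table <apple_range> const { 17.0.0.0/8 }
block return out quick to <apple_range>
# load with: sudo pfctl -f /etc/pf.conf && sudo pfctl -e
```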


I suspect the number of macOS machines on networks with misconfigured DNS is far higher than the number of macOS machines whose admins want to prevent them from talking to Apple. So I’m glad they’re going to great lengths to preserve their ability to communicate with Apple even under adverse network conditions.


I appreciate giving people the benefit of the doubt, but this feels overly charitable to me. I doubt they’re worried about accidentally misconfigured /etc/hosts files.

Much more likely is that they were aware of outbound firewalls like Little Snitch and want to evade user attempts to block their software, as discussed in the main article here.


Why are you using /etc/hosts to modify the behavior of the Apple resolver system? That’s not how macOS directory services work, and Apple only appears to offer it as a legacy backwards-compat stub with no guarantee of support or effectiveness for anything modern. It’s no surprise that it’s not an effective solution for you.


In their book, a bit of malware might have modified /etc/hosts.

It's for your own good!


That's an odd one. If my Mac is offline it works just fine. Perhaps your filtering/firewalling isn't complete so it gets partial connections and then times out on the rest? If you do that in large quantities, any OS will start to show trouble.


Timing out packets instead of denying them could certainly be an issue (I run across this a lot with internet being down while the router and internal DNS is still up).

But your claim that “any OS will start to show trouble” is not how it’s supposed to work, nor how it used to work before 24/7 connections, nor even how it should work assuming you’re ok with phone-home daemons.


Of course it's not how it's supposed to work, but that is how it tends to work ;-)

An OS in general has a few layers with caches and monitors and resolvers etc., and if you block a few of them in a partial manner, they tend to get into an extreme version of the bus bunching problem. Windows still does that with its 'network identification', where sometimes something goes wrong in the probing process and the connection hangs on "identifying" indefinitely. And that's just a silly "optional" service (which should default to the public profile and only change from there, instead of defaulting to no connection at all).


Flaky network stacks aren't rare. I've experienced plenty of consumer-facing operating systems that will freeze on boot for 10 to 60 seconds if the network interface is up but DHCP is unreachable.


I had that freeze issue and the DROP vs REJECT idea came to my mind too. As far as I remember, it was REJECT everywhere. So no.
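For completeness, the two behaviors being discussed, in pf terms (a sketch; the table name is hypothetical):

```
# pf sketch: DROP vs REJECT failure modes
block drop   out quick to <blackhole>   # DROP: silent; clients hang until their timeouts expire
block return out quick to <blackhole>   # REJECT: sends TCP RST / ICMP unreachable; clients fail fast
```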


Come to think of it, I did hear about this from someone a while ago who was using some sort of public WiFi HotSpot (bad idea in general) which had a broken redirect page and in turn didn't open up the firewall and gets you that 'partial' connection that causes all sorts of problems. Was on both macOS (10.12 I think) and Linux (some Ubuntu version) at that time.

Seems to be an interesting problem: if it works fine with no connection and fine with a complete connection but not 'in between' (which is the best way I can describe it so far) you'd think it must be some common library or component in a network stack that causes this. macOS has some reachability system that might be in play here, perhaps if it flags the network as 'reachable' but then gets REJECT'ed it goes bad? Or the other way around: marks network as 'unreachable' but traffic flows anyway?


init-p01st.push.apple.com and *-courier.push.apple.com requests come from apsd(8), the Apple Push Notification service daemon. It's attempting to get push notifications, rather than some sort of phone home telemetry thing.

Now, I do have an issue with apsd specifically on Big Sur, but "sending phone home packets" isn't it.


The screenshot shows init-p01st.push.apple.com and ##-courier.push.apple.com, none of which have IPv6 AAAA records that I can see.


If you proxy macOS connections overnight with no apps running and the computer idle, you get about 15 hosts from 7-8 Apple processes phoning home; a handful of them fall back to IPv6 after ignoring your DNS and /etc/hosts.

The screenshot is an example of traffic to Apple unsolicited by the user.


Does Apple allow DNS hijacking / local override of their domains? Some security-sensitive software will resolve using known-good resolvers for things that shouldn't be redirected locally (e.g., Google may use 8.8.8.8 in some of their VPN products rather than relying on Comcast or your malware-infected local resolver).


> The screenshot is an example of traffic to Apple unsolicited by the user.

Do you expect a dialog box every time it resolves a hostname? Users want features like Messages which depend on the push notifications service shown, not the details of how it’s implemented.

This is also Exhibit A for where the idea for things like firewall allow lists come from: there are always people who will block something they don’t recognize and then complain about “bugs” after the system does exactly what they requested.



