The author of this blog post is lobbing some pretty serious accusations at Apple, calling it "neglect[ful]" and "deliberate," when in fact it's much more likely that the responsible teams are carefully trying to backport the security patches without causing regressions. Sometimes a little patience is in order.
Apple invites this kind of reaction by not being transparent and not cooperating with the larger security community, for reasons that are difficult to understand.
This week's episode of the podcast run by Intego, the Mac antivirus company, discussed new malware affecting macOS that was severe enough for Apple to release XProtect signatures for it. Apple didn't provide details to the rest of the security community, though, so antivirus vendors had to reverse engineer the XProtect signatures to figure out what they were for. Apple usually only updates XProtect for the highest-severity malware that's circulating widely.
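(For the curious: you can at least see which XProtect signature set your own Mac is running. Here's a minimal sketch in Swift; the bundle path below is where recent macOS versions keep XProtect, but treat that as an assumption and verify it on your own system.)

```swift
import Foundation

// Minimal sketch: read the version of the locally installed XProtect
// signature bundle. The path is the usual location on recent macOS;
// verify it on your own machine before relying on this.
let plistPath = "/Library/Apple/System/Library/CoreServices/XProtect.bundle/Contents/Info.plist"

if let info = NSDictionary(contentsOfFile: plistPath),
   let version = info["CFBundleShortVersionString"] as? String {
    print("XProtect signature version: \(version)")
} else {
    print("Could not read XProtect bundle at \(plistPath)")
}
```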
Most of the tech industry participates in information sharing through groups like the Cybersecurity Tech Accord, the Cyber Threat Alliance, and several others. Apple is conspicuously absent from these groups.
What reason does Apple have to withhold information about vulnerabilities from the rest of the industry? It just puts their customers at risk. They have a trillion dollars. There's no reason they couldn't dedicate entire teams to disseminating information in a responsible way, just like every other tech company that you've heard of.
In the case of these Big Sur / Catalina patches, what benefit is there in not sharing their plans if they are in fact planning to release patches once the "regressions" are accounted for?
> Apple invites this kind of reaction by not being transparent and not cooperating with the larger security community
Eh, I disagree. While it's fair to wonder what's taking them so long, attributing malice or incompetence is unreasonable without more evidence than mere delay.
> What reason does Apple have to withhold information about vulnerabilities from the rest of the industry? It just puts their customers at risk.
I think the jury is still out on that conclusion. While Apple is unquestionably peculiar with respect to security-community engagement, I think most would agree that they also have an outstanding overall security track record when you take into account the immense number of devices out there, all of which are connected to the Internet. It's difficult to identify a company that does better (again, relative to the overall risk exposure) than Apple in this respect.
> They have a trillion dollars. There's no reason they couldn't [insert anything here]
Money can't buy you everything. Even Apple's war chest can't buy them the exact talent they need at the exact time. Talent is scarce and often happy and well-compensated at other engagements. Same goes for any of the FAANGs, one of whom I currently work for.
A pattern of non-communication about high-risk security issues, both to developers who could help mitigate their effects and to customers who will be affected by that lack of mitigation, seems like intentionally malicious behavior to me.
Lack of transparency is inexcusable for a business with such an overwhelmingly prolific ecosystem that has such a broad impact on derivative technologies and the businesses that use it.
Apple is culturally allergic to transparency. Either it will be a seismic corporate culture change that happens quickly, or it will take on the order of a decade or more for them to become comfortable with being open whenever the opportunity is appropriate. Apple is a “deny by default” sort of company at its heart.
You think they do not know that these are the consequences? When vulnerabilities are being exploited in the wild, they know that harm is being done, and they have made a deliberate choice to withhold information that could help prevent that harm. How is that decision not malicious?
There’s no debate, unless you truly believe that intent should play no role whatsoever in how we judge actions, in which case, you disagree with the vast majority of post-Enlightenment society. I don’t think you would like living in a world in which strict liability made every mistake a criminal act.
There’s no evidence that they chose to do nothing. Again, they are probably working on it right now and we should not be surprised to see a backported fix in the coming weeks. There is already a long history of them doing just this; why do you think this time it’s different?
Not to take anything away from the other things you’re saying, but there’s a large difference between a company’s valuation and how much cash plus equivalents they have. Apple has roughly $20B, most of their valuation is from other things. :)
I look askance at hair-on-fire security alarmism, but unpatched, acknowledged, in-the-wild remote code execution vulnerabilities are a pretty huge problem. Indeed, the author's words are sharp, but judiciously applied pressure is warranted.
For some, lagging updates for critical applications render a complete OS upgrade infeasible. Apple's patching of old versions was reliable enough to inform security practice, and they should warn users about delays even if the policy hasn't changed.
There just isn’t much software that works on Big Sur but not Monterey, if any. It’s not like a Windows 95 to XP update. With yearly OS releases, the ecosystem is somewhat more up-to-date and resilient to typical changes.
Many custom, niche, domain-specific, or hardware-specific applications get caught up in this. I can think of 3 expensive pieces of equipment I use that rely on Mac software which always lags OS upgrades by months. People don't choose to use applications like that; they usually don't have a choice. Their OSes not being up-to-date is a problem.
Yes, they are a megacorp. In the general case, I think working to uphold their brand standard is the more parsimonious interpretation than "hahaha, won't fix."
They are a megacorp which employs actual humans who generally aren't evil. The evilness is usually an emergent behavior.
I doubt the security team is sitting there choosing not to patch this, given that they are passionate about security. And it's unlikely that management has told them "don't release the patch, we want the people on old OSs to suffer!!".
Most likely, they just don't have as many resources for writing and testing the patch on the old versions, so it's taking longer.
This is probably off by a factor of 2 at least. If you told me that Monterey has 5x as many users, I’d agree; not 10x. That said, there is likely a sizable group (5%?) of customers that cannot upgrade to Monterey. Apple should be worried for these customers, too.
That probably depends on the number of users whose Macs get pwned in this "extra time" they are taking - I guess these users would have preferred a potentially unstable fix to no fix.
If Apple released an unstable fix that caused crashes, we'd be here arguing about how they should have taken more time to get it right. The odds that a buggy patch that's pushed to your computer crashes it is far higher than your computer being pwned by attackers because a fix is still in the works.
This is my first reaction as well. Apple has a pretty good track record and a strong motivation to fix these vulnerabilities. Claiming they're "neglecting" this deliberately is a strong accusation, and that needs some evidence.
Here's some media literacy training for anyone seeing this: the title uses the word "neglecting". It's an odd word to choose, but extremely deliberate. The author can't say "refusing", because that would require evidence, as would "downplaying" or "dragging their feet". But "neglecting" allows inaction to serve as "evidence", so you should be automatically suspicious of it.
It's the same as "[Big Company] considering [controversial thing]". The word "considering" is deliberately chosen because it allows baseless speculation... which is the point.
When you're "trying to backport the security patches without causing regressions". There's no silver bullet for patching large codebases at huge organizations.
Every product team I've been on that kept multiple versions of a product going would fix the newest first, then the next newest and so on down the line until the oldest version got fixed. This is nothing new.
Backporting to older releases means finding potentially new solutions to the same problems depending on where the bugs sit in the codebase. This could also mean backporting more than just the fix itself, depending on how the codebase has evolved.
Isn't it entirely normal for a version currently or recently under active development to be easier to work with confidently than something you have to pull out of mothballs? Good practices improve that situation (absent any forward-looking effort, the "natural" default seems to be "lol good luck, you may as well rewrite from scratch"), but surely some friction remains even in places that handle it very well.
Big Sur should see at least another 18 months of support. Admittedly, that's just a guess based on past behavior, not a guarantee, which does kinda suck, but it will probably see at least that much more support. It will likely remain safe to use for some time past that, though who knows how long. Realistically, and accounting for 18 months being a bit pessimistic, one might reasonably expect another 24-30 months of use from a machine that can't upgrade past Big Sur before having to make some kind of choice (install Linux, install Windows, or buy a new machine, I guess).
It's the reboot, for me. It's just inconvenient enough that I usually put off upgrades for a long time. I apply iOS updates promptly, though, because it's much easier to say "yes" when the reboot isn't going to lose much useful state. Plus they seem to complete a lot faster.
This is not equivalent whatsoever. TFA's point is that the CVEs are unpatched in supposedly supported macOS versions. The link you provided just talks about the CVEs broadly.
Knowing that the ticket is still open is helpful, yes. But it can be expressed as a Boolean, dedicating a whole essay to it is pointless doomsaying. I’d much rather read the technical details and level of access needed to exploit it so I can take steps to protect myself.
That a company selling antivirus software is more willing to devote an essay to doomsaying than to technical info that might actually help users is a good indication that you should look for alternative sources.
> Jin observed that M1-based Macs running macOS Big Sur remain vulnerable to CVE-2022-22675.
So the first vulnerability affects only M1 Macs, all of which are compatible with Monterey. There’s your patch.
> We have high confidence that CVE-2022-22674 likely affects both macOS Big Sur and macOS Catalina. Nearly all vulnerabilities [..]
“High confidence” that it is “likely”? I have “absolute certainty” that it is “possible”, then.
Sure, it’s a reasonable guess. But there’s a lot of strongly worded outrage in this text. Personally, I’d hedge a bit on the accusations until they are shown to be true.
This can still help inform decisions for internal IT departments. My IT dept has put an update blocker to prevent users from upgrading to Monterey, for example. This information could help them remove the block.
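For what it's worth, here's a hedged sketch in Swift of how one might check for such a block on a given machine, assuming it's enforced through the MDM Restrictions payload (com.apple.applicationaccess) and its documented major-upgrade deferral keys; treat the key names as assumptions and check your MDM vendor's docs before relying on them.

```swift
import Foundation

// Sketch: look for an MDM-enforced deferral of major macOS upgrades in
// managed preferences. Both key names are assumed from Apple's MDM
// Restrictions payload documentation; verify before relying on them.
let domain = "com.apple.applicationaccess" as CFString

let deferred = CFPreferencesCopyAppValue(
    "forceDelayedMajorSoftwareUpdates" as CFString, domain) as? Bool ?? false
let delayDays = CFPreferencesCopyAppValue(
    "enforcedSoftwareUpdateMajorOSDeferredInstallDelay" as CFString, domain) as? Int

if deferred {
    print("Major OS upgrades are deferred" + (delayDays.map { " for \($0) days" } ?? ""))
} else {
    print("No major-upgrade deferral found in managed preferences")
}
```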
Can someone explain to me what "zero day" still means these days? I thought it referred to how long the developer has been aware of the vulnerability, so once they're patching it, or deciding not to patch it, it's by definition not a zero-day exploit anymore, isn't it?
I've always interpreted it as meaning no patch is available yet for a vulnerability. Does not matter whether a decision to patch or not has been made. Doesn't matter if it's been exploited yet.
Nowadays a 0-day basically means the person who found it decided to exploit it instead of report it. So the company finds out because there is an active exploit instead of having it responsibly reported.
It's shorthand for "an exploit that wasn't responsibly reported".
Yes, it has essentially lost all meaning, as most vulns are exploited before ever being reported. And even those who responsibly report them call them “0-days”, which is sort of true on the day they report them, but only if the company didn't receive any other reports; and why would a company divulge that information unless it had to, say, to give someone else the bug bounty? “Sorry, it’s actually a 14-day, we’re working on the fix.” Right.
> as most vulns are exploited before ever being reported.
I don't think that's even remotely true.
And given that it's not true, it's still a useful distinction. 0-day means a vulnerability that is being exploited before it was reported to the vendor, which is different from most vulnerabilities, which are reported to the vendor first.
> Nowadays a 0-day basically means the person who found it decided to exploit it instead of report it. So the company finds out because there is an active exploit instead of having it responsibly reported.
Any vulnerability that was reported and not fixed could probably be considered one?
Or any unfixed one for that matter (any widely exploitable vulnerability).
This kind of post isn't terribly helpful to end users (though I guess the audience really is Apple, for pressure purposes) without some kind of insight into how hard it is to actively exploit whatever these vulnerabilities are.
Like, there's a big difference between an actively exploited vulnerability that requires physical access to the machine or a conscious user decision to run an untrusted executable, versus an actively exploited vulnerability that can bite a user when they visit a website or read a text message. If these vulnerabilities are in the second category, maybe I freak out and upgrade to Monterey today; if in the first, I just rely on the perfectly good lock on my front door until Apple backports the updates.
But, of course, hysterics get you clicks.