Zero-Click Calendar invite vulnerability chain in macOS (mikko-kenttala.medium.com)
461 points by jviide 57 days ago | 159 comments



Thankfully I don't use iCloud Photo Library, but it's weird to learn that when the photo library location is changed, the new location doesn't get any protection. I would have expected the exploit to fail after setting /var/tmp/mypictures/Syndication.photoslibrary as the system photo library and opening Photos, because the Photos app should know to protect this directory.

I just did a quick test on my Sonoma 14.6.1 system. Hold the Option key while opening Photos to create a new photo library in ~/Pictures; then use an app without full disk access permission and without photo permission to access that folder. That app was denied access. Then do the same except the new photo library is created in /tmp. That same app is allowed access. This behavior is baffling and inconsistent.

If Apple really intends to support the feature of allowing the user to relocate their photo library to anywhere on the file system, they need to apply the protection properly.


I kind of get it. /tmp has historically been a world-readable/world-writable location in the directory hierarchy. If you want to save something private, it's not a great choice.


mkdir -m 700 /tmp/myprivatedir

you're welcome


TCC has historically always been kind of weird and full of holes in this way.


On Linux you can now trivially isolate everything better than on macOS. Even without AppArmor or Firejail, most services get their own private /tmp by default.


Lots of comments on this thread about bounty payouts. If a tech giant with a standing bounty program isn't paying a bounty, the odds are very strong that there's a good reason for that. All of the incentives for these programs are to award bounties to legitimate submissions. This is a rare case where incentives actually align pretty nicely: companies stand up bounty programs to incentivize specific kinds of research; not paying out legitimate bounties works against that goal. Nobody on the vendor side is spending their own money. The sums involved are not meaningful to the company. Generally, the team members running the program are actually incentivized to pay out more bounties, not fewer.


No, it's because Apple's 'product security' team that investigates and pays out bug bounties is horribly mismanaged and ineffective. It was recently moved from the SWE program office to SEAR (security engineering & architecture), and the manager was recently shown the door and went to Airbnb. The team members are mostly new college grads (ICT2s and 3s) who wouldn't pass a coding interview elsewhere in the company, and they mostly function as bug triagers. They spend more time going to conferences and hanging out with hackers than in front of a computer screen working. Their portal of 'open investigations' shows a graph that only goes up (i.e., they only get more swamped with emails and don't even try to catch up).

Shaming Ivan, the head of SEAR, on Twitter is how people who should get paid bounties, but aren't, make progress.


I have no idea about how well the bounty program at Apple is managed, so, without affirming this, I acknowledge this is another plausible explanation: it's just an understaffed team that needs to get its act together.

The only crusade I'm on is against the idea that companies ruthlessly avoid paying bounties, which is, on information and belief, flatly false, like, the opposite of the truth. I think it's valuable for people to get an intuition for that.

Thanks for this!


Honestly, Apple is a 3.5 trillion dollar company. If the bug bounty program is understaffed then it's an intentional choice and they should fix it. And I say that as someone who's generally sympathetic to Apple.


Sure. My comment isn't really about Apple specifically so much as bounty program misconceptions generally.


I think suspicion of bug bounties, even from organizations that would clearly benefit the most from doing them right, is well founded, and you are oversimplifying the situation.

Every organization is a mess of situations where its overall best interest no longer wins out. Groups and individuals don't want to admit mistakes, both personal and in a wider sense, and have alliances, rivalries, and team and organizational loyalties that twist their behavior.

A lot of organizations know they would benefit from having a proper whistleblower program, and then proceed to crucify the first person who uses it.


Bug bounty programs aren't whistleblower programs.


AP is saying they can suffer from the same corporate politics.


That doesn't make sense, because bounty programs can't punish vulnerability researchers other than not awarding bounties, and whistleblower programs can punish whistleblowers. I got what that comment was trying to say, but, no.


It becomes corporate politics when 'blame' is assigned to the team responsible for the bug.


The preceding comment, I could follow. This one I cannot. But I think we're doing the same thing that's happening all over this thread, and trying to axiomatically derive how these programs work. I'm not doing that; I (like a lot of people) have direct knowledge of them. It's not much of a secret.


Huh? Whistleblower programs exist to defend whistleblowers and merely fail to do so; one that directly punished them would be like a bounty program that actively sent legal threats to security researchers.


That happens too. In Sweden, teenagers who demonstrated vulnerabilities in school systems have been prosecuted... Needless to say, the schools didn't get much help finding holes after that, so who knows how many security holes they still have.


> the idea that companies ruthlessly avoid paying bounties, which is, on information and belief, flatly false

Eh, it's probably usually true, but I've worked for a company that was attracted to the bounty-program idea mainly for the optics, and it very much was reluctant to pay out on bounties.

And when I say "for the optics" I mean not only for the company being able to boast about having a bounty program but also the executive in question having something for his quarterly report. Having it not be too expensive was definitely part of the deal.

Needless to say this was a terrible company with terrible leadership, but it's a data point...


Ok but not a company as reputable as Apple, yes?

Apple used to have a deservedly good reputation for this. I was quite shocked by this story.


> not a company as reputable as Apple, yes?

Definitely not, in fact rather the opposite. I was just sharing the anecdote as a counter to the otherwise fairly blanket claims being made upstream.


> Apple historically used to have a deservedly good reputation for this.

Did they? Apple only started their bug bounty program (with monetary rewards) a mere 5 years ago, 12 years after the first iOS release and well after everyone else. They are not very transparent about bugs and payouts (which is understandable), so I wonder where this good reputation comes from?

(if you count their invitation-only program then it started in 2016, 8 years ago)


That's the same thing.


Obviously no.


Getting paid to fuck off at conferences and hang out with hackers on the company dime instead of staring at a screen in a cubicle all day sounds pretty awesome. Do I detect some jealousy or resentment that you haven't mastered the art of the corporate grift?


This is like every software security team of every form in the whole industry. Sometimes it's real, sometimes it's not, but it's an evergreen problem.


Similar problem: if you're an innocent software engineer who introduces a bug, the security people will find it, make a fancy website and logo for it, go around giving conference talks about it, get bounties (or not), give each other prizes, post on Mastodon about it from their accounts with cool hacker nicknames, presumably go have Vegas orgies, etc. Nobody's doing that for you.

I think they could use a little more ritualized shaming: https://en.wikipedia.org/wiki/Leveling_mechanism

Only Linus is brave enough to do this.


that's the thing though: security teams composed of grizzled talent absolutely benefit from going to conferences. they bring back what they've learned and leverage their new connections to bring more value to the company. so now you've got this industry-wide norm that the security guys are kind of out of pocket and spend a bunch of time at conferences, but they know their shit and protect the infra, so it's all good. it worked at the last X companies $CISO worked for, so they're going to be hesitant to drop the hammer on the netsec team's networking.


Practicing the art of the corporate grift does take a toll on one's soul. Usually only a psycho/sociopath can master this and do it for a long time without any emotional/mental consequences.


You might be right - maybe Apple's poorly operated bug bounty program is a result of incompetence rather than intentional malice.

But does that matter to security researchers or the public? No. Apple should fix their bounty program regardless of the reason it's broken.

Ultimately, this blog post is just another example on the already large pile[1][2][3][4][5]

1: https://arstechnica.com/information-technology/2021/09/three...

2: https://mjtsai.com/blog/2021/07/13/more-trouble-with-the-app...

3: https://medium.com/macoclock/apple-security-bounty-a-persona...

4: https://theevilbit.github.io/posts/experiences_with_asb/

5: https://shail-official.medium.com/accessing-apples-internal-...


Until we get to the total market dynamics (ie, the idea that "black markets" are an immediate substitute for bounty programs) I don't have a dog in this hunt or any reason to litigate the importance of changing how this particular program is managed. If it can be managed more effectively to the benefit of researchers without breaking internal incentives for the bounty program, I'm all for it.

I'd be rueful about leaving so many holes in my original argument, but I think these are useful conversations to have. Thanks!


Unless the implication is that the author of this post is misrepresenting things, I'm struggling to think of what "very good reason" there could be when there's a clear record of someone reporting a bug well before it's fixed. At best, it seems like typical slow bureaucracy, which I don't think is a particularly good reason. There's no reason it should take over a year for someone to approve something like this if the company actually incentivized it. Your logic might be sound, but it's hard for me to look at a situation like this and think "the company is either stingy or overly bureaucratic, like companies overwhelmingly tend to be in almost every other circumstance" is less likely than "the company has a legitimate reason not to pay out a bounty that ostensibly has been fulfilled". It just seems way more plausible that the incentives that apply pretty much everywhere else have bled into this domain, assuming the author is accurately describing the events.


Vulnerability researchers misapprehend the dynamics of bug bounty programs all. the. time. and are virtually never doing that in bad faith. I don't need to determine which of these two entities are above board; I presume they both are.

If you think that any major vendor bug bounty has incentives to stiff researchers, I'm commenting to tell you that's a strong sign you should dig deeper into the dynamics of bounty programs. They do not have those incentives.


Other than bad press there's no immediate incentive for the company to avoid stiffing researchers. Bug bounty programs work if the company is vulnerable to bad press and it would actually impact their bottom line.

This is not from an examination of when bug programs work but when they have very demonstrably not worked in the past.


Press is a perfect example of incentive alignment in these programs, since not paying a bounty a researcher believes is deserved is practically a guarantee of an uncharitable blog post.


Which process ensures that the company should actually care in the slightest about an uncharitable blog post or two, especially when its motivations are opaque enough that the lack of payment might be chalked up to "there's a good reason for that"?

If the cost of an uncharitable blog post is less than the cost of paying out the bounty, then a company would still be incentivized to find as many reasons to reject a payout as possible, as long as future reporters still believe they have a good chance of receiving a payout (e.g., if they believe they can sidestep any rejection reasons).


The cost of an uncharitable blog post is massively more than the price of a bounty, like, it's not even close. The cost of an uncharitable blog post is potentially unbounded (as in: not many people in a large tech company would know how to put a ceiling on the cost), and the cost of a bounty, even a high one, is more or less chump change.

Another in my long-running dramatic series "businesses pay spectacularly more for determinism and predictability than nerds like us account for".


> The cost of an uncharitable blog post is potentially unbounded (as in: not many people in a large tech company would know how to put a ceiling on the cost), and the cost of a bounty, even a high one, is more or less chump change.

Look up "apple bug bounty" on Google, or any other search engine of your choice, and you'll find absolutely no shortage of people complaining of issues with the program. If these complaints each cost Apple a bajillion dollars, then why haven't they shut down their program already?

Or, if almost all of those complaints are just from the reporter being dumb, then how are potential future reporters (who would care about the company's propensity to pay) supposed to find actual meaningful complaints among the noise?

I don't think sporadic blog posts are nearly as powerful as you're making them out to be: my intuition tells me that the company can usually ignore them safely, short of them making front-page news.


Look, I believe you, but people complain about all these bounty programs, some of which I know to have been extraordinarily well managed, and usually when you get to the bottom of those complaints it comes down to a misapprehension the researchers have about what the bounty program is doing and what its internal constraints are. I acknowledge that another possibility is that the bounty program itself isn't performing well; that is a possibility (I have no actual knowledge about this particular case!)

The only thing here I'm going to push back on, and forcefully, is the idea that bounty programs have an incentive to stiff researchers. They do not. I cannot emphasize enough how "not real money" these sums are. Bounty program operators, the people staffing these programs, don't get measured on how few bounties they pay out.


My point is that while the sums might be "not real money", the cost of stiffing researchers is even more so "not real money", so it makes sense on the margin to do it whenever the situation isn't incredibly clear-cut.

After all, it's not like Apple goes around handing out free iPhones on the street, even though a few thousand units are similarly "not real money". Businesses care about small effects on the margin.


No, I don't think this logic holds, at all.


Which part does not follow? Even supposing that the members of Apple's bug bounty team are all well-meaning, but that the program itself is chronically mismanaged, one might conjecture that Apple is disincentivized from investing in making the program better-managed.


I'm not deriving this axiomatically. The bounty programs I'm familiar with incentivize their teams to grant more bounties. I don't have recent specific knowledge of how Apple's program works. Obviously, Apple is more fussy than other programs! They want very specific things. But a just-so story that posits Apple's bounty incentives are just wildly different than the rest of the industry isn't going to get you and I anywhere. It's fine that we disagree. I do not believe Apple ruthlessly denies bounty payouts, and further think that claims they do are pretty wild.

(I have no opinions in either direction about whether Apple is denying bounty payments because of difficulties operating the program!)


Perhaps I've been somewhat too harsh: I don't see any particular 'ruthlessness' in Apple's actions. But I do think that its program, as well as many other bug bounty programs, can easily end up more byzantine in their rules than they'd otherwise be, since there's not much incentive counteracting such fussiness.

After all, one might easily imagine a forgiving rule of "we'll pay some amount of money (whether large or small) for any security issue we actively fix based on the information in the report", and yet Apple seemingly chooses to be more fussy than that in this case, unless they're just being extremely slow. I just don't see any way to square such apparent fussiness with your experience of bug bounty programs leaning toward paying out more.


> I'm going to push back on, and forcefully, is the idea that bounty programs have an incentive to stiff researchers. They do not

I replied upstream as well, but let me push back here as well. They can actually, if the bounty program is being run for the wrong reasons, which can happen - I know anecdotes aren't data, but I've seen one case first-hand.

If a bounty program is treated as a marketing project and/or an "executive value" project then they can and will be managed as a cost center and those costs will be deliberately minimized. Bang for buck. Now obviously this is perverse but if making your manager happy isn't an incentive then I don't know what to tell you.


Companies are not set up to accurately and effectively gauge the impact of intangible costs to themselves.


Exactly, which is why intangible costs will tend to be overpriced compared to risks with low cost ceilings, like "paying out an extra bounty".


I think both the point you’re making and the idea you’re arguing against ascribe a level of agency and rationality to large organizations that doesn’t reflect their reality. In that way they’re both “not even wrong.”

But then I can see your point to a degree at least.


I want to say again that I'm not making this point by way of a first-principles derivation of what's going on. I know for a fact that the norm in large bounty programs is to incentivize payouts. I don't know that for sure about Apple's program, but it seems extraordinarily unlikely that they depart from this norm, given the care and ceremony with which they rolled this out (much later than other big tech firms).

None of this is to say that the program is managed perfectly, as has been pointed out elsewhere on the thread. I'm not qualified to have a take on that question.


The bounty cost is not relevant for the company, but what about admitting liability?


Not a real concern.


Could you explain this in more detail?


The cost here is Apple changing their processes, which is exceptionally painful for them.


What processes would those be, and do you have actual knowledge of them?


The processes that involve interacting with external parties, which has long been something Apple has been really bad at.


Maybe not “immediate” but withholding rewards results in fewer researchers participating in bounty programs which defeats the purpose.


Not if the (true) purpose of having the bounty program is simply PR, rather than an honest desire to find and fix bugs.


The true purpose of these programs is to direct research to specific threats and engineering areas.


Have you ever reported security and privacy issues to Apple? I have. In fact, I have more than one incident open with them right now. One of them could be fixed in one line of code with no adverse consequences. It’s been open for two years. Apple’s Security team is either highly disinterested or highly incompetent. I don’t care which, neither is good.

It’s one of the most infuriating and frustrating experiences I ever had in computing. They clearly don’t want you sharing the issue publicly, but just string you along indefinitely. I’m honestly reaching my limit.

I don’t even care about the bounty money, I just want the bugs fixed. I’d give them all the latitude in the world if I thought the matters were taken seriously, but I don’t believe they are.


I'm not saying anything about Apple's bug bounty program, but I manage my company's bug bounty program, and for every good submission we get about 10 from India where they XSS themselves in the browser console, or similarly hard-to-read reports that lead to nothing.

And now we're starting to get a lot of AI-generated submissions. It takes a lot of effort just to sort through the bullshit to find the good ones, and then to manage them and fix things within SLA. When something isn't critical, it's very easy for the fix to get pushed way down the backlog, competing with all kinds of customer requests. The code change might be a one-liner, but testing etc. can blow the whole thing up into a very long process.


Yes.

See the rest of the thread for a further response on this, esp. w/r/t Apple itself.


There have been many documented cases where tech giants have outright refused to pay out, employing practices like changing the rules of engagement after the fact, silently banning security researchers from active bounties, escalating good-faith disclosures to law enforcement, extreme pettiness from managers, etc.

> The sums involved are not meaningful to the company

Which makes it all the more bewildering to see how botched the handling is.


Give me an example of a good-faith disclosure escalated to law enforcement? Some examples come to mind, but the ones I'm thinking of won't support your argument.


I'm sorry tptacek, some examples come to mind?

I was really expecting you to say this doesn't happen. I'm now left wondering why security researchers are willing to take such risks.


You are generally not going to be legally liable for things you do in ordinary security research, but you will sure as hell be liable if you do unauthorized serverside research. Apple bounty stories are invariably about clientside work with little to no legal risk.


> the odds are very strong that there's a good reason for that

The easiest way to show this would be to give the responsibility of managing the bug bounty to a third party who isn't involved in the business.


What I haven't had time to understand: when bounties are such a tiny drop in the bucket for a company with such an enormous number of users and so much revenue, how is it not a win-win?


With tech giants there's really no win-win, only one win. They win either way. So why bother?


They don't win when an important-sized customer cares, especially when they're government-sized and can regulate you.


Which is why everyone who has received a large bounty payout from Google or Apple has worked for a government-sized entity.


It is a win-win.


I'm betting Hanlon's razor (= incompetence) helps with divining the reasons of the tech giant in question here.


Ah another way to mess with the quarantine flags, the other being: https://imlzq.com/apple/macos/2024/08/24/Unveiling-Mac-Secur...

It seems far too many different systems have the ability to modify those flags.


> An attacker can send malicious calendar invites to the victim that include file attachments...Before fixes were done, I was able to send malicious calendar invitations to any Apple iCloud user and steal their iCloud Photos without any user interaction.

What's the scope of this? Can anyone on macOS anywhere really just send random invites to anyone else who uses icloud? Who would even want that?


Not to be smart -- but how else would invites work?


How often do you get a calendar invite from a person you've never interacted with through email before and don't have in your contacts, versus the opposite, and actually take the meeting?


I'm in the UK and book things on Eventbrite; they email you a calendar invite. Same with other event booking systems, IIRC. You can probably add people to an invitation? And if you can exploit such a system, people would have it whitelisted anyway.

A little adjacent to your question but relevant enough I think.


This is a regular part of the recruiting process, where you may start chatting on LinkedIn and then get an invite in your email.


If the recruiter doesn't ask me first (or I don't agree to a meeting), this is called "spam", and I would be happy for the system to just not allow it.


I have never encountered a situation where a recruiter starts immediately with an invite without prior conversation (such an invite also blocks the sender's own time slot; it would be stupidly ineffective to do that). It is a hypothetical and improbable scenario that isn't even worth mentioning here.


Okay, so why wouldn't you be able to whitelist them ahead of time then?


It just doesn’t make sense to do it ahead of time in such situations. Email client could simply ask if I trust the email before processing the attachment (and some clients do that). Automated pre-processing of attachments is a general risk that doesn’t apply only to calendar.


Often, a coordinator sends the invite - not the recruiter.


I've received Apple Calendar invites containing Chinese characters from individuals I've never heard of. I deleted them, but just receiving them was a bit alarming.


Not unrealistic as a consultant. My boss sells me to a project. Then the client might be asked to send me the meeting invite to kick things off. I might not have directly communicated with the client at any point by then.


In a certain way, the Nigerian Prince con artist is a “consultant”…


I recently booked a haircut that sent me a calendar invite via email after booking it. I had never interacted with that email before, but I accepted the invite.


Pretty often at work. I'm often interacting with client/vendor teams or even new people at the company I work for. Probably a few times a week I'll get an invite from someone I have never exchanged an actual email with. Maybe Teams/other chat messages, maybe exchanged information with one of their colleagues, or talked over the phone.


HR / Recruiter setting up interviews? The person doing the inviting might be different from previous calls/emails.

Customer meetings I get invited to often come from someone I’ve never dealt with before, but include others who I work with who were responsible for bringing me into it.


I think there's a pretty big gap between "people at my company are allowed to add things to my calendar" and "random stranger anywhere in the world can add things to my calendar".


Neither of the above examples would come from people in my company.


"others who I work with who were responsible for bringing me into it" sounded to me like people at your company, who I assumed would be able to add you to the meetings. I guess I might have been mistaken


Depends on who is running the meeting. If the customer is hosting, the others I work with will provide my email to the customer so they can add me to the invite.


Project manager from other team arranging a cross team meeting?

Secretary/office admin doing their job?


In-org, the whole domain is usually whitelisted, and the whole organization would normally be auto-synced to your contacts.


There are possible safeguards -- only allowing invites if you are on each other's contact lists, for example, or the same domain, or something else. Apple had a big problem with Calendar spam that they have not really fixed.


I'd want to whitelist specific people before they could send me a calendar invite. Every other invite request should never reach my device. If I don't even know you, why would I want your invites anyway?


Because you work with people outside of your company, support, vendors, sales people etc.

Boss: Why aren't you in the meeting with our vendor to upgrade our X system?

You: Oh I whitelist all my invites. You see, I am thinking about security and don't want to receive invites from someone I don't know.

Boss: Clear your desk, security will walk you out.


The way I understand it now, they attach an invite to an email that you don't even read, but it shows up on your calendar. Is it too much effort to open the attachment yourself? Normally you think twice about opening an attachment from someone you don't know.


Or the much more sensible, and MSFT way of handling it (in outlook)

ExternalUser: Hello here is a calendar invite I would like you to attend, please confirm or deny

User: Thank you, now I can verify the request and choose to add this to my calendar or not


> Because you work with people outside of your company, support, vendors, sales people etc.

If I work with them, I would have them whitelisted. If I've never even heard of them they have no business sending my devices calendar invites.

Boss: Why aren't you working on that project I gave you?

You: Some stranger in Indonesia invited me to a sales meeting instead.

Boss: If I need you to go to a sales meeting with someone from Indonesia I'll tell you to! Clear your desk!


Idk, other members of the third party company get pulled in all the time and might schedule something. I can't imagine using a calendar whitelist or why you'd even want to.


Well, to eliminate a source of spam, reduce exposure to phishing, and prevent vulnerabilities like the one talked about in the article by reducing attack surface.

If someone is going to make some demand for my time, the very least they can do is give me notice outside of my iCloud calendar. An email, an IM, a phone call, etc. are all very easy, and they allow me to make sure it's real before it has any chance to interfere with my schedule. "Hey Boss, this guy says he's our new IT guy and he wants to talk about my network settings" or "Hey $vendor, I just got a call from $rando saying he's our new contact, can you verify that for me before I tell him everything I know about your proprietary applications?"

It helps that I like to keep my work devices and my personal devices entirely separate. If someone in the office wants to pull me into a work meeting through outlook, they'll already have to have an account set up on the company's exchange server. Anyone outside of the company I should already have a relationship with or at least a heads up.


I don't understand, how is receiving a calendar invite different from receiving any other email? Does MacOS automatically do something with calendar invites by design?


Is g cal not the same?


I think this isn't specific to iCloud; in general, invites are automatically picked up from emails. Calendar invites have long been a source of spam, so I'm not surprised there's also a vulnerability.


> If the attacker-specified file already exists, then the specified file will be saved with the name “PoC.txt-2”. However, if the event/attachment sent by the attacker is later deleted the file with the original name (PoC.txt) will be removed. This vulnerability can be used to remove existing files from the filesystem (inside the filesystem “sandbox”).

That's bad engineering.


Don’t love the bounty state here — security researchers, is it typical to wait this long with Apple or other FAANG type companies?


Very yes.


Seems this just encourages researchers to sell zero-day exploits to organized crime and/or alphabet agencies. No wonder we have no digital security at all! Big tech doesn't really care about security or privacy. Why are we even using their stuff?


It does not. Bounties and zero-day markets are different things. Lots of people actively sell to both.


And you think this is fine?


I do, yes.



Step 1 is a crazy vulnerability on its own. How did Apple not consider this?

> The attacker can exploit this to conduct a successful directory traversal attack by setting an arbitrary path to a file in the ATTACH section with: “FILENAME=../../../PoC.txt”.
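For illustration, a naive join with that FILENAME walks straight out of the attachments directory. (A Python sketch; the sandbox path here is made up, not the real Calendar sandbox layout.)

```python
import os

sandbox = "/tmp/calendar_sandbox/attachments"  # hypothetical sandbox path
filename = "../../../PoC.txt"                  # attacker-controlled ATTACH FILENAME

resolved = os.path.normpath(os.path.join(sandbox, filename))
assert resolved == "/PoC.txt"                  # three ".." cancel all three components
assert not resolved.startswith(sandbox)        # the write lands outside the sandbox
```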


I think this speaks to a larger problem that likely exists in every company: certainly someone at Apple has written a library function to do this safely, but how do you enforce that that function is used, rather than the logic being reimplemented unsafely from scratch? Especially if code reviewers are also unfamiliar with the library. Are there any modern solutions for this?
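For reference, the kind of helper being described usually resolves the candidate path and refuses anything that lands outside the base directory. A sketch in Python (the `safe_join` name and API are made up for illustration, not an Apple API):

```python
import os
import tempfile

def safe_join(base_dir, untrusted_name):
    """Join untrusted_name under base_dir, rejecting path traversal."""
    base = os.path.realpath(base_dir)
    candidate = os.path.realpath(os.path.join(base, untrusted_name))
    if candidate != base and not candidate.startswith(base + os.sep):
        raise ValueError("path escapes base directory: %r" % untrusted_name)
    return candidate

base = tempfile.mkdtemp()
ok = safe_join(base, "notes.txt")          # plain names pass through
assert ok == os.path.join(os.path.realpath(base), "notes.txt")

try:
    safe_join(base, "../../etc/passwd")    # traversal is rejected
    raised = False
except ValueError:
    raised = True
assert raised
```

Enforcing that such a helper is the only way filenames ever touch the filesystem is the harder half of the problem; banning the raw join/open calls outside the helper's own module via lint rules is one common approach.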


There's probably a library function that's so annoying to call that people don't bother. Like you gotta first convert the NSString to an NSPath, acquire your library path using some singleton, then construct NSFileHandle (don't take literally, I haven't used objc/swift in ages).

Edit: and there are actually 4 library functions with subtly different behaviors


Static code analysis tools that flag use of the insecure function?


Easy, by not firing people left and right.


I get a thrill every time there's a big-time non-memory-safety security hole. I know it's petty, but I love the idea of all the time and energy invested in Rust being eventually wasted by a path traversal bug.


Does Lockdown Mode prevent this?


Totally speculating, but I’d hope so. After all the prior zero-click image attachment related exploits, which I think lockdown mode was built to address, I’d figure all files are treated in that manner.


You can't treat "all files" like that. It would be akin to asking "why don't they just build all the file handling out of Lockdown Mode code" ;)


Wow. That's a fairly old-fashioned exploit. I remember reading about paths in filenames, like, a decade ago.


Great write up.

Any guess on the bounty amount for this zero-click vulnerability, with a 5 step exploit chain for macOS?


Has to be at least 6 figures. I got $47k on a pretty insignificant flaw with TCC and I would assume this is much more serious. The wait time is crazy though. It took almost a year to get fixed and another 6 months for the bounty to be paid. Then another year for them to even credit me for the CVE.

The fact that security researchers are completely at the mercy of the companies made me choose software engineering instead. Much more stable.


Dude likely could have sold this to malicious threat actors for 6 figures.

Weird that it's been 2 years now and Apple still hasn't paid anything.

Really highlights why people might gravitate towards that route instead of going through the legit bug bounty process.


Does it work on an iPhone? If not, you're probably not selling it for 6 figures, or even 5.


It does not.


Super interesting, though I doubt they'll pay a bounty on something they've already fixed.


>though I doubt they'll pay a bounty on something they've already fixed.

CVE-2022-46723 was reported on 2022-08-08 and fixed on 2022-10-24, and Apple credited the author of this post for reporting it.


So he likely got the bounty, then, or will get it. Any idea how much it is?


Definitely not received yet.

>2024-09-12: Still no bounty [...].

Apple's bounty payouts are ball-parked here:

https://security.apple.com/bounty/categories/


Relevant section states:

> Zero-click unauthorized access to sensitive data $5,000 to $500,000


$5?!? Really incentivizing selling it on the black market.


Which black market? Who is buying it? The reason they quote such a huge range of prices is that there is a huge range of utility across different exploits, and many of them aren't worth much at all, including some that seem ultra-powerful on the tin.

Keep in mind also that the economics of bug bounties are different than those of the "black market". Bounties quote lower prices because they're offering assured payouts, often with lower exploit proof and enablement requirements. They're not actually apples and oranges.


If only Apple had a better cash-flow situation so they could pay out more. Alas...


Surely it depends on the severity. If the attacker can only read whether you prefer dark mode from a calendar invite, then nobody will pay a lot.


I am not sure what Apple defines as “sensitive data”, but surely that would be something more tasty than user UI configuration.


Apple generally does not pay a bounty before they fix something.


Come on Apple, do the right thing, reward the bounty already.


Should have sold it to the Israelis

NSO Group would have paid more, quicker


It's unclear that NSO group is interested in gaining access to iCloud accounts or Photos, nor is it clear that this entrypoint is something that would meet the bar or be useful for signals intelligence, since it requires sending a calendar invite and clicking on the attachment.

Bug bounties will pay for any bug. Offensive firms only pay for things that are practical, and they don't pay everything up front; it depends on the lifetime of the exploit. The business model is closer to a subscription or services.

There is no reason to believe NSO group would pay more, and they certainly wouldn't pay quicker.


> since it requires sending a calendar invite and clicking on the attachment.

I thought it was a zero click exploit?

As for being interested in iCloud and photos, is the argument that the people they’re looking to attack are unlikely to use iCloud? Cause otherwise getting photos and potentially email access seems quite valuable.


The bigger thing here, I think, is that the target platform is macOS. An important detail to internalize about major grey-market buyers of vulnerabilities: they tend not to stockpile; every vulnerability they buy they need to maintain, and there's not much benefit to maintaining vulnerabilities you aren't going to use. There is, how should we put this, probably not a whole lot of scarcity in macOS RCE vulnerabilities? It would be wild to learn that a threat actor at NSO's scale doesn't already have macOS (and Windows, and Ubuntu) wired for sound.

(This stockpiling thing isn't me guessing; it's something I learned pretty recently).


I'd assume most western journalists would have Mac laptops.

No idea what portion non-western journalists use Macs.


Again I'll say I'm not axiomatically reconstructing the relative values of exploits on different platforms, and observe that this is something you can go research and learn about. No, macOS exploits are not as valuable as iOS exploits.


> Bug bounties will pay for any bug.

This one didn't.


No they would not have.


No he shouldn’t.

That mentality is cancerous to society.


Yes, that is right! That is cancerous. Apple, by not paying even the peanuts as a bounty, is fully responsible for spreading these terminal diseases.


so is not paying a bug bounty


And yet Apple still hasn't paid up. Need to just start selling these to people who will use them at this point.


Good luck with that.


Apple still not paying bounty or needs to be publicly reminded…


It sure is a good thing that Apple has fixed all these and put out patches for all affected versions, since they care about their users' privacy, right? Right?

I know Apple has now moved to 10 years for macOS and 7-ish years for iOS, but I hope the EU passes some laws to make this a requirement, rather than something a company can choose to provide or not.


Yes? As the OP states:

2022–08–08: Arbitrary file write and delete in Calendar sandbox reported

2022–10–24: (No CVE) fixed in macOS Monterey 12.6.1 and Ventura 13 (Ventura beta3 was vulnerable)


https://digital-strategy.ec.europa.eu/en/policies/cyber-resi...

One thing I think you won't like about this is that it's easier for large commercial vendors to comply than it is for open source projects.


Apple can increase those times because that's how long it'll take them to patch issues like these.



