Google warns about two iOS zero-days 'exploited in the wild' (zdnet.com)
189 points by LinuxBender on Feb 12, 2019 | 79 comments



So far the two parent comments are quite negative, which surprises me. I understand the anti-Google sentiment, but Project Zero has been a much-needed boost to the security of the public, and it has borne fruit. The fact that an iOS vulnerability is actively being exploited is notable. I think their method of responsible disclosure is reasonable.


I'm about as far from a Google fanboy as one can get, but I'm definitely seconding this comment. Project Zero, OSS-Fuzz, and a host of other projects have contributed enormously to software security in the last few years.


There are cases where I'm all for bashing Google when they don't give the company they're targeting enough time to patch something (recently, that seems mostly directed at Microsoft). This isn't one of those cases.

They seem to have waited until Apple had a patch ready: they disclosed it to Apple, gave them an adequate amount of time to patch the vulnerability, and users are better for it.

So in this case and others similar to it, kudos to Google.


>There are cases where I'm all for bashing Google when they don't give the company they're targeting enough time to patch something

While I understand that the common ethos of our current culture supports this, has there been any analysis of whether giving what amounts to a second chance to fix security issues leads to less prioritization of security in the first place? I could definitely see a business deciding to lower its security expenditure, since if an issue is found, it will be given a grace window to fix it before the world hears about it. It would still be damaging, but far less so, since the PR machine could spit out that it was patched before it was announced to the world.

There has to be some agreement limiting the grace period, since researchers will go public once a reasonable time frame to fix the issue has passed, and they won't be judged negatively if others agree that reasonable time was given. So if we won't judge someone for giving only 6 months instead of 3 years, what about the one who gives only 2 weeks instead of 6 months? How do we decide which of two time frames is better?


I imagine they share a proof of concept 100% of the time, and if that is the case, I'd say it varies: target a window, say 2 months. At that point, the vendor shows its progress on the bug to Google (or whoever). If at the 2-month mark it is obvious the bug was low priority and not really looked at, the vendor failed, in which case I would say disclose away (bonus points if they provide something to mitigate it, if possible, though the onus is not really on them either way). If they can tell the vendor is making progress and genuinely attempting a fix, then I'd say an extension would be fair.

In the Microsoft case that vaguely comes to mind, I believe the issue was one that required a bit of work because it was pretty low level for Windows. I want security patches on my system ASAP, but I also don't want someone to release something that breaks my OS's functionality or leaves my files (or the ability to open them) fubared. If memory serves, Microsoft was making progress on it, but it went past the time period Project Zero set, Project Zero was unwilling to give an extension, and as far as was reported, the bug didn't seem to be exploited in the wild. But then you have something unpatched that is disclosed by Google. That doesn't help users all that much.

All of that is to say, it applies when a bug isn't verifiably being exploited in the wild. When it is, that changes things: users need to be made aware as soon as possible, and if a stopgap means "turning off" a feature, if possible, give them that info.


If only Google would hold themselves accountable to the same standard. Android is a gigantic security mess, all caused and enabled by Google.


No it isn't? Android has a bug bounty program: https://www.google.com/about/appsecurity/android-rewards/

and regularly has strong showings at Pwn2Own. Android's security for the past couple of years has been superb.


Android as an abstract project, yes. Android as what's actually used by users is not that superb.

Google is slowly trying to fix it, but the average Android device is way behind the average iOS device in the wild, and that will be the case for many years to come.


> Android as what's actually used by users is not that superb.

It is, though. The Android that's most commonly used by users is the one from Samsung, who also issues monthly security patches for a large range of devices: https://security.samsungmobile.com/workScope.smsb

LG ( https://lgsecurity.lge.com/security_updates.html ) does as well, and so do at least Motorola & Nokia.

> the average Android device is way behind the average iOS device in the wild, and that will be the case for many years to come.

[citation needed]

The average iOS device just got hit by 2 zero-days in the wild. And jailbreaking, which is literally a chain of privilege escalation exploits, is a long and well-established practice on iOS. There's a constant, continuous stream of those on iOS. There don't seem to have been many (any?) on Android for a while now.


>There don't seem to have been many (any?) on Android for a while now.

To be fair, there are a variety of reasons for this that have nothing to do with security. An Android jailbreak is less valuable for a few reasons, among them that you can often purchase Android devices with root privileges; the same isn't possible for iPhones.


It's one thing to release a security patch. It's a different thing to get it installed on user devices. If a user never has an opportunity to install the patch, that patch might as well not exist from that user's standpoint.


There are millions of unpatched Android devices, probably forming a massive botnet by now. When you read it in the news sometime in the future, remember this post. You read it here first.


Agreed — also these vulnerabilities can put human life at risk, as we’ve seen in recent months.


Google in general has a pretty bad reputation when it comes to considering others, which is usually fine because it is their business. Project Zero is different, but they adopt largely the same, or even worse, attitude. And at the end of the day software is a "human business". But it's a contentious issue, so I am not sure it is worth discussing in a popularity-based forum.


I wonder if one of these was used by the FBI's unlocking tool from the San Bernardino shooter case. That sort of just... fizzled out, with the FBI saying they could unlock iPhones themselves. Everybody kind of just said "yikes" to that statement and moved on...

https://en.wikipedia.org/wiki/FBI%E2%80%93Apple_encryption_d...


I mean, the whole point of that issue was that the FBI wanted Apple to develop tools that would make it a lot easier for the FBI to do it later if they wanted to. It probably cost them a lot of money/time to do it the way they did. Plus, wasn't that the suspect's work phone anyway? So there really couldn't even be much that would have incriminated him on that phone. The point was setting a technological precedent.


> the FBI wanted Apple to develop tools that would make it a lot easier for the FBI to do it later if they wanted to. It probably cost them a lot of money/time to do it the way they did

No, they wanted to set a legal precedent.


Precedent was already on the FBI's side, just as in the Lavabit case.


AIUI the mechanism the FBI used was already known to not work on newer devices at that time and only worked because the iPhone in question didn't have a secure enclave.


I don't think so... these look like privilege escalations for apps, not a way to unlock the phone.


I consider all of these likely:

>FBI found someone inside they could negotiate with (i.e., better patents or the threat of denied patents)

>FBI found someone they could bribe

>FBI hacked and broke into the iPhone

3/3 of those Apple wants to keep quiet.


Side question: whatever happened to Chrome blocking autoplay videos like this horrible and incredibly loud one?

It's supposed to have been in place for a year or so... but it's clearly not working. If this particular one isn't blocked, then which ones are?

I'm on up-to-date Chrome 72...

[1] https://developers.google.com/web/updates/2017/09/autoplay-p...


Chrome has an overall whitelist on top of a user-specific one. I assume that ZDNet overall has a high enough MEI score across all Chrome users that it's allowed until you train the algorithm that you don't want to see it.

Additionally, if you navigate within a site (i.e., click on another ZDNet article from that same page), that counts as a website interaction, and the new page will be allowed to autoplay.

Firefox's upcoming controls are user-controlled instead of based on algorithmic behavior, and they don't have a whitelist. They're available right now, but not turned on by default (yet).
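
For what it's worth, pages can also detect when the policy blocks them: play() returns a promise that rejects when audible autoplay isn't allowed, so a well-behaved site can fall back to muted playback instead of blasting audio. A minimal TypeScript sketch, assuming a hypothetical <video id="player" autoplay> element (the id and the fallback strategy are my own illustration, not anything ZDNet actually does):

    // Minimal sketch: detect whether audible autoplay is allowed and
    // fall back to muted playback when the browser's policy blocks it.
    // "player" is a hypothetical <video autoplay> element id.
    const video = document.getElementById("player") as HTMLVideoElement;

    async function startPlayback(): Promise<void> {
      try {
        // play() returns a promise; it rejects (NotAllowedError) when the
        // autoplay policy (e.g. a low Media Engagement score) blocks sound.
        await video.play();
      } catch {
        // Audible autoplay was blocked: mute and retry instead of blasting audio.
        video.muted = true;
        try {
          await video.play();
        } catch {
          // Even muted autoplay was refused; wait for an explicit user gesture.
          document.addEventListener("click", () => void video.play(), { once: true });
        }
      }
    }

    void startPlayback();

Sites that wanted to be polite could do something like this; the ones being complained about here clearly don't, which is why muting the tab is still the most reliable fix.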


The autoplay blocking involves a complicated set of heuristics based on the domain name and your past behavior with that domain...


So I just checked my heuristics at chrome://media-engagement/ and zdnet.com has a personal MEI of 0.0 with 7 visits (for comparison, YouTube is 0.76), and the stated threshold at the top of that page for allowing video with sound is min 0.2 max 0.3.

So just ugh. Disappointed in Chrome that zdnet.com is somehow considered high enough quality to play videos with audio automatically. :(


zdnet.com has a score of 0 for me, and it still autoplayed.


Man, the first time this happened to me on ZDNet, I literally tweeted at their account that they are idiots for putting that pattern in place. It's the loudest autoplay I have ever heard.


Or you can just turn it off in the new Firefox...


ZDNet and a few others seem to be doing some tricks to get around it. I right-click the tab and mute the site. CNN does the same thing, and it's really freaking annoying.


To avoid the auto-playing video with loud volume, here's the entire content of the article:

https://twitter.com/benhawkes/status/1093581737924259840

"CVE-2019-7286 and CVE-2019-7287 in the iOS advisory today (https://support.apple.com/en-us/HT209520 ) were exploited in the wild as 0day."

--

A top Google security engineer revealed today that hackers have been launching attacks against iPhone users using two iOS vulnerabilities. The attacks happened before Apple had a chance to release iOS 12.1.4 today -- meaning the two vulnerabilities are what security experts call "zero-days."

The revelation came in a tweet from Ben Hawkes, team leader at Project Zero -- Google's elite security team. Hawkes did not reveal under what circumstances the two zero-days were used.

At the time of writing, it is unclear if the zero-days have been used for mundane cyber-crime operations or in more targeted cyber-espionage campaigns.

The two zero-days have the CVE identifiers of CVE-2019-7286 and CVE-2019-7287.

According to the Apple iOS 12.1.4 security changelog, CVE-2019-7286 impacts the iOS Foundation framework -- one of the core components of the iOS operating system.

An attacker can exploit a memory corruption in the iOS Foundation component via a malicious app to gain elevated privileges.

The second zero-day, CVE-2019-7287, impacts I/O Kit, another iOS core framework that handles I/O data streams between the hardware and the software.

An attacker can exploit another memory corruption in this framework via a malicious app to execute arbitrary code with kernel privileges.

Apple credited "an anonymous researcher, Clement Lecigne of Google Threat Analysis Group, Ian Beer of Google Project Zero, and Samuel Groß of Google Project Zero" for discovering both vulnerabilities.

Neither an Apple nor a Google spokesperson responded to requests for comment from ZDNet before this article's publication. It is highly unlikely that the two companies will comment on the issue at this time, as both would like to keep the zero-day specifics to a minimum and prevent other threat actors from gaining insight into how the zero-days work.

iPhone users are advised to update their devices to iOS 12.1.4 as soon as possible. This release also fixes the infamous FaceTime bug that allowed users to eavesdrop on others using group FaceTime calls.


Any word on what apps included these exploits? I'm guessing there was at least one, since they say "in the wild".


[flagged]



I don't know about Project Zero in particular, but yes, Google has security teams investigating their own projects. Why would you think otherwise?

And if you think there are security holes, you're welcome to report them to get a bug bounty.


I think it might be just harder to find exploits in your own code because of organizational blind spots.


Well, personally I would like it if they had write-ups on them, even if only after the fact, as Project Zero has always come across to me as kind of a publicity stunt to improve Google's reputation in the tech industry and show how highly they value security, when the truth is quite different.


What is the truth in your view?


If there is bias, it would be good for us all in the end, imo. It makes it more likely for Google's competitors like Apple to spin up similar teams and fund them aggressively to find exploits in Google tech like Android.


It is probably just a consequence of Project Zero being made up of experts on different topics. The iOS experts are quite prolific.


If you navigate to https://googleprojectzero.blogspot.com, the first story is about a vulnerability in Skia, a library made by Google:

https://googleprojectzero.blogspot.com/2019/02/the-curious-c...


Yeah, it's amazing that this is not called out more often. Android is a security mess.


Anybody can make an Android phone and damage the brand. Compare Pixels to iPhones in Pwn2Own contests; they are at the very least equivalent.


That covers the 3 million phones a year that Google sells; what about the other 1.2 billion plus?

https://www.forbes.com/sites/chuckjones/2018/03/10/apples-io...


If you are worried about Google's security practices, why look at things other than Google's products?


Android is as much a Google product as Windows is a Microsoft product.

No one would say that if you want a PC that is secure, buy a Microsoft Surface.


> Yeah, it's amazing that this is not called out more often. Android is a security mess.

Nope, Android security is quite good, actually. Not as good as iOS's, but very good nonetheless.

Of course, fragmentation issues and the fact that most Android devices are not updated do not help.

However, there is active mitigation of security issues both in the kernel and in Android user space.


Just to complement my previous comment: Android is probably one of the most secure Linux-based systems in the world, probably second only to ChromeOS.


I hold the unpopular opinion that Google Project Zero is pretentious and unprofessional.

I mean, here we have the most valuable company on Earth specifically scoping out competing software and hardware, constantly looking for zero-day vulnerabilities. They don't submit to bug bounty programs, so if they set a disclosure timeline you disagree with, you'd better agree quick, because they'll just go public.

How fast do you think I would get sued if I found a zero-day in a Google product, told Google to fix it and shove the bounty, and then went public 30 days later?


I will put aside Project Zero for a second -- in this particular case they're saying that they discovered that someone else has exploited the vulnerabilities. Perhaps that means they weren't necessarily looking for them with analysis or fuzzing, but that a honeypot trapped a case where an attacker was successful. Can't we all agree that this discovery is to everyone's benefit? Google, Apple customers, Apple?

With respect to the merits of Project Zero:

> How fast do you think I would get sued ...

I don't think you would, but even if you did -- your argument really seems to underscore the value of Project Zero. Google can publicize these bugs and has the resources to stand up to lawsuits from irresponsible vendors.

I'm not silly enough to think that Google doesn't exploit competitive advantages, but in this case I think they are trying to catch up with the public perception of Apple's superiority with respect to secure product design. Objectively, we can see that's the case with many of the design elements of iOS vs Android. And it's not by putting down Apple that they do this; instead, it's by demonstrating that they are leaders in security, not followers.


They're using iPhones themselves inside the company, trusting the devices as part of their BeyondCorp security mechanism, so they have an interest in the software trusted on their company network being as secure as possible.


And we don't?

I want my phone to be as secure as possible because I use it for work, but of course, since it's not brand new I can't get updates.


This update is available to iPhones from the 5S and newer. The 5S was released in 2013. If your phone is older than that, it's a bit older than "not brand new".


I think the point is that he has an Android, but he can't update, while Apple users can.


[flagged]


> If he bought an Android it's because he doesn't care about the security of his phone.

> Not trying to be snarky; Android has been around for over 10 years and we all know how irresponsible all OEMs are, including Google. He had the information when he made his purchase.

I care about the security of my devices, and I don't want to pay the Apple tax (which is really absurd in Brazil; even with my quite good engineering salary, it is still half a month of work). I always buy supported devices and get monthly security updates on my Android One device.


Or alternately, you buy from Google, keep up with OS upgrades, and plan on buying a new phone every 2-3 years or so. While not ideal, this is certainly doable.


So the only way that you can get an Android phone with any type of security is by buying one from Google.

So much for Andy Rubin and his promise of openness and choices...


The other alternative is also locked into one company.


Motorola is pretty good too. Maybe others?


That's a symptom of the state of cybersecurity.

If a law made a security bug a refundable or warrantied defect, I bet you this shit would stop.

But no one gives a shit.


It would also seriously stymie innovation. If I risk having to issue refunds when I push out improvements, I'll never bother to push out improvements except bug fixes.

I'm also incentivized to release a new model every month with ANY improvement in order to limit my liability to a smaller window of revenue.

The current system isn't perfect, but it could be much worse.


What's worse than thousands of pieces of throwaway hardware that don't get updates? How about thousands of enterprise systems where the security updates are hidden behind support contracts?

> when I push out improvements

No, if your software has a security issue, it's refundable. Write good software.

> release a new model every month with ANY improvement

Good, but that doesn't remove your liability for your last model.


>No, if your software has a security issue, it's refundable. Write good software.

There are 0 companies that can provide consumer software on the lifecycle consumers have come to expect without any bugs. You write software. Are you willing to claim that you can just "write good software" and never ship anything with a security issue?

Because otherwise you're advocating for consumer tools that use NASA's release cycle. Which, like, that's cool and all, but I don't want to rely on hardware from 2012 or 2005 running software that was developed from 2010-2014 and has just finished its verification process. You're advocating for a world where we just got the verifiably bug-free Nokia 3310.

And that doesn't even begin to address the clusterfuck that would be open source in this situation. Am I liable for Heartbleed because I use OpenSSL? Are the OpenSSL devs?


The same bullshit argument was made about the GDPR, and we survived that... there's just too much money to be made by outsourcing your shitty code's security bugs onto the customer.


Those aren't the same though.

The GDPR is basically "you are liable if you are actively exploited and data is stolen". You're saying that a company is liable if it ships bugs, which the GDPR absolutely doesn't care about.


> you are actively exploited and data is stolen

Not even close, you are liable for keeping the data you collect as a data processor or controller safe.


And "encrypt data at rest" is most of what you need to do to comply with the GDPRs data security stuff.

Which again, is nothing like "write bug free code or you're liable".


> encrypt data at rest

What? No: you have to have a DPO, provide clear language on what you do with data and who it's shared with, and avoid intrusive prompts with opt-in by default, just to name a few.


None of those things have to do with the actual security of your code/data storage. They're procedural.

The GDPR focuses on procedural liabilities. You're asking for application-level liabilities, which, like I've said three times now, are a whole different ballgame.

Since you're so deadset on this, I'll just ask again: Who is liable for Heartbleed or for Meltdown? Who gets sued, and for how much, and why?


> Heartbleed

Anyone who doesn't make an effort to update. If your hardware is still Heartbleed-fucked and you're selling it, you deserve to lose money.

> Meltdown

Intel and AMD.

> Who gets sued

No one. Here's your product back, it's defective, please cut me a check; that's all.


Ah, so since Android and iOS are already provided for free, nothing changes for consumers?


I would agree that some of them are pretentious, especially with their teases about undisclosed vulns on Twitter. But not all of their members engage in this.

I disagree with everything else you say. Google P0 has definitely pushed companies like Microsoft and Apple to be better, and this benefits all of us. Their 90-day policy pushes vendors to make overall improvements in their build, ship, and update processes, helping us get faster updates for critical vulns. They forced the leadership at these companies to put security very high on the list.

I am also pretty sure that Google is held pretty tightly to its own 90-day disclosure policy. Google runs a full-fledged bug bounty program, and external researchers do find critical bugs in Chrome, Android, etc. and don't get sued by Google.

With hundreds of critical bugs in major software (and hardware) their contribution to security is nothing less than stellar.


I think believing Google is pretentious and unprofessional is one thing. But the other stuff you said doesn't make any sense.

Why shouldn't they go public? Not everyone can spend their time reading patch notes. If a new patch comes out that fixes a security flaw, most people say "update later". But if you say "no really, criminals are using this attack RIGHT NOW", that can help a lot of people keep their devices safe.


Do you have any actual examples of this happening?

It should be relatively easy to prove, by showing a disclosure from Project Zero without an accompanying acknowledgment/post from the company...

So far, most of what I've seen has been good for security. Not least the whole Spectre/Meltdown mess.


> the most valuable company on Earth

Please edit to say "one of the most valuable companies on Earth," as it is absolutely not the most valuable by market cap. In fact, it's currently worth less than Apple ($781B vs. $803B).


I hold the even more unpopular opinion that "responsible" disclosure is anything but, and that everything should be fully disclosed in real time.

If I were running Project Zero, I would not even give them the 30/90 days.

Full disclosure is the best way.


I can understand this. Both of these options are better than what it was like in the '90s, though. Back then it was "inform a company about a bug, and get a non-disclosure agreement forced on you or a visit from the FBI".

People forget that one of the reasons the '90s was such a heyday for hackers was that companies tried to fight hackers with court orders instead of actually fixing their stuff.


That's exactly right. I'm firmly of the opinion that "we'll give you 90 days to fix it before announcing" is responsible disclosure, as it motivates people to actually fix their stuff as opposed to filing the report away in the circular Jira.


I can somewhat get on board, right up until the moment a 0-day is observed in the wild; then it should be disclosed.

I cannot protect my systems from things I am not aware of. Allowing a vulnerability to be exploited for another 60 days when, on day 29 of the "responsible disclosure" window, it was observed being actively exploited is not responsible either.


I'm with you. On that day, I assume everyone in the world except me knows how to exploit my system, and the vendor is racing the clock.


Has this ever happened with Google or is it pure speculation?



