0-days exploited by commercial surveillance vendor in Egypt (blog.google)
534 points by mikece on Sept 22, 2023 | 239 comments



It's good to get some more info, but it is a little disconcerting that they only mention patching Chrome. What was the sandbox escape on Android? Even if you had code execution inside the Chrome process on Android, that shouldn't be enough to enable persistence, so clearly there's another vulnerability.

Also, in this case the attack vector was MITM of HTTP and one-time links, as it was a targeted campaign, but it feels like there's nothing preventing someone from putting this in an ad campaign or sms/discord/matrix/whatever spam and spraying it everywhere to build a botnet or steal user credentials or whatever.


> What was the sandbox escape on Android? Even if you had code execution inside the Chrome process on Android, that shouldn't be enough to enable persistence, so clearly there's another vulnerability.

This is such a crucial point. Forced to read between the lines of the blog post (because the above information is missing), it sounds like there are currently unpatched issues in Android revolving around this?


Likely yes, they were unable to capture the following stages so they don’t know what was exploited after gaining initial execution within the chrome sandbox.

Likely there’s a chrome sandbox escape and a kernel exploit remaining “unknown and unpatched”.


> Likely there’s a chrome sandbox escape and a kernel exploit remaining “unknown and unpatched”.

There are certainly many of those that we don't know about. If this was done in Egypt, imagine what a three-letter agency has.


Every government with money has exploits for every device imaginable.

The going price was $1m for a full exploit chain on iPhone a while ago, and I’m sure it’s gone up, but I’m also sure it is minuscule compared to the amount of money governments have.

Everyone should assume this is fact, and not imagine some secret spy world. If you ever become interesting enough to hack, you will be hacked, and there is little recourse (currently).


Kinda surprising that Apple wouldn't be the highest bidder for a full exploit chain. They've been known to give out $100,000 bug bounties, but you'd think one million would be a pretty good deal for closing a vulnerability vs having it sold to companies that professionally surveil people.


One million dollars is what some states pay _per target_.

Every major black hat group operates with the blessing of some state. It’s about more than just money. Actual exploits are probably traded for “favours” (e.g. votes in international bodies, collaboration on thorny dossiers, extraditions, etc.). The infiltration of electronic communication is a major aspect in determining a state’s level of “soft power” and - in a software-run world - its weight only increases.


>Every major black hat group operates with the blessing of some state.

Citation needed. As far as I know that's not true. Yes, some groups cooperate with the state (true in Russia after the escalation of the war in Ukraine, for example). But that's certainly not true for everyone - not every group is of use as an APT unit; most are just common criminals.


I'd go a step further and state: everybody is interesting enough to hack, fully automatically.

I have 0 doubt that literally everybody is being scraped by 1 or more governments and/or companies.

Because if they can, why not?


Because every time you do it, you run the risk of discovery. This can be expensive (you burn your exploit) and politically embarrassing.


The answer to “why not” is this blog post. Lots of people are likely very unhappy that their expensive exploit has been patched. This is what The Citizen Lab does!


Poignant, not really. Prescient, maybe.


Wouldn't be a stretch to assume this was forced by 3-letter agencies, with its details leaked for sale on an exclusive dark web market.

Think of all the insidious corruption we find out about via declassification 50 years later. It's not like human nature has changed.


Is it possible that it was detected but without a sandbox escape? Would it still be described as "an exploit" if so?


Yes.


There can be multiple vendor-specific exploits.


I mean there are always unpatched issues in everything... There's nothing you can do, whether you know about it or not. You have to just assume you're always actively being exploited at some level


Read it again: no sandbox attack on Android, just the MITM and one-time-link attack.


That’s all they were able to gather.


They state that they were unable to capture the follow-on stages of the Android chain, they only got the initial execution component.

Which means a sandbox escape and a privilege escalation bug are still missing.

Also, yes, while delivery here was apparently ISP-level MiTM using lawful intercept capabilities, there's no reason the exploit couldn't be delivered as a 1-click via a phishing link.


There are millions of android devices out there that have been abandoned by their manufacturers, so they might be omitting those details because the flaw hasn't yet been patched (and likely never will be.)


Ride public transport in Eastern Europe and you'll see Androids running versions as old as 4; such devices are cheap, and ubiquitous phone repair shops will fix or replace components, such as batteries, speakers, and more. If the situation in Egypt is similar among less-affluent users, then there are many Android phones just waiting to be exploited.


Even with Google's own flagship device, the latest security patches date from August while this announcement is much newer, so it seems unlikely _any_ Android devices have been patched.


I believe you are mistaken.

First of all, some parts of Android are updated via the store. So the Chrome vulnerability and system libraries were probably updated without you noticing.

Also, this started in May. Before it went public both Apple and Google had already patched some vulnerabilities. Latest round was in August/September.


I got a September security update for my Pixel 5, I think 2 days ago.


Huh, Pixel 7 Pro here and the last update, even after a manual check, is 5 August


LineageOS on OnePlus Nord, last security update: September 5


Updates occur all the time, what do you mean there's been none since August?


Apps get updated, but what I'm worried about is that the Android image view uses libwebp, which means all the apps that just ask the OS to display untrusted images stay vulnerable until an OS update.

I'm not sure the image view has been separated out into any of the Play Store-updated components. None of their names and descriptions suggest they'd include OS UI widgets like that.


The article is mainly about the iPhone exploit chain: Safari exploit -> PAC bypass -> kernel exploit.

The Android version was pretty similar, but I think it needed two more exploits to bypass Linux kernel mitigations.

PZ has a good technical writeup.


The only recent Project Zero write-up about Android sandbox escapes appears to be on a different, ALSA-based and Samsung-specific vulnerability.


Who is PZ?


Google's Project Zero.


I'm not well versed in mobile environments. Presumably breaking out of the Chrome sandbox would land you within the underlying OS. Can you not build persistence there without abusing further vulns?


There's nested sandboxes for browsers in mobile environments. There's the inner layer which the web content is running in, but then the browser itself is sandboxed so it can't do things like access OS APIs it doesn't have permission for, install apps that run in the background, etc. This is why the iOS example needed 3 exploits chained. The fact that a similar example worked on Android, which also has app sandboxing, implies there should be an exploit chain but we've only been told of the first.


There's also SELinux on Android.


But browsers, especially Chrome, have lots of permissions (including geolocation, SD card access, access to the user's personal data, camera and microphone, etc.). You don't need to do anything more if you can run under the browser's privileges.


None of the mentioned privileges should net you a persistence though, so there's clearly still another vulnerability.


But smartphones are rarely rebooted so maybe you don't need persistence that much?


You still want priv escalation if you're trying to spy on someone. The content process can't see anything you're doing in other apps, and the browser can only access a very restricted view of the storage

Living in memory or living off the land is generally a good idea, but you still want a chain of exploits anyways


Above says access to SD card. I think that means write ability. Which means persistence.


Persistence means you can write to something that'll cause your malware to run again after reboot, but external storage isn't enough to do that without another exploit in the boot path, right?


Android has clamped down hard on access to the SD card. Chrome certainly doesn't have the special All Files Access, and on my device it doesn't even have Photos or Music, since I never use those permissions with Chrome, and Android regularly turns off permissions that haven't been used recently


that sounds like a terrible joke

sandbox in sandbox in sandbox in sandbox in sandbox in sandbox in sandbox

and stuff still manages to escape


That's the thing about sand. It's coarse and rough and irritating and it gets everywhere.


But there is always time for a glass of good wine!


Defence in depth. Now you need a working chain of 3 exploits instead of one. It's about raising the bar, perfect security is impossible.



Gotcha, thank you.


On Android, you usually need three exploits.

1. Chrome code execution (gain foothold inside Chrome process).

2. Sandbox escape (gain code execution outside the Chrome sandbox, with the privileges of the Chrome process, which aren’t very useful except to stage another exploit).

3. Local privilege escalation, usually a kernel bug or similar, to elevate to root where you can break the process “sandbox” and establish persistence.


Though HTTPS is better than nothing, and this attack relies on HTTP to inject the initial payload, state sponsored attackers in some countries can likely just subvert CA or CDN infrastructure instead.


You probably can't forge a certificate like that without all the browsers noticing and dropping your CA; there are protections against it.


If you're a nation you can just force the CAs that are in your jurisdiction to do whatever you want, or sneak in in various ways so they won't know.

If you use the MITM judiciously, it's very likely that nobody will notice, or that those that notice can be compelled not to say anything.


CT means a CA can’t do whatever they want. Your comment is handwavy, do you have any details on the “various ways” you’re talking about?


Physically, or by compromising employees or business owners in whatever legal or illegal means, depending on the country.

Sounds from other comments like my knowledge is out of date though, and browsers have real protections against the obvious ways that used to be possible, which is great news.


I think it's true that they can do "whatever they want", but only once, because they'll lose that ability once found out. The issue is the time between breach and punishment.


As long as a domain has a CAA record specifying which CAs are allowed to issue certificates for it (I believe CAA checking is now mandatory in the baseline requirements for CAs), coupled with CT, a misissuance by a malicious CA should be immediately detectable.

Of course then the question is how quickly browsers can roll out an update/config to distrust all future certs from said CA.
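
For illustration, a minimal sketch of that CAA check, assuming the third-party dnspython package (the domain is a placeholder, and real CAA processing also walks up parent domains):

    import dns.resolver

    try:
        answers = dns.resolver.resolve("example.com", "CAA")
        for rdata in answers:
            print(rdata.to_text())  # e.g.: 0 issue "letsencrypt.org"
    except dns.resolver.NoAnswer:
        # No CAA record: any publicly trusted CA may issue.
        print("no CAA record set")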


Browsers only enforce that certs were logged to CT logs (because they will fail a TLS connection unless a certificate has valid SCTs attached to them). The actual domain owner will have to monitor the CT logs and call out when they notice a certificate being issued that they didn't request. Without that active monitoring of CT logs by the domain owner, it won't help.
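
As a sketch of what that active monitoring could look like, assuming the third-party requests package and crt.sh's public JSON endpoint (the exact output fields are an assumption based on crt.sh's current interface):

    import requests

    # Ask crt.sh for every CT-logged certificate matching the domain.
    resp = requests.get(
        "https://crt.sh/",
        params={"q": "example.com", "output": "json"},
        timeout=30,
    )
    resp.raise_for_status()
    for entry in resp.json():
        # Alert on issuers or names you didn't request.
        print(entry["not_before"], entry["issuer_name"], entry["name_value"])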



https://proton.me/blog/kazakhstan-internet-surveillance (apparently they rolled back pushing this, but this shows some "various ways")


We’re discussing trusted CAs typically bundled with operating systems or browsers here. They have to follow the baseline requirements and at least maintain a semblance of innocence. Directly compromising clients with a typically untrusted root cert is out of scope, and you don’t need an evil CA in that case anyway.


The browser will notice that it's being given a cert that isn't in the Certificate Transparency logs + other hardcoded known certs for top sites, and will phone home about it.


What browsers actually do that? And do all CAs support it?

Besides, if you have enough access to the CA, you can just get whatever cert is in the transparency log, though that's almost certainly harder for most nations and CAs.


> What browsers actually do that?

Chrome, Safari, Firefox.

> And do all CAs support it?

Yes, the browsers made them.

> Besides, if you have enough access to the CA, you can just get whatever cert is in the transparency log

No, the CA doesn't have the private key of certs.
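
For the curious, a minimal sketch of checking those embedded SCTs yourself, assuming the third-party cryptography package (browsers also accept SCTs via a TLS extension or stapled OCSP, so their absence from the certificate itself isn't conclusive):

    import ssl
    from cryptography import x509

    # Fetch the server's leaf certificate and parse it.
    pem = ssl.get_server_certificate(("example.com", 443))
    cert = x509.load_pem_x509_certificate(pem.encode())

    try:
        scts = cert.extensions.get_extension_for_class(
            x509.PrecertificateSignedCertificateTimestamps
        ).value
        print(f"{len(list(scts))} embedded SCT(s)")
    except x509.ExtensionNotFound:
        # SCTs may still arrive out-of-band (TLS extension / OCSP).
        print("no embedded SCTs in the certificate")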


That's great! I guess my knowledge is out of date, happy to be wrong about that.

> No, the CA doesn't have the private key of certs.

Whoops, yeah, good point.


> you can just get whatever cert is in the transparency log

That would require compromising the certificate requester.


A lot of certificate management services for enterprise customers "helpfully" store the private key files. How many cloud or SaaS vendors automatically handle the private keys as well instead of them being generated and staying securely only on the systems using them? So there are still points of centralization to attack, potentially.


Yes, state actors have been known to steal things like codesigning keys. Microsoft had that happen recently where someone with persistence on a dev machine sniffed them out of crash logs(!).

But it requires a lot more steps.


Or get someone to click on a spoofed domain, certified by our beloved LetsEncrypt! Apparently all that is needed is an HTTP 302/307 redirect response (or an HTML redirect payload, maybe even DNS?) pointing the client toward c.betly[.]me
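
Such a redirect is at least easy to observe from the client side; a minimal sketch, assuming the third-party requests package and a placeholder URL:

    import requests

    # Fetch over plain HTTP without following redirects, so any
    # injected 302/307 (and its Location target) is visible.
    resp = requests.get("http://example.com/", allow_redirects=False, timeout=10)
    print(resp.status_code, resp.headers.get("Location"))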


I'm interested in your suggestion that Digicert et al are doing some sort of "keeping the streets safe" checking. I have zero experience of using them, but I thought all they were doing was confirming the applicant represented some entity that matched the domain name. If I manage to get a company registered called G00gle, buy a corresponding domain, and then send them my $500, are you suggesting they're going to refuse to issue a certificate?

My impression was that CAs made the likes of Standard & Poor's look rigorous, but I'm happy to learn more from actual experience of them rejecting such an application.


You are absolutely right that (for plain domain validation) paid CAs are exactly as trustworthy as LetsEncrypt, and often much less (remember the DigiNotar debacle, for example). "Keeping the streets safe" is not their responsibility, except in a very limited sense. The $500 extended validation was mostly paperwork and snake oil.

My point wasn't to discredit LetsEncrypt, but to point out that Google's claim to mitigate the MITM attack vector with HTTPS-first wasn't a very strong argument. I mean, yes, sure: if you can't intercept or downgrade to HTTP, the MITM doesn't work. But all the HTTP step seems to have done was redirect to a malicious payload, and you can also do a redirect in HTTPS.

So if you can spoof someone to go to https://g00gle.com/ it should be just as easy to launch the attack chain from there.


I don't see GP claiming CAs should be checking reputability for domain issuance certificates. But the thread originator mentioned subverting CAs! Something to remember about even the most advanced attackers is that they value the continued effectiveness of their tactics, tools and procedures. Even nation-states in possession of CA subversion abilities won't burn their malicious CA on someone if they can conduct the attack with a legitimately-issued certificate, and they won't bother with a legitimately-issued cert if they can conduct the attack without even involving a CA.


I was referring to this comment https://news.ycombinator.com/item?id=37615985


As far as I can tell, the verification that DigiCert performs is: 1. the company exists in various business listings; 2. the phone number listed in whois has a human behind it, and the human confirms the phone number belongs to the company.

Source: Have to be that human from time to time


Only if you pay $$$$ for OV/EV.

If you get the normal DV cert they don't provide any more verification than Letsencrypt.

And since browsers have moved away from indicating OV/EV certs to end users, not many organizations are paying for those anymore.


Can confirm as someone who has to renew a non-Let’s Encrypt cert every year (for reasons). The CA sends an automated email to the email address listed in WHOIS, you click a link in the email, and they issue the certificate. No human interaction necessary.


EV certs have a slightly more rigorous approach. They’ll call the registered agent for the business as registered/licensed with the state, not the phone number from whois or an email to webmaster@


Yes, I’m just talking about regular DV certificates (the same type you’d get if you just used Let’s Encrypt).


Huge difference between being tricked into clicking a link vs just browsing the web and getting owned.


The vector is an image file. Put your image in a display ad, and now you don't need the user to click anything.


Is there?


Yes. The move from 0 to 1 click exploits (thanks to putting Flash/Java behind a click) in the early 2000s marked a massive negative shift in attacker capabilities and ultimately destroyed multiple (black market) exploit dev businesses.


“Click to play” bypasses became incredibly valuable as an enabler for Flash/Java exploits, for a while. They were also few and far between, and if memory serves me, unreliable as fuck.


It definitely matters. Just think about how much Dr. Evil would pay for an exploit that relies on user action versus one that doesn't.

https://nvd.nist.gov/vuln-metrics/cvss/v3-calculator


I can probably avoid being tricked into clicking a link; I've avoided many, many attempts to trick me in the past. I probably can't avoid browsing the internet though.


Coincidentally, the ACME DNS verification process that LetsEncrypt uses is vulnerable to the QUANTUM attack. If the NSA injects a fake DNS response in the right spot, and has their response arrive before the official response, they can get the domain verified.

OTOH, Certificate Transparency Logs will give the game away, so there's that.


Doesn't Let's Encrypt check the DNS record from multiple widely-distributed endpoints to avoid this attack?
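
(For context: multi-perspective validation means issuing the same query from several network vantage points and requiring agreement. A loose sketch of the idea, assuming the third-party dnspython package; Let's Encrypt uses its own distributed vantage points, so querying a few public resolvers, as below, only approximates it:)

    import dns.resolver

    RESOLVERS = ["8.8.8.8", "1.1.1.1", "9.9.9.9"]  # distinct operators

    def txt_values(name, server):
        # Resolve a TXT record through one specific resolver.
        r = dns.resolver.Resolver(configure=False)
        r.nameservers = [server]
        return {rd.to_text() for rd in r.resolve(name, "TXT")}

    views = [txt_values("_acme-challenge.example.com", s) for s in RESOLVERS]
    if all(v == views[0] for v in views[1:]):
        print("answers agree; spoofing one network path is not enough")
    else:
        print("vantage points disagree: possible injection")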


> certified by our beloved LetsEncrypt

Are you saying that CAs should be refusing to issue certs for potentially spoofed domains?


...or Digicert, Globalsign, the Hongkong post office, whichever CA is in your truststore.

I just mentioned LetsEncrypt because it's free and exceptionally easy to use. I'm not implying in any way they aren't providing a great service, it's just that that service also gets misused because it's cheap and easy.


It really sounds that way, FYI. I interpreted it as a dig at LetsEncrypt in particular.


Please accept my apologies :-)


Consider your apologies accepted


Note that it only took/takes one http visit (to any site) to compromise the device.


Related recent episode of Darknet Diaries about the Predator spyware: https://darknetdiaries.com/episode/137/.


What these 0-days teach us is that if you are a person in the crosshairs of powerful adversaries, you have to be super-paranoid and reduce your attack surface to as small as possible.

If you are James Bond and you want to have secure communications with M, you are better off using a custom appliance type mobile device meant only to do that one function and nothing else. You will have to rely on anonymous 3rd-party burner devices to leave messages encoded in pre-agreed sequence of rotating geo-locked codebooks on random message boards. Any less opsec and you are toast!

If you are a normal human-being using connected digital devices normally, then you should assume anything you put on those devices is already stolen. If you want to keep things truly private, keep it off digital (paper, old-school analog tape etc). Then, at least they will have to physically steal it from you – which, depending on the situation, might be much harder for them, but not necessarily much safer for you. So, you lose either way.

The only real way out is to have well-functioning democratic governments, with strong transparency, checks and balances, and strongly enforced privacy laws to protect their constituents from both domestic and foreign adversaries, plus strong support for initiatives like Citizen Lab.


This vulnerability was most probably used by the Egyptian authorities to hack the mobile phone of the presidential candidate Ahmed El Tantawy, who is competing with the current president Abdel Fatah El Sisi over the presidency.

https://x.com/jsrailton/status/1705271600868692416?s=46&t=Kq...


The article doesn't mention it but Lockdown Mode on iOS blocked this exploit chain.


Here is what I do not understand:

Spyware firms and 0-day vendors both have staff dedicated to finding 0-days. Why do Google and Apple not simply poach these staff?

I am sure Google and Apple can offer very competitive salaries, so why do they not do so? Is it because the cost of basically poaching all of the skilled 0-day hunters is deemed to be greater than the cost of just issuing patches?


I am this person. I work as a researcher finding 0-days.

From the employee perspective: Wages are equal. Big Tech work is less interesting (build big bug-finding machines that find a high quantity of bugs) and you report the bugs into some bug tracker where they sit, only to maybe be fixed in 3 months. Offensive security work is more interesting. It requires intimate knowledge of the systems you research, since you only need a handful of bugs and the shallow ones get found by Big Tech. You must go deep. Additionally, offensive security requires the know-how to go from vulnerability to code execution. Exploitation is not an easy task. I can't explain why engineers work for companies that I deem immoral, but that's probably because they don't feel the same way as I do.

From the employer perspective: How much does the rate of X vulnerabilities per year cost me? If our code has bugs but is still considered the securest code on the market, it may not benefit the company to increase the security budget. If the company expands the security budget then which division is getting cut because of it, and what is the net result to the company health?

If you want to fix the vulnerabilities you need to make the price of finding and exploiting them higher than the people buying them can afford. And you must keep the price higher as advances in offensive security work to lower the price of finding and exploiting them. Since defensive companies don't primarily make money from preventing bugs and offensive companies do primarily make money by finding bugs, there is a mismatch. The ultimate vulnerability in a company, or any entity, is finite resources.


I mean wages might be equal (are they, though? Big tech pays a lot as you go up) on average but there’s a lot of difference in how they pay out. Big tech usually provides compensation bands where your salary is pretty stable. Vulnerability research frequently has your compensation hinge on your performance to a much larger extent.


Would the number of critical vulnerabilities be lower if we sacrificed some performance? 10-20%?


Yes, this has been a trend for a little while now. For example, this gist[1] gives Linux boot parameters to make Linux significantly faster, and all it does is basically turn off all default security mitigations. I would make the distinction between vulnerabilities and "exploitable" vulnerabilities though. Mitigations usually incur a runtime performance hit but don't remove the underlying flaws; they can just make it harder, or sometimes impossible, to escalate a little flaw into full-blown code execution. But also know that offensive techniques advance alongside defensive ones. For example, ASLR was once considered the death of vulnerability research, but new methods and ideas were found and bypassing ASLR is now just part of the job. Each mitigation must be regularly evaluated against the state of the art, and against the cost to performance (and complexity, etc.). You ideally don't want to be paying performance costs when they aren't helping security.

Rust, Zig, and others are additionally paying compile-time performance costs to remove some underlying vulnerabilities, which is interesting and probably a good thing for software.

[1] https://gist.github.com/jfeilbach/f06bb8408626383a083f68276f...
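
As a quick check of what your own kernel reports, a minimal sketch (Linux-only, standard library only):

    from pathlib import Path

    # The kernel summarizes each known CPU flaw and its mitigation here.
    for f in sorted(Path("/sys/devices/system/cpu/vulnerabilities").iterdir()):
        print(f"{f.name}: {f.read_text().strip()}")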


thanks


You don't need to cut any division, profits can also change


Most investors back companies with a kind of “extraction” mindset. They only want to Solve The Problem in so far as they can turn that into a stream of income.

Why would they give that to the employees to improve The Solution? It’s already solved as far as The Market is concerned. That would be Bloat


Google has one of the best teams money and prestige can buy:

https://en.m.wikipedia.org/wiki/Project_Zero

They also have excellent collaboration with independent researchers across the world. But given how much software is written every day, they can still miss some issues.


Project Zero is amazing, but they 1) seem like a very small team, and 2) their mandate is far too broad (essentially to search for 0-days in anything, versus a specific system). What I am talking about is more like Apple having a dedicated team of 10 vulnerability researchers all looking into iOS 0-days fulltime.


They all do that. I've been in Offensive Security for 10+ years with several spent at FAANGs, and not only do they all have large security teams doing internal testing, they hire multiple contractors like Trail of Bits to audit every important service continuously throughout the year.

Apple has way more than 10 full time researchers looking at iOS all day, trust me :). They also have a really generous bug bounty. There are always bugs though.


> Apple has way more than 10 full time researchers looking at iOS all day.

Yes

> They also have a really generous bug bounty.

Hell no


Agree. Not long ago, Apple used to sue people reporting vulnerabilities to them. Imagine punishing people doing free work for you. Not a good look.


Getting punished is the default.

If you ever come across anything, keep your mouth shut.


Not only is it not generous (relatively speaking), but actually getting paid can be extremely annoying.

Used to be even worse.


I think this is similar to looking at the budget of the US government and asking why they don't simply pay off all the potential criminals such that most crime in the US is then mitigated.


That’s not equivalent at all. Paying off criminals creates an incentive for there to be more criminals. Paying more security researchers does not incentivize people writing buggy C code to write even buggier C code.


Your analogy is all messed up: paying off criminals doesn't make houses harder to break into.


Raising compensation creates an incentive for there to be more bug hunters.


Which is good


Some of them wouldn't want to work for Google and Apple in the first place, regardless of the salary.

But while they could try and poach them today, tomorrow there will be a whole load of new people working for those companies, and it'll just be a never-ending cycle.


> But while they could try and poach them today, tomorrow there will be a whole load of new people working for those companies, and it'll just be a never-ending cycle.

The number of people who successfully find 0-click 0-days for iOS/Android is very small. It's not a vastly replenishable resource.


It's a small group but a wide pool. It's not like the same person finds 10 0days. And until they do find their one exploit most of them have pretty much no credentials at all. So how do you avoid hiring 10,000 up and comers that never actually come up?


The same way it works in any other industry: by hiring those with proven track records, the best of the best. The goal is obviously not to hire 100% of the potential 0-day hunters, but, by launching a concentrated poaching effort, to make a sufficient dent.


People have said Apple could buy companies like NSO for less than they probably spend on SIEMs in a year. But as soon as they do that, there will be another startup doing the same thing.

The company (Grayshift) that broke the Secure Enclave had ex-Apple security engineers working for them.


If you inject a lot of money into the industry I think it'll result in more people learning how to do it.

Though, the reason governments found the best exploits for the longest time wasn't by hiring geniuses, but by giving people long periods of time and freedom to do kinds of research nobody else was motivated to do.


They could certainly go out and try to hire some of the best iOS vuln researchers. But if they hire the top 10 out there, then #11 gets a massive pay rise to go and work for one of the spyware companies.

And if Apple is paying huge amounts of money and getting into bidding wars with all the other companies out there for vuln researchers, that'll attract a load more people to start hunting.


> Some of them wouldn't want to work for Google and Apple in the first place, regardless of the salary.

For moral reasons do you mean? I would be surprised to learn there are a lot of people that are open to selling 0 day exploits to "bad actors" (granted that this term is doing a lot of heavy lifting here), but wouldn't want to work for Google or Apple.

> ...it'll just be a never-ending cycle

I think the idea is you pay them so well that they can work for a handful of years and remove any cash incentive reason for them to continue selling 0 days to bad actors.


> “bad actors", but wouldn't want to work for Google or Apple

- Everyone who doesn't like US hegemony. Which is just about everywhere but the US, in varied proportion - even in Europe, and even more so in the Middle East,

- Everyone who doesn't like monopolies. Capitalism of competition (as opposed to state capitalism, when the state borrows a trillion per semester, ahem) requires that monopolies be broken down to avoid distortion of competition. Helping bad actors can be, from their viewpoint, less bad than the damage done to a billion consumers at a time. Plus, monopolies impose a monoculture of occidentalism, with certain values that a firm in Egypt might consider worse than sponsoring bad actors.


If the number of skilled 0-day hunters who will work for a paycheck is > 0, then yours is a moot point, since a poaching program would still make an impact even if there are some people who work for spyware companies who would not work for FAANG.

I think you will also find that morals for many people are inversely proportional to the offered salary. A 0-day developer being compensated $130k may well abandon their particular morals if offered a $240k salary instead.


It's a good idea, but in some ways, akin to the challenge that would be presented by attempting a similar-veined "why doesn't the world's richest country just hire all the world's best military generals, leaving zero for any other country?".

The reasons why it's not possible are myriad, but boil down to the fact that the world and humanity are very big things, and one entity can't possibly get them all, or even most of them. There's too much diverse heterogeneity built into everything, including many worldviews and loyalties that go beyond money.


> Why do Google and Apple not simply poach these staff

They do. Plenty of white hat teams hire 8200 vets, but sometimes they'd rather start their own company instead of being a cog within an amorphous foreign corporation.


This. IIRC some famous security researcher responsible for iOS jailbreaks was poached by Apple only to leave after 3 months.

Successful and skilled security people with a proven track record don't have the patience to put up with the charade such large orgs require.


Plenty of jailbreak developers have been poached by Apple. Whether they're still at their A-game is questionable; I follow one on Twitter and there are regular tweets about depression, suicide, debt, and other gloomy topics.


George Hotz


I think it’s many factors.

1. They do to some extent.

2. Which researchers are you going to hire? Lemon market, whoever wants to be hired is more likely a lemon.

3. Freelancing grayhat stuff is very rock n roll.

4. I bet there are some they try to hire, and then the square and inflexible large-corpo hiring process is just absolutely unfit for hiring such a person.


> Which researchers are you going to hire? Lemon market, whoever wants to be hired is more likely a lemon.

Not really. Most people in that space who have a “day job” are almost always open to being hired for better TC/benefits/more interesting problems.

Points 3 & 4 are largely correct.

It’s very rock and roll, but a very unstable income and most of the brokerages are comically untrustworthy. Also you may develop a conscience and find it hard to sleep at night.

Point 4… usually the people who can find such bugs reliably don’t work well in large corps past the short term. The unexplained gaps in a CV also aren’t conducive to getting past HR easily.


who's to say they're not doing this? there are a lot of security companies and researchers in the world though

or alternatively: as lovely as the hacker -> employee fairytale sounds, a certain % of the "I would never work for Google/Apple" types would come in with the sole purpose of installing backdoors from the inside


A lot of the really good hackers won’t pass HR screening, or be able to cope with corporate bullshit beyond a year.

So there’s also that.


There is also chance at play here. Many people are trying to find a hole, and some are luckier than others. Google has a great team, so they get lucky more often; that is why these things are not too common. But at some point a bad guy gets lucky too, even though he is not the smartest in the room.


Money is just one input for why people choose to work at certain places.


Because that would eat into profit margins, and at the end of the day very very few paying customers actually make their purchasing decisions based on security. On top of that almost nobody is really knowledgeable enough to make an informed decision in the first place. So the money doesn’t get spent.


Why would you need to poach all of them? Just enough to find bugs faster.

Though poaching them all is simply impossible; by raising prices you'll incentivize more people to become vulnerability researchers, so more will always be available to the spyware firms.


They probably don't want to hire criminals to work at their companies.


Already very well-paid criminals for that matter


Isn't this akin to asking why Google and Apple end up acquiring companies at very high prices if they could just hire the founders and have them build the products in-house?


Google absolutely “poach” promising startup devs tbh.


Poaching founders is an entirely different kettle of fish than just poaching line employees.


Oh, I thought finding these vulns was a highly open-ended endeavor with variable payouts.


It depends. Some companies that buy exploits have public “price lists” for acquisitions.

100k+ for a Safari on iOS code exec, another 200k for the Safari sandbox escape, then another 500k+ for the kernel exploit?

A full chain is real money. Especially when they resell this ability for 1-2M+ per user.


I want to fight terrorism. I’ll work for a company finding ways to get data from bad actors’ phones before I’d work for Google or Apple, at any price.


These exploits are weapons. Look at what governments pay for weapons. That's hard to compete with.


How much do they pay for weapons at the level of an individual weapon designer/manufacturer's employee??


Poach them to do what? There’s not much use to Apple or Google to have an implant developer around, and just having them do nothing is likely to be frustrating if the corporate lifestyle wasn’t enough already.


> Poach them to do what?

Poach them to discover 0-days in their software, as I said.


That’s not what implant developers do.


It’s not just money that motivates.


Ouch.

Apparently Firefox has "HTTPS-First" also, but it requires the pref dom.security.https_first to be set.

"HTTPS-Only Mode" is obviously best if you can do that.


You'd still need to resist the urge to press "allow me anyway", and to be honest, even I'd click it knowing the risk (I just want to visit the damn site!). This doesn't solve anything unless the prompt is extremely suspicious (like the prompt showing for Google.com or some other site I know supports HTTPS).


Replying to myself but also, they could easily trick you into clicking some link and exploiting you that way. HTTP isn't the issue here, it's just being exploited so they don't have to get you to click some link.

In all likelihood they'd do that if the less direct/obvious method of transmission didn't work.


The Citizen Lab post linked from this article has the details of the MITM attack. It's almost not even an attack, the network is designed to do content injection on demand.


A Google search for intellexa results in an http site which got redirected. I am now installing the update.


Of course, the "services" have enough CA private keys to legitimize any "man-in-the-middle" attack for any browser. Worst case scenario, they have root-kits they can easily inject on user systems in order to intercept some traffic before encryption. They probably have a significantly sized "library" at their disposal to work from, depending on the "targets", and it could even be kind of automated (up to a certain point) with proper fingerprinting of the user's system/components.

Presuming the other way around would be unreasonable.


I'm not an expert, but wouldn't a VPN (commercial, or to a server with a known exit IP) prevent such attacks? It would kick out the MITM.

Also I wonder if Lockdown Mode could block it.


Yes and also yes.

These attacks when launched by government entities pretty much always rely on placing a box at the ISP that does the targeted interception/MiTM against a subset of subscribers.

So using a VPN would ensure your traffic is tunnelled “beyond” their reach.

Lockdown mode also would have prevented the iOS exploit chain, apparently.


If I were a government security regulator or intelligence agency, I would monitor bank accounts associated with Zerodium and similar 0-day and payload marketplaces and offensive-security tool vendors, and use that to issue a secret, internal threat forecast marking the potential beginning of increased, directed, high-value attacks. Probably already exists in various forms, but taking it semi-public to sensitive industries might be useful.


This is basically “can the government prosecute money laundering and tax evasion”.


I can't help but think Google cheekily outed the domains the vendor was using as a way to target some vigilante justice upon them.


I've a question. This 0-day is a 0-click that didn't require any document download or anything; simply visiting an http site would do it. What if you have JavaScript disabled by default? Would this exploit still work?


Needing to visit a website makes it a 1-click attack, unless you can do passive redirection.


it's http interception so no, I doubt javascript matters at all


Without knowing more, that's a bit of an assumption. The vulnerability could be in image decoding, in which case an <img> tag is enough and no scripting is needed, but it could also very well require doing something funky with JavaScript.


if it required javascript then it was already an exploit without the mitm


I don't get it. If it is over http then you can play around with anything in a proxy. You have no TLS tunnel so it is not encrypted. It is by design.


That's just the delivery method, not the exploit.


Yes it is not encrypted and the response returned from server can be modified to anything and ask for password etc, but that is far from an exploit that runs native code.


Is the commit containing the fix for the v8 bug 1473247 CVE-2023-4762 available anywhere?


Slightly related, but Senator Bob Menendez was just indicted for taking bribes from people connected with the Egyptian military [0]. Gotta say, the Egyptian intelligence services are definitely punching above their weight by regional power standards.

[0] - https://www.politico.com/news/2023/09/22/egypt-guns-money-me...


> Senator Bob Menendez was just indicted for taking bribes from people connected with the Egyptian military

At a federal level, law/power is continually traded for cash/favors. Heck, the DoJ itself gets deployed in response to lobbyist demands (e.g. copyright enforcement).

From what I see this case was egregious and involved a non-favored foreign state. Maybe that's the bar at which DoJ begins to care about political ethics.


> law/power is continually traded for cash/favors

I worked on the Hill and that's not how it works. Yes, lobbying happens, but what Menendez is indicted for goes well beyond anything a lobbyist would do legally. On top of that, foreign lobbyists need to formally register with the DoJ, which obviously didn't happen, but that's just the icing on the cake.


>> law/power is continually traded for cash/favors

> I worked on the Hill and that's not how it works.

You are asserting that law/power is not continually traded for cash/favors. That's a pretty clear assertion and I appreciate it.

To follow, you would also assert that this chain doesn't exist in any meaningful way:

Major campaign donations are used by legislator -> Legislator benefiting from funds is critical to creation of law/regulation or to enactment of federal action taken that is favorable to donor -> Influential/lucrative, positions that benefit the legislator (or their interest) are made available to the legislator (during/after the elected term) by the donor.

recap: You are asserting that what I describe above is not occurring on an ongoing basis, correct?


Yes, that doesn't happen.

If a legislator agrees with you, you want to keep them in the legislature, not retiring into industry.

Also, campaign donations are capped at like $5000 and most of what people think is corruption is them recklessly misreading the donation reports.

Similarly, it's Bernie and similar people who get the most donations these days because of ActBlue, and it doesn't help them win elections, because voters actually like the legislators you think are owned by corporations and actually vote for them.


> Yes, that doesn't happen.

I invested 15 seconds into a search query.

Former Members Dick Armey. Tom Daschle. Tom Foley. Trent Lott.

Once, these politicos ranked among Congress' most powerful members. Today, they share another distinction: They're lobbyists (or "senior advisors" performing very similar work). And they're hardly alone. Dozens of former members of Congress now receive handsome compensation from corporations and special interests as they attempt to influence the very federal government in which they used to serve.

ref: https://www.opensecrets.org/revolving/top.php?display=Z

This is an incomplete list of just one body of lawmakers who went to work for just one industry. It's a very limited reference to the much, much larger whole.

>If a legislator agrees with you, you want to keep them

Although I included people tied to legislators (and their interests) you opted to limit your response to just legislators and even that seemed more platitude than substance.

> Also, campaign donations are capped at like $5000

So what? This falsely implies no donor can get more than $5k into any one campaign fund. Tossing it out there as the primary defining detail seems disingenuous. It omits non-cash contributions. It omits donations to traditional PACs, Super PACs, 527s, political parties, and 501(c)4, 5 & 6 organizations.

It omits all the possible avenues of getting money to candidates that someone with a CapHill political background almost certainly knows.

It does dovetail nicely with an alternate narrative about revolving doors not existing, however.


> This is an incomplete list of just one body of lawmakers who went to work for just one industry. It's a very limited reference to the much, much larger whole.

Those people are a combination of 1. really old and 2. lost an election. You can't keep people around forever, even if you might want to. (It also implies they're effective as lobbyists, which I don't think is necessarily true.)

> This falsely implies no donor can get more than $5k into any one campaign fund.

Those other things aren't the campaign fund, they're largely separate funds running separate campaigns and not controlled by the candidate. So they can't be used to directly pay the candidate.

Also, as I said, 1. money doesn't actually win elections and 2. if it did, it's Bernie and fellow non-corporate-Dem candidates who'd actually be winning, because small donor fundraising is more effective than this stuff is.

There are some straight up bribery scandals, but I think Bob Menendez is an exception that proves the rule and is going to lose his office based on his literal piles of gold bars bribery.

It is, however, actually the case with SCOTUS judges that they straight up take bribes and nobody can stop them.


> involved a non-favored foreign state

Egypt is not 'non-favored'. The US has very close ties with Egypt's dictatorial regime[0], despite its awful domestic human rights record[1].

[0]https://thehill.com/blogs/congress-blog/foreign-policy/58552...

[1]https://www.amnesty.org/en/latest/news/2022/09/egypt-human-r...


> Egypt is not 'non-favored'.

The WTO sets the MFN list and Egypt isn't on it so strictly speaking you aren't correct.

Besides incurring WTO favor, the US also bestows its own preferential treatments on those same nations. Egypt has long received many of those preferences, so you're right in the ways that are most relevant.

That said, I wouldn't place Egypt on the US's "BFFs/top partners in crime" list - the one that includes the 5/8 Eyes nations and Israel.


From the Google disclosure I can't tell what Egypt has to do with this though. Intellexa is a Greek firm founded by an ex-IDF (aka Israeli) guy. In general, while Egypt has definitely been caught using tech like this, it rarely has the sophistication to develop it itself.


Hey, they nearly destroyed the Ottoman Empire in the 1840s...


Probably part of the long running (now peaceful) rivalry they have with Israel.


Yep! Totally forgot about that!


Another company founded by ex-Israeli intelligence.

The funny thing about exploits is, once hundreds of employees or soldiers have access to the exploit, they don't need to physically copy the code. They just need to understand how it works, to then open 10 other companies that use the same exploit, or sell it to 20 other companies on the dark web.

Although the IDF is great at stopping people from copying files outside of their networks, it can't stop people from remembering what they did during their service.


For every 1 zero day, there are around 10-20 others that haven't been publicized. You can make plenty of money by trying to find a niche and concentrating on that (eg. android exploitation, iOS exploitation, Windows exploitation, APAC buyers, US Defense buyers, Middle Eastern buyers, EU buyers, etc).


One of these days people will wake up and realize that carrying a networked gps tracker with a microphone in their pocket is a really dumb idea.


Just your regular reminder that for the only security certification that Apple advertises on their website for iOS [1][2] Apple only achieved the lowest possible level of security assurance, EAL1. A level only fit for products where [3]: "some confidence in the correct operation is required, but the threats to security are not viewed as serious" which does not even require "demonstrating resistance to penetration attackers with a basic attack potential" [4]. This is four entire levels lower than "demonstrating resistance to penetration attackers with a moderate attack potential" [5].

Apple has never once, over multiple decades of failed attempts, demonstrated "resistance to penetration attackers with a moderate attack potential" for any of their products. To be fair, neither has Microsoft, Google, Amazon, Cisco, Crowdstrike, etc. It should be no surprise that the companies, processes, and people who lack the ability, knowledge, and experience to make systems resistant to moderate attackers despite nearly unlimited resources are regularly defeated by moderate attackers like commercial surveillance companies. They certify that they absolutely, 100% can not.

[1] https://support.apple.com/guide/certifications/ios-security-...

[2] https://support.apple.com/library/APPLE/APPLECARE_ALLGEOS/CE...

[3] https://www.commoncriteriaportal.org/files/ccfiles/CC2022PAR... Page 14

[4] https://www.commoncriteriaportal.org/files/ccfiles/CC2022PAR... Page 16

[5] https://www.commoncriteriaportal.org/files/ccfiles/CC2022PAR... Page 20


Please don't post "regular reminder" style comments - they're too generic, and generic discussion is consistently less interesting. Good threads require being unpredictable. The best way to get that is to respond to specific new information in an article.

https://news.ycombinator.com/newsguidelines.html


> neither has Microsoft, Google, Amazon, Cisco, Crowdstrike, etc. It should be no surprise that the companies, processes, and people who lack the ability, knowledge, and experience to make systems resistant to moderate attackers

Companies create a separate SKU for products that meet higher levels of security assurance for Common Criteria. I know for a fact that the companies you listed offer SKUs that meet higher EAL levels (EAL4+) for Common Criteria. You just gotta pay more and purchase via the relevant systems integrators.

A consumer product line like the Apple iPhone isn't targeting DoD buyers. That's always been Blackberry Ltd's bread and butter


I said, "resistance to penetration attackers with a moderate attack potential". EAL5 is the first level at which you must demonstrate that, as can be seen in my 5th link [1], which bolds the diffs from the previous level.

None of those companies has ever once certified a product to that level as far as I am aware. The failure is so complete that it is generally viewed as impossible to fix the structural defects in products that failed a EAL5 certification without a total rewrite. It used to say that in the standard somewhere, but the standard revisions have moved it so I can not quote it directly.

[1] https://www.commoncriteriaportal.org/files/ccfiles/CC2022PAR... Page 20


Going EAL5 and above doesn't make sense from a cost to security ratio UNLESS the customer is open to paying more for that level of verification.

Certain agencies and bureaus within the DoD do ask for this and pay for it, but most are good enough with EAL4.

Most attacks can be resolved by following the bare minimum recommendations of the MITRE ATT&CK framework (marketing buzzwords aside).

Least Privileged Access, Entitlement Management, Identity Enforcement, etc. are all much easier wins and concepts that haven't been tackled yet.

Companies will provide EAL5+ if the opportunity is large enough, but it won't be publicized. Iykyk. If not, chat with your SI.


No. The US government briefly had procurement requirements for high security deployments.

They were forced to relax them because Microsoft could not make bids that met the minimum requirements for DoD and high security projects and that made their Senators mad. They relaxed them to EAL4+ because that was the most that Microsoft could do.

They since relaxed them further to EAL2 because that is all that most large AV and cybersecurity appliance vendors could achieve. They justified it under the "swiss cheese" model, where if you stack multiple EAL2 products then you get EAL4 overall, which is insane. The government has since relaxed them even further, since none of the companies want to do any certification: none of them can achieve a decisive edge over the others that they could write into the requirements to disqualify their competition, so certification is just a zero-sum game.


EAL5 is mainly about having a semi-formal description, and for 6-7 you also need formal verification.

Outside some very limited cases, we don't have the tools to go there yet. EAL4+ is what people should aim for.


EAL4+ is useless against the prevailing threat actors, as can be seen time and time again. There is no point in aiming for inadequate; even if you get there, you still get nothing.

EAL6-7 certifications are basically the only known, existing certifications that have any evidence supporting that they are adequate to defend against the known and expected threats. As far as I am aware, there are no other certifications even able to distinguish products that can viably protect against organized crime and commercial spyware companies. Existing products max out every other certification and we know for a fact those products are ineffective against these threat actors. Therefore, we can conclude that those certifications are useless for identifying actual high security products adequate for the prevailing threat landscape.

Sure, if we had some other certification that could certify at that level and was more direct, that would be nice. But we do not, the only ones that we know to work and that products have been certified against are Common Criteria EAL6-7 (and maybe EAL5). We can either choose certifications that are cheap and do not work, or ones that work. Then, from the ones that work, we can maybe relax the requirements carefully to identify useful intermediate levels, or identify if some of the requirements are excessive and unnecessary for achieving the desired level of assurance.

However, the key takeaway from this is not whether we can certify products to EAL5 and higher or whether those certifications work or the cost-benefit of that certification process. The key takeaway is that EAL4 is certainly inadequate. Any product in commercial use targeting that level or lower is doomed to be useless against the threat actors who we know will attack it.


> EAL4+ is useless against the prevailing threat actors

Hold on a second. Assurance level is about, well, level of assurance the developers can provide. It is in most cases just paperwork.

CC has a different mechanism to define attacker capability & resources (can't recall what it's called) and set the security goals accordingly.


The AVA_VAN (vulnerability analysis) Security Assurance Requirement (SAR). AVA_VAN.4 requires “resistance to penetration attackers with a moderate attack potential”. AVA_VAN.4 is only required for EAL5 and higher.

You could individually incorporate a higher AVA_VAN into a lower EAL as an augmentation, but few do that. You also do not get any of the other conformance assurances that a higher EAL gives you. There is a reason we use EAL as a whole instead of just quoting the AVA_VAN at each other.

Though maybe you are talking about the Security Functional Requirements (SFR) which define the security properties of your system? That is somewhat orthogonal. You have properties and assurance you conform to the properties. Conformance more closely maps to “level of security” as seen in the AVA_VAN SAR. However, the properties are just as important for the usage of the final product because you might be proving you absolutely certainly do nothing useful.


I feel like you're arguing that these certifications are useless and uncorrelated with security but then you're trying to say that Apple and others are bad for not having them.


Low certification levels certify low levels of security. High certification levels certify high levels of security.

EAL4 is known to be too low against modern threats that will attack commercial users. We know this from experience where EAL4 systems are routinely defeated. Higher certification levels, such as the SKPP at EAL6/7, are known to be able to resist against much harder threats such as state actors like the NSA (defeating a NSA penetration test was a explicit requirement tested in the SKPP by the NSA themselves).

Low certification levels, like EAL4 and lower, which are the limit of the abilities of companies such as Apple and Microsoft, are known to be useless against commercial threats. They are uncorrelated with protection against commercial threats because they are inadequate, in much the same way that having a piece of paper in front of you is uncorrelated with surviving a gunshot. Systems that can only be certified to EAL4 and lower are certifiably useless.


> Low certification levels certify low levels of security. High certification levels certify high levels of security.

I guess I don't know enough to say but I just doubt that, knowing what I know about other certifications. I expect that they're perhaps lightly correlated with security.


You said that I was arguing the certification is useless. I was arguing that certifying to low levels is useless. Those are not even close to the same argument.

For example, a squat test is a reasonable measure of leg strength. Only squatting 20 kg means your leg strength is extremely weak. The test procedure is fine, getting results like that is not. If that is all you can do, that is quite problematic.

As to the certification itself, it is pretty good. Easily hacked products like iOS, Linux, and Windows are consistently unable to certify as moderately secure. That is vastly different than basically every other certification where products like Windows pass with flying colors even though we all know that is nonsense.

So, at the very least, low certification levels like EAL4 provide high confidence of lackluster security. You can withhold judgement of high assurance levels corresponding to high security if you like, but low assurance levels corresponding to low security is pretty clearly established.


I mean, it occurs to me that maybe there's a reason all of these companies aren't doing this: because Common Criteria and compliance are often stupid and don't represent real security. Perhaps these policies are the exception? But I've managed SOC2, for example, and I can definitely say that there are plenty of ways to get your SOC2 without giving a shit about actual security.


They failed. Repeatedly. For decades. They spent billions trying. They failed so much that the standard writers determined the only logical conclusion is that it must be practically impossible to retrofit a system that failed EAL5 certification to achieve EAL5 or higher certification without a complete rewrite and redesign. It says so right there in the standard [1]: "EAL4 is the highest level at which it is likely to be economically feasible to retrofit to an existing product line". That was added due to the decades of experience where everybody who ever tried to do that failed, no matter how much time or money they spent.

We also have plenty of evidence that it does matter; they just can not do it. Here is Google touting their Common Criteria certification for the Titan M2 security chip hardware, which is EAL4 + AVA_VAN.5 (resistance against penetration attackers with a high attack potential) [2]. Note that this covers only the hardware (the software was not certified; if I remember correctly, a critical severity vulnerability allowing complete takeover was actually disclosed in the software) and only cherry-picks AVA_VAN.5, so it is still only EAL4, not a holistic EAL6 certification. Getting that certification was a deliberate effort and cost. If they literally did not care about the Common Criteria then they would just certify to the checkbox level like everybody else. It is because they could certify it to a higher level than most others can achieve that they chose to do it, because then they could tout their unique advantage.

Basically everybody gets a certification and basically everybody displays their certification on their page. There is something to be said about them opting for an EAL1 over an EAL4. It is basically assumed that any serious vendor could probably get an EAL4 with some effort. So, there is no differential advantage to displaying an EAL4 since everybody could get it. It is just a zero-sum game to pay for certification if everybody knows nobody has a true advantage. However, if you can achieve EAL5 or higher, then you do have a unique advantage because basically nobody else can do it. The fact that none of the major vendors attempts EAL5 shows that they can not do it.

[1] https://www.commoncriteriaportal.org/files/ccfiles/CC2022PAR... Page 18

[2] https://security.googleblog.com/2022/10/google-pixel-7-and-p...


> Apple has never once, over multiple decades of failed attempts, demonstrated "resistance to penetration attackers with a moderate attack potential" for any of their products. To be fair, neither has Microsoft, Google, Amazon, Cisco, Crowdstrike, etc.

So, OK I guess?

It's worth noting that CC evaluation does not score the actual practical security of a device or system, but the level of testing it was submitted to, which is consistent with pretty much every single governmental certification out there.


Sure it does; it is just that EAL4+, the highest level any of them can reach, does not certify “resistance to penetration attackers with a moderate attack potential”. Guess what: commercial hackers have “moderate attack potential”.

You are complaining that the 40 cm high jump test does not score actual jumping ability. You are right, it is a low bar that they should all be able to pass. You can not use the 40 cm high jump test to distinguish them. What you need to do is use the 100 cm high jump test. Some can pass it, but none of the large commercial vendors can. Sure, it would be nice if we had more gradations like the 60 cm and 80 cm tests, but we do not really know how to do that, so the best we can do is the 100 cm test.


I'm not really complaining, though. I'm just saying that security certifications are more about compliance than actual proof that a system cannot be easily compromised. In other words, they are more about legal requirements than guarantees.

It is also misleading to assert that a device or a system is less secure because it hasn't been certified. Vendors submit requests to validate against specific levels or certifications, and it is not the goal of the certification authority to determine "how high" they score.


They can not certify against useful assurance levels. They have tried repeatedly for decades and spent huge gobs of money. It is not a choice; they are incapable of it.

I am judging them by their maximum ability ever demonstrated under the most favorable circumstances, and they still can not certify resistance against moderate attackers. They have never developed systems that can protect against the prevailing threat landscape and they can not develop such systems. Their best is not good enough.


> They have tried repeatedly for decades and spent huge gobs of money. It is not a choice, they are incapable of it.

First of all, I don't think that's true, and if it is, I would like to see proof of Apple submitting their products for evaluation.

Second, you are judging an entire industry. This is not about Apple shortcomings, there isn't a single vendor doing what you say needs to be done.

Regardless, and this is more a personal opinion than a hard fact, most certifications out there are BS. PCI-DSS is basically a checklist of best practices, CC goes from common sense stuff to essentially impossible to achieve unless designed for the specific purpose, etc. Yes, all these helped create a very healthy - and profitable - industry where consultants have thrived on Powerpoints and PDFs, without really creating any tangible value.


Isn't EAL1 what you get for just showing up?

Basically, here is the product. Here are some design documents. We don't have anything more. Can we get our EAL1 please?


Yup. Want to have a laugh? Here is the Apple iOS certification report [1].

On PDF page 26 (document page 21) they describe the rigorous AVA_VAN.1 vulnerability analysis certification process they faced. The evaluation team basically typed in: "ios vulnerabilities" into Google and then typed in "ios iphone" into the NVD and verified that all of the search results were fixed. AVA_VAN.1 certification please.

To explain why, AVA_VAN.1 does not require an independent security analysis; it only requires a survey of the public domain for known vulnerabilities [2]. You need AVA_VAN.2 (which is only required in EAL2 and EAL3) before the evaluators actually attempt to look for vulnerabilities themselves.
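
For a sense of scale, the whole "survey of the public domain" step can be approximated in a dozen lines. A sketch using NVD's public CVE API; the endpoint and field names are as I recall them from its 2.0 documentation, so treat them as assumptions:

    # Rough sketch of an AVA_VAN.1-style public-domain survey: keyword-search
    # a public vulnerability database and check the hits. That is the bar.
    import requests

    NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

    def survey_public_domain(keywords: str, limit: int = 20) -> list[str]:
        """Return CVE IDs matching a keyword search of the NVD."""
        resp = requests.get(
            NVD_API,
            params={"keywordSearch": keywords, "resultsPerPage": limit},
            timeout=30,
        )
        resp.raise_for_status()
        return [v["cve"]["id"] for v in resp.json().get("vulnerabilities", [])]

    print(survey_public_domain("ios iphone"))  # then confirm each is fixed; done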

[1] https://www.commoncriteriaportal.org/files/epfiles/st_vid112...

[2] https://www.commoncriteriaportal.org/files/ccfiles/CC2022PAR... Page 154


Common Criteria EALs have nothing whatsoever to do with practical security. I'd be surprised to hear anybody on this site who has managed a CCTL security review for a product saying anything positive about the program.

A fun exercise: find a list of commercial mainstream products with "high" EAL audits, and then look at their vulnerability histories.


Since it is fun can you link some of these EAL5 or higher products with sordid vulnerability histories?


The archetypical EAL5 product is a smartcard or cryptographic coprocessor (same thing, different package). They're certifiable because they don't do much.

But if you'd like an example from the EAL4 list: start with FortiOS.


EAL4 and under is junk (with respect to security). I already said that. The standards committee has also always maintained that there is no meaningful security at EAL4. Earlier drafts of the standards said EAL4 is only meant to protect against “casual and inadvertent attacks”.

The general crappiness of EAL4 goes all the way back to the Orange Book, where EAL4 maps approximately to Level C2 (contemporaneous projects got certified to those levels simultaneously). That level was intentionally meant for toys, before they put on their big person pants and released a grown-up product with actual security [1]. It is a mystery why anybody thinks EAL4 and security belong in the same sentence.

[1] https://www.stevelipner.org/links/resources/The%20Birth%20an...


If you're saying "EAL4 and below is junk" (I'd say EAL* is junk, but whatever), all you're really saying is that minimal-function cryptographic coprocessors are safer than operating systems and applications. Well, yeah, sure. I don't think you needed the farce of Common Criteria to tell you that, though.

But upthread, you knocked Apple for not achieving an adequate assurance level. As you can see now, and from your own last comment, that doesn't make any sense. It's possible (though deeply silly) that there's some iPhone configuration that could "achieve" EAL4, but you yourself don't believe that has any meaning. I don't either.

I don't think EAL5 or EAL6 do, either, except that if you tell me your product is EAL5, I'll assume it's a small fixed-function device.


No. The Separation Kernel Protection Profile (SKPP) defines a model for an operating system kernel at EAL6+. So all I am really saying is that safe operating systems are safer than non-safe operating systems. You could also look backwards to the TCSEC and the comparable Level A1 certified systems for other operating systems designed for actual high security work.

You keep calling it a farce, but you keep pointing at EAL4 and lower systems. Yes, those levels are farces, that was the whole point. Those are the levels for the certification of toys where documentation and paperwork is all that is needed, not proper design.

Complaining about the Common Criteria in the context of EAL4 and lower systems is like complaining about tissue paper manufacturers putting their tissue paper through bulletproof vest testing and certifying that it does not stop bullets. Yes, that is pretty stupid, farcical, and probably a waste of time. But no, the test is not stupid. It can test actual bulletproof vests; you just keep seeing stupid, waste-of-time tests proving a useless fact that everybody already knows: the EAL4-quality system sucks and has no place in a serious security organization.


This is like comparing the L4-based SEPOS to macOS. They share the name "operating system", but they are not the same thing.


I can't help but pick a couple of general points from area standards-arguer man.

One is 'problems of standardization in high-development-velocity fields':

https://news.ycombinator.com/item?id=3577837

The other is "this is worse in security engineering broadly and outright catastrophic in cryptography engineering specifically"

https://news.ycombinator.com/item?id=25451351

There's probably better/longer, I just looked at the first page of "standards" search. The points are strong, like messageboard-argument-winning bear, though.


What'd I do here? You have me worried. Did I manage to back into an argument that standards processes are good? Really I'm just trying to say two things here:

* The Common Criteria process is a farce, and things that are secure by dint of being EAL5+ are really secure because they're so small you can almost prove them with formal methods (which is not to say that everything EAL5+ has usefully applied formal methods).

* The meaning of the word "operating system" is different when we're talking about EAL5+ stuff; seL4 is an "operating system" that has been proven secure with formal methods, and it is secure, but it's secure in large part because it does almost nothing; it isn't comparable to Windows, macOS, or Linux. You couldn't build an iPhone experience on top of it (you could --- and people do --- host Unix kernels on top of L4 OS's, but when you do that you keep many/most of the security issues of that Unix kernel).


No no, sorry, it wasn't intended as backhanded snark.

You've had a running critique of standards stuff here for years and it's good and the other person should read it. A bit of an unsuppressed "well, maybe if they looked at the principles, then they'll see the light" nerd-response on my part. You're not going to make an iPhone out of seL4, you're also not going to come up with a Secure Enclave or an IOMMU or an encrypted serial bus or whatever by reading standards/meeting certification reqs either because, generally, that's not how standards and certifications work. For multiple developed versions of this argument, see 'tptacek!

I'm sure you're right about the details of Jor-EAL5+ and the bottle city CRYSTALS-Kandor, just saying "also right in this other way but with less lore". Not that there is anything wrong with lore.


Well, you'll make the iPhone fingerprint reader out of seL4. :)


So what you are saying is that EAL >=5 does not imply secure, it is the formal methods that imply secure?

Well good news, that is not a mistake. EAL >=5 implies formal methods of various degrees. The standard demands the thing that works, amazing. It might be too hard for consumer companies to achieve, but that does not make the process a farce as long as there are companies that can certify useful functionality according to the process. And, although you seem to think the functionality that can be certified is limited, I doubt you consider secure cryptographic co-processors useless or unimportant.


In that paragraph, the words "of various degrees" are load-bearing.

I don't think you're making a coherent case for the security or insecurity of an iPhone with any of this stuff. Real security in complex products simply has nothing to do with the CCTL process. Which, I'm sorry to say again, is farcical.

If it's not clear: I've had the displeasure of working with this process in my career, most of which I've spent in vulnerability research.


Yes, various degrees; we are discussing three distinct levels. Why on earth would they all demand the same degree of formal methods?

Up to EAL4, all you need is an informal specification, which is basically useless paperwork.

It is only at EAL5 that you are required to supply a semi-formal design, ADV_TDS.4, and interface specification, ADV_FSP.5. At EAL6 you must also supply a formal model of the security policy, ADV_SPM.1, and a complete mapping between design and implementation, ADV_IMP.2. By EAL7 you are required to supply a complete formal design, ADV_TDS.6, and specification, ADV_FSP.6.

You then have a formal model of the security policy, the interface enforcing that policy, the design backing the interface, and a mapping from the formal design to the implementation. That is pretty dang exhaustive. I guess you could go further and demand a formal, machine-checkable correspondence between the implementation and the design?
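
For a flavor of the small end of this, here is a toy sketch in Lean 4 of the shape such a formal policy model takes. The two labels and the single no-read-up rule are invented for illustration; a real ADV_SPM model covers the entire security policy and the full interface:

    -- Toy two-level policy: prove a low subject can never read high data.
    inductive Level where
      | low
      | high

    -- "leq a b" means level a is at or below level b.
    def leq : Level → Level → Bool
      | Level.low,  _          => true
      | Level.high, Level.high => true
      | Level.high, Level.low  => false

    -- Policy: a subject may read an object only if the object's level
    -- is at or below the subject's level (no read up).
    def mayRead (subj obj : Level) : Bool := leq obj subj

    -- The property, checked by the proof assistant rather than by testing.
    theorem no_read_up : mayRead Level.low Level.high = false := rfl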

Since you bring up that you have worked with the process, what products at EAL5 or higher have you had the displeasure of working on?


This thread keeps trying to escape down little rabbitholes of abstraction. I'll be clear: if Apple wanted to ship an EAL5 product, the very first product decision they would need to make in service of that would be to stop rendering HTML. No browsers. No rich media. No installable apps that could do any kind of IPC.

This is what we mean when we say an EAL5+ product is a different kind of thing. It's why this whole CCTL EAL thing is such a farce. The methods you're blaming big tech companies for not using are all things that preclude most of the products users want to use.

Can we stop pretending that any vendor in the industry has the option of serving customers with formally verified "EAL6" products? They don't. This isn't a thing, and hearing "EAL6" and "penetration testing" mentioned in the same sentence is grating. You don't penetration test a formally verified product. You do verification and assurance work for it, but nobody in the field would ever call it pentesting.


I am blaming them for selling products that are inadequate for the threat environment they are expected to operate in, and for lying and/or insinuating that those products are adequate for that threat environment, especially when they know for certain that they are not and certify as such. If the customers truly want those products and features, security be damned, as you say, then the customers will still buy them even if the companies are completely truthful. The companies do not do that because they know it would hurt their margins or make their products non-viable.

In addition, the lies suck all of the air out of the room for actual secure products because why go through the extremely hard work of actually making something secure when you can just lie about it. This has happened time and time again. There were multiple TCSEC Level A1 certified implementations when the DoD demanded real security. But, as soon as they lowered requirements to allow consumer systems that were inadequate for the threat environment, all of the effort and funding behind actual secure systems dried up. What we are left with now is a total wasteland of insecure products controlling systems too big for their britches and endless attacks getting closer and closer to total societal disruption. The only saving grace is that it is taking time for the attackers to scale to exploit this entire greenfield opportunity.

As to your other points, the SKPP explicitly included a penetration test of the formally verified product under certification, done by an NSA team [1]: "AVA_VLA_EXP.4.3E The NSA evaluator shall perform independent penetration testing.". So, actually, "EAL6" and "penetration testing" do belong in the same sentence.

If Apple did want to ship an EAL5 product, the very first product decision would be starting over. HTML support or not does not even figure into the list; they have to tackle much more basic defects before getting to that. And I do not see how HTML and rich media support is even a challenge unless you made your security properties depend on exact rendering. You just use a separation kernel architecture to isolate the untrusted HTML renderer into an ephemeral sandbox that takes an HTML file and outputs an image. You might then say, "Apple already sandboxes the renderer." Yeah, but their sandbox sucks and is regularly defeated. Almost as if having formally verified separation kernels as an underpinning might allow the obvious solution to work; assuming the same minds that built on sand do not add in more harebrained ideas. Hard to put it past them when they made so many other terrible security decisions.
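
A minimal sketch of that ephemeral-renderer pattern, with an ordinary OS process standing in for a separation kernel partition (the ./renderer binary and its HTML-in, image-out contract are hypothetical):

    # Sketch: untrusted rendering in a disposable worker whose only channel
    # to the rest of the system is bytes in, bytes out. On a real separation
    # kernel the isolation boundary would be a partition, not an OS process.
    import subprocess

    RENDERER = ["./renderer"]  # hypothetical sandboxed binary: HTML in, PNG out

    def render_untrusted_html(html: bytes, timeout_s: float = 5.0) -> bytes:
        """Render in a fresh process; a hung or exploited renderer is discarded."""
        proc = subprocess.run(
            RENDERER,
            input=html,
            capture_output=True,
            timeout=timeout_s,
            check=True,
        )
        return proc.stdout  # only the image bytes cross the boundary

    # Each document gets a brand-new renderer, so a compromise gains no
    # persistence and no channel beyond the pixels it returns.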

Also, you did not answer what EAL5 or higher products you worked on that informed your disgust with the high assurance Common Criteria process.

[1] https://www.niap-ccevs.org/MMO/PP/pp_skpp_hr_v1.03.pdf Page 118


> I am blaming them for selling products that are inadequate for the threat environment they are expected to operate in, and for lying and/or insinuating that those products are adequate for that threat environment

You can make this argument and remove all the Jor-EAL5 stuff after it, and it's the exact same argument; in fact, a better one, because you don't have all that superfluous stuff. It's a parasitic red herring.


> I am blaming them for selling products that are inadequate for the threat environment they are expected to operate in and lying and/or insinuating that they are adequate

You lost me here. Where has Apple "insinuated" anything else but "secure enough for consumers"? Really, I want to know where they promote their products as the choice to protect oneself from nation-state adversaries, because so far, the only security offerings they have for iPhone users are threat notifications [0] and lockdown mode [1], and they make no guarantees about either of them.

Samsung [2] and Google [3] make similar assertions.

I believe you are targeting a strawman here, especially given the fact that there are zero consumer grade phones out there that are CC certified to a level that you may consider adequate.

> In addition, the lies suck all of the air out of the room for actual secure products because why go through the extremely hard work of actually making something secure when you can just lie about it.

This is blatantly false.

There aren't any consumer-ready operating systems that could replace Apple's and are EAL5+ certified.

Unless your point is that some day you may be able to install System Z in your laptop, that is.

[0] https://support.apple.com/en-us/102174

[1] https://support.apple.com/en-us/HT212650

[2] https://www.samsungknox.com/en

[3] https://landing.google.com/advancedprotection/


There is a near zero chance that being EAL4 or higher certified would’ve prevented these attacks.

CC might be better than PCI-DSS, but not by much.


Do you by any chance have this data on Google, Samsung, Huawei, LG, and other cell phone manufacturers? I’ve never looked into these certifications and I wouldn’t know where to start looking. Do the above companies publish the results like Apple?


https://www.commoncriteriaportal.org/products/index.cfm?

Generally only components are EAL certified. For example, the iPhone is not on there, but the security protecting access to Apple Pay on the iPhone 13 with A15 Bionic running iOS 15.4.1 (19E258) is EAL2+.


You as a private consumer wouldn't be able to buy one of these EAL4+ products without a relationship with a defense and security oriented reseller.


Sure. The Common Criteria for Information Technology Security Evaluation [1] is the foremost internationally recognized standard (ISO 15408) for software security that most large companies certify against for at least some of their product portfolio. I believe there are US government procurement requirements to that effect, so many systems will have certifications of some form.

For many companies you just search: "{Product} Common Criteria" and they will usually have a page for it on their website somewhere.

You can also go directly to the certified products page: https://www.commoncriteriaportal.org/products/

For smartphones you can see them there under "Mobility".

Unfortunately, it is fairly hard to parse if you are not familiar with the terminology. The general structure of Common Criteria certifications is Security Functional Requirements (SFR), which are basically the specification of what the product is supposed to do, and Security Assurance Requirements (SAR), which are basically how you certify the SFRs are met (and what level of assurance you can have that the SFRs are met). SARs can be bundled into Evaluation Assurance Levels (EAL), which define collections that reasonably map to levels of confidence. You can add SARs beyond the current EAL, which is how you get an EAL level with a +, but it is important to keep in mind that just cherry-picking certain SARs does not necessarily give you a holistic assurance improvement.

SARs and SFRs can be further pre-bundled into Protection Profiles (PP) which basically exist to provide pre-defined requirements and testing methodologies instead of doing it one-off every time. Some Protection Profiles support variable EAL/SAR levels, but these days people generally just certify against a Protection Profile with a fixed SAR bundle. This is what PP Compliant means. If you want to see what they certified against, you would need to look at the Protection Profile itself.

For smartphones, the standard Protection Profile for the phone itself is Mobile Device Fundamentals. If you look at the SAR bundle there, you will see that it corresponds to the EAL1 SARs plus a small number from EAL2, resulting in an overall level of EAL1+. As they are in-between EAL1 and EAL2, I just classified it as EAL1 for my earlier post. If you peruse further, you will see that basically every Protection Profile that companies certify to as PP Compliant sits at basically the same EAL1+ or thereabouts. So, if you see PP Compliant, it probably means EAL1+ or so.
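
As a toy illustration of that last mapping (the SAR bundles here are abbreviated stand-ins; the real ones are in the PP and in CC Part 3):

    # Toy version of "which EAL does this PP's SAR bundle amount to":
    # find the highest EAL whose bundle is fully covered, note any extras.
    EAL_BUNDLES = {
        1: {"ADV_FSP": 1, "AVA_VAN": 1},
        2: {"ADV_FSP": 2, "ADV_TDS": 1, "AVA_VAN": 2},
    }

    def approx_level(pp_bundle: dict) -> str:
        """Highest EAL fully covered by the bundle, with '+' for any extras."""
        met = [e for e, sars in EAL_BUNDLES.items()
               if all(pp_bundle.get(f, 0) >= l for f, l in sars.items())]
        base = max(met, default=0)
        extras = base > 0 and any(l > EAL_BUNDLES[base].get(f, 0)
                                  for f, l in pp_bundle.items())
        return f"EAL{base}{'+' if extras else ''}"

    # EAL1 SARs plus one SAR drawn from the EAL2 bundle reads as "EAL1+".
    print(approx_level({"ADV_FSP": 1, "AVA_VAN": 1, "ADV_TDS": 1}))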

Hope that helps.

[1] https://www.commoncriteriaportal.org/cc/


I am pretty sure I was hit with this. I had some REALLY weird redirects coming from text msgs. NOT from Egypt. Maybe paranoid. Offline / on new Linux devices for now.


Text messages with links that do weird redirects aren’t too unusual. What makes you think you were a target?


Some men just want to watch the world burn.



