Hacker News
Mac malware signed with Apple ID infects activist’s laptop (arstechnica.com)
125 points by shawndumas on May 17, 2013 | 54 comments



The headline says "signed with Apple ID" as if that's a bad thing, but it's actually a good thing: it means Apple has already revoked the Apple ID (and presumably the associated certificate), so as soon as your computer updates its certificate revocation list, it will refuse to run the application (even if you try to bypass Gatekeeper).


Good and bad. Actually, bad right up until Apple revokes the cert.


Gatekeeper really can't do much against a spear phishing attack. If someone's running a scam specifically to target a single person or a small group of people, and has spent the time and energy figuring out how to get them to open a malicious file, having an Apple ID is a $100 speed bump.

Gatekeeper is meant to prevent a wide-ranging attack. If you've got someone custom crafting an attack vector for you specifically, you've got some serious problems. I don't think Gatekeeper adds any more false security than an antivirus app would, and I don't know of any software that'd prevent an attack like this.

At best, it adds a small amount of information for authorities to try to track the attack. I doubt it'll be fruitful, but it's better than nothing.


What I don't like about Gatekeeper is that there is no way to revoke Apple's cert and replace it with another. Not that that would protect the activist in question, but it would certainly make a company machine secure against this kind of attack.


If I understand you correctly, it would also prevent you from launching any codesigned application, including the OS-provided ones, since they would all be considered to have an invalid code signature.

So yeah, I guess in a certain light, a machine that can't launch apps is definitely secure.


Good point. I think you half misunderstood; their certificate should be replaceable so that you disallow third-party software from developers in their developer program and allow it from your company only or vendor A only. Obviously one would need to implicitly allow certain OS processes and Finder for the system to work, but my point is basically that we should not be beholden to Apple beyond that, the CA should be something we can choose.


I'd go as far as to say that if someone is spending the money to hire a competent foe to attack a specific person who isn't a paranoid security expert, there isn't much chance they'll escape uncompromised.


Exactly. We're all lazy because we only expect general attacks, we expect that we won't be the first person attacked.


This doesn't make sense. The effectiveness of the spear-phishing attack had nothing to do with Gatekeeper. Signing the app with an Apple ID provided a straightforward way to disable the attack, which made Gatekeeper a pure win.


>it will refuse to run the application (even if you try to bypass gatekeeper).

That sounds kind of shitty to me... Is there any simple way around that?


You can bypass Gatekeeper for unsigned apps. And you can run signed apps. However, Gatekeeper refuses to let you bypass it for an app whose code signature is damaged or otherwise invalid. This is a feature. If you really truly want to run the app anyway, you can always strip off the code signature.


I don't know if there's a way around it, but my first thought is that if you're hardcore enough to need to run code that's signed with a blacklisted ID, you're more than hardcore enough to make an unsigned (and therefore un-blacklistable) version of the binary. So if I were Apple I wouldn't waste much time or UI space on it.


That's a very valid point. I'm just worried about Apple's blacklisting mechanism being subverted, but I suppose stripping signatures isn't hard.


You can hardly call that good. It's like saying that DigiNotar being compromised was good because then we could untrust their CA.

It's true that we could do that, but their original purpose was to protect us in the first place. The same is true for code signing certificates, to a certain extent.


The purpose of requiring code signing certificates for default trusted execution on OS X was to make Apple able to blacklist IDs used for malicious purpose (and perhaps to create at least some financial/identification trail). What more would you have wanted code signing to do here?
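That mechanism is simple enough to sketch. This is a toy model of the policy described above (hypothetical names, not Apple's actual implementation): signed apps run under the default policy, and the vendor can later revoke a signer's ID to stop further spread.

```python
# Toy model of default-trusted execution with revocable signer IDs.
# "gatekeeper_allows" and "revoked_ids" are illustrative names only.
revoked_ids = set()

def gatekeeper_allows(signer_id):
    # Unsigned apps are blocked under the default policy; signed apps
    # run unless their ID has been revoked.
    return signer_id is not None and signer_id not in revoked_ids

print(gatekeeper_allows("DEV123"))   # → True
revoked_ids.add("DEV123")            # the ID is blacklisted after discovery
print(gatekeeper_allows("DEV123"))   # → False
print(gatekeeper_allows(None))       # unsigned app: → False
```

The blacklist is only useful after the malware is discovered, which is exactly the "good right up until Apple revokes the cert" point made earlier in the thread.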


I've posted numerous things on Facebook critical of both our government (US) and Israel. Then I noticed a lot of spear-phishing emails ostensibly sent from my Facebook friends. I use an alias on Facebook, and the emails addressed me by my alias (something friends wouldn't do), so I knew what their nature was. It seems like governments are investing a lot in electronically targeting outspoken individuals.


I post nearly nothing on Facebook and am not an activist, and I've been getting the same kind of emails very frequently lately. Some (but not all) of my friends have reported the same. It's likely that it's just random phishing spam.

Edit: To be more specific, the emails use the names of my Facebook friends, but have incorrect sender addresses, indicating that they're just spoofing the name rather than hacking the email accounts of my friends. They're usually fairly short, with an obvious phishing link to a gibberish domain. They come from Facebook friends that I don't necessarily communicate with, and I've confirmed with at least two of those friends that their accounts have not been hacked.
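The tell described above (a friend's display name paired with an unrelated sending address) is mechanical enough to check. A minimal sketch using Python's stdlib email parsing; the contact data and function name are made up for illustration:

```python
from email.utils import parseaddr

# Addresses you actually know for each contact (hypothetical data).
known_addresses = {"Alice Example": {"alice@example.com"}}

def looks_spoofed(from_header):
    """True if the display name matches a known contact but the
    sending address is not one of theirs."""
    name, addr = parseaddr(from_header)
    known = known_addresses.get(name)
    return known is not None and addr not in known

print(looks_spoofed("Alice Example <x9f2@gibberish-domain.biz>"))  # → True
print(looks_spoofed("Alice Example <alice@example.com>"))          # → False
```

A spoofed display name with a mismatched address points at spam-style name harvesting rather than a hacked account, which is consistent with what the friends reported.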


Or perhaps your Facebook friends had their own accounts randomly hacked and phishing messages were sent to every one of the account's Facebook friends.

At any given point in time, half of America is posting some sort of critical message about the government online.


That was my first thought too. I accused them of carelessly falling for phishing. But they had not seen any phishing emails from their own Facebook friends or any other phishing attempts. And the emails impersonated a variety of different Facebook friends of mine, which increased the odds that I was the target.


> But they had not seen any emails from their facebook friends or any other phishing attempts

Maybe they just have a better spam filter than you


Or… it could just be spam.


Facebook plays fast and loose with data; perhaps one of your friends used an FB-integrated service at some point which captured their friend list (including you).


>perhaps one of your friends used a FB integrated service

Again, that's what I thought too. But it was coming from different friends, which would mean all of them had done the same thing.

I also sent the email to my friend who works at facebook for investigation in case it was some service that wasn't playing by the rules.


This is apparently normal these days. I've been receiving phishing e-mails ostensibly from various of my Facebook friends, and I don't think I've pissed off any governments yet.


In a way, I prefer malware to be signed. If nothing is signed, essentially everything has full permissions, so we'll ignore that part for now, and just look at the differences once malware is signed.

First and foremost, it cost $100 to get the signature. It was paid somehow. Hello money trail, this is way more information on malware authors / pushers than we tend to get. If they somehow obfuscated every bit of data in that account to the point that it's worthless, then it's merely identical to it lacking a signature, no worse.

Second, it can be revoked. This severely limits the spread, reducing the total damage. Sure, the people prior to this are impacted, but they would be if it didn't have a signature, so again, no worse, no matter what.

Third, people click 'yeah, let this program do whatever the hell it wants' all the time, so the lack of a signature really doesn't prevent its spread / limit the damage. Maybe for the techy-elite, but they're less likely to get this anyway. Probably more likely to run unnoticed because it's signed, but I'd argue not by much. Slightly worse.


There was a relevant update to iTunes last night (or earlier this week) for both OS X and Windows. These are the types of updates I keep an eye out for, as it is, most importantly, an update to certificate validation.

CVE-2013-1014 as it impacts iTunes for Mac OS X v10.6.8 or later, Windows 7, Vista, XP SP2 or later (http://support.apple.com/kb/HT5766) -

"Impact: An attacker in a privileged network position may manipulate HTTPS server certificates, leading to the disclosure of sensitive information

Description: A certificate validation issue existed in iTunes. In certain contexts, an active network attacker could present untrusted certificates to iTunes and they would be accepted without warning. This issue was resolved by improved certificate validation."
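The class of bug quoted above (accepting untrusted certificates without warning) is the inverse of what a TLS client should do by default. A minimal illustration in Python's stdlib, not Apple's code, just the general pattern:

```python
import ssl

# Secure default: server certificates and hostnames are verified, so an
# attacker "in a privileged network position" can't simply present an
# untrusted certificate.
good_ctx = ssl.create_default_context()
print(good_ctx.verify_mode == ssl.CERT_REQUIRED)  # → True
print(good_ctx.check_hostname)                    # → True

# The broken pattern the advisory describes, reproduced deliberately:
# a context that accepts any certificate without warning.
bad_ctx = ssl._create_unverified_context()
print(bad_ctx.verify_mode == ssl.CERT_NONE)       # → True
```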

There were almost forty other CVEs for iTunes on Windows. And just a last bit - the discussion and quality of submissions here at Hacker News have taken a substantial fucking nose dive in the last year. I change my name every so often, but I can tell you that I've been here long enough to say that.


>the servers used to receive pilfered data from infected machines has been "sinkholed," Intego said. Sinkholing is the term for taking control of the Internet address used in malware attacks so white hats can ensure that compromised computers don't continue to report to servers operated by attackers.

I'd be interested to know how this works. How can you just "take control" of a server/IP address like that? Is there some law that allows botnet control servers to be seized?
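Roughly (a generic sketch, not Intego's actual procedure): the malware phones home to hard-coded or algorithmically generated domains. If defenders recover the algorithm or persuade the registrar/ISP to transfer the domains, they can point those names at a server they control, and every infected machine reports to the white hats instead of the attackers. A toy version of that rendezvous scheme:

```python
import hashlib

def c2_domain(seed, day):
    """Toy domain-generation algorithm: bots, and anyone who has
    reverse-engineered the seed, compute the same rendezvous domain."""
    digest = hashlib.sha256(f"{seed}-{day}".encode()).hexdigest()
    return digest[:12] + ".example.com"

# White hats who recover the seed can pre-register (or have transferred
# to them) the upcoming domains, i.e. "sinkhole" them.
sinkholed = {c2_domain("leaked-seed", day) for day in range(30)}

def bot_checkin(day):
    domain = c2_domain("leaked-seed", day)
    return "sinkhole" if domain in sinkholed else "attacker"

print(bot_checkin(5))    # → sinkhole
print(bot_checkin(100))  # day outside the registered window → attacker
```

The seed, domain suffix, and 30-day window here are invented for illustration; real takedowns also involve abuse reports and, sometimes, court orders.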


mailto:abuse@isp.com?subject=ToS%20Violation


I was confused about why the submission title specifically mentioned that the laptop belonged to an activist, but the end of the article indicates the person's life might be endangered as a result. I can't decide whether this is sensationalized or not.


It seems to be quoting a tweet by Jacob Appelbaum. I think the idea could be that they don't know who put the malware there. As this person was an activist, it's possible that it was put there by the government this person was opposing. How does the Angolan government treat activists?

I imagine that if a similar thing happened to an Egyptian activist during Mubarak's time in power that it would not be such a stretch to say such a thing.


I first read that as Apple's ID and thought it was like the Microsoft certificate attack.

Looks like the Mac's market share is growing. Was this distributed in the store?


It was signed with a Developer ID; thus, by definition, it could not have been distributed in the store.

Anything distributed through the App Store is signed by Apple. Developer ID signed binaries can only be distributed outside of the store.


Probably not. The article said it was from a link in an email. As this was a spear-phishing attack, the attacker probably doesn't care that the developer account doesn't work anymore.


Exactly. Anybody with 100 USD can get such an ID, and it simply gets revoked once discovered.


I thought the point of requiring a payment was that the ID could be traced back to a real person or company, so law enforcement could follow things like this up?


There's no real ID check when you sign up for a paid Apple developer account. A stolen credit card and an email address is enough.


The point of requiring a payment is to make money. There is no meaningful mechanism to require a true "real human being" identity. It's no different than the presence of a TLS cert on an arbitrary domain, all it tells you is that the attacker cared enough to expend resources on the attack.


If Apple wanted to actually 'make money' from its developer program fees, it'd cost a lot more than $99 - even more than it cost before the Mac App Store. I realize that $99 may be a lot of money for people like you, so I appreciate that this might be difficult to grasp at first. Keep trying, I've got faith in you!


this isn't reddit.



1) What does that have to do with this?

2) FTA:

>Q: What can or should individuals do?

> Stone-Gross: Do not allow installation of applications that are not distributed through the official Google Play marketplace on the device

So this malware isn't effective unless the user explicitly makes their device vulnerable, doing something normally only developers or hardcore users (people who are likely to spot this attack) would do.


People were saying for years that the superior UNIX design and the bad Windows code were the reason that Windows had a huge malware problem while Linux and Mac did not.


People were mostly talking about drive-by infections and the fact that an unpatched Windows machine idling on the Internet could be infected in an hour or so.

There's nothing anyone can do if a user installs software. UNIX design or not, if the user runs a program, it can access everything that the user can. Nothing that this malware did needed special access (e.g. root exploit).


That's a highly UNIX-centric attitude. There is no reason in general that an app should be able to access everything the user can access. For example, Mac and iOS apps cannot access the user's contacts without the user's explicit permission, even though the user can access the Contacts app themselves just fine.

The idea that all apps run under a single "user" and all share the permissions of that "user" is just how UNIX does it, not how things must work. I don't think we've figured out proper app sandboxing yet (Mac, iOS, and Android all have their problems with it, and differently) but it seems to be the way to go.
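The distinction, one ambient "user" identity versus per-app grants, can be sketched abstractly. This is a hypothetical model, not any real OS API:

```python
# Classic UNIX model: any program running as the user inherits all of
# the user's access.
user_resources = {"contacts", "photos", "documents"}

def unix_can_access(resource):
    return resource in user_resources

# Sandboxed model: each app starts with nothing and must be granted
# individual entitlements, typically via an explicit user prompt.
app_grants = {"MailApp": {"contacts"}}

def sandboxed_can_access(app, resource):
    return resource in app_grants.get(app, set())

print(unix_can_access("contacts"))                    # → True
print(sandboxed_can_access("MailApp", "contacts"))    # → True
print(sandboxed_can_access("RandomGame", "contacts")) # → False
```

In the first model a trojan the user runs gets everything; in the second, it gets only what the user grants it, which is what the next reply pushes back on.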


The security is still in the hands of the user. An app can ask for a handful of permissions that the user can just agree to. The user is still compromised, but now you can pin even more blame on the user. That doesn't help any.


And for the last several years, the vast majority of Windows malware has been similar in nature. But that's never really stopped the Apple faithful from being in willful denial about the same.

- a recent convert to Apple


A good question to ask would be, "How many local privilege escalation bugs have popped up in Windows vs. Linux vs. OSX?"


If you put in a lot of effort, you could try mounting /home and other user-writable areas as noexec, using SELinux/AppArmor to do funny things to confine administrator-installed programs, etc.

However, this will break nearly everything – and I am rather positive that Windows offers similar security measures, if required.
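For instance, a noexec mount for /home in /etc/fstab might look like the following (the device and filesystem type are placeholders):

```
# /etc/fstab entry: user-writable area mounted without execute permission
/dev/sda3  /home  ext4  defaults,nodev,nosuid,noexec  0  2
```

This is exactly the "break nearly everything" trade-off: scripts, package-manager hooks, and anything else that executes from /home stops working.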


This is basically sandboxing by hand.


It was. Windows has since improved its security by a large amount. Windows XP and Windows 7 are two completely different beasts, and even XP is far more hardened than it was at release. (I'm not an expert, but I'd point to the ASLR in Vista as the harbinger of improved Windows security.)

Now the malware problems on both platforms rarely rely on privilege escalation. They use trojan horses instead, and wait for you to install them.


All the inherent goodness or badness of the design or implementation of $OS or $APP is ultimately trumped by the user when it comes to security. It is entirely possible to run Windows without any anti-virus/malware protection or firewall, and have the system remain clean for months or years. It is similarly possible to be careless and irresponsible with a Mac or Unix system and have it get compromised in short order. It all depends on the actions of the person running the computer.


That doesn't mean it's equally easy for the user to be careless on all of those systems, and I think that's important.


At the time when people started saying that, this kind of attack was a lot easier on Windows, because by design installing arbitrary code off the Net was a single click away and the attacker had a huge amount of control over the contents of the request. Since then, Microsoft has improved but Apple has somewhat repeated Microsoft's old mistakes.


Derp, my girlfriend was at that conference with her macbook air. I feel like I should put a condom on mine now.



