I was there that day, sitting near several of the people deeply involved. I'm not really a security guy, so I was mostly a morbidly curious bystander. Early on, I saw a bunch of SeriouslyScary(tm) stuff in chat and decided to see what was up. I was shoulder-surfing while they were looking at the URL/endpoint, and when we found the code, and then the diff that put it into the codebase, the collective "oh shit" was something I won't soon forget.
Yeah, the moment we realized what was going on, it was like one of those horror stories you tell as a kid: "...the call was coming from INSIDE THE HOUSE."
The only way an attacker could have come across this URL would be if they had access to our codebase specifically - the string in the "extra_log" param was hardcoded in the PHP endpoint. It didn't even occur to me that they might have placed it there. Only when someone pointed out that this param was actually md5("october") did we start to wonder if it might be a drill.
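The tip-off described above, that a seemingly random token was really md5("october"), can be sketched in a few lines. This is purely illustrative (the actual parameter value was never published); the function name and wordlist are made up for the example:

```python
import hashlib

def looks_like_md5_of_word(param, wordlist):
    """Return the first word whose MD5 hex digest matches the parameter,
    or None. A sketch of how one might notice that a 'random'-looking
    token is actually the hash of a dictionary word."""
    for word in wordlist:
        if hashlib.md5(word.encode()).hexdigest() == param:
            return word
    return None

# Hypothetical stand-in for the value seen in the "extra_log" parameter.
param = hashlib.md5(b"october").hexdigest()
print(looks_like_md5_of_word(param, ["january", "october", "password"]))
# prints "october"
```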
The engineer's computer was compromised using a real zero-day exploit targeting an undisclosed piece of software.
What the diddly ding dong is Facebook doing with real 0-day exploits (besides using them in fire drills)? More importantly, HOW did they get their hands on 0-day exploits? And what other exploits do they have/buy/finagle? Is it on a regular basis?
Trustwave was the outside party running the penetration test. 0-days are commonly used by pentesters, and are available for purchase by subscription, as well as one-offs made available via mailing lists, etc.
This is standard practice for the industry and quite common. But do note that not all 0-days are created equal: a 0-day that affects 1000 users is 1000x the significance of one that affects only one user (with some notable exceptions). Also realize that "0-day" is often used misleadingly - anything that was first used in the wild is a 0-day, even if that event was months ago, the vendor has been informed, and crucially - even if a patch has been issued by the vendor. Pentest firms often oversell their "0-days" in an effort to appear more advanced to their clients.
(I work at Facebook but don't know anything about the event in question or if this post is accurate)
I suspect it wasn't actually a "0-day" in that sense, but rather a disclosed but unpatched vulnerability, and described as "a real 0-day exploit" in the article because of the typical reduced fidelity of press articles.
But then what about this:
"The engineer's computer was compromised using a real zero-day exploit targeting an undisclosed piece of software. (Facebook promptly reported it to the developer.) It allowed a "red team" composed of current and former Facebook employees to access the company's code production environment. (The affected software developer was notified before the drill was disclosed to the rest of the Facebook employees)."
Does that mean they used the discovery of the vulnerability as an opportunity to create the drill (as a "might as well use this" scenario) or was the drill planned with the 0-day and then the developer was notified?
Which came first here, the vulnerability or the plan for the exercise? I would imagine the priority would be to patch the system rather than plan a drill, no?
How did they get their hands on an 0day? Probably found it. It isn't like you have to buy plutonium from some Libyan nationalists -- Facebook has plenty of smart engineers who can audit the software they use.
I've seen 0days found by engineers at other tech companies, so I find it likely that somebody at Facebook could run across one if they tried.
This is what worries me. Facebook is buying up exploits? I know there's a steady supply, but without knowing what software they're referring to or what the exact nature of the exploit was, it's hard to know what to think. Privilege escalation, arbitrary code execution, or what? Was it in the OS or some application they themselves developed? If it's the OS, then that would be really disturbing.
I'd be moderately pissed off if I got stuck in a drill for 24h+ without knowing it was a drill, unless it was a known thing that drills would be run routinely. There is stuff I'd do for "real" (missing one-off personal events, etc.) which I wouldn't do for training. I'd skip out on a wedding (well, I always do anyway), funeral, etc. for a real security issue, but would quit the next day if I had done so for training without my knowledge.
I was one of the people involved here (the guy quoted as saying "which means that whoever discovered this is looking at our code").
As the article noted, they started the whole drill relatively early in the morning on a workday (a Wednesday, iirc, which are the days where we do not have meetings). About half an hour after we'd fixed the obvious problem and were starting to dig deeper, the guys organizing the whole thing stepped in and let us know it was actually a drill, but that we were going to keep treating it as if it were real.
It actually ended up being a super interesting and eye-opening experience, and drove good changes to some of our infrastructure. I had no idea we'd go so far as buying a 0-day and using it to test our own systems and response, but I think it shows that we don't screw around when it comes to making sure we're secure.
The article says that in an earlier test, "the organizers made an exception, however, when early in the drill, an employee said the magnitude of the intrusion he was investigating would require him to cancel a vacation that was scheduled to begin the following week. McGeehan pulled the employee aside and explained it was only a drill and then instructed him to keep that information private." I'd hazard a guess that they wouldn't keep you there if you had something important going on, but I can see the issue if it becomes a regular occurrence. Employees would be complacent and potentially always play the "vacation" card at some point to test to see if it was real or not.
These teams usually work in rotations, so if you have prior knowledge of a personal event then you'd take yourself out of the rotation for that time period.
"In 2010, hackers penetrated the defenses of Google...The hacks allowed the attackers to make off with valuable Google intellectual property and information about dissidents who used the company's services. It also helped coin the term "advanced persistent threat," or APT,"
Sorry Ars, but the term "Advanced Persistent Threat" was not coined in 2010. Businessweek was using the term in 2008[1], and that was hardly the first time it appeared in the literature.
It appears the term possibly came about after the DoD was attacked by malware in early 2008. This magazine, from literally a day before that Businessweek article, refers to the DoD as the source: http://books.google.com/books?id=bmAEAAAAMBAJ&lpg=PA13...
A more interesting response test would have been to drop the less-realistic FBI alert email and find out how long it would have taken them to find the backdoor without it.
Rest of the world (including many governments): "We are not allowing the use of Facebook because of the intelligence threat it poses to our entire societies." Techy people: "Facebook isn't good for your privacy, internet users!"
Facebook PR puff piece: "Look, we take security very seriously, we even dumped some serious money on it!"
Bottom line: you can have great people, but when you are such a high-profile target holding the personal information of millions, that's not going to stop you from being abused or strong-armed by your host government.
Fundamentally, centralization of anything to the level that Google or Facebook represent is a bad thing.
I love the Facebook story, but installing a high-resolution camera with zoom capability in an area that's only supposed to get general monitoring is like putting an ICBM on an internet-facing router (and I sure hope that's just physically impossible).
A little off-topic question - how do stories like this get reported (picked up by Ars Technica in this case)? This isn't some standard press release or entry on the company's blog. Is it initiated by the companies (FB in this case) themselves? Or are journalists constantly sniffing around companies for such stories?
It's something I've always been curious about when coming across such stories. I'm assuming there is standard PR practice for these things (for example, I wonder how that FBI email snapshot got shared with Ars Technica; despite the blurring, and the email ultimately being a set-up, there must be strict policies on what to share and what not to...). Someone please shed some light ~
>> The engineer's computer was compromised using a real zero-day exploit targeting...
Why so complicated? A zero-day exploit? After all, Facebook is not Iran's nuclear facility. And in the case of large software companies, social engineering is generally easier and more effective than zero-day exploits.
I'd suggest simulating a more realistic attack by Anonymous, with attempts to social-engineer Facebook employees out of their pa.. laptops.
Facebook is probably more of a target than Iran's nuclear facilities. Having an omniscient view of Facebook's users would be extraordinarily valuable to anyone in power, not to mention the ability to spear phish.
Spear phishing is a specifically targeted phishing attack that appears to come from a legitimate source... often one of authority within the targeted organization.[1]
The vast majority of these sorts of exploits are delivered via spear phishing, which is a form of social exploit in that a human being is fooled into clicking a link or opening a file that contains malicious code. The article doesn't specify, but I would bet that was the vector in this case too.
Also, Anonymous is far from the most sophisticated attacker a company like Facebook will see. They tend to stick to DDoS and easy SQL injections.
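For anyone unfamiliar with what an "easy SQL injection" looks like, here's a minimal sketch using an in-memory SQLite database (the table and attack string are made up for illustration): string concatenation lets the attacker's input rewrite the query, while a parameterized query treats it as plain data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

# A classic injection payload: closes the string literal, then ORs in
# a condition that is always true.
evil = "x' OR '1'='1"

# Vulnerable: attacker-controlled input concatenated into the query text.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + evil + "'"
).fetchall()
print(len(rows))  # 1 -- the always-true clause matches every row

# Safe: a parameterized query binds the input as data, not SQL.
rows = conn.execute("SELECT * FROM users WHERE name = ?", (evil,)).fetchall()
print(len(rows))  # 0 -- no user is literally named "x' OR '1'='1"
```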
While FB may not be a nuclear facility, I can pretty much guarantee you that people who use nuclear facilities (or their equivalent) have FB accounts. And that hacking those accounts and/or the computers that are used to access them would probably be a not good thing.
Facebook has on the order of a billion users. That's a huge cache of interesting content and access no matter how you slice it.
> If it were any other industry and it was any other critical function of a product not doing this you'd have people screaming that [the companies] were negligent and wanting to sue them left and right.
Meh. Facebook likes to keep employees on "emergency drill" mode-- keeps people engaged. This sounds like the usual exploit, but with the addition of an FBI-agent email to add drama.
Without drills, how would you suggest Facebook tests the response times and standards of their security teams? If you want to know how the team will react under pressure, you essentially have two options:
- make up a fake security alert
- wait until a real attack is underway
Perhaps I'm missing something, but I do not see a connection to Office Space.