> We have zero tolerance for misuse of credentials or tools, actively monitor for misuse, regularly audit permissions, and take immediate action if anyone accesses account information without a valid business reason.
Okay, so who has been fired?
That's what "zero tolerance" means: no excuses, not even "someone tricked me." And no punishment but the maximum.
Anything less would involve some degree of tolerance, and when you say "zero" that means no tolerance whatsoever.
It's obviously stupid to manage any organization that way, of course. It's a fatuous, dishonest phrase.
So stop talking about "zero tolerance" since all it means is "we make hyperbolic claims that we have no intention of living up to."
>That's what "zero tolerance" means: no excuses, not even "someone tricked me."
That doesn't necessarily follow, as it depends on exactly what they have zero tolerance for. They say they have zero tolerance for "misuse of credentials." Misuse conceivably may not include insecure storage of credentials or accidentally exposing them, but only actively using them, eg logging in and using them for an inappropriate purpose.
I'm not trying to split hairs or be a Twitter apologist here, but there is a meaningful distinction. Intentional misuse of credentials is ultimately insubordination (which is immediately fireable in most situations), whereas accidental exposure is a mistake. Twitter is effectively reinforcing that employees are forbidden to peruse private data. They are not making the point that they will fire anyone accidentally involved in a security breach.
> That doesn't necessarily follow, as it depends on exactly what they have zero tolerance for.
It does depend on that, you're right. "Zero tolerance" sounds so clear, it even has a number in there! But, nevertheless, one can rationalize just about any outcome by invoking it.
Specifically, any administrator who hasn't worked out a detailed meaning will have to crystallize their understanding when it comes time to apply the idea. This process of rationalizing will differ depending on the situation and their biases.
The supposedly clear policy becomes capricious or arbitrary. And if it's not arbitrary because they have some actual doctrine that can be consistently applied, then it would make more sense to use that doctrine.
> I'm not trying to split hairs or be a Twitter apologist here...
Splitting hairs is the raison d'être of this site.
I'm not annoyed at Twitter specifically, as they're hardly the inventors of the phrase. My issue is with the concept itself, and the broader mindset that you see in legal concepts like strict liability.
> Intentional misuse of credentials is ultimately insubordination...
Well, intentional is your head-canon since they didn't use that word. But intent is useful to the discussion; let me explain why I don't think zero tolerance allows for intent and other mitigating factors.
The point of tolerance is that some harm is done, and the injured party is going to limit their response to it.
Law typically breaks it out as the action that caused the harm, the intent to cause that harm, and the certainty of your knowledge of the facts.
As soon as you bring intent into the equation, you're willing to tolerate a great deal of harm. Someone can get hurt in a car crash, and if it's clearly an accident, the injured party is generally not going to hold a grudge.
If there's sufficient uncertainty, we aren't even sure we can direct our response to the harm at the correct party. Then we're stuck tolerating it, or taking it out on some scapegoat. And I'd even argue that the fact that we inevitably have to tolerate some harm makes the concept of zero tolerance fundamentally contradictory.
Tolerance is what civilized people do in response to real-life situations, and when they don't you get feuding and war. This isn't a new problem; the point of "an eye for an eye" in Mosaic law was to limit vengeance and vigilantism with a doctrine of proportionality. Not surprisingly, people still didn't get it, which was why Christ revised it to "turn the other cheek."
It’s possible to have zero tolerance and not fire anyone here. My understanding is that no employee misused their credentials or tools. The attackers misused them. I suppose you could argue that accidentally exposing credentials is misusing them, but I don’t think that’s what Twitter means there.
It’s like when motorcyclists say “safety first” about wearing a helmet and other protective gear. If they really put safety first, they’d choose a safer form of transportation. They mean “given that I’m going to engage in this risky activity, I’m going to try to make this activity as safe as possible”.
In this case “zero tolerance” is short for something like, “except for understandable slip-ups that aren’t fully your fault, we’re not going to tolerate any slip-ups”.
I think you can honestly say you follow safety first in terms of what is available to safely ride your bike.
Just like when I used to rock climb, I felt we were basically following safe practices - but there was no one to adjudicate them, and probably far less statistical testing of various practices than there is for bikes. Also, where we climbed there wasn't expected rockfall. I had barely heard of that being an issue, and we never wore helmets. Later on I realized that was something I might have missed. And then the next step was "what other safety practices was I unaware of?" ;-)
And of course I get that rock climbing is much more dangerous than hiking.
I don't really trust Twitter at all. When you consider how they shape public opinion on so many things, it gets scary.
They have been caught banning people based on their political stances, and refuse to remove the algorithmic timeline sorting method which is designed to strip adolescents (and easily persuaded adults) of their critical thinking skills.
Besides their claims of restricting access to the tools, they obviously need to further restrict access to certain user accounts. We have support staff whose account-support access is locked out of higher-privilege or special accounts, and this seems like something Twitter should be doing.
Perhaps have another layer where each use of specific functions requires unlocking through an incident management tool?
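Something like this sketch, say - all the names (tickets, agents, actions) are invented for illustration, not anything Twitter actually runs:

```python
# Illustrative sketch only: every use of a privileged support function
# must be unlocked through an approved incident ticket, and each use is
# audit-logged. Ticket/agent/action names are invented, not Twitter's.
from dataclasses import dataclass, field

@dataclass
class Ticket:
    ticket_id: str
    status: str                              # e.g. "approved", "closed"
    assigned_agents: set = field(default_factory=set)

TICKETS = {}                                 # stand-in for the incident tool
AUDIT_LOG = []

def privileged_action(agent: str, ticket_id: str, action: str) -> None:
    ticket = TICKETS.get(ticket_id)
    if ticket is None or ticket.status != "approved":
        raise PermissionError(f"{action}: no approved ticket {ticket_id}")
    if agent not in ticket.assigned_agents:
        raise PermissionError(f"{action}: {agent} not assigned to {ticket_id}")
    AUDIT_LOG.append((agent, action, ticket_id))  # every use leaves a trail

# The unlock is per incident, not per login session:
TICKETS["INC-1234"] = Ticket("INC-1234", "approved", {"alice"})
privileged_action("alice", "INC-1234", "reset_account_email")
```

The point of the extra layer isn't that it's unphishable, just that it forces every privileged action through a choke point that can be rate-limited, reviewed, and alerted on.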
Indeed... one can imagine scenarios where maybe the attackers/phishers are the same people whose accounts are being used? This seems like the easiest way to get away with misusing your access. Just send yourself a phishing mail...
> the attackers targeted 130 Twitter accounts, ultimately Tweeting from 45, accessing the DM inbox of 36, and downloading the Twitter Data of 7
So much effort for so little gain... With proper preparation (i.e. a simple app ready to download everything from an account), they could have made off with the full data of all 130 accounts, silently, before tweeting the hopeless scam message. Instead, this seems like a mostly-manual effort, done in haste.
Just dumb thieves pulling off the scam of their life, or cover for a targeted attack of one or two of those 7 they actually siphoned properly? I hope US authorities will figure stuff out beyond the usual "it was a Chinese/Russian/Eastern European gang", which is just code for "fuck knows".
I wish I knew why security does this character assassination routine with low-skill hacks. They clearly got in, bumbled attempt or not. Is it somehow helping everyone to know they fucked up? Whose expectations are we changing here?
Who is “security” here? In my experience, if you ask most working security professionals, they would agree that social engineering is by far their largest vulnerability. All the cryptography stuff is fun and cool but really the work is all about preventing phishing, so I don’t know any working security engineers who would call this a low-skill hack.
Well, I see 2 reasons here:
1/ when an attack is deemed a low-skill one, it is a way to say that the targeted company might be to blame, at least in part, for not having adequate security protocols;
2/ in this instance, some people might be frustrated that the DMs of high-profile people won't leak on the internet because of the attackers' lack of preparation.
Most of the time, the "character assassination routine" is to emphasize the first point.
I imagine that the limited damage the attackers did will help them evade being caught. Ultimately, they just stole some $100k in Bitcoin and that's it, so investigations by the FBI et al. will probably not go to great lengths to locate the hackers. On the other hand, attempts to blackmail someone like Tim Cook or Elon Musk would probably be riskier. For the same reason, I suspect the attackers may have intentionally avoided breaching or tweeting from Trump's account; too much risk of attracting serious FBI and Secret Service attention.
As it is, the only people who actually lost something are a bunch of Bitcoin owners that got scammed, and it's just not a big deal.
In the great scheme of things, knowing what Trump (or any other US politician) says to other people is worth relatively little.
Having insider knowledge of what certain billionaires say to whom, though, can be immensely valuable. It might well be that a few of them will light a fire under the FBI and friends. We'll see.
Most of those famous people don't manage their own accounts though. Some do, like Musk. That's why I'd consider stealing his DMs to be more dangerous than running a Bitcoin scam - it could easily upset Musk enough that he bugs the FBI to investigate seriously or, more likely, uses his own considerable resources to run an investigation. See how Bezos hired investigators when his data was leaked, and they did get to the bottom of it.
Trump is a bit of a special case because he's POTUS. I'm pretty sure there are no remotely interesting DMs on his account, but if there's one thing the US doesn't like, it's anyone even perceived to be messing with national security. An attacker merely logging into Trump's account could be seen as a national security issue, and attract an entirely different level of law enforcement attention.
As someone who works to stop these, the most frustrating part is how even infosec people think enough training or $vendor's email security solution will stop this. It's like Boy Scouts who think they can stop Navy SEALs. There is too much focus on the entry point of an attack, especially by the news media.
Speaking from experience at a major infosec company, the impression I got internally was "we offer phishing tests, but we don't recommend them, because phishing succeeds 100% of the time".
So I'm confused by the idea "even infosec people think training will stop this".
I disagree. There are services that send fake phishing emails on a regular basis. If employees click a link or fail to flag enough emails, their boss gets notified that more training is necessary.
At the bank that I see this used at, the employees are far less trusting of emails and such.
Yes, I've been involved with such a program before, and it definitely helps a lot. Phishing email click rates go way down.
This is carefully planned phone-based spear phishing, though, and that's a lot tougher to protect against. It can be easy for a skilled con artist to gain someone's confidence over the phone, no matter how much you warn about vishing (voice phishing). I'm sure training can still help there, but attackers just keep trying again and again until they find someone it works on.
Absolutely. It's just that highly motivated, targeted, and sophisticated social engineering is really tough to totally prevent. It just takes one person to fall for it, and the attackers can keep cycling through people (quickly, to get ahead of company-wide warnings about the social engineering attempts) until they succeed.
Training does work to reduce the number of successful untargeted attacks. For spear phishing it's hit and miss depending on how good the attack is, but a good enough attack will work against almost anyone. As someone who sees really well-crafted phishes, I can tell you I myself would fall for a good phish. It has to do with eliminating the element of surprise: if I didn't expect the email, I will assume it's a phish. But if rapport is built and the subject is something very specific only a few people are privy to, then my guard will be lowered. Business email compromise comes to mind - they just reply to an existing thread with a link to a trusted site like OneDrive.
Fake phishing emails work to remind users to check emails. Ask me how I know :-)
I also believe that if a real phishing email makes it to a user then there's a problem. Some of the real ones I've gotten were easy to spot: "we tried to deliver a package" or "your order is on its way" type stuff. Spam filters should've picked them up.
What infosec people think $vendor email security solutions are going to solve phishing attacks? I was under the impression that the people that buy those solutions (like many security solutions) are primarily non-infosec people that want to paper over their real problem without fixing it.
Granted, there is a place for some of these things temporarily while working to fix the actual problem, but that's a mitigation, not a solution.
Nope, seasoned pros I respect think training+$vendor is good enough. If it isn't, blame the user or the vendor!
There are shops where the goal is to have someone to blame when you get owned, and there are rare shops where the goal is to do it right and catch/stop the bad guys even if it means you get blamed (because management understands security is not absolute).
That's exactly how they think and it's b.s.! The whole point of this comment thread is that most people will fall for a phish if the phish is good enough.
FWIW email security training is something you'll probably be forced to provide, to some degree, as a matter of compliance. It's another case of compliance wasting time by driving companies to do security work that isn't meaningful.
>There is too much focus on the entry point of an attack, especially by the news media.
That depends on what you mean by "entry point". If you define the entry point as a person, then yes don't focus on that. But if you define the entry point as phishable credentials, then focusing on that is good, it will prompt companies to switch to phishing-resistant credentials (U2F security keys).
Even U2F isn't foolproof (exploitation and cookie-theft techniques). Execution, privesc, and lateral movement are what the focus should be on. You can't control the fact that people need to use email and they will fall for a phish, but you can control your authentication system, alerting system, etc...
You're right about those. But malware can be blocked by blocking support people from installing any new software. Cookie theft is indeed tricky to stop, but it's much harder to pull off than regular credential phishing, so blocking regular credential phishing is still a good win. And there are techniques to stop cookie theft (binding them to a non-bearer-token, such as channel ID, token binding, TLS client cert, IP address).
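To make the binding idea concrete, here's a minimal sketch using the weakest variant (IP binding); token binding or a TLS client-cert fingerprint would slot into the same check. All the names are invented for illustration:

```python
# Sketch: the session cookie alone is worthless unless the request also
# exhibits the property it was bound to at login. IP binding is shown
# here because it's simplest; a channel ID or client-cert fingerprint
# would replace client_ip in the same structure.
import hashlib, hmac, os, secrets

SERVER_KEY = os.urandom(32)      # per-deployment secret
SESSIONS = {}                    # session_id -> fingerprint bound at login

def create_session(client_ip: str) -> str:
    session_id = secrets.token_urlsafe(32)
    SESSIONS[session_id] = hmac.new(SERVER_KEY, client_ip.encode(),
                                    hashlib.sha256).hexdigest()
    return session_id            # this value goes into the cookie

def validate_session(session_id: str, client_ip: str) -> bool:
    expected = SESSIONS.get(session_id)
    if expected is None:
        return False
    observed = hmac.new(SERVER_KEY, client_ip.encode(),
                        hashlib.sha256).hexdigest()
    # A cookie replayed from the attacker's machine fails this check.
    return hmac.compare_digest(expected, observed)

sid = create_session("203.0.113.7")
assert validate_session(sid, "203.0.113.7")        # legitimate user
assert not validate_session(sid, "198.51.100.99")  # stolen cookie, new host
```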
The one you're thinking of is a malware attack, right? The intended attack results in victims running malware from the attacker. So that's not "phishing" by any definition I recognise.
And even in that attack, the victim's long-term credential is protected if they use FIDO authenticators - the bad guys can't use the authenticator without help from the legitimate user, and they don't gain any enduring credentials.
So you need to do the attack live and then hope the victim not only doesn't realise you just infected them with malware, but conveniently signs into something at the moment you need a signature, so you can hijack their expectation of touching the authenticator's contact. Then you get one authentication. If you need another one, for any reason (timeout, a subsequent operation asks to re-authenticate, anything), you have to do it all again, because you do not gain enduring credentials.
The malware doesn't need to be there at the "moment you need a signature". The malware can just grab or use an existing cookie. Sure, it's not an enduring credential. But whether the malware is there at that moment or an hour later won't make much difference.
And this specific Twitter attack might not have needed enduring credentials. It seemed to happen over a short time period.
You were mentioning an attacker trying to steal a touch from the FIDO device using malware. My point was that's pointless because cookie theft is easier and gives the attacker the same thing.
If the attacker wants a cookie, then stealing a cookie gets them the cookie, but it is not necessarily the case that the attacker only wants a cookie.
Nothing compels Twitter to design their user administration tool so that it says "Oh you have a cookie well then it's fine for you to change Elon Musk's email address and switch off his 2FA".
For example it's perfectly easy to have a "Confirm" step for a privileged operation that requires WebAuthn authentication. But if you're the attacker that means a cookie doesn't help you.
Use U2F for 2FA. If Twitter had all their employees using U2F keys it's very unlikely they'd be phished.
With U2F it's impossible to "enter" a 2FA code on the wrong domain, making you immune to phishing attacks by most definitions. This Krebs article from a while back says that Google had zero phishing incidents after making this switch: https://krebsonsecurity.com/2018/07/google-security-keys-neu...
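Here's a sketch of the server-side half of that property. A WebAuthn/U2F assertion includes a clientDataJSON blob whose "origin" field is filled in by the browser and covered by the authenticator's signature, so the server can reject logins that happened on a look-alike domain. The field names follow the WebAuthn spec; everything else is simplified for illustration:

```python
# Simplified relying-party check: reject assertions whose origin (set
# by the browser, not the user) doesn't match the real site.
import json

EXPECTED_ORIGIN = "https://twitter.com"

def check_client_data(client_data_json: bytes, expected_challenge: str) -> None:
    data = json.loads(client_data_json)
    if data.get("type") != "webauthn.get":
        raise ValueError("unexpected operation type")
    if data.get("challenge") != expected_challenge:
        raise ValueError("challenge mismatch (possible replay)")
    if data.get("origin") != EXPECTED_ORIGIN:
        # A phishing page can't forge this field: the browser sets it,
        # and the signature covers the whole blob.
        raise ValueError(f"wrong origin: {data.get('origin')!r}")

check_client_data(
    b'{"type": "webauthn.get", "challenge": "abc123",'
    b' "origin": "https://twitter.com"}',
    "abc123",
)
```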
I keep trying to read and understand the attacks that have all happened so far... after you understand the vectors that are currently in use, hopefully it'll stoke your imagination to see how you might attack other systems and make up new vectors. All the current attack write-ups will usually link to vulnerabilities as well. See HackerOne disclosures, Google Project Zero or the "How I hacked X" disclosures you see on HN.
I don't think published textbooks are very useful — attackers also have access to them, and if an attack has been written down it has likely already been encoded into firewall software or a security-process rulebook (though it might still work for smaller companies lagging behind the curve).
That's the thing: you can't, not with the way current tech is. But you can read up on good monitoring/detection and hardening your endpoints.
Microsoft, for example, recommends privileged access workstations. If Twitter's employees used a separate set of credentials and workstations for privileged Twitter moderation than for their regular account/machine used for email and day-to-day stuff, I bet the attack would have failed.
There are probably Twitter employees whose job it is to reset emails all day long. Having 2 separate computers and accounts, one for resetting emails (which is done all day) and one for responding to email, sounds like quite a burden on employees. How are they going to get the name of the account from one computer to the other? Copy and paste won't work. Retyping from one computer to the other will surely result in typos.
The typos themselves could be a vector for attack. The attacker asks for a reset for one account with a capital I and maybe gets a reset for a different account with a lowercase l.
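A toy illustration of that confusion: in many sans-serif fonts a capital "I" and a lowercase "l" render identically, so a support tool could fold confusable characters before acting on a reset request. The mapping below is a tiny invented sample, not a real confusables table:

```python
# Fold visually-confusable characters to a canonical "skeleton" so
# look-alike usernames collide and can be flagged for review.
CONFUSABLES = str.maketrans({"I": "l", "0": "o", "1": "l"})

def skeleton(name: str) -> str:
    # Fold look-alike characters first, then case-fold.
    return name.translate(CONFUSABLES).lower()

print(skeleton("EIonMusk") == skeleton("ElonMusk"))  # True: visually identical
```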
What I find most problematic about the attack is the incident response by Twitter.
As people pointed out here, hijacking Twitter accounts can lead to big stock market crashes, mass panics ("bomb found at XXX") and maybe even military escalations.
Under these circumstances, leaving a platform with an unknown number of compromised accounts online seems irresponsible to me. In such a case you must stop the bleeding ASAP, either by locking up "important" accounts (which they eventually did, after a few hours!) or taking the site offline.
Freaking Twitter needs a serious auth infra upgrade. Unless the phishers hijacked employee devices, they accessed the tools remotely, meaning there's no form of client authentication?? Something like U2F, which by now is pretty old, seems like it would prevent this kind of attack.
That said, keep in mind they also do PR/damage control, so we only know what they tell us. For all we know, maybe they have U2F and an employee still did bad stuff while a phone was somehow involved. Or whatever else.
Social engineering will never be stopped, people want to be helpful. And generally speaking the cost for stopping it at non-secure businesses is going to be too high until a security incident happens.
Phishing email attacks? Why do employees have business emails at all?
Phishing phone attacks? Why would employees have phones with external access?
Front of the house (dealing with users) should probably be disconnected from back the of the house (admin access).
Before you know it you're in DoD or Bank territory. No Wifi allowed etc, where's your badge buddy?!
> Social engineering will never be stopped, people want to be helpful.
They really do. And so you should design security systems with the assumption that your employees will actively undermine security "to be helpful" to adversaries.
> And generally speaking the cost for stopping it at non-secure businesses is going to be too high until a security incident happens.
The cost for Yubico's "Security Key" is $20 and there is a volume discount. You should buy each employee a key, and if there's no secure means by which they can be re-authorised when they inevitably lose it, a second one to keep safely for that case.
The attackers correctly anticipated that while "Can you get me Jenny in user assistance's phone number?" is just being helpful, "Can you disable Elon Musk's 2FA and give me control over his account?" is a bit... obvious. So they got themselves credentials to do that stuff. But there is no need for Twitter employees to be able to give away those credentials.
For example, if a Twitter user was logged in with a dongle but the attacker had socially engineered remote desktop access, a dongle could still mean access to private data.
But yes. As far as I know, Google and Facebook require them. Google also sometimes requires another co-worker's permission to access data.
>if a Twitter user was logged in with a dongle but the attacker had socially engineered remote desktop access, a dongle could still mean access to private data
It depends on the dongle. YubiKeys and similar devices require the user to physically touch/tap it to enable U2F auth, and it automatically powers down after a timeout to prevent remote desktop attacks.
I would hope Twitter already had this kind of setup, but their blog posts about this are all targeted at a more general audience, so I doubt we'll get that kind of detail anytime soon.
>It depends on the dongle. YubiKeys and similar devices require the user to physically touch/tap it to enable U2F auth, and it automatically powers down after a timeout to prevent remote desktop attacks.
How often is the tap needed? Is it needed on every action, or once a day, or once a month? The session would stay valid via browser cookies for that period. If it's once a day, the employee might have tapped it in the morning, then gone to lunch, then the attackers hit with the remote desktop attack.
Every time you login with a Yubikey you must tap it. It does not maintain its own session on the key or anything like that.
If the app maintains a session, then that depends on how long the app allows sessions/tokens to live for at that point. The Yubikey won't come into play until login is required again. So, I think you're getting at a different part of the security model at that point.
My point is that essentially all apps maintain a session and a remote desktop attack can make use of that session. So Yubikey doesn't really protect from remote desktop attacks.
Fair enough! I didn't comprehend the context well enough. Seems right though, the Yubikey won't protect sessions. At least I don't see any reason it would.
Note that not all hardware security devices are safe. U2F security devices are safe against phishing; OTP security devices are not safe against phishing.
What about a phishing attack that involves the attacker talking the target through a login, including the U2F part, and then obtaining the session token from memory/disk instead of the username/password?
How would the attacker get something from memory or disk? Malware? If there's malware involved, I don't consider that credential phishing. It's a matter of debate whether malware can be considered a form of phishing. Maybe I should have been clearer that U2F stops credential phishing, not malware phishing (if that even exists).
I guess one option would be to ask the victim to read out the token from memory or disk. That seems pretty hard though. It's debatable whether that would be considered credential phishing.
A more likely method would be to trick someone into going into devtools and copy and pasting something from there, possibly a curl command, like in this epic "bug report"[1]. That's also debatable whether it would be credential phishing.
U2F protects against that because the signature is tied to the hostname. The browser reads the hostname. The browser is infallible when it reads the hostname, unlike a human.
In most scenarios a FIDO authenticator (for U2F/ WebAuthn) won't even sign your login attempt for the wrong site at all, because of how it works. We'll look at the original FIDO (second factor only) scenario because that's cheapest and apparently Twitter was very budget conscious on security?
This FIDO authenticator has absolutely no idea what your per-site keys are. Instead, the random-looking ID provided to the site when you register and then given back by the site when logging back in actually is your private key for that site... encrypted using AEAD with symmetric keys only the FIDO authenticator knows.
One of the ingredients for decrypting the key is rpIdHash which is SHA256(dnsName) where dnsName is the FQDN of the site you're looking at or some suffix of that FQDN chosen by the site. So here it could be news.ycombinator.com or ycombinator.com (Public Suffixes like com or co.uk are prohibited). The browser is responsible for calculating rpIdHash.
Thus on a phishing attempt usually the AEAD fails, the authenticator not only doesn't give the phishing site a signature that can be used to sign in on a different site, it will ignore this ID and act as though the user doesn't have a FIDO authenticator plugged in at all.
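Here's a toy model of the key-wrapping scheme described above, assuming AES-GCM as the AEAD (real authenticators use their own vendor-specific schemes). The "credential ID" the site stores is the wrapped per-site private key, and it only unwraps under the rpIdHash of the genuine site:

```python
# Requires: pip install cryptography
import hashlib, os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

DEVICE_SECRET = os.urandom(32)       # never leaves the authenticator

def rp_id_hash(rp_id: str) -> bytes:
    return hashlib.sha256(rp_id.encode()).digest()

def make_credential(rp_id: str) -> bytes:
    private_key = os.urandom(32)     # stand-in for the real signing key
    nonce = os.urandom(12)
    ct = AESGCM(DEVICE_SECRET).encrypt(nonce, private_key, rp_id_hash(rp_id))
    return nonce + ct                # this opaque blob is the credential ID

def unwrap(credential_id: bytes, rp_id: str) -> bytes:
    nonce, ct = credential_id[:12], credential_id[12:]
    # Raises InvalidTag if rp_id differs from the registering site, so a
    # phishing domain gets no signature - just "no authenticator found".
    return AESGCM(DEVICE_SECRET).decrypt(nonce, ct, rp_id_hash(rp_id))

cred = make_credential("twitter.com")
unwrap(cred, "twitter.com")          # succeeds
# unwrap(cred, "twltter.evil")       # raises InvalidTag, as described above
```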
The private key is embedded in the hardware key, there's no way to extract it without an advanced attack involving tearing apart the key.
But a practical attack along the lines of what you mentioned would be to ring someone up and convince them to disclose their cookie. Check out[1] in which the victim disclosed their cookie without the attacker even asking for it.
Dongles are rare here in the US. But I know that bloomberg uses them. I was shocked when I learned that retail banks in Singapore give everyone dongles to log in. In the US that's tyranny Lol
I work for a cryptocurrency company and it was the first time in my career that I was issued a YubiKey (I once had an RSA 2FA token for VPN access). It took some getting used to, but now I just keep it on my keychain and I always have it with me. I need it for SSO, git, VPN, and basically all internal services.
They aren't sufficient by themselves, however; one thing they don't protect against is malicious internal employees.
Preparing for malicious internal employees seems to me like preparing for "the big one" in the Northwest.
Do a cursory amount of preparation. Outside of basic measures, you're probably doing more harm to the business than good. The likelihood of internal malicious attackers is very low in the grand scheme of things, and the attack surface is huge.
Most companies are going to be compromised by outside attackers—it's there that you should focus your energy. If internal attackers are your biggest threat, you've done a fantastic job.
The annual DBIR, which collects incident reports, has ~1/3rd marked as insider ;-)
From a defense-in-depth perspective, agreed: most attacks involve privilege escalation on the inside as soon as they switch from attack vector to breach, even if just host-level, so teams should absolutely "assume breach". Attackers will phish folks, get on their devices, get root, and then have fun there and potentially elsewhere. Ransomware is a more common goal than what Twitter got hit with, as it is easily profitable, and it means a takeover. Controls on what most users can do and the ability to scope & report are part of growing up (in the US). It's good Twitter was able to map the attack - I bet many popular social networks couldn't, especially outside of the US or outside the top 10.
Shameless plug: A lot of folks use our tool for mapping network logs, and I always encourage to also map out host / app / cloud logs as well, such as logins and the oftentimes black hole that is winlogs.
We're talking about companies, not users. Competent companies can and absolutely do require dongles (or equivalently trustable corporate hardware) to log in to their systems.
And as a Norwegian, it boggles my mind that banks in the US require only a username and password to access their accounts. We have moved on to an authenticator living on the phone's SIM card, and you use either that or a "real" hardware dongle for all logins and (most) transactions to confirm your identity.
Same as other computer equipment, get another one from tech support and revoke the old one. Or at least, that's how it worked when people still went to the office.
Employees can revoke access for any of their keys/dongles once they realize they are missing. The company can also send an automated warning and auto-expire a key that's not been used for a while.
We will do that now. We will start the competitive bidding process, and we expect the RFP paperwork to be returned by October, 2021. After that, if there are no injunctions filed because of the bidding process, preliminary design documents will start being created. Preliminary design review will occur August 2022. ...
Seriously I worked on projects subject to government scrutiny and it was ok, but you had to be the right kind of person.
Account for your time in 6-minute increments. Milestones I recall off the top of my head were preliminary design, detailed design, 3-5% of your time coding, software integration, hardware software integration, acceptance.
It was stable, predictable, and (to me) very soul-crushing.
Technical question for you, how does this time tracking work in practice? Do you pause every 6 minutes and note what you’re doing? Or just roughly remember at the end of the hour/day?
You had to charge each project you worked on and record your time to .1 hour precision.
So recording 1.1 against project 3456 would charge that project for 1 hour and 6 minutes of your time.
You had to do the same thing for a dentist appointment. 1.0 hours for an "overhead project".
(I should mention this was years ago)
Also, lots of the people who worked there were ex-government employees and were fine with it, because software folks got to go home to their family every day at a predictable time, you would get training at regular intervals and although the pay wasn't super competitive it was a good job, indoors and in an air-conditioned office.
It becomes a reflex. Commercial lawyers all do it, tracking time this way is a fact of life, not least because if you end up in court arguing about costs the judge is going to throw out hand-wavy "I spent about a week on this" claims from professional lawyers who should know better.
I vaguely recall part of it was that you had to sign your timesheets. It's been a long time, and I'm pretty certain things have gotten better. It was harder for me, as a young software guy out of college, to accept that only 3-5% of your time was coding.
> Why isn't there a whitehouse.gov ActivityPub instance that no single admin can censor or subvert?
Legislation is sorely needed for public institutions to make public announcement messages (microblog posts, or "tweets") using publicly managed and controlled infrastructure, contributing back to the commons[1].
Stop building into broken commercial services, the standards exist today to rebuild a commons-oriented Internet.
This. Just as surely as email, texts, and tweets are now admissible in contract law, and as we can sign binding documents online, we need a national realtime information-streaming infrastructure --text and URLs today, multimedia in the future-- all over the (hopefully) secure domestic 5G network. Set the standard, set up a court system for the inevitable violators, regulate the gateways, and allow everyone to connect.
Because the general public has no clue what ActivityPub is or the desire to learn how to consume feeds from dozen of instances when they could just download a single app onto their smartphone where all the celebrities already are.
How do you propose people consume 100s of different ActivityPub feeds without learning the details of how to configure an app to do so, while also staying away from a centralized platform to make it easy to consume the feeds?
Sorry if I am taking this on a tangent, but in one of the HN threads regarding this exact security incident, it was recommended that this is why something called "blast radius" needs to be implemented.
Anyone here with any literature / sessions one could go through for a good gist of things with respect to Blast Radius?
"Blast radius" is a general term for the worst case impact of a specific type of breach of a given system.
The recommendation you read was probably about limiting the blast radius. It's a general security best practice, and you implement it through techniques like federating (compartmentalizing) services away from each other, limited lifetime credentials, attribution, SSO for single point of control for invalidation of credentials, principle of least access (PoLA), privilege separation with role-based access control (RBAC), session logging/audit logging, etc. Most importantly the underlying system needs to have a well-defined and pentested authentication/authorization architecture. The hallmark of systems that limit the blast radius is that they have well-defined limits on how much they trust each other.
OWASP (https://owasp.org/) is a great starting point for reading about this stuff.
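As a rough sketch of how two of those pieces fit together (role-based access control plus limited-lifetime credentials), with role names and TTLs invented purely for illustration:

```python
# Sketch: a stolen token is constrained twice - by what its role may do
# (least access) and by how long it lives (limited-lifetime credential).
import secrets, time

ROLE_PERMISSIONS = {
    "support_tier1": {"view_public_profile"},
    "support_tier2": {"view_public_profile", "reset_email"},
}

GRANTS = {}          # token -> (role, expiry timestamp)

def issue_credential(role: str, ttl_seconds: int = 900) -> str:
    token = secrets.token_urlsafe(32)
    GRANTS[token] = (role, time.time() + ttl_seconds)
    return token

def authorize(token: str, action: str) -> bool:
    grant = GRANTS.get(token)
    if grant is None:
        return False
    role, expiry = grant
    if time.time() > expiry:          # stolen tokens age out quickly
        del GRANTS[token]
        return False
    return action in ROLE_PERMISSIONS.get(role, set())

tok = issue_credential("support_tier1")
assert authorize(tok, "view_public_profile")
assert not authorize(tok, "reset_email")   # outside this role's blast radius
```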
I did a quick search on Algolia for HN (hn.algolia.com) and found several comments in this thread and threads related to the recent Cloudflare incident. This could be a starting point in your quest. [1] I don’t have a blog post or article or documentation though.
> We will be slower to respond to account support needs
So, they limited account support access to a smaller, trusted team. This I can understand, but it might also cause a delay in identifying such an attack if it happens today.
> We’re acutely aware of our responsibilities to the people who use our service and to society more generally. We’re embarrassed, we’re disappointed, and more than anything, we’re sorry. We know that we must work to regain your trust, and we will support all efforts to bring the perpetrators to justice. We hope that our openness and transparency throughout this process, and the steps and work we will take to safeguard against other attacks in the future, will be the start of making this right.
It's not easy to say those words about your own company.
Slightly OT: I use private tabs, type my bank's URL by hand, and log in via 2FA. Are there cases, or have there been attempts, to poison a user's address-bar history with malicious URLs for phishing?
Are account support tools available off premises? I know nothing about security for big companies like Twitter but it seems like tools that enable you to post from any verified user (outside of Trump, someone here once mentioned he had additional account controls) should only be accessible from secure offices regardless of individual credentials.
Original poster said on premise. An appropriately secured vpn may or may not help security, but it is still "virtual" and does not meet the definition of on premise.
The VPN credentials could be phished. Of course appropriately-secured might mean U2F, in which case it likely would have prevented the attack. But they also could have used U2F without a VPN, and that also would have likely prevented the attack. So the VPN isn't really a benefit.
Account access is one thing, yes, and a hardware 2FA key can help with that.
But - what is the reason to allow support personnel to pose as specific users and send tweets from their accounts? There is more than a security issue here. There is a complete security breakdown.
I don't believe there was a tool allowing support personnel to pose as users. I believe the tool allows support personnel to reset emails on accounts. The attackers then did password resets on the accounts, logged into the accounts, and tweeted.
I didn't see that in the page for this article. But, that's a good point.
The specific text is, "Using the credentials of employees with access to these tools, the attackers targeted 130 Twitter accounts, ultimately Tweeting from 45, accessing the DM inbox of 36, and downloading the Twitter Data of 7."
So, this doesn't say what actually happened. If it was employees posing as users in order to post, that is a permission which should not be granted. If it was, as you suggest, a password reset, then there is a separate issue with the 2FA that would be expected on these accounts.
Either way, there are serious security issues. This is similar to Oracle calling itself "Unbreakable" and then getting broken. If Twitter cannot safeguard against so many accounts getting injected with tweets, then something is broken with Twitter's security model.
> The social engineering that occurred on July 15, 2020, targeted a small number of employees through a phone spear phishing attack. A successful attack required the attackers to obtain access to both our internal network as well as specific employee credentials that granted them access to our internal support tools. Not all of the employees that were initially targeted had permissions to use account management tools, but the attackers used their credentials to access our internal systems and gain information about our processes. This knowledge then enabled them to target additional employees who did have access to our account support tools. Using the credentials of employees with access to these tools, the attackers targeted 130 Twitter accounts, ultimately Tweeting from 45, accessing the DM inbox of 36, and downloading the Twitter Data of 7.
No disrespect to those challenged with protecting such a huge target, but why do admin tools even have these capabilities? I could see needing to disable a user account or change some attributes, but why would an admin ever need to tweet from it? There shouldn't be tools with god privileges, even for admins. Not surprising human error was involved in a breach this huge. So, how many people had access to this tool? Is there a killswitch for the tool itself, available to very few, really very few, persons? edit: I don't know if the tool can tweet, but I'm surprised 2FA can be stripped without a human confirming (i.e. the account owner's social media person), especially for famous people.
The admin tool was used to change the email on an account, then the attacker reset the password and got full access to the account. Apparently having 2FA enabled did not stop this attack (admin tool probably had the power to strip 2FA from accounts).
So while the tool did not directly have the ability to tweet, it effectively did.
I work in customer service supervising entry level employees. The amount of power they have at any given time is astounding and it's by sheer ignorance or benevolence that more isn't embezzled en masse or this information isn't used for personal gain. My entire team of newly trained staff have access to your bank account information, where and when your payment was posted by IP, and we can strip 2fa or mobile numbers at whim. This coupled with inexperienced agents often leaves multiple accounts compromised. Having a select few engineers who don't work weekends always helps. We don't train the agents to tell them that they could potentially ruin customers' weeks by pushing the wrong button and it happens way too often to be standard, but as long as investors are happy and banks are good to reverse charges with no penalties here we are. Tech companies are good to throw caution to the wind.
That seems like it was the case, but the attackers got access to lower privileged accounts and used them to find who had that access so they could target them.
The key being "proper training". Those few god-level admins should be drilled enough to defeat a phone-phishing campaign. In fact, they should probably have custom procedures to look after their own credentials.
I've coded/supported a number of admin tools for a number of large companies (banks, telcos, etc.), so I'll take a stab at "why". First, god mode can be implemented cheap and fast. God mode also makes day-to-day support easy, as admins/support staff can do just about everything quickly - so they typically love it and will often fight to keep it. More than once I have had managers turn down proposals to tighten up security because of the cost and the false belief that because the tool is behind a firewall, better security is not needed. Taking a schedule and/or budget hit to implement tighter security is not going to get them a promotion or bonus. Sad but true. In the vast majority of cases I've seen, paying for security improvements becomes incentivized for managers only after their company gets burned by a breach.
Also, admin tools are often "afterthoughts", there is usually a motley collection of them, and often considered as an expense/cost to be minimized and not a revenue generating asset that gets more budget and attention.
My uninformed guess is that there isn't a "tweet as this user" button (because obviously there's no legitimate use case for that), but there is a "change this user's email address" button (because you might need to do that in order to help someone who's locked out of their account), and if you can do that you can take over someone's account. Obviously something like this would be detected quickly, which makes it less scary in some ways than a "tweet as this user" button, but of course this particular attack did not seek to evade detection once it was launched.
Of course, some of the targeted users presumably had 2FA enabled. How to do account recovery with 2FA in a consumer context is a complicated problem and I'm not aware of any good answers, but there's certainly an argument that the protections in place there weren't adequate and I wouldn't be surprised to see them changing soon.
I would also hope that rank-and-file support staff can't change users' email addresses, and the attackers had to spear-phish one of a smallish number of people whom more complicated account-recovery cases are escalated to. But who knows if that's how it works.
> How to do account recovery with 2FA in a consumer context is a complicated problem and I'm not aware of any good answers
I've always wondered why there isn't more use of time delays for this sort of thing.
If there was a notification e-mail and a 7-day wait, that would offer a fair chance for the real account holder to cancel the change. Not 100% - the user might be on holiday - but it would catch a lot, and hence decrease attackers' motivation. And while a 7-day wait is inconvenient, for services like Twitter and Steam losing access for a week isn't the end of the world.
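A minimal sketch of that flow, with all names invented: the change is queued rather than applied, and the notification to the old address carries a token that cancels it.

```python
# Sketch: an email change only takes effect after a waiting period, and
# the real owner can cancel it from the notification sent to the OLD
# address. Everything here is illustrative, not any real service's API.
import secrets, time

DELAY = 7 * 24 * 3600                 # 7-day waiting period
PENDING = {}                          # account -> (new_email, apply_at, cancel_token)

def request_email_change(account: str, new_email: str) -> str:
    cancel_token = secrets.token_urlsafe(32)
    PENDING[account] = (new_email, time.time() + DELAY, cancel_token)
    # send_mail(old_address, f"Cancel: https://example.com/cancel/{cancel_token}")
    return cancel_token

def cancel_email_change(account: str, token: str) -> bool:
    entry = PENDING.get(account)
    if entry and secrets.compare_digest(entry[2], token):
        del PENDING[account]          # the real owner stopped the takeover
        return True
    return False

def apply_due_changes(accounts: dict) -> None:
    now = time.time()
    for account, (new_email, apply_at, _) in list(PENDING.items()):
        if now >= apply_at:           # waiting period elapsed uncontested
            accounts[account] = new_email
            del PENDING[account]
```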
We had a running gag in our social media startup of tweeting "poop" from people who left their phones/computers unlocked ... someone did it to an employee that was logged in as a customer (corporate brand) context and that was the end of that 'joke'.
Did they tweet directly from the admin tools? My impression was that they used the admin tools to reset the password and then take over the account, ultimately tweeting like any normal user would do.
Do we know for sure that the admin tools can do all this? My understanding was that the tools enabled password resets, which allowed the attackers to tweet from the accounts themselves.
> For 45 of those accounts, the attackers were able to initiate a password reset, login to the account, and send Tweets.
It is my understanding that they used the tools to update the email on the account, then reset the password so they could log in and tweet. Do you have any source that says they could use the support tools to tweet directly?
Internal admin tools grow over time. They don't spring fully-formed into the modern company with all the correct access controls and auditing they ought to require at that point. They carry a lot of cruft from things that were needed early on and aren't needed anymore.
Furthermore, they're a classic cost center, not a lot of love or budget goes into reducing their tech debt, or bulwarking them up against a sophisticated adversary. Red teaming yourself full time is expensive and not profitable. What's the worst that happens from a breach like that? Well, Equifax is still going strong!
I recall being party to an amusing conversation at a major network services provider at a team meeting for people with access to such tools, to the effect of:
- Alright, we're modifying <internal tool A> to lock down access to accounts related to <major political figure>. You will no longer be able to use <internal tool A> on <accounts>, only select supervisors will have that access.
%%% ah, okay, that makes sense
# uh, hey, regarding <internal tool B>, which allows us to look up <thing that would provide equivalent access to internal tool A>? does that still work the same?
- Uh, yeah, it does.
# okay?
%%% ... silence ...
- Alright, next item!
To the best of my knowledge, that was never addressed. <internal tool A> has audit logs. <internal tool B> doesn't.
You have a need for a tool that allows you to see the UI "as the user does" so you can respond to support requests, and maybe someone thinks it's easier to just copy the cookie.
This isn’t the right way to do it, but given they work at Twitter I could imagine this isn’t the first big mistake they’ve made.
The question was interpreted as “why do admin tools have these features” on account of that was in the first sentence, and not, as you may have imagined, a request for a twitter employee to explain all of their tools, or a request for a twitter employee to justify creating these tools.
It's not copying the cookie, but there are absolutely third-party UIs via a user's API key; it's not a huge leap of faith to assume Twitter has something similar internally.
I doubt the admin tweeted from the tool. The admin changed the email on the account, then did a password reset, then logged in as the account then tweeted.
I'd like to know more about these tools. That there's at least one which can bypass a user's 2FA settings without notification suggests that there are additional tools in the same vein.
Google requires its employees to use a security key for access to all internal systems, including admin tools, source code, and email. Ever since Google started enforcing this policy, the number of successful phishing attacks has gone down to basically zero.
WFH has caused many companies to ease up on restrictions involving location, ip, and sometimes a broader need for software. Granted, nobody should be this easy to bamboozle, but I get why now more than ever this may have been an issue.
Those legal requests aren’t serviced with a password reset in order to log into the account. It seems more likely that there’s an internal tool to help people who have lost their second factor, but that’s just a guess.
Without details it's hard to say. They talk about "phone spear phishing". It's within the realm of that description to say they were hit the same way someone I know recently was - someone called and said "Can you just install Teamviewer on your desktop for me and give access to a logged on session".
Maybe? I imagine a spear phishing attack could entail the target being sent to a fake panel to log in, which triggers a true 2FA request. The target then freely gives this 2FA code to the attacker.
Significant forms of phishing are stopped by U2F (as used by YubiKey and others), by cryptographically binding your credential to the domain name and only issuing an approval signature if the site matches. Obviously, this stops credential phishing.
The way it is worded can also mean that there were XSS vulnerabilities in the internal tool since they are saying "gained information about how our processes work". I feel like that's a strange and vague thing to say. The right kind of xss vulnerability would enable them to bypass 2fa too, maybe steal backup codes even.
Yeah I guess you're right, it could be like an exploit chain where 1 link in the chain is phishing to gain access to something and xss is the next link for lateral movement.
But I don't know what "The right kind of xss vulnerability would enable them to bypass 2fa too" means. If the attacker doesn't have 2FA I would think the attacker can't log in, thus meaning the first link of the chain has no purpose.
But I also think XSS in this case is not very likely. From interviews with the attackers it sounds like they're social engineering experts who hang out on social engineering forums, not XSS experts[1][2][3].
AFAIK, in a Zero Trust architecture a VPN is considered a perimeter, and therefore it becomes a vector of attack to access systems of authoritative decision.
Many security researchers have already established that the benefits of a VPN especially in the modern distributed world are marginal at best.
Basically, yes a VPN makes you a tiny bit safer but it also adds a lot of networking complexity and adds more friction to the job of your employees. It also becomes an attack vector for malicious parties, since once they get VPN access they can theoretically access at least the first layer of protected resources.
So in layman's terms an attacker just needs to phish for VPN credentials, maybe steal an OTP token and they will have access to a non-trivial amount of network protected resources.
On the other hand if every service you use has its own authentication then the attacker needs to target each service and to know what services to attack they need knowledge that is possibly contained in another system that also requires authentication and is definitely not guaranteed for the attacker that all the systems will have the same password and/or have 2FA disabled.
Honestly, in my opinion VPNs are just an excuse to monitor traffic. This is a bit of a cynical take, but I'm convinced that companies that use VPNs are more interested in seeing what goes in and out of their network than in protecting their resources.
If your enterprise is a global network with millions of nodes operating a blend of modern and legacy systems accumulated through hundreds of acquisitions in 100+ countries over the course of the last 50 years, a VPN with hardware tokens isn't a bad additional layer. It isn't even mutually exclusive with zero trust, it's just another layer of auth and access.
Twitter? Largely a different story, and going all-in on zero trust might be a viable option. As observed in many other places, this sounds like a poor authentication model and probably poor governance for highly privileged access. Presumably they will take a look at their authentication, which sounds like it's making some bad assumptions, and improve.
> On the other hand if every service you use has its own authentication...
This would be a nightmare for the people managing any nontrivial system. There are good reasons to use something like Active Directory and tie systems and applications to it for easier policy enforcement and management. There are good reasons to avoid this centralization for certain things too. Either extreme would be an exercise in frustration.
Certainly. That’s why things like Okta make sense. It allows people to use it as a Password Manager while keeping certain level of sanity in managing resources but without giving up individual authentication against services.
I’m not so sure that it works that well once it becomes the actual authentication middleware. But as a single sign on directory it definitely reduces the complexity for the employees and for IT departments.
Either way, I think that, more than systems, people need training. I know there are sophisticated phishing attacks, but someone who has been trained to understand and recognize these situations should be able to detect when someone is trying to steal information.
I think Twitter’s failure was to not properly train their employees especially when they are such a visible and juicy target for bad actors.
>Many security researchers have already established that the benefits of a VPN especially in the modern distributed world are marginal at best.
Yes, with the (wrong) assumption that after you have connected to a VPN, all other services are free for the taking, without any further authentication.
I have worked at companies that used VPNs. After you authenticated and logged in with the VPN, you had access to several resources with no further authentication. Granted, I would always use the company-issued computer, so I don't know if there was another non-transparent authentication in the background, but overall it seemed that just being within the network was enough to access things.
There's a lot you can do with vpn to make it more secure.
On our vpn we require a non-exportable certificate in the tpm chip, normal user credentials, then we have a captive portal that forwards to our SSO that requires a yubikey.
What if the attackers phish the VPN credentials too? Does Zero Trust imply phishing-resistant credentials? What Twitter needed was phishing-resistant credentials (security keys, aka U2F).
Zero Trust != VPN. Zero Trust means that the network is not what determines trust.
Consider this:
* You go to your office, connect to the network
* Now you have access to internal services, by virtue of being on the network
In a Zero Trust network it does not matter what network you are on. Trust is handed out individually, based on the identity/ role of the user and the context of their session (is their os patched? running security tools?).
> How does the site know the user's OS is patched? The User Agent?
User agent is a great place for a version 0, sure. 99% of your assets aren't compromised, so worrying about a bypass isn't important to most of them. For a v0 just knowing that most of your boxes are patched is a huge win.
Of course you'll want client certificates on devices, or some sort of TPM, which is how Chromebooks work. The attacker having a box is not enough - identity is a key principle of zero trust networks.
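A minimal sketch of such a policy decision, with invented posture fields. Note that the network the request came from never appears among the inputs:

```python
# Zero-trust-style access decision: identity and device posture decide,
# not network location. Fields and role names are illustrative only.
from dataclasses import dataclass

@dataclass
class SessionContext:
    user: str
    roles: set
    device_cert_valid: bool     # e.g. cert in a TPM, not just a user agent
    os_patched: bool
    security_agent_running: bool

def allow_access(ctx: SessionContext, required_role: str) -> bool:
    # Identity: is this user entitled to the resource at all?
    if required_role not in ctx.roles:
        return False
    # Device posture: is the request coming from a trustworthy machine?
    return (ctx.device_cert_valid
            and ctx.os_patched
            and ctx.security_agent_running)

ctx = SessionContext("alice", {"support_tier2"}, True, True, True)
print(allow_access(ctx, "support_tier2"))   # True, regardless of network
```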
What they need is some network AI ("Dark"-something) to spot changes in user behaviour, e.g. which accounts get accessed, plus honeypot accounts where any access by internal staff would raise an alert immediately.
Frankly, I love Twitter. They've always seemed like the "scrappy" big player in the tech space that's played just the right side of the norm in many cases. An appropriately filtered experience on Twitter as a user is actually quite fun.
That said, much of this is atrocious news. For all of their engineering prowess, seeing opsec failures combined with the lack of basic security principles like "containment of blast radius" and "fast response to critical failures" is not something you can easily forgive at this scale.
But who am I kidding... it would be especially rich if some of these takeovers were enabled by simjacking-like attacks. Not that long ago, the only two-factor auth mechanism that worked for me was SMS.
It is inexcusable that Twitter is employing people who are susceptible to social engineering attacks like this. This is simple training and seriousness.
You too could be socially engineered. The world's foremost security specialists are not immune; there's a good chance that there is some social engineering vector that would work on you.
Admitting that to yourself is a huge step forward in being able to detect it. Believing yourself immune increases your chances of being spear-phished.
Where I work there is training software that is somewhat effective at preventing phishing - it actually sends out phishing emails itself. Then employees who fall for it are given extra training (in a no-fault sort of way).
Perhaps, but I'm also wary of these types of things, because I worry that people will feel embarrassed at being tricked, and will (maybe subconsciously) come to see the internal security team as the enemy, which is also a bad outcome.
I also worry that the emails might not represent real attack emails, and we end up training users to identify the test emails but not real attack emails.
Nothing is 100% secure. Having users fail to spot a phishing mail is very good training for general awareness, but no guarantee that they will not make mistakes under pressure.
I would actually be interested in seeing some studies on that.
My gut feeling is that for engineers, the phishing training most companies use is wholly ineffective, and in particular it is especially ineffective against targeted attacks. But I have yet to see any research one way or another.
I suspect less technical users might benefit from such training a bit more (but still not that much)
I will freely admit that I fell for a phishing campaign. I’d just bought something on eBay (this was a while ago). I got an email about something in my account later that day that made it through my spam filters. I clicked on it, signed in, and then realized I’d done the deed. Nothing happened or was lost, but yes - it just takes one quick mistake.
Some password databases involve copy and pasting or autotyping. If you want automatic hostname verification you need a password database integrated with your browser. On mobile many browsers don't support extensions so integrating my password database into the browser would be hard.
In short, I do not know my ebay password, but I could have fallen for this phishing attack.
On mobile this is possible even without browser extensions - enpass, lastpass etc work just fine in Chrome or any other app, if it detects a password field.