
As someone who works to stop these, the most frustrating part is how even infosec people think enough training or $vendor's email security solution will stop this. It's like Boy Scouts thinking they will stop Navy SEALs. There is too much focus on the entry point of an attack, especially by the news media.



Speaking from experience at a major infosec company, the impression I got internally was "we offer phishing tests, but we don't recommend them, because phishing succeeds 100% of the time".

So I'm confused by the idea "even infosec people think training will stop this".


I disagree. There are services that send fake phishing emails on a regular basis. If an employee clicks a link or fails to flag enough of them, their boss gets notified that more training is necessary.

At the bank that I see this used at, the employees are far less trusting of emails and such.

Training works if it's done right.
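The drill workflow described above could be sketched roughly like this. All names and the click threshold are illustrative assumptions, not any vendor's actual API:

```python
# Hypothetical sketch of a simulated-phishing program: record which
# employees click simulated phishing links or report them, and flag
# anyone whose clicks cross an assumed policy threshold for more training.
from collections import defaultdict

CLICK_THRESHOLD = 2  # assumed policy: flag after two clicks per campaign cycle

class PhishingDrillTracker:
    def __init__(self, threshold=CLICK_THRESHOLD):
        self.threshold = threshold
        self.clicks = defaultdict(int)   # employee -> clicks on fake phish
        self.reports = defaultdict(int)  # employee -> correctly reported emails

    def record_click(self, employee):
        self.clicks[employee] += 1

    def record_report(self, employee):
        self.reports[employee] += 1

    def needs_training(self, employee):
        return self.clicks[employee] >= self.threshold

    def click_rate(self, total_sent):
        # campaign-wide click rate, the metric these services usually report
        return sum(self.clicks.values()) / total_sent if total_sent else 0.0
```

The useful part of such a program is the trend in `click_rate` over repeated campaigns, not any single employee's score.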


Yes, I've been involved with such a program before, and it definitely helps a lot. Phishing email click rates go way down.

This is carefully planned phone-based spear phishing, though, and that's a lot tougher to protect against. It can be easy for a skilled con artist to gain someone's confidence over the phone, no matter how much you warn about vishing (voice phishing). I'm sure training can still help there, but attackers just keep trying again and again until they find someone it works on.


Same principle can apply though. If email phishing can be simulated and used as training, voice can be added to that training drill.

Any successful attack vector can be turned into a training scenario and repeated until better responses are trained into the target group.

Military casualty drills are very effective at instilling near instinctive responses... same principle applies.


Absolutely. It's just that highly motivated, targeted, and sophisticated social engineering is really tough to totally prevent. It just takes one person to fall for it, and the attackers can keep cycling through people (quickly, to get ahead of company-wide warnings about the social engineering attempts) until they succeed.


Training does work to reduce the number of successful untargeted attacks. For spear-phishing it's hit and miss depending on how good the attack is, but a good enough attack will work against almost anyone. As someone who sees really well-crafted phishes, I can tell you I myself will fall for a good phish. It comes down to eliminating the element of surprise: if I didn't expect the email, I will assume it's a phish. But if rapport is built and the subject is something very specific only a few people are privy to, then my guard will be lowered. Business email compromise comes to mind: they just reply to an existing thread with a link to a trusted site like OneDrive.


Fake phishing emails work to remind users to check emails. Ask me how I know :-)

I also believe if a real phishing email makes it to a user then there's a problem. Some of the real ones I get are easy to spot, "we tried to deliver a package" or "your order is on its way" type stuff. Spam filters should've picked them up.


I think badrabbit should have said "even some infosec people think training will stop this".

As-is the grammar is ambiguous whether badrabbit meant "some" or "all".


I meant many, not all.


Maybe an infosec company is different from infosec at $bigcorp. I've worked at a few places and the sentiment is the same.


What infosec people think $vendor email security solutions are going to solve phishing attacks? I was under the impression that the people that buy those solutions (like many security solutions) are primarily non-infosec people that want to paper over their real problem without fixing it.

Granted, there is a place for some of these things temporarily while working to fix the actual problem, but that's a mitigation, not a solution.


Nope, seasoned pros I respect think training+$vendor is good enough. If it isn't, blame the user or the vendor!

There are shops where the goal is to have someone to blame when you get owned, and there are rare shops where the goal is to do it right and catch/stop the bad guys even if it means you get blamed (because management understands security is not absolute).


Well, in the end we are all just human. We can't expect to assign blame for every mistake a human makes.


I agree, but tell that to the people who fire employees for failing phishing tests.


If they fire for one failed test, they need to understand that people learn from mistakes.

If they fire for repeated failed tests, perhaps the person who is failing is not well suited for a role where you have to resist phishing.


That's exactly how they think and it's b.s.! The whole point of this comment thread is that most people will fall for a phish if the phish is good enough.


FWIW email security training is something you'll probably be forced to provide, to some degree, as a matter of compliance. It's another case of compliance wasting time by driving companies to do security work that isn't meaningful.


I think some amount of email security training is worthwhile. I was specifically talking about so-called "solutions".


>There is too much focus on entry point of an attack,especially by news media.

That depends on what you mean by "entry point". If you define the entry point as a person, then yes don't focus on that. But if you define the entry point as phishable credentials, then focusing on that is good, it will prompt companies to switch to phishing-resistant credentials (U2F security keys).


Even U2F isn't foolproof (exploitation and cookie-theft techniques). Execution, privilege escalation, and lateral movement are where the focus should be. You can't control the fact that people need to use email and that they will fall for a phish, but you can control your authentication system, alerting system, etc...


You're right about those. But malware can be mitigated by blocking support people from installing any new software. Cookie theft is indeed tricky to stop, but it's much harder to pull off than regular credential phishing, so blocking regular credential phishing is still a good win. And there are techniques to stop cookie theft (binding the cookie to a non-bearer token, such as a Channel ID, Token Binding, a TLS client cert, or an IP address).
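The cookie-binding idea could look roughly like this server-side: store a client fingerprint (say, a TLS client-cert hash or IP address) when the session is created, and reject the cookie if it later arrives from a different client. This is a minimal sketch with assumed names, not any particular framework's API:

```python
# Sketch: bind a session cookie to a non-bearer token so a stolen
# cookie is useless from the attacker's machine.
import hashlib
import hmac
import secrets

class BoundSessionStore:
    def __init__(self):
        self._sessions = {}  # session_id -> hashed client fingerprint

    def create(self, client_fingerprint: str) -> str:
        session_id = secrets.token_urlsafe(32)
        self._sessions[session_id] = hashlib.sha256(
            client_fingerprint.encode()).hexdigest()
        return session_id

    def validate(self, session_id: str, client_fingerprint: str) -> bool:
        stored = self._sessions.get(session_id)
        if stored is None:
            return False
        presented = hashlib.sha256(client_fingerprint.encode()).hexdigest()
        # constant-time comparison to avoid timing side channels
        return hmac.compare_digest(stored, presented)
```

IP binding is the weakest of the options (NAT, mobile roaming); a TLS client cert or Token Binding ties the cookie to key material the attacker can't exfiltrate with the cookie.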


> Even u2f isn't fool proof (exploitation and cookie theft techniques)

What is "Exploitation" standing for here? Exploitation of... what? How and by who?


Browser exploitation using a phishing link. For a publicized example, look at the Coinbase attack from last year or so.


The one you're thinking of is a malware attack, right? The intended attack results in victims running malware from the attacker. So that's not "phishing" by any definition I recognise.

And even in that attack, the victim's long term credential is protected if they use FIDO authenticators - the bad guys can't use the authenticator without help from the legitimate user and they don't gain any enduring credentials.

So you need to do the attack live and then hope the victim not only doesn't realise you just infected them with malware, but conveniently signs into something at the moment you need a signature, for which you can hijack their expectation to press contact on the authenticator. Then you get one authentication. If you need another one, for any reason (timeout, subsequent operation asks to re-authenticate, anything) you have to do it again because you do not gain enduring credentials.


The malware doesn't need to be there at the "moment you need a signature". The malware can just grab an existing cookie or use an existing cookie. Sure, it's not an enduring credential. But whether the malware is there at that moment or an hour later won't make much difference.

And this specific Twitter attack might not have needed enduring credentials. It seemed to happen over a short time period.


If stealing a cookie is all you needed then FIDO isn't relevant to the picture at all.


You were mentioning an attacker trying to steal a touch from the FIDO device using malware. My point was that's pointless because cookie theft is easier and gives the attacker the same thing.


If the attacker wants a cookie, then stealing a cookie gets them the cookie, but it is not necessarily the case that the attacker only wants a cookie.

Nothing compels Twitter to design their user administration tool so that it says "Oh you have a cookie well then it's fine for you to change Elon Musk's email address and switch off his 2FA".

For example it's perfectly easy to have a "Confirm" step for a privileged operation that requires WebAuthn authentication. But if you're the attacker that means a cookie doesn't help you.
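The "Confirm" step described above amounts to step-up authentication: a privileged operation is refused unless the caller presents a fresh re-authentication, not just a valid session cookie. A minimal sketch, with the WebAuthn verification itself stubbed out and all names and the freshness window assumed:

```python
# Sketch of step-up auth: a session cookie alone never authorizes a
# privileged operation; a recent WebAuthn assertion is also required.
import time

STEP_UP_MAX_AGE = 120  # assumed policy: re-auth must be under 2 minutes old

class PrivilegedGate:
    def __init__(self, now=time.time):
        self._now = now
        self._last_step_up = {}  # user -> timestamp of last verified assertion

    def record_step_up(self, user):
        # In a real system this would run only after a WebAuthn assertion
        # is cryptographically verified; here we just record the time.
        self._last_step_up[user] = self._now()

    def allow_privileged_op(self, user) -> bool:
        ts = self._last_step_up.get(user)
        return ts is not None and (self._now() - ts) <= STEP_UP_MAX_AGE
```

With this design, a stolen cookie lets the attacker browse, but every email change or 2FA reset demands a touch on the victim's own authenticator.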


Hey,

Any good literature which you'd recommend to read to avoid something like this?


Use U2F for 2FA. If Twitter had all their employees using U2F keys it's very unlikely they'd be phished.

With U2F it's impossible to "enter" a 2FA code on the wrong domain, making you immune to phishing attacks by most definitions. This Krebs article from a while back says that Google had zero phishing incidents after making this switch: https://krebsonsecurity.com/2018/07/google-security-keys-neu...
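The reason U2F/WebAuthn resists phishing is that the browser embeds the page's origin in the signed client data, so an assertion collected on a lookalike domain fails verification at the real site. A simplified sketch of just the origin check (real verification also checks the challenge, signature, and counter; the origin value is an assumption for illustration):

```python
# Sketch: the server-side origin check that defeats phishing pages.
# The browser, not the user, fills in "origin", so a phishing site
# cannot forge it without breaking TLS.
import json

EXPECTED_ORIGIN = "https://twitter.com"  # assumed relying-party origin

def origin_check(client_data_json: str,
                 expected_origin: str = EXPECTED_ORIGIN) -> bool:
    client_data = json.loads(client_data_json)
    return client_data.get("origin") == expected_origin
```

This is the step that has no analogue with OTP codes: a user can type a code into the wrong site, but can't make their browser lie about the origin.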


"If Twitter had all their employees using U2F keys it's very unlikely they'd be phished."

I don't know that they even need to go that far. Just U2F on the god-mode admin tool would have been reasonable.


I keep trying to read and understand the attacks that have all happened so far... after you understand the vectors that are currently in use, hopefully it'll stoke your imagination to see how you might attack other systems and make up new vectors. All the current attack write-ups will usually link to vulnerabilities as well. See HackerOne disclosures, Google Project Zero or the "How I hacked X" disclosures you see on HN.

I don't think published textbooks are very useful — attackers also have access to them, and if the attack has been written down it'll likely already be encoded into firewall software or a security-process rulebook (though it might still work for smaller companies lagging behind on the curve).


That's the thing, you can't, not with the way current tech is. But you can read up on having good monitoring/detection and hardening on your endpoints.

Microsoft, for example, recommends privileged access workstations. If Twitter's employees used a separate set of credentials and workstations for privileged Twitter moderation, distinct from the regular account/machine used for email and day-to-day stuff, I bet the attack would have failed.


There are probably Twitter employees whose job it is to reset emails all day long. Having 2 separate computers and accounts, one for resetting emails (which is done all day) and one for responding to email, sounds like quite a burden on employees. How are they going to get the name of the account from one computer to the other? Copy and paste won't work. Retyping from one computer to the other will surely result in typos.


I'd say some rate of typos is an acceptable tradeoff for air-gapping privileged access.


The typos themselves could be a vector for attack. The attacker asks for a reset for one account with a capital I and maybe gets a reset for a different account with a lowercase l.
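That confusion can be checked for mechanically: before acting on a manually retyped account name, an admin tool could compute a "skeleton" that maps visually confusable characters together and refuse to proceed if another known account collides with it. A sketch with a tiny illustrative confusable map (a real tool would use a full confusables table, e.g. Unicode's):

```python
# Sketch: detect account names that are visually confusable with the
# one an admin typed (capital I vs lowercase l, O vs 0, etc.).
CONFUSABLES = {"I": "l", "1": "l", "|": "l", "O": "0"}  # illustrative subset

def skeleton(name: str) -> str:
    """Collapse visually confusable characters to a canonical form."""
    return "".join(CONFUSABLES.get(ch, ch) for ch in name)

def ambiguous_matches(requested: str, known_accounts) -> list:
    """Known accounts that look like `requested` but aren't an exact match."""
    req_skel = skeleton(requested)
    return [acct for acct in known_accounts
            if acct != requested and skeleton(acct) == req_skel]
```

If `ambiguous_matches` returns anything, the tool can force the admin to pick from an explicit list instead of trusting the retyped string.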


Sounds like that's a font selection issue for the admins - there are good choices of font that are unambiguous, they've been linked here.



